
Article in Creativity Research Journal · October 2001
DOI: 10.1207/S15326934CRJ1334_16


Creativity Research Journal
2000–2001, Vol. 13, Nos. 3 & 4, 393–400
Copyright 2000–2001 by Lawrence Erlbaum Associates, Inc.

Development and Psychometric Integrity of a Measure of Ideational Behavior

Mark A. Runco
University of Hawaii, Hilo, and California State University, Fullerton

Jonathan A. Plucker and Woong Lim
Indiana University

ABSTRACT: Although creativity is an important part of cognitive, social, and emotional activity, high-quality creativity assessments are lacking. This article describes the rationale for and development of a measure of creative ideation. The scale is based on the belief that ideas can be treated as the products of original, divergent, and creative thinking—a claim J. P. Guilford (1967) made years ago. Guilford himself assessed ideation with tests of divergent thinking, although through the years scores from these tests have shown only moderate predictive validity. This may be because previous research has relied on inappropriate criteria. For this reason, the Runco Ideational Behavior Scale (RIBS) was developed. It can be used as a criterion of creative ideation. Most items describe actual behaviors (i.e., overt actions and activities) that clearly reflect an individual's use of, appreciation of, and skill with ideas. Results obtained using both exploratory and confirmatory factor analysis are reported in this article. These suggest the presence of 1 or 2 latent factors within the scale. Based on the theoretical underpinnings of the scale, a 1-factor solution was judged to be more interpretable than a 2-factor solution. Analyses also supported the discriminant validity of the RIBS.

Manuscript received December 1, 1998; accepted July 13, 1999. Correspondence and requests for reprints should be sent to Mark A. Runco, Department of Psychology, University of Hawaii, Hilo, 200 West Kawili Street, Hilo, HI 96720. E-mail: runco@exchange.fullerton.edu.

Creativity often is defined in terms of products. MacKinnon (1978), for example, argued that "the bedrock of all studies of creativity … is an analysis of creative products, a determination of what it is that makes them different from more mundane products" (p. 187). This view is shared by many creativity researchers (Besemer & Treffinger, 1981; Ghiselin, 1963; Guilford, 1957; Jackson & Messick, 1965; Taylor, 1960; Treffinger & Poggio, 1972; Wallach, 1976). Even Rogers (1961), who suggested that creativity may be inextricable from self-actualization, said that "there must be something observable, some product of creation. Though my fantasies may be extremely novel, they cannot usefully be defined as creative unless they eventuate in some observable product" (p. 349).

The creative product approach—in contrast to the study of creative processes or related personality characteristics—has the clear virtue of objectivity. Products can be easily quantified, and judgments about products can be quite reliable (e.g., Hennessey, 1994; Lindauer, 1993; Runco, 1989). Several product-rating scales have been developed in recent years (Besemer & O'Quin, 1986, 1993; Hargreaves, Galton, & Robinson, 1996; O'Quin & Besemer, 1989; Reis & Renzulli, 1991; Treffinger, 1989; Westberg, 1991). Products also are used in consensual assessments (Amabile, 1983, 1996; Hennessey & Amabile, 1988; Kasof, 1995) and in a great deal of the archival, historiometric, and citation research on creative persons (e.g., Lindauer, 1993; Simonton, in press). Paintings, publications, compositions, and citations each have been counted and used as indicators of creative achievement.

Yet, the product approach has several serious limitations. It does not apply well to children and nonprofessionals, for example, and any claims about the mechanisms that actually underlie creative work are entirely inferential (Runco, 1989; Runco, McCarthy, & Svensen, 1994). Furthermore, the discriminant validity of judges' ratings of creativity is debatable, at least within specific domains (Lindauer, 1990, 1991).

Ideation as Product

Our contention is that ideas can be treated as the products of original, divergent, and even creative thinking. The voluminous literature on divergent thinking tests demonstrates that ideas can be quantified in much the same manner as other products (e.g., Guilford, 1967; Runco, 1991, in press). Similarly, objective judgments of the originality of ideas can be obtained and tend to be reliable (e.g., Hocevar, 1981; Runco & Albert, 1985). It is important to note that analyses of ideas do not share the limitations that characterize many of the product analyses noted previously. The ideas of children and nonprofessionals can be examined, and their originality (and flexibility) can be objectively determined. Ideas are produced by everyone and thus may be especially useful for understanding "everyday creativity." There is a growing appreciation of everyday creativity (Runco & Richards, 1998), and in a sense ideas are everyday products.

A final virtue of ideas is that the underlying mechanisms have been described convincingly. Both Guilford's (1967) Structure of Intellect model and Mednick's (1962) associative theory describe how ideas are generated, how ideas are connected to one another, and what influences ideation. This recognition of underlying mechanisms is in direct contrast to most research on products. In research emphasizing products, the products are all-important. Very little is said about the actual origins of the products and the mechanisms used to create the products.

There are criticisms of assessments that rely on ideation. The most significant criticism is probably that predictive validity estimates have been only moderate in magnitude. Runco (1986b; Runco & Mraz, 1994) and Plucker (1999) reported some of the highest predictive validity coefficients (i.e., paths ranging from .5–.6). In comparison to predictive validity studies of other psychological measures (e.g., intelligence tests), such estimates are considered to be acceptable evidence of predictive validity. However, the majority of the validity studies of divergent thinking and ideation have produced significantly less convincing evidence of predictive validity (see Plucker & Renzulli, 1999; Wallach, 1976). To make matters worse, most reviews of and references to the validity of ideational tests (e.g., Weisberg, 1993) cite only the older, less convincing research.

The problem may not be with the predictor, however, but instead may reflect reliance on inappropriate criteria. In many studies, actual behavior and performances are used as criteria. Wallach and Wing (1969), for example, used self-report checklists that provided scores for the frequency of activity and achievement in a variety of extracurricular domains. Milgram (in press), Kogan and Pankove (1972), Runco (1986a), Torrance (1972), and many others used the same sort of checklist as a criterion. The problem is that the activities listed on the checklists rarely have an obvious connection to ideation. Ideation may be involved, but many of the activities require much more than ideation. Many of the activities on the checklist criteria ask about achievements and performances that would require particular resources, opportunities, and domain-specific skills in addition to ideation.

Tests of divergent thinking assess ideation; the only way to assess the validity of those assessments is with a criterion that focuses as much as possible on ideation. That criterion should capture the facets of divergent thinking, including originality, fluency, and flexibility. Divergent thinking is tied to creative potential largely because it reflects the individual's ability to be original, flexible, and fluent—with ideas.

Thus, the most appropriate criterion for studying the predictive validity of divergent thinking tests is one that emphasizes ideation. With this in mind, Runco (in press) proposed the Runco Ideational Behavior Scale (RIBS). It is called a behavior scale because, whenever possible, the items describe actual overt behavior—behavior that clearly reflects the individual's use of, appreciation of, and skill with ideas. This article describes the development, refinement, and reliability of this instrument. It is the first empirical investigation of the RIBS.

Method

Samples

Students from three universities completed the survey instrument: one in the Mid-Atlantic, one in the Northeast, and one in the Western United States. Data from students in the Mid-Atlantic and one class in the Northeast (n = 97) were gathered first and formed the sample for the initial analyses, and data from the third university and a second class in the Northeast school were used as a comparison sample (n = 224) to confirm the results of the initial analyses. Students completed the survey as an extra-credit assignment in undergraduate educational psychology and teacher education courses. The average age of the Mid-Atlantic and Northeastern sample was 21.2 years (SD = 4.5), but the presence of several nontraditional students at the Northeastern university inflated the average (i.e., 76% of the sample reported an age of 21 or younger). The average sample GPA was 3.0 (SD = .5), and 62% of the participants were women. Participants from the Western university completed the surveys for extra credit in an undergraduate child development class (average age = 24.7, SD = 3.6; average GPA = 2.98, SD = .6).

Measures

Measure of attitudes. Basadur's (Runco & Basadur, 1993) 14-item measure of attitudes was administered to a subsample of the students. This was later used (see Results) to evaluate the discriminant validity of the RIBS. Basadur's self-report has 8 items reflecting "openness to divergence" attitudes (which support creative thinking) mixed with 6 items reflecting "tendency toward premature closure" (which may inhibit creative thinking). A Likert scale is given with each item, ranging from 1 (never) to 5 (very often). Further details are provided by Runco and Basadur (1995).

RIBS development. Mark A. Runco and Jonathan A. Plucker created an initial item pool of approximately 100 items. After removing redundancies, we arrived at an instrument of 93 items, with approximately one third of the items reverse-coded and a response scale ranging from 1 (never) to 5 (very often). The original goal was to create an instrument that contained many different kinds of ideational behaviors, but initial analysis of pilot administration data suggested that the items were in fact too diverse (i.e., exploratory factor analyses showed the existence of one strong factor and more than 12 uninterpretable factors). A priori item selection, which tightened the focus on the items that explicitly reflected ideation, produced a pool of 24 items. Factor analysis of the corresponding data from the initial sample produced interpretable loadings for 23 of these items (see Appendix).

After removal of the 24th item, internal consistency estimates were calculated, and a more formal set of exploratory factor analyses was conducted. Confirmatory factor analysis was then used to confirm the factor structure for the comparison sample data for the same 23 items. Removal of statistical outliers had a negligible effect on the analyses, as did transformations performed on specific item data to reduce skewness and kurtosis.

Evaluation of model appropriateness during the confirmatory factor analyses was guided by the use of several goodness-of-fit estimates: chi-square divided by degrees of freedom, with values under 2 or 3 representing a good fit; the Tucker–Lewis Index and comparative fit index, ranging from 0 to 1, with values of .9 or higher representing reasonably good fit; the root mean square error of approximation, with values of .05 or less representing a good fit and .08 or less suggesting an adequate fit; and the Akaike Information Criterion and Consistent Akaike Information Criterion, both measures of badness-of-fit that are used to compare the fit of different models to the same data set (i.e., models that fit relatively well have lower values than relatively poor-fitting models).

Results and Discussion

Initial Sample

Cronbach's alpha for the data in the initial sample was .92. Removal of any item or set of items did not appreciably improve estimates of internal consistency. Additional descriptive statistics appear in Table 1.

The exploratory factor analysis used principal axis factoring. Extraction procedures revealed four eigenvalues in excess of .90: 8.5, 1.7, 1.0, and .91, accounting for 37%, 7%, 5%, and 4% of the variance, respectively. Use of the Scree test suggested the presence of one factor, consistent with the theoretical basis of the item selection.
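The internal consistency estimate reported here can be reproduced from a raw respondents-by-items score matrix. A minimal sketch of Cronbach's alpha (our illustration, not the authors' code; the data below are made up to mirror the initial sample's 97 × 23 dimensions):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative check with synthetic 1-5 responses: 23 identical items
# (perfect inter-item consistency) must yield alpha = 1.
base = (np.arange(97) % 5 + 1).reshape(-1, 1)
perfect = np.tile(base, (1, 23))
print(round(cronbach_alpha(perfect), 3))  # -> 1.0
```

With real item data, values such as the .92 reported for the initial sample would be obtained the same way.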



Table 1. Descriptive Statistics for Student Scores in the Initial and Comparison Samples

Initial Sample Comparison Sample

Item M SD Skewness Kurtosis M SD Skewness Kurtosis

1 3.53 1.04 –.17 –.66 3.21 1.15 .25 –1.11
2 3.34 .94 –.04 –.28 3.15 1.01 .02 –.54
3 3.80 .91 –.55 .06 3.62 .96 –.20 –.77
4 3.61 .91 –.59 .25 3.46 .86 –.15 –.44
5 3.08 .90 .11 –.37 2.89 .86 .12 –.01
6 3.56 1.11 –.49 –.69 3.17 1.01 .00 –.55
7 3.61 .97 –.28 –.25 3.25 .98 –.02 –.44
8 3.28 .93 –.03 –.31 3.30 .99 –.42 .02
9 3.68 .99 –.46 –.23 3.43 1.09 .18 2.52
10 4.26 .75 –.77 .26 4.03 .92 –.90 1.09
11 2.85 .96 .23 .21 2.64 1.00 .12 –.29
12 3.27 1.34 –.25 –1.11 3.20 1.27 –.25 –.94
13 3.02 1.19 .11 –.89 2.88 1.12 .07 –.65
14 3.55 1.14 –.38 –.86 3.19 1.11 –.11 –.72
15 3.00 1.34 .03 –1.22 2.82 1.22 .26 –.91
16 3.28 1.08 –.18 –.36 3.11 1.13 .06 –.81
17 3.56 1.23 –.51 –.78 2.92 1.13 .04 –.74
18 3.03 1.32 .06 –1.13 2.70 1.38 .66 .60
19 3.42 .92 –.44 –.20 3.37 1.00 –.35 –.30
20 2.93 .90 .15 –.13 2.82 .85 –.17 –.12
21 3.20 .85 .04 –.32 2.97 .85 .05 .06
22 3.66 .84 –.42 .27 3.45 .89 –.01 –.19
23 3.10 1.11 –.05 –.70 2.91 .94 –.01 –.12

Note: Initial sample standard error of skewness = .25; standard error of kurtosis = .49; comparison sample standard error of skewness = .16;
standard error of kurtosis = .32.
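The standard errors in the note to Table 1 depend only on the sample sizes (n = 97 and n = 224) and are consistent with the usual normal-theory formulas for the sampling error of skewness and kurtosis. A quick sketch (our illustration, not part of the original article):

```python
import math

def se_skewness(n: int) -> float:
    # Normal-theory standard error of the sample skewness statistic.
    return math.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

def se_kurtosis(n: int) -> float:
    # Normal-theory standard error of the sample kurtosis statistic.
    return 2.0 * se_skewness(n) * math.sqrt((n * n - 1) / ((n - 3.0) * (n + 5)))

# n = 97 (initial sample) and n = 224 (comparison sample):
for n in (97, 224):
    print(n, round(se_skewness(n), 3), round(se_kurtosis(n), 3))
# n = 97 gives roughly .245 and .485; n = 224 gives roughly .163 and .324,
# in line with the .25/.49 and .16/.32 reported in the note.
```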

The parallel analysis technique recommended by Thompson and Daniel (1996), in which eigenvalues from a data set with identical parameters but randomly selected data are compared to the eigenvalues from the actual analyses, suggested the presence of one or possibly two factors. Given the results of the Scree test and the theoretical foundation of the scale, a one-factor solution was selected (solutions with three or more factors resulted in uninterpretable patterns of factor loadings and high factor correlations). Item communalities and loadings appear in Table 2.

Comparison Sample

Calculation of Cronbach's alpha for the comparison sample produced an estimate (.91) that was nearly identical to that obtained with the initial sample data. To gather evidence related to the generalizability of the factor structures obtained with the initial sample data, several models were fit to the data using confirmatory factor analysis: The first model hypothesized one latent factor; the second model consisted of one latent factor with correlated uniquenesses; the third model contained two uncorrelated latent factors (the second factor was derived from the earlier exploratory analyses and represented a higher order factor for observed variables 11, 14, 15, 16, 17, and 18); the fourth model included the same two latent factors but allowed them to be correlated; and the fifth model elaborated on model four by adding correlated uniquenesses between variable pairs 1 and 11, 8 and 9, 1 and 21, and 20 and 23. Goodness-of-fit indexes for each of the models appear in Table 3.

Results of the analyses suggested that the one-factor model with correlated uniquenesses and the two-correlated-factor model with correlated uniquenesses had the best degree of fit to the data. However, distinguishing between the fit of the two models was not as easy as we would have preferred—the two-factor model partially supported by the analyses with the initial sample fit only slightly better than the theoretically supported one-factor model.
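Several of the fit indices discussed here can be spot-checked from the reported chi-square and degrees of freedom alone, using the common closed form RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))) with N = 224 for the comparison sample. A sketch (our illustration; values are the ones reported in Table 3):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA from the model chi-square, degrees of freedom, and sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Spot checks against Table 3 (comparison sample, N = 224):
for name, chi2, df in [("One Factor", 602.0, 230),
                       ("Two Factor With Correlated Uniquenesses", 395.0, 225)]:
    print(name, round(chi2 / df, 1), round(rmsea(chi2, df, 224), 2))
# One Factor: chi2/df = 2.6, RMSEA = 0.09; the fifth model: 1.8, 0.06 —
# matching the tabled values.
```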

To distinguish further between the ability of these two models to fit the comparison sample data, bootstrapping was used. Each model (including the independence and saturated models) was fit to the same 1,000 subsamples randomly selected from the original comparison sample. The difference between population moments (i.e., those in the original sample) and the average moments estimated from the 1,000 bootstrap samples appears in Table 3 under the column heading "Mean Discrepancy." The smaller the mean discrepancy, the better the model fit. In this study, the bootstrapping procedure provided additional evidence that the two-correlated-factor model with correlated uniquenesses was the more appropriate of the tested models for this particular set of data. Parameter estimates and bootstrapped standard errors for this model appear in Table 4.

Table 2. Factor Loadings and Postextraction Communalities for Exploratory Factor Analysis Using Principal Axis Factoring for Initial Sample

Item h2 Factor Loading

1 .380 .616
2 .552 .743
3 .270 .519
4 .373 .610
5 .460 .678
6 .470 .685
7 .293 .541
8 .474 .689
9 .594 .771
10 .234 .484
11 .216 .465
12 .465 .682
13 .286 .535
14 .434 .659
15 .382 .618
16 .191 .437
17 .304 .551
18 .192 .438
19 .091 .302
20 .510 .714
21 .492 .702
22 .282 .531
23 .383 .619

Table 3. Goodness-of-Fit Estimates for Confirmatory Factor Analysis of Comparison Sample Scores

Model df χ2 χ2/df TLI CFI RMSEA AIC CAIC Mean Discrepancy

Independence 253 2,036 8.1 .000 .000 .18 (.17–.19) 2,082 2,183 2079.19 (.50)
One Factor 230 602 2.6 .771 .792 .09 (.08–.09) 694 897 677.40 (.58)
One Factor With Correlated Uniquenesses 224 452 2.0 .856 .872 .07 (.06–.08) 556 785 536.48 (.61)
Two Factor 230 476 2.1 .767 .788 .10 (.08–.11) 568 742 622.82 (.59)
Two Factor, Correlated 229 426 1.9 .812 .830 .09 (.07–.10) 520 698 534.86 (.59)
Two Factor With Correlated Uniquenesses 225 395 1.8 .893 .905 .06 (.05–.07) 497 723 481.15 (.63)
Saturated 0 0 — — 1.000 — 552 1,770 403.92 (2.22)

Note: TLI = Tucker–Lewis index; CFI = comparative fit index; RMSEA = root mean square error of approximation; AIC = Akaike Information Criterion; CAIC = Consistent Akaike Information Criterion; Mean discrepancy = difference between population moments (i.e., those in the original sample) and the average moments estimated from 1,000 bootstrap samples.

Discriminant Validity

Correlational analyses supported the discriminant validity of the RIBS. The correlation between the RIBS and GPA (n = 90) was .106 (p = .319), for example, and the correlations between the RIBS and the two Basadur scales were .32 for the Premature Closure Scale (six items) and .34 for the Openness to Divergence Scale. Although these last two coefficients were statistically significant (ps = .003 and .001, respectively; both ns = 91), the most important psychometric concern is shared variance and not probability. The RIBS shared very little variance with GPA (1%) and the two Basadur scales (10–12%).

Conclusions

The RIBS appears to be a sufficiently reliable instrument for use with groups and individuals. Our results, however, did not provide unambiguous evidence about the construct validity of the RIBS. Statistically, the existence of two factors appears to have been supported, although the theoretical distinction between the factors is difficult to determine. The lack of theory suggesting two factors and the high correlation between them suggests that the one-factor structure should guide interpretation of RIBS results.

The RIBS is currently being modified in an attempt to increase evidence of reliability and validity. Future studies will use the refined version to test the contention that it is a useful criterion of divergent thinking and original ideation (i.e., gather evidence of criterion-related and ecological validity). This evidence will also be useful in the construction of administration, scoring, and interpretation guidelines for use with children and adults.

Table 4. Standardized Parameter Estimates and Bootstrapped Standard Errors for the Two-Correlated-Factor Model With Correlated Uniquenesses Fit to the Comparison Sample Data

Item Loading SE SMC SE

1 .56 .06 .32 .06
2 .62 .05 .38 .06
3 .58 .05 .34 .06
4 .52 .05 .27 .06
5 .70 .04 .48 .05
6 .65 .04 .43 .05
7 .51 .06 .26 .06
8 .67 .04 .45 .06
9 .65 .06 .42 .07
10 .30 .07 .09 .04
11 .55 .06 .31 .06
12 .72 .04 .51 .06
13 .44 .07 .19 .06
14 .64 .05 .41 .07
15 .54 .06 .29 .07
16 .74 .05 .55 .07
17 .70 .04 .49 .05
18 .74 .04 .55 .06
19 .42 .06 .18 .05
20 .58 .05 .34 .06
21 .67 .05 .46 .07
22 .49 .06 .24 .06
23 .58 .05 .34 .06

Note: SMC = squared multiple correlation. Items 11, 14, 15, 16, 17, and 18 load on Factor 2; all other items load on Factor 1. Factor correlation = .68; uniqueness correlations: Items 1 and 11 = .38, Items 8 and 9 = .26, Items 1 and 21 = –.23, Items 20 and 23 = .22.

References

Amabile, T. M. (1983). The social psychology of creativity. New York: Springer-Verlag.
Amabile, T. M. (1996). Creativity in context: Update to the social psychology of creativity. Boulder, CO: Westview.
Besemer, S. P., & O'Quin, K. (1986). Analyzing creative products: Refinement and test of a judging instrument. Journal of Creative Behavior, 20, 115–126.
Besemer, S. P., & O'Quin, K. (1993). Assessing creative products: Progress and potentials. In S. G. Isaksen, M. C. Murdock, R. L. Firestein, & D. J. Treffinger (Eds.), Nurturing and developing creativity: The emergence of a discipline (pp. 331–349). Norwood, NJ: Ablex.
Besemer, S. P., & Treffinger, D. J. (1981). Analysis of creative products: Review and synthesis. Journal of Creative Behavior, 15, 158–178.
Ghiselin, B. (1963). Ultimate criteria for two levels of creativity. In C. W. Taylor & F. Barron (Eds.), Scientific creativity: Its recognition and development (pp. 30–43). New York: Wiley.
Guilford, J. P. (1957). Creative abilities in the arts. Psychological Review, 64, 110–118.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Hargreaves, D. J., Galton, M. J., & Robinson, S. (1996). Teachers' assessments of primary children's classroom work in the creative arts. Educational Research, 38, 199–211.
Hennessey, B. A. (1994). The consensual assessment technique: An examination of the relationship between ratings of product and process creativity. Creativity Research Journal, 7, 193–208.
Hennessey, B. A., & Amabile, T. M. (1988). The conditions of creativity. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 11–38). New York: Cambridge University Press.
Hocevar, D. (1981). Measurement of creativity: Review and critique. Journal of Personality Assessment, 45, 450–464.
Jackson, P. W., & Messick, S. (1965). The person, the product, and the response: Conceptual problems in the assessment of creativity. Journal of Personality, 33, 309–329.
Kasof, J. (1995). Explaining creativity: The attributional perspective. Creativity Research Journal, 8, 311–366.
Kogan, N., & Pankove, E. (1972). Creative ability over a five year span. Child Development, 43, 427–433.
Lindauer, M. S. (1990). Reactions to cheap art. Empirical Studies of the Arts, 8, 95–110.
Lindauer, M. S. (1991). Comparisons between museum and mass-produced art. Empirical Studies of the Arts, 9, 11–22.
Lindauer, M. S. (1993). The span of creativity among long-lived historical artists. Creativity Research Journal, 6, 221–239.
MacKinnon, D. W. (1978). In search of human effectiveness: Identifying and developing creativity. Buffalo, NY: The Creative Education Foundation.
Mednick, S. A. (1962). The associative basis for the creative process. Psychological Review, 69, 220–232.
Milgram, R. (in press). Creativity: An idea whose time has come and gone? In M. A. Runco & R. S. Albert (Eds.), Theories of creativity (Rev. ed.). Cresskill, NJ: Hampton.
Mraz, W., & Runco, M. A. (1994). Suicide ideation and creative problem solving. Suicide and Life Threatening Behavior, 24, 38–47.
O'Quin, K., & Besemer, S. P. (1989). The development, reliability, and validity of the revised creative product semantic scale. Creativity Research Journal, 2, 267–278.
Plucker, J. (1999). Is the proof in the pudding? Reanalyses of Torrance's (1958–present) longitudinal study data. Creativity Research Journal, 12, 103–114.

Plucker, J., & Renzulli, J. S. (1999). Psychometric approaches to the study of human creativity. In R. J. Sternberg (Ed.), Handbook of human creativity (pp. 35–60). New York: Cambridge University Press.
Reis, S. M., & Renzulli, J. S. (1991). The assessment of creative products in programs for gifted and talented students. Gifted Child Quarterly, 35, 128–134.
Rogers, C. (1961). Toward a theory of creativity. In On becoming a person (pp. 347–359). Boston: Houghton Mifflin.
Runco, M. A. (1986a). Divergent thinking and creative performance in gifted and nongifted children. Educational and Psychological Measurement, 46, 375–384.
Runco, M. A. (1986b). Predicting children's creative performance. Psychological Reports, 59, 1247–1254.
Runco, M. A. (1989). The creativity of children's art. Child Study Journal, 19, 177–189.
Runco, M. A. (1991). Divergent thinking. Norwood, NJ: Ablex.
Runco, M. A. (in press). Divergent thinking and creative ideation. Cresskill, NJ: Hampton.
Runco, M. A., & Albert, R. S. (1985). The reliability and validity of ideational originality in the divergent thinking of academically gifted and nongifted children. Educational and Psychological Measurement, 45, 483–501.
Runco, M. A., & Basadur, M. (1993). Assessing ideational and evaluative skills and creative styles and attitudes. Creativity and Innovation Management, 2, 166–173.
Runco, M. A., McCarthy, K. A., & Svensen, E. (1994). Judgments of the creativity of artwork from students and professional artists. Journal of Psychology, 128, 23–31.
Runco, M. A., & Richards, R. (1998). Eminent creativity, everyday creativity, and health. Greenwich, CT: Ablex.
Simonton, D. K. (in press). History, chemistry, psychology, and genius: An intellectual autobiography of historiometry. In M. A. Runco & R. S. Albert (Eds.), Theories of creativity (Rev. ed.). Cresskill, NJ: Hampton.
Taylor, D. W. (1960). Thinking and creativity. Annals of the New York Academy of Sciences, 91, 108–127.
Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56, 197–208.
Torrance, E. P. (1972). Predictive validity of the Torrance Tests of Creative Thinking. Journal of Creative Behavior, 6, 236–252.
Treffinger, D. J. (1989). Student invention evaluation kit: Field test edition. Sarasota, FL: Center for Creative Learning.
Treffinger, D. J., & Poggio, J. P. (1972). Needed research on the measurement of creativity. Journal of Creative Behavior, 6, 253–267.
Wallach, M. A. (1976, January–February). Tests tell us little about talent. American Scientist, 64(1), 57–63.
Wallach, M. A., & Wing, C. W., Jr. (1969). The talented student: A validation of the creativity–intelligence distinction. New York: Holt, Rinehart & Winston.
Weisberg, R. W. (1993). Creativity: Beyond the myth of genius. New York: Freeman.
Westberg, K. L. (1991). The effects of instruction in the inventing process on students' development of inventions. Dissertation Abstracts International, 51. (University Microfilms No. 9107625)

Appendix
Runco Ideational Behavior Scale

First Factor

1. I have many wild ideas.
2. I think about ideas more often than most people.
3. I often get excited by my own new ideas.
4. I come up with a lot of ideas or solutions to problems.
5. I come up with an idea or solution other people have never thought of.
6. I like to play around with ideas for the fun of it.
7. It is important to be able to think of bizarre and wild possibilities.
8. I would rate myself highly in being able to come up with ideas.
9. I have always been an active thinker—I have lots of ideas.
10. I enjoy having leeway in the things I do and room to make up my own mind.
12. I would take a college course which was based on original ideas.
13. I am able to think about things intensely for many hours.
19. I try to exercise my mind by thinking things through.
20. I am able to think up answers to problems that haven't already been figured out.
21. I am good at combining ideas in ways that others have not tried.
22. Friends ask me to help them think of ideas and solutions.
23. I have ideas about new inventions or about how to improve things.

Second Factor

11. My ideas are often considered "impractical" or even "wild."
14. Sometimes I get so interested in a new idea that I forget about other things that I should be doing.
15. I often have trouble sleeping at night, because so many ideas keep popping into my head.
16. When writing papers or talking to people, I often have trouble staying with one topic because I think of so many things to write or say.
17. I often find that one of my ideas has led me to other ideas that have led me to other ideas, and I end up with an idea and do not know where it came from.
18. Some people might think me scatterbrained or absentminded because I think about a variety of things at once.
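The article does not publish a scoring rule, but given the 23 retained items and the 1 (never) to 5 (very often) response scale described in the Method section, a simple mean across items is a natural summary. A hypothetical scoring sketch (the function name and the pooled-mean rule are our assumptions; the item-to-factor keys follow the Appendix, and pooling all items follows the one-factor interpretation favored in the Conclusions):

```python
# Appendix item numbers by factor. The one-factor interpretation favored
# in the Conclusions simply pools all 23 items into a single score.
FIRST_FACTOR = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 19, 20, 21, 22, 23]
SECOND_FACTOR = [11, 14, 15, 16, 17, 18]
ALL_ITEMS = FIRST_FACTOR + SECOND_FACTOR  # 23 items in total

def ribs_score(responses: dict) -> float:
    """Mean of the 23 Likert responses (1 = never ... 5 = very often).

    A hypothetical scoring rule, not one published with the scale.
    """
    if set(responses) != set(ALL_ITEMS):
        raise ValueError("expected responses for all 23 RIBS items")
    if any(not 1 <= responses[i] <= 5 for i in ALL_ITEMS):
        raise ValueError("responses must be on the 1-5 scale")
    return sum(responses[i] for i in ALL_ITEMS) / len(ALL_ITEMS)

# A respondent answering 'often' (4) to every item:
print(ribs_score({i: 4 for i in ALL_ITEMS}))  # -> 4.0
```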