Educational Evaluation and Research: Similarities and Differences

1970

Gene V. Glass and Blaine R. Worthen
Laboratory of Educational Research
University of Colorado

This paper is excerpted in large part from the authors' paper "Educational Inquiry and the Practice of Education," which is scheduled to appear as a chapter in Frameworks for Viewing Educational Research, Development, Diffusion, and Evaluation, ed. by H. Del Schalock (Monmouth, Oregon: Teaching Research Division of the Oregon State System of Higher Education, in press).
Curriculum evaluation is complex. It is not a simple matter of stating behavioral objectives or building a test or analyzing some data, though it may include these. A thorough curriculum evaluation will contain elements of a dozen or more distinct activities. The mixture of activities that a particular evaluator concocts will, of course, be influenced by resources--time, money, and expertise--the good will of curriculum developers, and the like. But equally important (and more readily influenced) is the image the evaluator holds of his specialty: its responsibilities, duties, uniquenesses, and similarities to related endeavors.

Some readers may think that entirely too much fuss is being made over defining "evaluation." But we cannot help being concerned with the meanings of words and, more importantly, with how they influence action. We frequently meet persons responsible for evaluating a program whose efforts are victimized by the particular semantic problem addressed in this paper. By happenstance, habit, or methodological bias, they may, for example, label the trial and investigation of a new curriculum program with the epithet "research" or "experiment" instead of "evaluation." Moreover, the inquiry they conduct is different for their having chosen to call it a research project or an experiment, and not an evaluation. Their choice predetermines the literature they read (it will deal with research or experimental design), the consultants they call in (only acknowledged experts in designing and analyzing experiments), and how they report the results (always in the best tradition of the American Educational Research Journal or the Journal of Experimental Psychology). These are not the paths to relevant data or rational decision-making about curricula. Evaluation is an undertaking separate from research. Not every educational researcher can evaluate a new curriculum, any more than every physiologist can perform a tonsillectomy.

Educational research and evaluation have much in common. However, since this is a time when the two are frequently confused with each other, there is a point in emphasizing their differences rather than their similarities. The best efforts of the investigator whose responsibility is evaluation but whose conceptual perspective is research may eventually prove to be worthless as either research or evaluation.

This paper represents an attempt to deal with one problem that should be resolved before much conceptual work on evaluation can proceed effectively--that is, how to distinguish the more familiar educational research from the newer, less familiar activity of educational evaluation. In the sections that follow, we deal with three interrelated aspects of the problem: first, we attempt to define and distinguish between research and evaluation as general classes of educational inquiry; second, we consider the different types of research and evaluation within each general class; and third, we discuss eleven characteristics that differentiate research from evaluation.

In spite of the shortcomings of simple verbal definitions, such definitions can serve as a point of departure. Those that follow serve only as necessary precursors and will be elaborated and refined more fully through the discussion of each activity later in this paper.

a) Research is the activity aimed at obtaining generalizable knowledge by contriving and testing claims about relationships among variables or describing generalizable phenomena. This knowledge, which may result in theoretical models, functional relationships, or descriptions, may be obtained by empirical or other systematic methods and may or may not have immediate application.

b) Evaluation is the determination of the worth of a thing. It is the process of obtaining information for judging the worth of an educational program, product, procedure, or objective, or the potential utility of alternative approaches designed to attain specified objectives. According to Scriven: "The activity consists simply in the gathering and combining of performance data with a weighted set of goal scales to yield either comparative or numerical ratings; and in the justification of (a) the data-gathering instruments, (b) the weightings, and (c) the selection of goals." [Note 1]

We have not defined research very differently from the way in which it is generally viewed (e.g., by Kerlinger, 1964). However, definitions of evaluation are much more varied [Note 2], and while most of these definitions are relevant to evaluation, in that they describe or define parts of the total evaluation process or activities attendant on evaluation, they seem to address only obliquely what is for us the touchstone of evaluation: the determination of merit or worth. Our definition is intended to focus directly on the systematic collection and analysis of information to determine the worth of a thing.

Working within this framework, the curriculum evaluator would first identify the curriculum goals and, using input from appropriate reference groups, determine whether or not the goals are good for the students, parents, and community served by the curriculum program. He would then collect evaluative information that bears on those goals as well as on identifiable side-effects that result from the program. When the data have been analyzed and interpreted, the evaluator would judge the worth of the curriculum and in most cases communicate this in the form of a recommendation to the individual or body ultimately responsible for making decisions about the program.

Both research and evaluation can be further divided into subactivities. The distinction between basic and applied research seems to be well entrenched in the parlance of educational research. Although these constructs might more properly be thought of as the ends of a continuum than as a dichotomy, they do have utility in differentiating between broad classes of activities. [Note 3] The distinction between the two also helps when the relationship of research to evaluation is considered. The United States National Science Foundation adopted the following definitions of basic and applied research:

Basic research is directed toward increase of knowledge; it is research where the primary aim of the investigator is a fuller understanding of the subject under study rather than a practical application thereof. Applied research is directed toward practical applications of knowledge. [Applied research projects] have specified commercial objectives with respect to either products or processes. [Note 4]

Applied research, when successful, results in plans, blueprints, or directives for development in ways that basic research does not. In applied research, the knowledge produced must have almost immediate utility, whereas no such constraint is imposed on basic research. Basic research results in a deeper understanding of phenomena and of systems of related phenomena, and the practical utility of the knowledge thus gained need not be foreseen.

Two activities that might be considered variants of applied research are institutional research and operations research, activities aimed at supplying institutions or social systems with data relevant to their operations. To the extent that the conclusions of inquiries of this type are generalizable, at least across time, these activities may appropriately be subsumed under the "research" rubric. However, where the object of the search becomes nongeneralizable information on performance characteristics of a specific program or process, the label "evaluation" might more appropriately be applied.

Evaluation has sometimes been considered merely a form of applied research that focuses only on one curriculum program, one course, or one lesson. This view ignores an obvious difference between the two--the level of generality of the knowledge produced. Applied, as opposed to basic, research is mission-oriented and aimed at producing knowledge relevant to providing a (generalizable) solution to a general problem. Evaluation is focused on collecting specific information relevant to a specific problem, program, or product.

It was mentioned earlier that many "types" of evaluation have been proposed in writings on the subject. For example, the process of needs analysis (identifying and comparing intended outcomes of a system with actual outcomes on specified variables) might well qualify as an evaluation activity if, as Scriven (1967) and Stake (1970) have suggested, the intended outcomes are themselves thoroughly evaluated. The assessment of alternative plans for attaining specified objectives might also be considered a unique evaluation function (see the discussion of "input" evaluation by Stufflebeam, 1968), although it seems to us that such an assessment might be considered a variant form of outcome evaluation that occurs earlier in the temporal sequence and attempts to establish the worth of alternative plans for meeting desired goals. Other proposed evaluation activities such as "program monitoring" (Worthen & Gagne, 1969) or "process evaluation" (Stufflebeam, 1968) seem in retrospect to belong less to evaluation than to operations management or some other function in which information is collected but no evaluation occurs.

The difficulty inherent in deciding whether or not other activities of the types discussed above should be considered as "evaluation" may well stem from the conflict of the roles and goals of evaluation that was mentioned by Scriven (1967). Evaluation can contribute to the construction of a curriculum program, the prediction of academic success, the improvement of an existing course, or the analysis of a school district's need for compensatory education. But these are roles it can play and not the goal it seeks. The goal of evaluation must be to answer questions of selection, adoption, support, and worth of educational materials and activities. It must be directed toward answering such questions as, "Are the benefits of this curriculum program worth its cost?" or "Is this textbook superior to its competitors?" The typical evaluator is trained to play other roles besides evaluating. However, his activities (e.g., test construction, needs assessment, context description) do not all become evaluation by virtue of the fact that they are done by an evaluator. (Evaluators brush their teeth, but brushing teeth is not therefore evaluation.)

Unless inclusion of hybrid activities becomes essential to the point under consideration, the terms "research" and "evaluation" will be used in the remainder of this paper to refer to the "purest" type of each, basic research and outcome evaluation. That this approach results in oversimplification is admitted, but the alternative of attempting to discuss all possible nuances is certain to result in such complexity that the major points in this paper would be completely obfuscated.

Eleven characteristics of inquiry that distinguish research from evaluation are discussed below.

  1. Motivation of the Inquirer. Research and evaluation appear generally to be undertaken for different reasons. Research is pursued largely to satisfy curiosity; evaluation is done to contribute to the solution of a particular problem. The researcher is intrigued; the evaluator (or at least, his client) is concerned. The researcher may believe that his work has greater long-range payoff than the evaluator's. However, one must be nimble to avoid becoming bogged down in the seeming paradox that the policy decision to support basic inquiry because of its ultimate practical payoff does not imply that researchers are pursuing practical ends in their daily work.
  2. The Objectives of the Search. Research and evaluation seek different ends. Research seeks conclusions; evaluation leads to decisions (see Tukey, 1960). Cronbach and Suppes distinguish between decision-oriented and conclusion-oriented inquiry. In a decision-oriented study the investigator is asked to provide information wanted by a decision-maker: a school administrator, a government policy-maker, the manager of a project to develop a new biology textbook, or the like. The decision-oriented study is a commissioned study. The decision-maker believes that he needs information to guide his actions and he poses the question to the investigator. The conclusion-oriented study, on the other hand, takes its direction from the investigator's commitments and hunches. The educational decision-maker can, at most, arouse the investigator's interest in a problem. The latter formulates his own question, usually a general one rather than a question about a particular institution. The aim is to conceptualize and understand the chosen phenomenon; a particular finding is only a means to that end. Therefore, he concentrates on persons and settings that he expects to be enlightening. [Note 5] Conclusion-oriented inquiry is much like what is referred to here as research; decision-oriented inquiry typifies evaluation as well as any three words can.
  3. Laws versus Descriptions. Closely related to the distinction between conclusion-oriented and decision-oriented are the familiar concepts of nomothetic (law-giving) and idiographic (descriptive of the particular). Research is the quest for laws, that is, for statements of the relationships among two or more variables or phenomena. Evaluation merely seeks to describe a particular thing with respect to one or more scales of value.
  4. The Role of Explanation. The nomothetic and idiographic converge in the act of explanation, namely, in the conjoining of general laws with descriptions of particular circumstances, as in "if you like three-minute eggs back home in Vancouver, you'd better ask for a five-minute egg at the Brown Palace in Denver because the boiling point of water is directly proportional to the absolute pressure [the law], and at 5,280 ft. the air pressure is so low in Denver that water boils at 195°F [the circumstances]." Scientific explanations require scientific laws, and the disciplines related to education appear to be far from the discovery of general laws on which explanations of incidents of schooling can be based. There is considerable confusion among investigators in education about the extent to which evaluators should explain ("understand") the phenomena they evaluate. A fully proper and useful evaluation can be conducted without producing an explanation of why the product or program being evaluated is good or bad or of how it operates to produce its effects. It is fortunate that this is so, since evaluation in education is so needed and credible explanations of educational phenomena are so rare.
  5. Autonomy of the Inquiry. The important principle that science is an independent and autonomous enterprise is well stated by Kaplan: "It is one of the themes of this book that the various sciences, taken together, are not colonies subject to the governance of logic, methodology, philosophy of science, or any other discipline whatever, but are, and of right ought to be, free and independent. Following John Dewey, I shall refer to this declaration of scientific independence as the principle of autonomy of inquiry. It is the principle that the pursuit of truth is accountable to nothing and to no one not a part of that pursuit itself." [Note 6] Not surprisingly, autonomy of inquiry has proved to be an important characteristic typifying research. As Cronbach and Suppes have indicated, evaluation is undertaken at the behest of a client, while the researcher sets his own problems. It will be seen later that the differing degrees of autonomy that the researcher and the evaluator enjoy have implications for how they should be trained and how their respective inquiries are pursued.
  6. Properties of the Phenomena That Are Assessed. Evaluation seeks to assess social utility directly. Research may yield evidence of social utility, but only indirectly--because empirical verifiability of general phenomena and logical consistency may eventually be socially useful. A touchstone for discriminating between an evaluator and a researcher is to ask whether the inquiry would be regarded as a failure if it produced no information on whether the phenomenon studied was useful or useless. A researcher answering qua researcher would probably say "No." Inquiry may be seen as directed toward the assessment of three properties of a statement about a phenomenon: its empirical verifiability by accepted methods, its logical consistency with other accepted or known facts, and its social utility. Most disciplined inquiry aims to assess each property in varying degrees. In Figure 1, several areas of inquiry within psychology are classified with respect to the degree to which they seek to assess each of the above three properties. Their distance from each vertex is inversely related to the extent to which they seek the property it represents. [Note 7]

  7. "Universality" of the Phenomena Studied. Perhaps the highest correlate of the research-evaluation distinction is the "universality" of the phenomena being studied. (We apologize for the grandness of the term "universal" and our inability to find a more modest one to convey the same meaning.) Researchers work with constructs having a currency and scope of application that make the objects one evaluates seem parochial by comparison. An educational psychologist experiments with "reinforcement" or "need achievement," which he regards as neither specific to geography nor to one point in time. The effects of positive reinforcement following upon a response are assumed to be phenomena shared by most men in most times; moreover the number of specific instances of human behavior that are examples of the working of positive reinforcement is great. Not so with the phenomena studied in evaluation. A particular textbook, an organizational plan, and a filmstrip have a short life expectancy and may not be widely shared. However, whenever their cost or potential payoff rises above negligible level, they are of interest to the evaluator. Three aspects of the generalizability ("universality") of a phenomenon can be identified: generality across time (Will the phenomenon--a textbook, "self-concept," etc.--be of interest fifty years hence?); generality across geography (Is the phenomenon of any interest to people in the next town, the next state, across the ocean?); applicability of the general phenomenon to a number of specific instances (Are there many specific examples of the phenomenon being studied or is this the "one and only"?). These three features of the object of an educational inquiry can be used to classify different inquiry types. Three types of inquiry are program evaluation (the evaluation of a complex of people, materials and organization that make up a particular educational program), product evaluation (the evaluation of a medium of schooling, such as a book, a film, or a recorded tape) and educational research. Program evaluation is depicted as concerned with a phenomenon (an "educational program") that has limited generalizability across time and geography. For example, the innovative "ecology curriculum" (including instructional materials, staff, students, and other courses in the school) in the Middletown Public Schools will probably not survive the decade; it is of little interest to the schools in Norfolk, which have a different set of environmental problems and instructional resources, and has little relation- ship to other curricula with other objectives. Product evaluation is concerned with assessing the worth of something, such as a new ecology textbook or an overhead projector, which can be widely disseminated geographically but will not be of interest ten years hence or produce any reliable knowledge about the educational process in general if its properties are studied. The concepts upon which educational research is carried out are supposed to be relatively permanent and applicable to schooling nearly everywhere. As such they should subsume a large number of instances of teaching and learning.
  8. Salience of the Value Question. At least in theory, a value can be placed on the outcome of an inquiry, and all inquiry is directed toward the discovery of something worthwhile and useful. In what we are calling evaluation, it is usually quite clear that some question of value is being addressed. Indeed, in evaluation, value questions are the sine qua non, and usually they determine what information is sought, whereas in research they are not the direct object. This is not to say that value questions are not germane in research, however. The goals may be the same in both research and evaluation--placing values on alternative explanations or hypotheses--but the roles, the use to which information may be put, may be quite different. For example, the acquisition of knowledge and the improvement of self-concept are clearly value-laden. The value question in the derivation of a new oblique transformation technique in factor analysis is not so obvious, but it is there nonetheless. Our purpose in raising this point is to call attention to the fact that, with respect to assessing the value of things, the difference between research and evaluation is one of degree, not of kind.
  9. Investigative Techniques. Many have recently expressed the opinion that research and evaluation should employ different techniques for gathering and processing data, that the methods appropriate to research--such as comparative experimental design--are not appropriate to evaluation, or that, with respect to techniques of empirical inquiry, evaluation is a thing apart. [Note 8] These arguments have been reviewed and answered elsewhere (Glass, 1968). We shall not cover old ground here; we wish simply to note that while there may be legitimate differences between research and evaluation methods (Worthen, 1968), we see far more similarities than differences between research and evaluation with regard to the techniques by which empirical evidence is collected and judged to be sound. As Stake and Denny have indicated: "The distinction between research and evaluation can be overstated as well as understated.... Researchers and evaluators work within the same inquiry paradigm.... [Training programs for] both must include skill development in general educational research methodology." [Note 9] Hemphill has expressed the same opinion: "The consequence of the differences between the proper function of evaluation studies and research studies is not to be found in differences in the subject interest or in the methods of inquiry of the researcher and of the evaluator." [Note 10] The notion that evaluation is really only sloppy research has a low incidence in writing but a high incidence in conversation--usually among researchers but also among some evaluators. This bit of slander arises from misconstruing the concept of "experimental control." One form of experimental control is achieved by the randomization of extraneous influences in a comparative experiment so that the effect of a possibly gross intervention (or treatment) on a dependent variable can be observed. Such control can be achieved either in the laboratory or in the field; achieving it is a simple matter of designing an internally valid plan for assigning experimental units to treatment conditions (a minimal illustration of such an assignment appears after this list). Basic research has no proprietary rights to such experimental control; it can be attained in the comparative study of two reinforcement schedules as well as in the comparative study of two curricula. The second form of control concerns the ability of the experimenter to probe the complex of conditions he creates when he intervenes to set up an "independent variable" and to determine which critical element in a swarm of elements is fundamental in the causal relationship between the independent and dependent variables. Control of this type occupies the greater part of the efforts of the researcher to gain understanding; however, it is properly of little concern to the evaluator. It is enough for the evaluator to know that something attendant upon the installation of Curriculum A (and not an extraneous, "uncontrolled" influence unrelated to the curriculum) is responsible for the valued outcome. To give a more definite answer about what that something is would carry evaluation into analytical research. Analytical research on the non-generalizable phenomena of evaluation is seldom worth the expense. It is only in this sense that evaluation (in the abstract) can be sloppy.
  10. Criteria for Judging the Activity. The two most important criteria for judging the adequacy of research are internal validity (to what extent the results of the study are unequivocal and not confounded with extraneous or systematic error variance) and external validity (to what extent the results can be generalized to other units--subjects, classrooms, etc.--with characteristics similar to those used in the study). If one were forced to choose the two most important of the several criteria that might be used for judging the adequacy of evaluation, they would probably be isomorphism (to what extent the information obtained is isomorphic with the reality-based information desired) and credibility (to what extent the information is viewed as believable by clients who need to use the information).
  11. Disciplinary Base. The call to make educational research multidisciplinary is good advice for the research community as a whole; it is doubtful that the individual researcher is well advised, however, to attack his area of interest simultaneously from several different disciplinary bases. Some researchers can fruitfully work in the cracks between disciplines, but most will find it challenge enough to deal with the problems of education from the perspective of one discipline at a time. The specialization required to master even a small corner of a discipline works against the wish to make Leonardos of all educational researchers. That the educational researcher can afford to pursue inquiry within one paradigm and the evaluator cannot is one of many consequences of the autonomy of inquiry. When one is free to define his own problems for solution (as the researcher is), he seldom asks a question that takes him outside of the discipline in which he was trained. Psychologists pose questions that can be solved by the methods of psychology, as do sociologists, economists, and other scientists, each to his own. The seeds of the answer to a research question are planted with the question. The curriculum evaluator enjoys less freedom in the definition of the questions he must answer. Hence, the answers are not as likely to be found by use of a stereotyped methodology. Typically, then, the evaluator finds it necessary to employ a wider range of inquiry perspectives and techniques to deal with questions that do not have predestined answers. [Note 11]
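
To make concrete the first form of experimental control described in point 9, the short sketch below (in Python, purely for illustration; the original paper contains no code) randomly assigns experimental units to treatment conditions. The unit names, the two "curricula," and the function itself are hypothetical--a minimal sketch of an internally valid assignment plan, not a procedure prescribed by the authors.

    import random

    def assign_to_treatments(units, treatments=("Curriculum A", "Curriculum B"), seed=None):
        """Randomly assign experimental units to treatment conditions.

        Randomization equates the groups, in expectation, on all extraneous
        influences, so that a difference on the dependent variable can be
        attributed to the treatments rather than to uncontrolled factors.
        """
        rng = random.Random(seed)   # a fixed seed makes the assignment reproducible
        shuffled = list(units)
        rng.shuffle(shuffled)
        k = len(treatments)
        # Deal the shuffled units out to the treatments like cards.
        return {t: shuffled[i::k] for i, t in enumerate(treatments)}

    # Hypothetical example: twenty classrooms split between two curricula.
    classrooms = [f"classroom_{i:02d}" for i in range(1, 21)]
    for treatment, members in assign_to_treatments(classrooms, seed=7).items():
        print(treatment, members)

Nothing in the sketch depends on whether the treatments are two reinforcement schedules or two curricula, which is precisely the point made in point 9: basic research has no proprietary rights to this form of control.
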
Conclusion

To state as succinctly as possible the views that have led us to take the positions presented previously:

  1. Before adequate evaluation theory can be developed, it is necessary to differentiate clearly between evaluation and research.
  2. Distinguishing evaluation from research at an abstract level has direct implications for conducting evaluations and training evaluation personnel.
  3. Evaluation must include not only the collection and reporting of information that can be used for evaluative purposes, but also the actual judgment of worth.
  4. Evaluation and research are both disciplined inquiry activities that depend heavily on empirical inquiry. Evaluation draws more on philosophical inquiry and less on historical inquiry than does research.
  5. There are a variety of types of evaluation and research that differ from one another and increase the difficulty of distinguishing research from evaluation.
  6. More is to be gained at this time from drawing distinctions between evaluation and research than from emphasizing their communality. There are at least eleven characteristics of inquiry that distinguish evaluation from research. Each is important to the curriculum evaluator in guiding his approach to the evaluation of school programs and curriculum materials.
NOTES

1. Scriven (1967, p. 40).

2. See, for example, definitions of evaluation provided (either explicitly or implicitly) in the following writings: Provus (1969), Scriven (1967), Stake (1967), Stufflebeam et al. (1970), and Tyler (1949).

3. Guba and Clark (n.d.) argue that the basic-applied distinction is dysfunctional and that the two kinds of activity do not rightfully belong on the same continuum. The authors admit to problems with this, as with any, classification scheme. However, attempts to replace such accepted distinctions with yet another classification system seem destined to meet with little more success than have attempts to discard the descriptors "Democrat" and "Republican" because of the wide variance within the political parties thus identified.

4. National Science Foundation (1960, p. 5).

5. Cronbach and Suppes (1969, pp. 20-21).

6. Kaplan (1964, pp. 3-6).

7. Since this conceptualization of inquiry was first presented (Glass, 1969), the authors have found some interesting corroboration of an authoritative sort. Definition 3(a) of "theory" in Webster's Third New International Dictionary is tripartite: "The coherent set of hypothetical, conceptual, and pragmatic principles forming the general frame of reference for a field of inquiry (as for deducing principles, formulating hypotheses for testing, undertaking actions)." The three inquiry activities in Webster's definition correspond closely to the three inquiry properties in Figure 1.

8. See, for example, Carroll (1965), Cronbach (1963), Guba and Stufflebeam (1968), and Stufflebeam (1968).

9. Stake and Denny (1969, p. 374).

10. Hemphill (1969, p. 220).

11. See Hastings (1969). The discussion has taken on a utopian tone. In reality, much evaluation is becoming as stereotyped as most research. Evaluators often take part in asking the questions they will ultimately answer, and they are all too prone to generate questions out of a particular "evaluation model" (e.g., the Stake model, the CIPP model) rather than a "discipline." Stereotyping of method threatens evaluation as it threatens research (Glass, 1969).

REFERENCES

Carroll, J. B. (1965). School Learning Over the Long Haul. Learning and the Educational Process. Edited by J. D. Krumboltz. Chicago: Rand McNally.

Cronbach, L. J. (1963). Course improvement through evaluation. Teachers College Record, 64, 672-683.

Cronbach, L. J. and Suppes, P. (1969). Research for Tomorrow's Schools: Disciplined Inquiry for Education. New York: Macmillan.

Galfo, A. J. and Miller, E. (1970). Interpreting Educational Research. 2nd Ed. Dubuque, Iowa: Wm. C. Brown Co.

Glass, G. V. (1967). Reflections on Bloom's "Toward a Theory of Testing Which Includes Measurement-Evaluation-Assessment." Research Paper No. 8. Boulder, Colorado: Laboratory of Educational Research, University of Colorado.

Glass, G. V. (1968). Some Observations on Training Educational Researchers. Research Paper No. 22. Boulder, Colorado: Laboratory of Educational Research, University of Colorado.

Glass, G. V. (1969). The Growth of Evaluation Methodology. Research Paper No. 27. Boulder, Colorado: Laboratory of Educational Research, University of Colorado.

Guba, E. G. and Stufflebeam, D. L. (1968). Evaluation: The Process of Stimulating, Aiding, and Abetting Insightful Action. An address delivered at the Second National Symposium for Professors of Educational Research, Boulder, Colorado, November 21, 1968.

Guba, E. G. and Clark, D. L. (n.d.). Types of Educational Research. Columbus, Ohio: The Ohio State University. (Mimeographed.)

Hastings, J. T. (1969). The kith and kin of educational measures. Journal of Educational Measurement, 6, 127-130.

Hemphill, J. K. (1969). The Relationship Between Research and Evaluation Studies. Educational Evaluation: New Roles, New Means. Edited by R. W. Tyler. The 68th Yearbook of the National Society for the Study of Education, Part II. Chicago, Ill.: National Society for the Study of Education.

Hillway, T. (1964). Introduction to Research. 2nd Ed. Boston: Houghton Mifflin.

Kaplan, A. (1964). The Conduct of Inquiry. San Francisco: Chandler.

Kerlinger, F. N. (1964). Foundations of Behavioral Research. New York: Holt, Rinehart & Winston.

National Science Foundation. (1960). Reviews of Data on Research and Development, No. 17. NSF-60-10.

Provus, M. (1969). Evaluation of Ongoing Programs in the Public Schools. Educational Evaluation: New Roles, New Means. Edited by R. W. Tyler. The 68th Yearbook of the National Society for the Study of Education, Part II. Chicago, Ill.: National Society for the Study of Education.

Scriven, M. (1958). Definitions, Explanations, and Theories. Minnesota Studies in the Philosophy of Science, Vol. 2. Edited by H. Feigl, M. Scriven, and G. Maxwell. Minneapolis, Minnesota: University of Minnesota Press.

Scriven, M. (1967). The Methodology of Evaluation. Perspectives of Curriculum Evaluation. Edited by R. E. Stake. Chicago: Rand McNally.

Stake, R. E. (1967). Educational Information and the Ever-Normal Granary. Research Paper No. 6. Boulder, Colorado: Laboratory of Educational Research, University of Colorado.

Stake, R. E. and Denny, T. (1969). Needed Concepts and Techniques for Utilizing More Fully the Potential of Evaluation. Educational Evaluation: New Roles, New Means. Edited by R. W. Tyler. The 68th Yearbook of the National Society for the Study of Education, Part II. Chicago, Ill.: National Society for the Study of Education.

Stake, R. E. (1970). Objectives, Priorities, and Other Judgment Data. Review of Educational Research, 40(2), 181-212.

Stufflebeam, D. L. (1968). Evaluation as Enlightenment for Decision-Making. Columbus, Ohio: Evaluation Center, Ohio State University.

Stufflebeam, D. L. et al. (1970). Educational Evaluation and Decision-Making. Columbus, Ohio: Evaluation Center, Ohio State University.

Tukey, J. W. (1960). Conclusions vs. decisions. Technometrics, 2, 423-433.

Tyler, R. W. (1949). Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press.

Worthen, B. R. (1968). Toward a taxonomy of evaluation designs. Educational Technology, 8(15), 3-9.

Worthen, B. R. and Gagne, R. M. (1969). The Development of a Classification System for Functions and Skills Required of Research and Research-Related Personnel in Education. Technical Paper No. 1. Boulder, Colorado: AERA Task Force on Research Training.
