Wednesday, October 12, 2022

2011

Evaluating Teachers with Students' Test Scores

Gene V Glass

Introduction
In 1981, Jason Millman presented a chapter entitled "Student Achievement as a Measure of Teacher Competence" in the first edition of the Handbook of Teacher Evaluation. His approach was analytic: he broke down the global problem into its constituent parts; he defined terms, illustrated concepts, and generally equipped the reader with an understanding of the elements of an evaluation process that would seek evidence of teachers' accomplishments in the learning of their students. I could not improve on Millman's analysis and I will not attempt to do so; the reader who desires the analytic account of the problem cannot do better than to read this chapter's counterpart in the first edition.

My intent is different. I wish to present a synthetic account of how the elements Millman identified are likely to fit together into an actual system for evaluating teachers. The whole is not merely the sum of its parts, as the shop-worn tag goes; but more importantly, the machine that is meant to do the world's work must be observed in the actual world, where designs meet reality. When technical desiderata meet practical and political limitations, a truth of its own special sort is created. I have attempted to describe how a system of teacher evaluation that makes genuine use of student achievement data will work when attempted in contemporary schools. The issue of "pupil performance in teacher evaluation" cannot be discussed apart from the many other pressures that shape a personnel system of assessment and rewards, nor judged apart from how it exists in real places. The validity of such a process of evaluating teachers, its economy, its power to engender belief and win acceptance, all depend on how it fits with many other realities of the teaching profession.

What must now be addressed is how this notion gets applied in a context as complicated as education. How is the idea transformed as it moves from the statistician's mind to the real world? How does it fare when the tradeoffs and balances are struck? Is the concept of evaluating teachers by student progress trusted? Is it more a symbol of certain values that the community requires be honored than a reality of school personnel practices? Presenting hypothetical possibilities will not answer these questions; rather, they are answered by portraying complex realities. The description to be presented is based on the actual experiences of a half-dozen school districts in which students' achievement test scores were clearly tied to the evaluation of teachers for pay raises. Persons who viewed the process from many different sides were interviewed for the purpose of assembling this narrative; in addition, documents and publications about the programs were read and analyzed. Facts from the different sites merged naturally into a single coherent picture of what probably happens when teachers are evaluated and rewarded on the basis of student achievement measures. Consequently, I have presented a single composite of the half-dozen sites. The name of the district is fictitious.

The Composite Sketch

The Montview Unified School District initiated a teacher career ladder program in 1975. The administrative staff had reviewed the merit pay programs of many school districts around the country. They read dozens of plans that referred vaguely to teachers being evaluated by pupil achievement; but never could they see where this promise was kept in a direct and systematic way for an entire district. They came to regard such rhetoric as mere lip-service paid to pupil growth, and they vowed to do better. The career ladder program they designed provided financial bonuses in amounts up to $2,500 for teachers who achieved the goals identified by the Superintendent and his staff, who designed the program without the advice or consent of the local NEA and AFT affiliates. Even this simple principle of incentive and reward did not go unchallenged by community leaders. Critics argued that teachers were adequately compensated and that a system of bonuses could cause teachers to elevate personal gain above pupil well-being; proponents of the career ladder idea argued that pupil well-being could be embodied in the criteria for merit bonuses, so that no conflict of values need arise. Moderates argued that teachers should receive bonuses in the form of fees for professional workshops instead of taking them as income. The Montview school administration backed this proposal briefly before its quick death at the hands of an aroused teachers organization.

In the end, an ambitious superintendent's desire to make a reputation combined with a growing sense that American education was facing a crisis of international proportions overcame any doubts that might have remained.

Setting the Standard for Merit Reward. Bonuses for teachers who achieved pupil growth goals were a big part of the Montview Teacher Career Ladder (TCL) program. Though it was only one of several criteria for earning extra money, assuring pupil mastery of basic skills was politically crucial for winning public approval of the TCL program. In the Spring of 1974, the Superintendent revealed the details of the TCL plan:
"Teachers will receive bonuses for a) improving their own attendance in the classroom; b) meeting goals for student academic growth; c) teaching a subject in which there is a critical shortage of teachers (math, science, and special education); d) teaching in a school with a high percentage of minority pupils. Teachers would have to apply to be reviewed for TCL bonuses, with applications due by June 15th of the summer following the year under review. In this way, teachers who might not be in line for a merit bonus could avoid the embarrassment of being publicly turned down; if asked by a colleague, they need only say that they forgot or did not have the time to apply."

A 50-page manual of procedures, released in June, set out the details of the TCL bonus program. Teachers serving in schools with more than 50% minority enrollment would receive an additional $500 in pay for the 1975-76 school year. Teachers of math, science (at the junior high and high school levels) and special education would receive an additional $500 bonus. The teacher attendance criterion was the focus of particular attention. Normally teachers were eligible for ten sick leave days per school year. Substitute teachers cost the district about $50 a day. Teachers who missed fewer than five days during the school year would qualify for a financial bonus; for each day under five total absences, the teacher was to receive $100. Hence, a teacher with no absences during the year would receive $500, one absence was worth $400, two absences earned $300, and so on.

The Pupil Achievement Growth Standard. The regulations governing the pupil growth bonus were the most complex. At its first meeting, the administrative staff proposed a quota system for allocating bonuses to teachers for outstanding pupil progress. By whatever means ultimately chosen, teachers would be ranked from highest to lowest class-average achievement gain during the school year, and the top half of the teachers would receive a $500 bonus. Nothing else would truly motivate teachers to excel, it was reasoned. Protests from the teachers were immediate and loud. They promised to reject any plan that even smacked of a quota system. The recent experiences of a neighboring state were invoked to support the teachers' case; when the Legislature of Tennessee enacted a career ladder program for teachers, it stipulated in the act that rewards must not be allocated on a quota basis; all who qualify must be rewarded, and teachers must compete against an external standard instead of being pitted against each other. The Montview schools administrative staff had lost round one. A week later they presented their second attempt at a student progress bonus plan.

First, it was recognized that academic progress in some areas of the school curriculum was impossible to measure in any practical way. A number could be put on reading achievement for a class—though many would dispute its meaning—but measuring progress in art, physical education and the like was an entirely different matter. Thus an important element of the TCL plan from the beginning was the decision to base the $500 bonus not on the progress of an individual teacher's class but instead on the progress of the pupils in the entire school. Moreover, it was clear to the teachers organization that academic progress in basic skills was less relevant at the secondary school level than at the elementary level. Before school began in the fall, the regulations were revised: "All teachers in a school in which 75% of the teachers meet their pupil growth goal will receive a $500 bonus for the year. The pupil growth goal will be defined in terms of standardized test achievement in reading and math at the elementary school level, and in terms of criterion-referenced test mastery at the secondary school level." The superintendent's staff felt that the negotiated regulation was the teachers' way of reducing competition, though they were willing to discuss it with the teachers as if it were a technical problem of quantifying difficult-to-measure goals.

The Superintendent and his staff saw clearly that the teacher attendance criterion could potentially save the district tens of thousands of dollars in substitute teacher fees. Indeed the criterion was designed to do just that, though publicly one heard far more about the need to provide continuity in the instructional program than about the cost savings from reducing substitute teacher use. So at the last stage in drafting the regulations for the TCL program, they reinforced the importance of the attendance criterion and made the pupil growth criterion dependent on it: only those teachers who met the attendance criterion of five or fewer days absent would receive the pupil growth bonus. The TCL bonus system could reward a teacher with as much as $2,000 additional pay in a school year. For example, a teacher of math with no absences, serving in a ghetto school assignment where the school met its student growth goal, would receive the maximum $2,000 bonus. An elementary school teacher in a non-minority school who missed seven days of work would receive no bonus regardless of the academic progress of the school or the teacher's class.
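The bonus rules, as finally drafted, amount to a short calculation. The sketch below restates them in Python; the function name and its arguments are inventions for illustration, not anything found in the Montview manual, but the dollar amounts and eligibility conditions follow the rules described above.

    def tcl_bonus(days_absent, minority_school, shortage_subject, school_met_growth_goal):
        """Illustrative annual TCL bonus (in dollars) under the rules described above."""
        bonus = 0
        if minority_school:                        # school with more than 50% minority enrollment
            bonus += 500
        if shortage_subject:                       # math, science, or special education
            bonus += 500
        if days_absent < 5:                        # $100 for each day of absence under five
            bonus += 100 * (5 - days_absent)
        if school_met_growth_goal and days_absent <= 5:
            bonus += 500                           # growth bonus contingent on attendance
        return bonus

    print(tcl_bonus(0, True, True, True))    # 2000: the maximum case described above
    print(tcl_bonus(7, False, False, True))  # 0: seven absences forfeit everything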

Refining the Pupil Growth Standard. The administrative staff set to work in the fall to define the student growth criteria that would qualify schools for teacher bonuses. They recognized immediately that teachers work in dissimilar settings. Some account had to be taken of pupil ability, home circumstances, past school experiences and the like in setting a pupil growth criterion. Any criterion such as "The pupils in your class will average above the 50th percentile on the Metropolitan" would be seen as unfair to teachers of large classes of slow-track pupils or of pupils in transient neighborhoods where only half the students present at the end of the year had been there when the school year began. The staff flirted temporarily with the notion that pupils in each grade could be randomly assigned to teachers, thus assuring that each class started the year somewhere near equality on achievement, learning potential and all the rest. This idea was rejected without serious consideration when it was realized that the differences between schools were surely as large as any differences within them and no one could imagine randomizing across schools, and when the staff remembered that parents' preferences for particular teachers were very strong.

The first serious suggestion for a criterion was made for the elementary grades: the school must show a gain on standardized tests of one grade equivalent year to qualify for the teacher bonus. The district research and evaluation office did a quick check of how many elementary schools had met this criterion in the two previous years, and discovered that 60% of the schools showed gains of a year or greater. When the teachers organization learned of these data, they protested that the criterion was unrealistically severe. The administration backed off. A consultant from the local university advised the district to adopt a method of linear regression residuals to measure teachers' contributions to pupils' academic growth. The consultant showed how entry-level achievement, ability and socioeconomic level could be taken into account and used to adjust each pupil's year-end achievement score so that it would reflect growth from an equal starting point. The research and evaluation department's resident statistician protested that the pretest measures of ability and achievement were less than completely reliable, that motivation wasn't captured accurately in any of these correction variables, and that when all the adjusting was done, the administration would still face the problem of deciding how large an adjusted gain must be to qualify for a bonus. The teachers organization rejected the consultant's advice as too esoteric to explain to the public or to teachers. Finally, several persons suggested almost simultaneously that if the school building's standardized achievement test grade equivalent gain were calculated for the previous year and two months added, a criterion that was clear and fair would result. The objection that the addition should be two months in richer schools and one month in poorer schools was quickly overridden in the interest of arriving at a solution.
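For readers to whom the consultant's proposal is unfamiliar, the regression-residual idea can be sketched in a few lines. The data, variable names and two-teacher grouping below are hypothetical, and the model is deliberately minimal; it merely illustrates adjusting year-end scores for entry-level achievement, ability and socioeconomic status and averaging the residuals by teacher.

    import numpy as np

    # Hypothetical per-pupil records: teacher, pretest score, ability, SES index, posttest score.
    teacher  = np.array(["A", "A", "A", "B", "B", "B"])
    pretest  = np.array([38., 52., 45., 61., 47., 55.])
    ability  = np.array([95., 110., 102., 118., 99., 108.])
    ses      = np.array([0.2, 0.6, 0.4, 0.8, 0.3, 0.7])
    posttest = np.array([44., 60., 50., 70., 51., 63.])

    # Regress year-end achievement on the entry-level covariates for all pupils.
    X = np.column_stack([np.ones_like(pretest), pretest, ability, ses])
    coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    residuals = posttest - X @ coef          # actual minus predicted posttest

    # A teacher's index is the mean residual of her pupils: positive means the class gained
    # more than pupils with similar starting points elsewhere in the district.
    for t in np.unique(teacher):
        print(t, round(float(residuals[teacher == t].mean()), 2))

The statistician's objections apply with full force to any such sketch: unreliable pretests, unmeasured motivation, and the unanswered question of how large an adjusted residual must be to earn a bonus.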

At the secondary school level, the derivation of a pupil progress criterion was less straightforward. Few of the teachers in a school above grade seven are directly involved in teaching basic academic skills. Standardized achievement tests would not solve the problem. The administration recommended that each teacher develop an end-of-year criterion-referenced test and set a criterion of success in terms of the percent of items answered correctly. If 80% of the teachers in the secondary school met their criteria, all the teachers in the school would qualify for the merit bonus.

The First Year: Everybody Wins. The criteria having been set, the TCL program ran for its first year without any serious complications. By June 15th of 1976, the data were analyzed and the bonuses paid. One-third of the teachers received $1,000 in bonuses for teaching subjects of critical need in minority schools. Of the 600 elementary and secondary teachers in the district, 85% reached the pupil growth criterion. Only three elementary schools did not meet the "Last year's gain + 2 mos" criterion for pupil growth. The teachers in one of the schools protested that the pupils had learned about the bonus feature and purposely bombed the posttest to make the teachers look bad. A second complaint was registered: some classes had too many bright pupils and enjoyed an unfair advantage. The research and evaluation office agreed to throw out the bottom 5% and top 5% of scores in any school and recalculate the gain. The school then achieved the criterion and so did one of the other elementary schools. This "throw out 10%" rule became a standard part of the measuring procedure in later years to guard against sabotage and improve the appearance of comparability. Against the recalculated standard, 97% of the teachers qualified for the student growth bonus; only 3% of the teachers worked in a school that did not meet the criterion. Virtually all teachers thus qualified, on the basis of student progress, for a $500 bonus, but about 70 of them lost it because they had been absent from work more than five days during the year. As a result, about 85% of the elementary and secondary teachers received $1,000 bonuses for attendance and pupil growth.
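The "throw out 10%" rule is, in effect, a trimmed mean of the school's gains. A minimal sketch follows; the function name and the pupil gains are invented for illustration.

    import numpy as np

    def trimmed_school_gain(gains, trim=0.05):
        """Mean gain after dropping the lowest and highest `trim` fraction of
        pupil scores (the "throw out 10%" rule)."""
        gains = np.sort(np.asarray(gains, dtype=float))
        k = int(len(gains) * trim)               # number of pupils dropped from each tail
        return gains[k:len(gains) - k].mean() if k else gains.mean()

    # Hypothetical pupil gains in grade equivalent months, including two pupils
    # who "bombed" the posttest.
    gains = [9, 10, 11, 12, 8, 10, 11, -14, -12, 13, 10, 9, 11, 12, 10, 9, 10, 11, 12, 10]
    print(round(float(np.mean(gains)), 1))               # 8.1: untrimmed mean, dragged down by sabotage
    print(round(float(trimmed_school_gain(gains)), 1))   # 9.1: trimmed mean used against the criterion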

Only one teacher in the district refused to volunteer for evaluation under the TCL program. Regina Tyle taught English Composition at the Junior and Senior level. Tyle was active in the Montview Teachers Association, and her reputation as an excellent teacher was well known. Assistant Superintendent for Instruction Ralph Marshall called her in when it became apparent that Tyle was not going along with the program.

Marshall: "Why aren't you in it, Regina; you're one of the best? It's a sure $1,000."

Tyle: "I don't believe in it. Where's a test for creativity? Where's the test that gets at what I'm trying to give these kids?"

Marshall: "Look, just make up something yourself. It doesn't have to be any big deal; we just want something we can point to and say, 'Yes, she met the criterion.'"

Tyle refused, on principle. Within 18 months, Tyle assumed the presidency of the state teachers association and left the district.

The first year of operation of the TCL program was regarded as a success. Standardized test scores for the Montview district were up more than two months above the level of previous years in the elementary grades; at the secondary grades, no difference was apparent in test performance, but administrators reported a new seriousness of purpose among teachers and more business-like classrooms. One set of figures permitted no dispute. The average number of days teachers missed work in 1974-75, the year before the TCL program, was 12.5; during the initial year of the TCL program, the average teacher missed only three days. In 1974-75, the Montview Unified School District paid out $375,000 in substitute teacher fees for absent teachers. In 1975-76, under the TCL bonus plan, the district paid $100,000 in attendance bonus money to the teachers (15% of the teachers missed more than five days and failed to qualify for the bonus) and $90,000 in substitute teacher costs. Indisputably, the TCL attendance bonus plan saved the district $185,000 in personnel costs. The administration hailed the TCL program as a tremendous success. Superintendent Stevens's stock rose; his phone began to ring with inquiries from around the country.
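The attendance arithmetic the administration pointed to can be restated in a few lines, using only the figures reported above.

    # First-year attendance economics, in dollars, using the figures reported above.
    substitutes_1974_75 = 375_000   # substitute fees the year before the TCL program
    attendance_bonuses  = 100_000   # attendance bonus payout in 1975-76
    substitutes_1975_76 =  90_000   # substitute fees in 1975-76 under the TCL plan

    savings = substitutes_1974_75 - (attendance_bonuses + substitutes_1975_76)
    print(savings)                  # 185000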

The Second Year: New Efforts to Reach the Standard. The second year of the TCL program began amid a growing realization by the administration and the school board that the pupil growth feature was costing the district $255,000 per year in bonus money and yet was nearly perfectly correlated with the attendance bonus. Virtually all teachers qualified for the pupil growth bonus; it was denied only to those who missed too many work days. The $255,000 for the pupil growth/attendance bonus, combined with the $190,000 in attendance bonuses and substitute teacher fees, put the district's outlay at $445,000, which was $70,000 more than the district had spent for substitute teachers alone in the year before the TCL program began. The TCL bonus system started to come under heavy fire at board meetings in the early fall of 1976-77. Having announced the bonus program with great publicity and having declared it a success, the administration and the board could not dissolve it without embarrassment, and besides, the test scores were up two months above past levels, at least at the elementary level. Nonetheless, the deliberations were tense as the board undertook its major item of business on the fall agenda, the revamping of the teacher salary schedule. Three months' deliberation and argument produced a new salary schedule that reflected a slightly reduced rate of growth in teacher salaries for the Montview district compared with the recent past and with neighboring districts, which had begun to complain to Stevens and the Board that the historically higher Montview schedule was placing pressure on them to raise their own schedules. The clinching argument for decelerating the growth rate in the salary schedule was the existence of the TCL bonus program, which was adding about $1,000 to the average teacher's salary each year. In effect, then, by the middle of the second year of the TCL merit program, the Montview Unified School District had merely designated a portion of teachers' salaries as due to meritorious attendance on the job and a second portion as due to meritorious enhancement of pupil academic progress. It would have been politically impossible to sell the public on the notion that teachers should receive a $1,000 bonus for merely showing up to do a job for which they were already being paid; so $500 was for attendance and $500 was for improved pupil progress. Thus pupil achievement came to serve as a proxy for improved teacher attendance.

By the middle of the second year, teachers began to see the difficulties of continuing to meet the pupil growth criterion. Teachers at the secondary level seemed little concerned. At meetings between secondary teachers and curriculum specialists, the word was given out that last year's criterion-referenced mastery goals would have to improve by at least 2% for teachers to qualify for the bonus. The teachers insisted that they be given the opportunity to make minor revisions in their tests to improve reliability and validity. This eminently reasonable request was granted and nothing more was heard of the matter. The data reported by secondary teachers in June showed that all secondary schools had once again reached the pupil academic progress goal.

At the elementary level, things were different. By the rules specified at the start of the TCL program, an elementary school that had made an 8-month grade equivalent gain in 1975-76 would have to make a 10-month gain in 1976-77 to earn the bonus. The "throw out 10%" procedure had already been applied to the '75-76 data, so that would not help improve the picture for '76-77. An inquiry from the teachers to the administration about switching from the Metropolitan to the Iowa Test was met with the assurance that any such request would be rejected. Several teachers remarked that their '75-76 scores had been unfairly depressed by children transferring into their classes after Christmas. They requested that for '76-77 no scores be included for pupils who had spent less than one full semester in their school; the administration granted the request. Uncertain that this new proviso alone would produce a significant increase in the '76-77 gain over the '75-76 gain, the teachers set to work by January to ensure that their pupils would be ready for the Metropolitan on its April visit. Over the next three and one-half months, test-like worksheets in language arts and math made increasingly frequent appearances in the Montview elementary school classes. At grades two and three, a series of units on addition and subtraction was hurriedly rewritten to change the vertical display of math facts to horizontal, since the Metropolitan employed the latter format (perhaps as a means of reducing paper costs). By April, the pupils were ready. As in the first year of the TCL program, teachers administered the standardized tests in their own classes. Many were helped by student teachers from the local university. Some student teachers asked their professors at the university what they thought of the practice of "talking the kids through the Metro." Everyone in the class understood what this meant. When asked by the professor whether they would do such a thing if their salary depended on it, no one demurred.

The June data for the elementary schools were encouraging. The average gain for the district was 11.3 months in grade equivalent units. Only one elementary school failed to make the 10-month growth criterion, but since it was a high minority enrollment school, all its teachers received a $500 bonus for service in a minority school. In the second year of the TCL program, no teacher failed to receive at least $500 in bonus money, and the average was $1,100; 90% of the teachers received $1,000 for the attendance and pupil growth bonuses.

Years Three and Beyond. Year three of the TCL program ran without any significant changes. The elementary pupil growth criterion had risen to 12 months, but all elementary schools reached it. In year four, the criterion was 1.2 years, and the teachers began to complain that the criterion was becoming unrealistic. The school board heard the teachers organization's appeal for relief from the increasingly severe pupil progress criterion, but the steady outlay of over a half million dollars per year for TCL bonuses and growing pressure on the board to bring the basic district salary schedule more in line with those of neighboring districts forced it to hold the line on the "Last year's growth + 2 mos" criterion. In year five, the elementary schools were beginning to bump against the ceiling of the Metropolitan achievement test. Eight of the twenty elementary schools failed to make the pupil growth standard. The average TCL bonus in the district dropped to $700. The teachers began to complain vigorously.
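The arithmetic behind the teachers' complaint is worth making explicit: because each year's target is the previous year's gain plus two months, any school that keeps meeting its target must post ever-larger gains until it collides with the ceiling of the test. The toy illustration below assumes, purely for the sake of the example, that a school beats its criterion by one month each year.

    # The "last year's growth + 2 mos" ratchet (units: grade equivalent months).
    # Assumes, for illustration only, that the school beats its criterion by one month each year.
    criterion = 10.0                       # roughly the year-two criterion described above
    for year in range(2, 8):
        achieved = criterion + 1.0         # the school's gain this year
        print(f"year {year}: criterion {criterion:.0f}, gain {achieved:.0f}")
        criterion = achieved + 2.0         # next year's target ratchets upward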

Before the start of the 1980-81 school year, the teachers organization secured an agreement with the administration that teachers could choose which of the various levels (Primary, Intermediate I, Intermediate II, etc.) of the Metropolitan would be given in their classes. Most of the teachers elected to have their pupils tested on the level of the test one step above the one they had administered for the previous five years. The typical criterion in year six was 14 months in grade equivalent units. In June of 1981, all but one elementary school once again made the pupil growth criterion. But the success was short-lived. The criterion of 16 months in 1982 was reached by only two-thirds of the schools, and in 1983 only a third of the schools exceeded the growth standard of 15 grade equivalent months. Academic excellence had topped out in the Montview Unified School District, and elementary teachers' bonuses, now obviously paid for low absences, dropped to an average of $600.

Administrative staff and principals watched as the amount spent on bonuses in the TCL program shrank from about $600,000 in 1976 to about $350,000 in 1983. Teacher attendance had reached the level of about three absences per year in the first year of the program and remained at that level. But the number of teachers rewarded for outstanding pupil growth dwindled each year under the increasing pressure of producing greater growth rates year in and year out. During the summer of 1983, the administrative staff, in collaboration with the building principals, designed a School Collaborative Productivity Plan that entailed joint goal setting by teachers and the building principal in such areas as teacher attendance, pupil attendance, pupil academic growth and energy conservation. The SCPP quickly superseded the TCL, since it subsumed the major elements of the TCL while adding incentives for principals. Under the SCPP, principals could earn bonuses up to a maximum of $9,000; teachers could earn a maximum bonus of $1,000. At the end of year one of the SCPP, the calculations showed the average teacher receiving bonuses of $660 and the average principal earning bonuses of $8,300. The net cost of the SCPP program in its first year of operation was about $600,000. Bad times lay ahead for the principal bonus for student growth. Eventually two principals were detected altering answer sheets before they were sent for computer scoring; the principals were given classroom teaching assignments the next year, and permanent replacements were installed in their administrative positions. The testing company selling scoring services to Montview devised a technique for automatically detecting unusually high rates of wrong-to-right erasures. The Testing and Evaluation Office felt this would solve the problem of cheating.
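The account does not describe the scoring company's technique, but erasure screens of the general kind alluded to typically flag answer sheets whose wrong-to-right erasure counts are extreme relative to the rest of the group. The sketch below is a hypothetical z-score screen, offered only to make the idea concrete; it is not the company's actual method.

    import numpy as np

    def flag_erasure_outliers(wrong_to_right_counts, z_cutoff=3.0):
        """Indices of answer sheets whose wrong-to-right erasure counts are
        unusually high relative to the group (a simple z-score screen)."""
        counts = np.asarray(wrong_to_right_counts, dtype=float)
        z = (counts - counts.mean()) / counts.std(ddof=1)
        return np.flatnonzero(z > z_cutoff)

    # Hypothetical erasure counts for twenty answer sheets from one school.
    counts = [1, 0, 2, 1, 0, 1, 3, 0, 1, 2, 1, 0, 14, 1, 2, 0, 1, 1, 0, 2]
    print(flag_erasure_outliers(counts))   # [12] -- the sheet with 14 erasures is flagged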

In the spring of 1984, the State Education Agency analyzed and reported the results of its statewide testing of elementary pupils on the Iowa Tests. The Montview Unified School District's elementary school program was placed on Probationary Status for scoring significantly below expectation.

The vague air of scandal that hung around the TCL program had become a political liability. Teachers spoke out more openly about its drawbacks. In the Spring of 1985, Superintendent Stevens left his position to assume the superintendency of a large school district out of state. His successor suspended the TCL program.

Some Reflections on Teacher Evaluation With Student Achievement Data

This description focused on systems of evaluation that related teacher performance to pupil achievement. The goal of the evaluation was to allocate rewards to teachers—pay raises. Such "high stakes" evaluation subjects the system to severe stress; it is little surprise, perhaps, that it crumbles under the pressure. Some persons imagine a more modest role for teacher evaluation using student achievement. Perhaps student progress data could be used to diagnose and correct weaknesses in teaching style or procedures. I have not discussed this approach because I have not seen it in operation, and I doubt that it would work. While there is much to recommend evaluation of teaching through direct observation by supervisors and other experts, inferring errors of teaching procedure from students' test performance would be too dubious a practice to sustain any serious interest. Others, still holding on to hope for what seems a good idea, may argue that student achievement can still contribute to the evaluation of teacher "effectiveness," without specifying in what context, for what purpose or with what particular decisions in mind. Teachers should beware of such obfuscation. Student achievement data cannot tell teachers how to teach; they are not viewed as credible for distinguishing good teachers from bad ones; and data once gathered will tend to be used.

Few still believe that facts are simply facts. What we call facts are the result of long, complex processes of interpretation. General conclusions, even more so, are the product of a process of interpretation that goes far beyond "the data" and taps the beliefs and world view of those who adduce them. Some may read the above account and come away convinced that it speaks only to the urgent need to make all tests criterion-referenced or to step up procedures for detecting and punishing deception. Others may see nothing but reasons to abandon what they think was a bad idea. The reflections which follow are rooted in the author's experience with similar endeavors; they are consistent with his attitude toward most attempts to make persons and organizations accountable for performance on tests (Glass & Ellwein, 1986; Ellwein, Glass & Smith, 1988).

    Using student achievement data to evaluate teachers...
  • ...is nearly always undertaken at the level of a school (either all or none of the teachers in a school are rewarded equally) rather than at the level of individual teachers since a) no authoritative tests exist in most areas of the secondary school curriculum, nor for most special roles played by elementary teachers; and b) teachers reject the notion that they should compete with their colleagues for raises, privileges and perquisites;
  • ...is always combined with other criteria (such as absenteeism or extra work) which prove to be the real discriminators between who is rewarded and who is not;
  • ...is too susceptible to intentional distortion and manipulation to engender any confidence in the data; moreover teachers and others believe that no type of test nor any manner of statistical analysis can equate the difficulty of the teacher's task in the wide variety of circumstances in which they work;
  • ...elevates tests themselves to the level of curriculum goals, obscuring the distinction between learning and performing on tests;
  • ...is often a symbolic administrative act undertaken to reassure the lay public that student learning is valued and assiduously sought after.
References

Brandt, R.M., and B.M. Gansneder. (1987). Teacher incentive pay programs in Virginia. Charlottesville, Virginia: University of Virginia. 52 pp.

Ellett, C.D., and J.S. Garland. (1987). Teacher evaluation practices in our largest school districts: Are they measuring up to 'state of the art' systems? Journal of Personnel Evaluation in Education, 1, 69-92.

Ellwein, M.C., Glass, G.V & Smith, M.L. (1988). Standards of competence: Propositions on the nature of testing reforms. Educational Researcher, 168, 4-9.

Florio, D.H. (1986). Student tests for teacher evaluation: A critique. Educational Evaluation and Policy Analysis, 8, 45-60.

Glass, G.V. & Ellwein, M. C. (1986). Reform by raising test standards. Evaluation Comment, 10, 1-6.

Haertel, E. (1986). The valid use of student performance measures for teacher evaluation. Educational Evaluation and Policy Analysis, 8, 45-60.

Hatry, H.P. and J.M. Greiner. (1985). Issues and Case Studies in Teacher Incentive Plans. Washington, D.C.: The Urban Institute Press.

McNeil, J.D. (1981). Politics of Teacher Evaluation. In J. Millman (Ed.), Handbook of Teacher Evaluation (pp. 272-291). National Council on Measurement in Education. Beverly Hills, CA: SAGE Publications.

Miller, L. and E. Say. (1982). This bold incentive pay plan pits capitalism against teacher shortages. The American School Board Journal, 169, Sept., 24-25.

Millman, J. (1981). Student Performance as a Measure of Teacher Competence. In J. Millman (Ed.), Handbook of Teacher Evaluation (pp. 146-166). National Council on Measurement in Education. Beverly Hills, CA: SAGE Publications.

Moore, N. (1987). Regression residual approach to deriving a teacher effectiveness index using student achievement: four variations. Tempe, Arizona: College of Education, Arizona State University. 59 pp.

Ryan, J.M., and G.G. Rowzie. (1987). A Manual for the Student Achievement Component of the Teacher Incentive Program. Columbia, South Carolina: College of Education, University of South Carolina.

Say, E., and L. Miller. (1982). The second mile plan: Incentive pay for Houston teachers. Phi Delta Kappan, 64, 270-271.

Scriven, M. (1981). Summative Teacher Evaluation. In J. Millman (Ed.) Handbook of Teacher Evaluation (pp. 244-271). National Council on Measurement in Education. Beverly Hills, Calif.: SAGE Publications.

Scriven, M. (1987). Validity in personnel evaluation. Journal of Personnel Evaluation in Education, 1, 9-23.

Shannon, P. (1986). Merit pay and student test scores. Reading Research Quarterly, pp. 20-35. [approx. title]

Soar, R.S., D.M. Medley, and H. Coker. (1983). Teacher Evaluation: A critique of currently used methods. Phi Delta Kappan, 65, 239-246.

Wise, A.E., and L. Darling-Hammond. (1984). Teacher evaluation and teacher professionalism. Educational Leadership, 42, 28-31.

Wise, A.E., L. Darling-Hammond, M. McLaughlin, and H.T. Bernstein. (1984). Teacher Evaluation: A Study of Effective Practices. Santa Monica, CA: RAND Corp.
