Writings of Some General Interest, Not Readily Available Elsewhere. To receive a printable copy of an article, please email gvglass @ gmail.com.
Wednesday, November 30, 2022
Review of Hedges and Olkin, Statistical Methods for Meta-analysis
1986
Glass, G.V. (1986). Review of Hedges and Olkin, Statistical Methods for Meta-analysis. American Journal of Sociology, 92(1), 255-256.
Tuesday, November 29, 2022
Review of Wachter & Straf, The Future of Meta-analysis
1991
Glass, G.V (1991). Review of Wachter & Straf, The Future of Meta-analysis. Journal of the American Statistical Association, 86(416), 1141-1142.
Reliability of Residual Change Scores
1968
Glass, G.V. (1968). Response to Traub's "Note on the reliability of residual change scores." Journal of Educational Measurement, 5, 265-267.
Monday, November 28, 2022
Correlations with Products of Variables
1968
Glass, G.V (1968). Correlations with products of variables: derivations and implications for methodology. American Educational Research Journal, 5(4), 721-728.
Sunday, November 27, 2022
Wednesday, November 23, 2022
Algebraic Proof that the Sum of Squared Errors in Estimating Y from X via b1 and b0 is Minimal
1968
Stanley, J.C. & Glass, G.V. (1968). An algebraic proof that the sum of squared errors in estimating Y from X via b1 and b0 is minimal. The American Statistician, 23, 25-26. (Reprinted as Una demostración algebraica de que la suma de los cuadrados de los errores es mínima cuando se estima Y a partir de X vía b1 y b0. Estadística, 26, 775-777.)
Tuesday, November 22, 2022
A Posteriori Correction for Guessing in Recognitive Tasks
1964
Glass, G. V & McLean, L. D. (1964). A posteriori correction for guessing in recognitive tasks. The American Journal of Psychology, 77, 4, 664-667.
High-Stakes AIMS is a Brutal Test That Hurts the Students
2003
Gene V. Glass
The Republic's faith that the AIMS test will improve the effectiveness of our schools is based on the misconception that it measures a "specific set of skills that the state spelled out seven years ago." ("Let's use AIMS in the right way," Editorial, Friday.) The "state," in this case, was fewer than a dozen volunteers who met one weekend in October 1995 at a Scottsdale resort for an "Academic Summit" (so named by then Superintendent of Public Instruction Lisa Graham Keegan). The mathematics panel, for example, included no math teachers or math specialists; when asked why, one Arizona Department of Education official remarked: "We don't want to know what they know. We deliberately cut them out of the process."
The resulting "summit" was in fact a group of laypersons easily manipulated by Keegan's hired consultants and corporate partner "facilitators." The most ridiculous miscarriage to emerge from this weekend of frantic confusion was the math standards. Brought to the summit by Keegan's consultants, the math standards survived a months-long charade of public feedback that left them virtually unchanged.
The standards served as the blueprint for the AIMS math test. Much has been written about this test. A past president of the National Council of Teachers of Mathematics labeled the AIMS math standards a ridiculous basis for a high school exit exam. Suffice it to say that the AIMS math test is readily recognized to be an examination so difficult that only a minority of college graduates could pass it.
But what of the claim that the AIMS test represents what Arizona high school graduates need to know to function in the world of work? I recently completed a study in conjunction with Cheryl Edholm, formerly of the East Valley Institute of Technology. About 50 employers in Maricopa County - representing manufacturing, health services, retail, legal services and the like - were shown a representative sample of Grade 10 AIMS math questions and asked, "Do your employees use math like this? Do you require it of them?" A sample of their employees was asked, "Do you use the math tested by these questions?"
The answers were overwhelmingly "no."
Ninety percent of the employers reported that their employees did not use such skills in their daily work. An equal percentage reported that they do not require such skills, and the employees confirmed that AIMS math is not a part of their work lives.
These results are no surprise to anyone who has seen the test; it involves advanced algebra, trigonometry, analytic geometry, and probability and statistics. By contrast, the most difficult math question on the Texas high school exit exam asked the student to estimate the size of an envelope required to hold an 8.5- by 11-inch piece of paper folded in thirds (a diagram was provided for assistance).
Lisa Keegan discounts the findings of ASU researchers David Berliner and Audrey Amrein that high-stakes tests have been an ineffective reform because the "preliminary version of (their) study was published in material edited by Gene Glass, whose opposition to AIMS-like testing is fervent." ("'Study' of AIMS-like tests shows a low regard for children," My Turn, Jan. 3.)
The "material" in question is a peer-viewed scholarly journal that has in its decade of existence published research on both sides of the high-stakes testing controversy. Keegan's characterization of my opposition to such testing was accurate, however. It is a costly and brutal mistake that punishes students, demoralizes teachers and benefits no one.
The Republic has urged newly sworn-in Arizona Superintendent of Public Instruction Tom Horne to "take aggressive steps to make sure that all schools have adopted a curriculum that includes the skills tested by AIMS and that they're actually teaching it." This is bad advice. He should scrap the current version of AIMS and put as much distance as possible between himself and the failed efforts of his predecessors.
Gene V. Glass is a professor in the College of Education at Arizona State University and a member of the National Academy of Education. His e-mail address is glass@asu.edu.
2014
The Productivity of Public Charter Schools
Gene V Glass
Arizona State University
On July 26, 2014, the University of Arkansas Department of Education Reform released a report addressing the relative productivity of charter schools as compared to traditional public schools. The report is entitled The Productivity of Public Charter Schools. (Note 1) The report includes a main section in which the cost effectiveness of charter schools (CS) is compared to that of traditional public schools (TPS) for 21 states plus the District of Columbia, and three appendices in which various methods and data sources are described.
The authors of the report are as follows: 1) Patrick Wolf, Distinguished Professor of Education Policy and 21st Century Endowed Chair in School Choice in the Department of Education Reform at the University of Arkansas in Fayetteville; 2) Albert Cheng, a graduate student in the Department of Education Reform, referred to as a “Distinguished Doctoral Fellow”; 3) Meagan Batdorff, founder of Progressive EdGroup and a Teach for America alumna; 4) Larry Maloney, president of Aspire Consulting; 5) Jay F. May, founder and senior consultant for EduAnalytics; and 6) Sheree T. Speakman, founder and CEO of CIE Learning and former evaluation director for the Walton Family Foundation.
The report reviewed here is the third in a succession of reports on charter schools that have generally claimed that in relation to traditional public schools (TPS), charter schools (CS) are more effective in producing achievement (Note 2) and less costly per pupil. (Note 3) The current report attempts to combine these attributes into an analysis of the cost effectiveness of CS versus TPS. Results are reported in terms of National Assessment of Educational Progress (NAEP) scores per $1,000 expenditure at the state level for 21 states and the District of Columbia. The claim is made, based on these analyses, that charter schools, while spending far less per pupil than traditional public schools, generally produce achievement as good as or superior to that of traditional public schools.
The claims made in the report rest on shaky ground. The comparison of achievement scores between the CS and TPS sectors suffers from multiple sources of invalidity. The assessment of expenditures in the two sectors rests on questionable data. In the examination of cost effectiveness analyses, weaknesses in these two areas leave little evidence on which to base any valid conclusions. If one is calculating “bang for the buck,” what is left if neither the bang nor the buck can be believed?
II. Findings and Conclusions of the Report
Comparing NAEP achievement obtained in charter schools versus that in traditional public schools for 21 states and DC, the report concludes that the charter school sector delivers a weighted average of an additional 17 NAEP points per $1,000 invested in math, representing a productivity advantage of 40% for charters. The report goes on:
· In reading, the charter sector delivers an additional 16 NAEP points per $1000 invested, representing a productivity advantage of 41% for charters;
· Percentage differences in cost effectiveness for charters compared to that for TPS in terms of NAEP math score points per $1000 invested ranges from 7 percent (Hawaii) to 109 percent (Washington DC);
· Percentage differences in cost effectiveness for charters compared to that for TPS in terms of NAEP reading score points per $1000 invested ranges from 7 percent (Hawaii) to 122 percent (Washington DC). (p. 7)
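(For scale: an extra 17 NAEP points per $1,000 that amounts to a 40% advantage implies a TPS baseline of roughly 17 / 0.40 ≈ 42.5 NAEP math points per $1,000, consistent with the Illinois figure computed later in this review.)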
Curiously, in spite of the report’s executive summary touting the superior cost effectiveness of charter schools versus traditional public schools, the authors insert the following caveat in a later section:

. . . our cost effectiveness calculation using NAEP scores has important limitations. Most importantly, it is merely descriptive, not causal, because charter schools might be reporting higher NAEP scores per $1000 invested than TPS because of the characteristics of students attracted to the charter school sector and not because they actually do a better job educating similar students and at a lower cost. (p. 21)

This caveat, which basically undercuts the conclusions and recommendations of the report, is missing from the press releases and media coverage. (Note 4)
The report concludes with calculations of the superior lifetime earnings, labeled Return on Investment (ROI), that accrue to pupils educated in charter schools when compared with those of pupils in traditional public schools. An example of a conclusion from this analysis follows:

The higher ROI [essentially lifetime earnings] for charters compared to TPS ranges from +0.4 percent (New Mexico) to +4 percent (Washington DC) assuming a single year of charter schooling and from 3 percent to 33 percent assuming a student spends half of their K-12 years in charters. (p. 7)

III. The Report’s Rationale for Its Findings and Conclusions

The report’s rationale rests on a simple comparison of National Assessment of Educational Progress test score averages from those 21 states and the District of Columbia in which the average score is reported separately for the two sectors. Associated with these achievement test data are estimates of the per pupil expenditure in the two sectors for each state. The simple division of the average test score by the average expenditure at the level of the state is said to produce a cost-effectiveness ratio that can be compared between the two sectors. Such calculations are frequently described in basic textbooks, (Note 5) but they are rarely applied in the area of education due to the complexity of capturing both the outcomes of teaching and the expenditure of funds for instruction. (Note 6) One might as well ask, “What is the return on expending one’s effort to improve one’s marriage?”
IV. The Report’s Use of Research Literature
Issues such as the relative costs and effectiveness of CS and TPS are hotly contested in the research literature of the past two decades. (Note 7) By and large, researchers have taken positions of advocacy and cited related work that supports their position while ignoring conflicting evidence. The present report continues that pattern. Virtually absent from the report are citations to works that dispute the position assumed by these authors, their academic affiliations, or their sponsors. While they do cite Bruce Baker’s work that disputes their recent claims regarding the inequity of charter school funding, (Note 8) they do so only in the context of what they regard as a refutation of its claims. That refutation is wanting and has been responded to by Baker. (Note 9) Just as important, a wide-ranging literature disputing the claim of superior effectiveness of charter schools is completely ignored. (Note 10)
In the current case, the failure to reconcile the reported findings with a large literature of contrary evidence is particularly egregious. At this stage in the accumulation of research evidence, those who claim positive effects for charter schools in comparison with traditional public schools have a burden of proof to demonstrate not only that their research is sound but that the findings of prior research are somehow invalid.
V. Review of the Report’s Methods
The main argument of the report hinges on the estimation of two things: 1) the relative performance on achievement tests of CS and TPS, and 2) the cost of educating the average pupil in CS versus TPS.
For the former estimate, the authors have chosen to use the average statewide scores in math and reading for those states that report NAEP scores for both sectors, CS and TPS, for FY11. For the latter estimate, i.e., cost, the choice was made to use “revenues received” rather than “expenditures made.” Relying on data from an earlier report, (Note 11) the authors concluded that the “main conclusion of our charter school revenue study was that, on average, charter schools nationally received $3,814 less in revenue per-pupil than did traditional public schools.” (p. 10) At this early point, the report’s analysis runs off the rails. Revenues received and actual expenditures are quite different things. Revenues received by traditional public schools frequently involve funds not even intended for instruction. The report purports to compare “all revenues” received by “district schools” and by “charter schools,” claiming that comparing expenditures would be too complex. The problem is that revenues for public schools often fund programs not provided by charter schools (special education, compensatory education, food, transportation, special populations, capital costs, state mandated instructional activities, physical education, and the like). Charter funding is in most states and districts received by pass-through from district funding, and districts often retain some or all responsibility for the provision of services to charter school students—a reality that the report acknowledges but then does nothing to correct for in the data. There are huge variations across states and within states. This non-comparability problem alone invalidates the findings and conclusions of the study.
A sensible comparison of cost-effectiveness between the two sectors would require at a minimum a parsing of these expenditures that isolates funds spent directly and indirectly on instruction. No such parsing of revenues was even attempted in the present report. The report suggests the reader do this for their state(s) of interest.
By employing different spending definitions according to each state’s format (“State system of record”), comparability across states and aggregation of data are rendered meaningless. Nevertheless, the report both ranks and aggregates the non-comparable data in arriving at its conclusions. In Appendix B (p. 39), the report lists several comparability problems, but these problems are ignored thereafter. The deficiencies in the work of the Department of Education Reform with respect to funding that were addressed by Baker (Note 12) have not been corrected. The report proceeds as if merely mentioning a deficiency in the data renders it inoperative.
The primary limitation on the availability of achievement data was whether a state had reported NAEP averages for both sectors, CS and TPS, separately. The District of Columbia and 21 states did so and were included in the analysis. NAEP data for grade 8 were employed in the analyses. However, NAEP tests are also administered at grades 4 and 12; these data were ignored, the authors claiming that 4th grade NAEP scores would underestimate effects and 12th grade scores would overestimate them. This rationale is unclear, and by foregoing any analyses at the other two grades, the report passes on the opportunity to explore the robustness of its findings.
A cost effectiveness ratio was calculated by dividing the average NAEP score for a sector and a state by the average cost (per pupil revenue received in thousands of dollars) for a sector and a state. For example, for the state of Illinois, the average NAEP math score for TPS was 283 and the average per pupil revenue in thousands of dollars was $6.73, yielding a cost effectiveness ratio of CE = 283/6.73 = 42 NAEP points in math per $1,000 of revenue.
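To make the arithmetic concrete, here is a minimal sketch of the report's cost-effectiveness calculation as described above. The TPS figures are the Illinois numbers just quoted; the charter-sector figures are hypothetical placeholders, since the review does not reproduce them, and the percentage "advantage" is computed as the relative difference between the two ratios, which appears to be the report's definition.

def ce_ratio(naep_score, revenue_thousands):
    """NAEP points per $1,000 of per-pupil revenue received."""
    return naep_score / revenue_thousands

tps_ce = ce_ratio(283, 6.73)   # Illinois TPS math: 283 / 6.73 ≈ 42
cs_ce = ce_ratio(280, 5.00)    # hypothetical charter-sector values

# Relative difference between the sectors' ratios, expressed as a percent.
advantage_pct = 100 * (cs_ce - tps_ce) / tps_ce

print(f"TPS: {tps_ce:.1f} NAEP points per $1,000")
print(f"CS: {cs_ce:.1f} NAEP points per $1,000")
print(f"Charter 'productivity advantage': {advantage_pct:.0f}%")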
The gist of the report’s argument involved the comparison of cost effectiveness ratios between sectors. At this point, the logic and the validity of the research run into difficulties. The validity of the comparison of “effectiveness” (i.e., NAEP averages) depends on the truth of the counterfactual that if CS pupils had attended TPS, they would have scored where the TPS pupils scored on the NAEP test. (This is in fact the same as the logic of any valid, controlled comparative experiment.) In a slightly different context, the authors have presented data that argue against the truth of this counterfactual.
In attempting to refute claims by Miron (Note 13) and Baker (Note 14) that there are not big funding inequities between the two sectors, the report presents data on the percentages of Free or Reduced-Price Lunch (FRL) and Special Education (SP) pupils in each sector for each state. Baker had argued previously that the differential in revenues between CS and TPS was due in large part to the fact that the latter serve a greater percentage of pupils who are poor or who require special services. In their attempt to refute this claim, the authors assert:
The charter sectors in our study actually tend to enroll a higher percentage of low-income students than the TPS sectors, regardless of whether one uses free lunch or FRL as the poverty measure. The special education enrollment gap of just 3 percentage points is far too small to explain much of the charter school funding gap, even if many of the additional special education students in the TPS sector had the most severe, highest cost, disabilities imaginable. As our revenue study concluded, a far more obvious explanation for the large charter school funding gap is that state and local policies and practices deny public charter schools access to some educational funding streams . . . (p. 11)
In support of the claim that the charter schools in the study enroll more poor pupils and only slightly fewer special-needs pupils, the report presents Table 1 on page 12, which has been reproduced below.

[Table 1 from p. 12 of the report: percentages of FRL and Special Education pupils, by sector and state.]
Of the 31 states in Table 1, only 22 were involved in the calculation of comparative cost effectiveness ratios, due to availability of NAEP data for both sectors. Of those 22 used to calculate cost-effectiveness ratios, 10 states had a higher proportion of poor (FL) pupils enrolled in CS than in TPS. In 11 states, TPS enrolled a higher percentage of poor pupils than did CS. (Hawaii had equal percentages in both sectors.) So for the purposes of calculating the comparative effectiveness of TPS vs. CS, it is noteworthy that there are slightly more states in which TPS has a greater percentage of poor students than CS.
Below is presented a scatter plot (Figure 1) relating the poverty differential to the achievement differential.

[Figure 1. Scatter plot of the poverty differential against the NAEP differential. NAEP scores are for math. Hawaii excluded as an outlier. Correlation = -.72]
The greater the incidence of poverty in TPSs than in CSs, the greater the CS advantage over the TPSs in achievement, as indicated by the strong negative relationship depicted in Figure 1 for math. In fact, the correlation coefficient between POVERTY(TPS) – POVERTY(CS) and NAEP(TPS) – NAEP(CS) is -.72 (excluding Hawaii, which is an outlier in the scatter diagram). Thus, the productivity differentials between CS and TPS which constitute the numerator of the report’s cost effectiveness measure are confounded with differences in poverty between the two sectors, as one would expect. The same analysis for reading produces almost identical results, with a correlation coefficient equal to -.71.
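A confounding check of this kind is straightforward to reproduce. The sketch below shows the computation under stated assumptions: the state-by-state differentials are hypothetical placeholders, since the actual values come from the report's Table 1 and the published NAEP state averages.

import statistics

# Pairs of (POVERTY(TPS) - POVERTY(CS), NAEP(TPS) - NAEP(CS)), one per state.
# Placeholder values, not data from the report; in the actual analysis,
# Hawaii would be dropped as an outlier before computing the correlation.
differentials = [
    (5.0, -8.0), (-3.0, 4.0), (10.0, -12.0), (1.0, -2.0), (-6.0, 7.0),
]

poverty_diff = [p for p, _ in differentials]
naep_diff = [n for _, n in differentials]

# Pearson correlation; statistics.correlation requires Python 3.10+.
r = statistics.correlation(poverty_diff, naep_diff)
print(f"r = {r:.2f}")  # the review reports r = -.72 for math (placeholders will not reproduce it)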
The implication of the strong negative correlation between the poverty differential and the NAEP score differential is that the effectiveness measure employed in the cost-effectiveness calculation is really a measure of the differences in poverty at the state level between the charter school and the traditional public school sectors. It is well established that poverty level is one of the strongest influences on standardized test scores, even outweighing the influence of schooling itself in most instances. (Note 15)
Data on the percentage of pupils in Special Education similarly show differences that would bias effectiveness estimates in favor of charter schools. Of 31 states and the District of Columbia, 16 states show a larger percentage of pupils classified as special needs in TPS than in CS; 4 show the reverse and 11 states did not have data available by sector. Of the 22 states employed in the report for cost effectiveness analysis, 12 showed higher percentages of special education pupils in TPS than in CS; 3 showed the reverse; one showed no difference and 6 have no data available by sector. The authors ignored the implications of these data for the validity of their comparisons of NAEP scores and instead attempted to discredit the special education discrepancy between the sectors as an explanation for the greater pupil revenues in TPS. (See p. 11 of the report.)
VI. Review of the Validity of the Findings and Conclusions
The very title of the report, The Productivity of Public Charter Schools, invites the interpretation that charter schools produce greater academic achievement at lesser cost. However, it is difficult and in many cases impossible to parse revenues into those that are directed at promoting academic learning and those that serve other purposes (e.g., administration, guidance, special services to disabled children and the like). Such parsing is particularly difficult in the charter school sector, where accounting practices are often lax and transparency of expenditures is sometimes completely lacking. Thus, to argue that a simple arithmetic ratio of NAEP points and revenues describes a school’s “productivity” is little more than a weak metaphor.
As noted above, the principal conclusions of the report are as follows:

· In reading, the charter sector delivers an additional 16 NAEP points per $1000 invested, representing a productivity advantage of 41% for charters;
· Percentage differences in cost effectiveness for charters compared to that for TPS in terms of NAEP math score points per $1000 invested ranges from 7 percent (Hawaii) to 109 percent (Washington DC);
· Percentage differences in cost effectiveness for charters compared to that for TPS in terms of NAEP reading score points per $1000 invested ranges from 7 percent (Hawaii) to 122 percent (Washington DC). (p. 7)

The validity of these conclusions rests in essential ways on the estimates of relative effectiveness of TPS and CS as reflected in state average NAEP scores in math and reading. But these estimates have been shown to be seriously biased in favor of CS due to the non-comparability of the CS and TPS students (disadvantaging the TPS both in wealth and incidence of special needs pupils). Moreover, the conflating of “revenues received” and “costs” to produce a given effect has reduced the report’s exercise to little more than political arithmetic. (Note 16)
It is astounding that the report offers a proviso concerning its findings while simultaneously proceeding as though the proviso had never been mentioned:

. . . our cost effectiveness calculation using NAEP scores has important limitations. Most importantly, it is merely descriptive, not causal, because charter schools might be reporting higher NAEP scores per $1000 invested than TPS because of the characteristics of students attracted to the charter school sector and not because they actually do a better job educating similar students and at a lower cost. (p. 21)
By any reasonable interpretation of the language employed in the report, the calculations are put forward as a description of a causal claim. “Effectiveness” implies “effect,” and in both common and academic parlance, effects are the results of causes. The report speaks out of both sides of its mouth, but only softly does it whisper limits while shouting the alleged superior productivity of charter schools.
VII. Usefulness of the Report for Guidance of Policy and Practice
The report continues a program of advocacy research that will be cited by supporters of the charter school movement. It can be expected to be mentioned frequently when arguments are made that charter schools deserve higher levels of funding. The report will be cited when attempts are made to refute research on the poor academic performance of charter schools. Nothing in the report provides any guidance to educators in either sector, CS or TPS, on how to improve the practice of education. Although the evidence reported provides no credible foundation for evaluating either the costs or the effectiveness of charter schools, it can be expected that the report will find frequent use by politicians and companies managing charter schools as they pursue their reform agenda.
Notes and References

Note 1. Wolf, P.J., Cheng, A., Batdorff, M., Maloney, L., May, J.F., & Speakman, S.T. (2014). The productivity of public charter schools. Fayetteville, AR: Department of Education Reform, University of Arkansas. Retrieved August 5, 2014, from http://www.uaedreform.org/downloads/2014/07/the-productivity-of-public-charter-schools.pdf.

Note 2. Report on Four-Year Achievement Gains. Fayetteville, AR: Department of Education Reform, University of Arkansas. Retrieved August 5, 2014, from http://www.uaedreform.org/downloads/2012/02/report-31-milwaukee-independent-charter-schools-study-final-report-on-four-year-achievement-gains.pdf.

Note 3. Inequity increases. Fayetteville, AR: School Choice Demonstration Project, University of Arkansas. Retrieved August 5, 2014, from http://www.uaedreform.org/wp-content/uploads/charter-funding-inequity-expands.pdf.

Note 4. Author. Retrieved August 7, 2014, from http://www.uaedreform.org/der-in-the-news/.

Note 5. Thousand Oaks, Calif.: SAGE.

Note 6. Revista de Educación, 276, 61-102; Levin, H.M., Glass, G.V. & Meister, G.M. (1986). The political arithmetic of cost-effectiveness analysis. Kappan, 68(1), 69-72.

Note 7. Alto: CREDO, Stanford University. Retrieved July 10, 2013, from http://credo.stanford.edu/research-reports.html.

Note 8. Center. Retrieved August 7, 2014, from http://nepc.colorado.edu/thinktank/review-charter-funding-inequity.

Note 9. productivity claims (blog post). School Finance 101. Retrieved August 7, 2014, from http://schoolfinance101.wordpress.com/2014/07/22/uark-study-shamelessly-knowingly-uses-bogus-measures-to-make-charter-productivity-claims/.

Note 10. Record. Retrieved August 5, 2014, from http://www.tcrecord.org/content.asp?contentid=16917; Powers, J.M. & Glass, G.V (2014). When statistical significance hides more than it reveals. Teachers College Record. Retrieved August 5, 2014, from http://www.tcrecord.org/Content.asp?ContentID=17591; Briggs, D.C. (2009). Review of "Charter Schools in Eight States: Effects on Achievement, Attainment, Integration and Competition." Boulder, CO: National Education Policy Center. Retrieved August 5, 2014, from http://nepc.colorado.edu/thinktank/review-charter-schools-eight-states; Effects on achievement, attainment, integration, and competition. Washington DC: RAND Corporation. Retrieved August 5, 2014, from http://www.rand.org/pubs/monographs/MG869.html.

Note 11. Inequity increases. School Choice Demonstration Project, University of Arkansas, Fayetteville, AR. Retrieved August 5, 2014, from http://www.uaedreform.org/wp-content/uploads/charter-funding-inequity-expands.pdf.

Note 12. Center. Retrieved August 7, 2014, from http://nepc.colorado.edu/thinktank/review-charter-funding-inequity.

Note 13. and Choice Blog). Education Week (online). Retrieved August 7, 2014, from http://blogs.edweek.org/edweek/charterschoice/2014/05/charter_schools_receive_inequitable_funding_says_report.html.

Note 14. Center. Retrieved August 7, 2014, from http://nepc.colorado.edu/thinktank/review-charter-funding-inequity.

Note 15. Teachers College Record, 115(12). Retrieved August 8, 2014, from http://www.tcrecord.org/Content.asp?ContentId=16889.

Note 16. Levin, H.M., Glass, G.V. & Meister, G.M. (1986). The political arithmetic of cost-effectiveness analysis. Kappan, 68(1), 69-72.
DOCUMENT REVIEWED: The Productivity of Public Charter Schools
AUTHORS: Patrick J. Wolf, Albert Cheng, Meagan Batdorff, Larry Maloney, Jay F. May, and Sheree T. Speakman
PUBLISHER/THINK TANK: Department of Education Reform, University of Arkansas
Sunday, November 20, 2022
Factor Analytic Methodology
1966
Gene V. Glass & Peter A. Taylor. (1966). Factor analytic methodology. Review of Educational Research, 36, 5, 566-587.