Susan Notes: Gene Gallagher offered this 'deconstruction' of the MCAS scoring methods on the CARE listserv. You don't have to be from Massachusetts to learn from it.
This isn't the first time Gene has offered a critique on the Emperor's Clothes. My question is: Why are so many other researchers sitting in silence?
Imagine three scenarios involving three identical students, each scoring 19 raw points on the Spring 2001 Math MCAS (the last full test for which technical details are available). All three students would receive a failing scaled score of 218, two points below the 220 passing cutoff. The report issued by the DOE indicates that the 'error' band for this test extends only to 219.5, implying that the failure was not likely due to test-to-test variability. However, the 95% confidence interval for a raw score of 19 correct answers actually ranges from 214.7 to 228 (see my updated review of confidence limits).
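The gap between the DOE's narrow reported 'error' band and a full 95% confidence interval can be illustrated with a toy calculation. The standard error of measurement (SEM) used below is a hypothetical value chosen for illustration, not the published MCAS figure, and the sketch assumes a symmetric normal approximation; the actual interval (214.7 to 228) is asymmetric because the raw-to-scaled-score conversion is nonlinear near the cutoff.

```python
# Illustrative sketch only: 95% confidence interval for a scaled score
# under a symmetric normal approximation. The SEM here is hypothetical,
# NOT the actual MCAS conditional standard error.

def confidence_interval(scaled_score, sem, z=1.96):
    """Return (lower, upper) bounds of a z-level confidence interval."""
    half_width = z * sem
    return (scaled_score - half_width, scaled_score + half_width)

score = 218              # failing scaled score, 2 points below the 220 cutoff
hypothetical_sem = 3.0   # assumed conditional SEM, for illustration

low, high = confidence_interval(score, hypothetical_sem)
print(f"95% CI: {low:.1f} to {high:.1f}")
```

Even with this modest assumed SEM, the interval comfortably straddles the 220 cutoff, so a 218 cannot be statistically distinguished from a passing score.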
Let's see how these three students might fare on an appeal to the head of the DOE, Dr. David Driscoll. Assume all three are "C" students with an occasional "D" grade and perfect attendance.
A) Student A attends a suburban school where most students pass MCAS. His guidance counselor submits a well-documented appeal, including the required DOE Excel spreadsheet, showing that there were 20-30 "C" students in the student's cohort taking similar math classes. The majority of C and even C- students in this school passed MCAS.
Likely result: Appeal granted.
B) Student B attends a suburban school with a moderate failure rate. After two MCAS failures, the student is transferred out of the mainstream math curriculum into an MCAS tutorial program with three other students. The student's cohort taking a similar curriculum now consists of three students, all of whom failed MCAS multiple times with no passing scores. When the student's counselor submits an appeal with strong letters saying this student should graduate, Dr. Driscoll denies the appeal because the cohort size is too small.
Actual result for a student from a South Shore High School: Appeal denied.
C) Student C attends an urban school with a high MCAS failure rate, earning C grades in Math with a few D's. Few if any of the students with a low C average pass MCAS in this urban school. The cohort analysis indicates that virtually all of the students who've passed MCAS in this urban school are at least B- students. Of the students with a C average, this student has the highest MCAS score.
Likely result: Appeal denied.
If the MCAS 220 cut score is supposed to be a state standard, then I see no role for the pseudo-statistical analysis of cohort sizes and matched curricula within a school that the Dept. of Education is employing. We know what the variance is on these tests! Why should appeals from poor-performing schools be treated differently from those from schools where most students pass MCAS? Why should small cohort sizes, especially small cohorts created by MCAS screening itself, be used to deny appeals when we know the test-to-test variability on these exams from statewide sample sizes of 60,000 and more?
A 218 on this test should be granted a waiver. What difference does it make whether a high percentage of students with the same grades have passed MCAS?
Professor Eugene Gallagher