The Need for an Independent Review of the Reading Mastery Study Conducted by Dr. Randall Ryder
Educational Mirage Award
A Request for An Independent Review of Randall Ryder's Reading Mastery Study
(A Possible Award Winner Depending on the Findings of an Independent Review Panel.)
These reports will be changing with new information. Send me an email if you would like an update. email@example.com
Gary L. Adams, PhD firstname.lastname@example.org
7115 NE Roselawn St.
Portland, OR 97218
Telephone (8:30-5:00 Pacific Time)
877-EDPROOF (Toll-Free for Sales and Consulting Information)
(503) 218-1015 (24 hours)
Gary Adams, PhD
After reading yearly reports of the three-year $340,000 study conducted by Dr. Randall Ryder, I would recommend that officials from the Wisconsin legislature and the University of Wisconsin-Milwaukee contact Dr. Hilda Borko, the president of the American Educational Research Association (http://www.aera.net/about/whoswho/execbd.htm). The appropriate Wisconsin officials should request that Dr. Borko create an independent board to review Dr. Ryder's costly study.
The intent of this study was to compare students who did and did not receive the Direct Instruction (DI) reading program Reading Mastery (authored by Siegfried Engelmann and associates). Dr. Ryder's 102-page third-year report, which stated that the Direct Instruction program was ineffective, was described in a University of Wisconsin-Milwaukee press release (http://www.uwm.edu/News/PR/04.01/Reading.html) and picked up by several publications, including Education Week ("Study Challenges Direct Reading Method," January 28, 2004; http://www.edweek.org/ew/ewstory.cfm?slug=20Methods.h23&keywords=ryder). Dr. Ryder will present his findings at the 16th annual research conference "Reading the Research" (University of Wisconsin-Milwaukee) on March 4, 2004. The first two yearly reports are available at http://www.soe.uwm.edu/pages/welcome/Departments/Education_Outreach/Programs___Courses/reading_symposium/readings.
The results of these reports should be questioned on many grounds. My colleagues (Scott Gallagher, Sarah McCright, Randy Uchytil, William Conlon, and James Davis) and I will be releasing an extensive review of Dr. Ryder's study in the next few days on the www.edresearch.com web site. However, because Dr. Ryder will be conducting a presentation of his study at the UW-M conference, there is the opportunity to ask questions about confusing aspects of this costly study.
The original intent of this study was to compare students who did and did not receive the Reading Mastery program in selected Milwaukee Public Schools (MPS) and Franklin Public Schools (FPS) over a three-year period. The design changed in Year 2, when Franklin Public Schools administrators decided to limit Reading Mastery to first-grade students with reading difficulties. This leads to the first question: "Why did Dr. Ryder continue to collect Grade 2 and 3 data on Franklin Public Schools students after stating that Reading Mastery was limited to first-grade students?" A second question follows: "Was the study supposed to compare students who received all three years of one type of instruction, or was it a Grade 2 and 3 follow-up of students who got only five months of DI or Non-DI instruction during Grade 1?" And a third: "If it was a follow-up of Grade 1 students, why were no data collected on the FPS DI group during Grade 3?"
Dr. Ryder's most questionable action was not mentioned in the final report; it was discovered in the Year 2 report. He states that Reading Mastery was ineffective. In the Year 2 results section, however, he states: "Compared to FPS Non-DI student, FPS DI students demonstrated greater gains in their decoding ability from the end of first grade to the end of second grade. These results may have been partially explained by the observation that the scores of 15 FPS Non-DI students declined from the first to the second grade. When the scores of these students were eliminated from the analysis there was no longer a significant difference between the method of instruction and decoding subtest scores" (p. 30). If there was a question about the test administration process, why were the data analyzed first and those scores removed only afterward?
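To see why post hoc removal matters, here is a minimal simulation with made-up gain scores (not the study's data): dropping the 15 most-declining students from the comparison group after the fact can shrink or erase a between-group difference.

```python
import random
import statistics

random.seed(0)
# Hypothetical decoding gain scores: the "DI" group shows a modest
# advantage over the "Non-DI" group. These numbers are illustrative only.
di = [random.gauss(5, 8) for _ in range(60)]
non_di = [random.gauss(1, 8) for _ in range(60)]

def t_stat(a, b):
    # Welch's t statistic for two independent samples
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

print("t before trimming:", round(t_stat(di, non_di), 2))

# Post hoc removal of the 15 comparison-group students whose scores
# declined most, mirroring the step described in the Year 2 report:
trimmed = sorted(non_di)[15:]
print("t after trimming :", round(t_stat(di, trimmed), 2))
```

Removing the lowest scorers mechanically raises the comparison group's mean, so a previously significant difference can vanish without any change in what happened in the classrooms.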
Another questionable action, easy to miss in the reports, concerns whether the teachers using Reading Mastery were sufficiently trained to be identified as DI teachers. Dr. Ryder simply states that they were. The Year 2 and Year 3 teacher surveys included items about the quality of the initial Reading Mastery training and its follow-up, but Dr. Ryder never reported those results. Based on the comments of the many DI-group teachers who were selected for interviews, these teachers clearly lacked basic knowledge of the Reading Mastery program.
A question that should be elementary is "How many teachers and students are in this study?" The numbers never match across tables or across the three reports. The research condition labels for each school are often missing. In the Year 1 report, the number of teachers per district per condition is missing, as is the number of students who had valid tests. In the Year 3 report, Dr. Ryder does not even provide a table of the schools and teachers involved in each condition.
The number of teachers is particularly interesting because in the Year 3 report Dr. Ryder states that all MPS first- through third-grade teachers turned over every year (a total of 33 teachers) and that 16 of 19 FPS teachers turned over every year. When I asked Dr. Ryder how this could occur in the suburban Franklin Public Schools district, he stated that the FPS teachers rotated with their students (a first-grade teacher and her students became the second-grade group the next year). But that arrangement would mean there was no teacher turnover at all. I called one of the FPS elementary schools, and the secretary stated that their teachers did not change grades and that none of the FPS schools use the rotating system Dr. Ryder described. When I called the Franklin Public Schools personnel department, they said they have had little turnover of Grades 1-3 teachers. Why is there a discrepancy between Dr. Ryder's statements and the FPS information?
I also asked Dr. Ryder how, if there were no students in the FPS Grade 3 group, that group could have an average score of -13.046, as shown in Graph 3 (Year 3 report, p. 27). I have never received an answer to that question. Dr. Ryder should also explain how the average pretest scores of the Year 1 and Year 3 FPS groups could be exactly the same (327.5); that would appear to be a statistical improbability.
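As a rough illustration of that improbability, a small simulation (using an assumed score distribution, since the study's raw scores are not published) estimates how rarely two independent groups' mean scores agree to one decimal place:

```python
import random

random.seed(1)
# Hypothetical test scores centered near 327 with a spread of 40;
# these parameters are assumptions for illustration, not the study's data.
matches = 0
trials = 10_000
for _ in range(trials):
    group_a = [random.gauss(327, 40) for _ in range(50)]
    group_b = [random.gauss(327, 40) for _ in range(50)]
    if round(sum(group_a) / len(group_a), 1) == round(sum(group_b) / len(group_b), 1):
        matches += 1
print("share of trials with identical rounded means:", matches / trials)
```

Even under favorable assumptions (identical underlying populations), exact agreement of rounded means is rare, which is why identical pretest averages across two different cohorts invite scrutiny.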
Based on Table 2c, Dr. Ryder never conducted a longitudinal study. A longitudinal study would require testing the same students over all three years, yet Table 2c shows that only 3 students who were tested in Year 1 were also tested in Year 2. In addition, Dr. Ryder based his statistical analysis on change/gain scores. Most graduate students are taught in their introductory research and statistics classes not to conduct statistical analyses on change scores. The correct analyses would have provided transparency about what was actually occurring in this study. While on the subject of what is taught in introductory research classes, I am appalled by Dr. Ryder's References section: fewer than 25% of the references are cited correctly.
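The standard objection to gain scores can be stated with the classical-test-theory formula for the reliability of a difference score. The sketch below uses illustrative reliability and correlation values (not figures from the study) to show how a gain computed from two individually reliable tests can itself be quite unreliable:

```python
# Reliability of a difference (gain) score under classical test theory,
# assuming equal variances for the pre- and post-test:
#   r_dd = (r_pre + r_post - 2 * r_prepost) / (2 - 2 * r_prepost)
def gain_reliability(r_pre, r_post, r_prepost):
    return (r_pre + r_post - 2 * r_prepost) / (2 - 2 * r_prepost)

# Two reasonably reliable tests (0.80) that correlate 0.70, plausible for
# pre/post reading measures, yield a gain score with far lower reliability:
print(round(gain_reliability(0.80, 0.80, 0.70), 2))  # 0.33
```

Because the measurement error in both testings accumulates in the difference, analyses built on gain scores are noisy; an analysis of posttest scores adjusted for pretest is the usual recommendation.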
To summarize, some questions that Dr. Ryder needs to answer include:
* Why did he remove data favoring the students in the Reading Mastery group from the statistical analysis after the analysis was completed?
* If the same students were not tested over time, how can he say that he conducted a longitudinal study? Said another way, if the same students were not tested, aren't the results meaningless?
* Why was there such a high turnover of first- through third-grade teachers in all of the schools, and why doesn't this claim match school district records?
* Since there is such a huge discrepancy, how many teachers and students were in each DI/Non-DI group per year?
* How could two groups have the exact same pretest scores?
* Why didn't Dr. Ryder include teacher survey data on Reading Mastery teacher training?
* Why did Dr. Ryder use the incorrect statistical analysis and will he provide the actual pre- and post-test scores without conversions?
Our longer report will be posted on our web site (www.edresearch.com) in the next few days, and it will raise further questions. We apologize for the delay in posting this report; we only recently received the Year 1 and Year 2 reports, and they vary considerably from the Year 3 final report.
Again, based on the cost of this study and the questionable results that have been highly publicized, I would recommend that an outside panel of experts review this study.
If you have further questions, please contact Gary L. Adams, PhD, at email@example.com or 503.381.2574.
Gary Adams, PhD