# Are we too quick to judge innovation grant findings?
by Kathryn Kennedy
Cross-posted from the Digital Learning Collaborative Blog
A recent column from The Hechinger Report shared findings from the U.S. Department of Education’s innovation grants and what Hechinger calls “the ‘dirty secret.’” These grants were created to boost the economy after the 2008 recession and served as a “first test of using rigorous scientific evidence as a way of issuing grants in education.” Programs built on a well-proven concept were awarded $25-50 million, while programs without an evidence-based concept were given $5 million or less to help build that evidence base. Unfortunately, the results show that only 18 percent of the innovations, or 12 out of 67, have demonstrated an increase in student achievement. As Hechinger notes, many in the field are disappointed by this news, while others say they are not surprised. I’m in the latter camp, and I agree wholeheartedly with three statements that Dr. Saro Mohammed, a partner at The Learning Accelerator, made in the Hechinger article. These points continue to need highlighting in our field, especially for those who don’t engage regularly in the research process:
- “It’s sometimes hard to prove that an innovation works because of unintended consequences when schools try something new. For example, if a school increases the amount of time that children read independently to try to boost reading achievement, it might shorten the amount of time that students work together collaboratively or engage in a group discussion. ‘Your reading outcomes may turn out to be the same [as the control group], but it’s not because independent reading doesn’t work. It’s because you inadvertently changed something else. Education is super complex. There are lots of moving pieces.’”
- “The study results are not all bad. Only one of the 67 programs produced negative results, meaning that kids in the intervention ended up worse off than learning as usual. Most studies ended up producing ‘null’ results and she said that means ‘we’re not doing worse than business as usual. In trying these new things, we’re not doing harm on the academic side.’”
- “Learning improvements are slow and incremental. It can take longer than even the three-to-five-year time horizon that the innovation grants allowed.”
We’re grateful to have Dr. Mohammed serving as a guest blogger for the Digital Learning Collaborative, and she’ll be following up on these points in more depth in her upcoming posts. In the meantime, what does this report say about our field and how research is used? From the first bullet: when examining whether or not an innovation or intervention works, are we failing to account for the many other moving parts of the education puzzle? From the second bullet: are we asking the right questions in our research? Case in point: as the article mentioned, “18 of the studies had to be thrown out because of problems with the data or the study design.” And last but certainly not least, are we expecting too much when we ask a program to demonstrate improvement within only a few school years?