Education

How Improving Research Practices Can Enhance Education Research And Policy


A paper published in Science brought widespread attention to the importance of improving research practices when it showed that many findings published in a handful of psychology’s top journals failed to replicate. Even before that paper appeared, work published in Educational Researcher by Matthew Makel and Jonathan Plucker showed that only 0.13% of education articles were replications and that, among those, the replication rate was similar to that of psychology. Science writer Christie Aschwanden wrote a compelling piece for FiveThirtyEight arguing that all of this acknowledgment of failure was important and that “failure is moving science forward” in that “the replication crisis is a sign that science is working.”

Now, Matthew Makel, Jaret Hodges, Bryan Cook, and Jonathan Plucker have published a new Educational Researcher article titled “Both questionable and open research practices are prevalent in education research.” In this new study, they surveyed education researchers about the prevalence of both open science research practices and questionable research practices in the field. What follows is an interview with Matthew Makel, who first teaches us what questionable research practices are and how he thinks improving research practices can enhance education research and policy.

What are Questionable Research Practices?

The best class I ever took was AP Psychology my senior year of high school. It was taught by the best teacher I ever had, Mrs. Sandy Strobel Johnson. She had wonderful passion and made everything relatable and important. She inspired me to be who I am today. At the same time, many of the most memorable studies we covered have results that are now considered questionable (e.g., Stanford Prison Study, Pygmalion effect, marshmallow test). What do all these have in common? Their results are both widely known and now viewed with skepticism, either because subsequent reviews of the results raised concerns or because attempted replications failed to report the same findings. We don’t always have evidence to support what we supposedly know and teach.

I think one thread connecting such studies is the lack of consensus over what methods should be used and under which circumstances. Any research practice that falls into this “grey zone” between misconduct and agreed-upon best practice has been dubbed questionable research practices (QRPs). QRPs can include behaviors such as selectively reporting results, reporting exploratory findings as though they were predicted, and excluding data based on how they influence overall results. Others have proposed that Questionable Reporting Practices may be a better label. It’s the incomplete reporting that is the primary issue, not the procedure itself. 

Some may think that using QRPs is necessary for success (getting hired, getting promoted, getting grant funding) in the current academic research system. Others say that they once believed these types of practices were jaywalking-level wrong, but now believe they are robbing-a-bank-level wrong. There are even web comics making fun of inaccurate reporting of research.

If QRPs were rare, they might not be much to worry about. However, many researchers in psychology (in multiple countries), ecology and evolution, criminology, and communication self-report using QRPs at rates that are anything but rare; QRPs appear common.

What did you find?

In our study, we asked education researchers what they thought about various research practices and how often they had used them in their own research. We found that QRPs were used by researchers of all experience levels, methodological types, and geographic locations. 

Use of QRPs was prevalent in education research. For example, nearly 46% of respondents reported having presented an exploratory finding as having been predicted, and 67% reported having omitted some analyses from published studies. If researchers only report a portion of the analyses they run, readers do not get the full story.

What does this mean for educational policy and practice?

There are urgent and important scientific and societal questions that need answers. For example, what has remote schooling done to student learning and psychological well-being? What interventions are effective at helping students get where we’d like them to be? We need to know when we can rely on research results to make important decisions. Growing consensus on which research practices are—and are not—acceptable will help foster more transparent research that is more worthy of the public trust.

One way to do this is through greater use of what are called open science research practices. These practices seek to make research more transparent. Key funders are already experimenting with how to promote the application of open practices in federally funded research. For example, the Department of Education’s Institute of Education Sciences Director Mark Schneider has introduced Standards for Excellence in Education Research Principles. My hunch is that research practices will follow the money. But training, hiring, and promotion practices will need to keep up too.

Is this controversial in the education research community?

Researchers may disagree on whether QRPs are wrong in all circumstances. If there were consensus, then they wouldn’t be “questionable”; they’d be considered either best practice or misconduct. Regardless, problems arise when a field lacks consensus on acceptable practices. If one group of researchers does not report all the analyses they conduct while another group reports everything, which group will be rewarded under the current incentive system? If this process is repeated enough times, the researchers who selectively report results could receive more recognition and reward than those who report all their results, giving them greater opportunity for publication, recognition, promotion, awards, and more. This catalyzes a vicious cycle.

This vicious cycle has important consequences. When practitioners and policymakers look to research, do they know whether the results they rely on to make decisions give them the full story? Or do they only see a polished version that makes the results appear shinier than they should?

In education, we hunger for magic wands to solve urgent and large problems. But this urgency often drives us to prematurely adopt practices that appear to have strong initial support. Once the view that “X works” becomes entrenched, it can be really difficult to remove it from policy and practice, even if subsequent research gives us pause. By developing stronger consensus over what research practices are acceptable in what situations and by using open science practices, I believe we can more effectively discover how to help students develop their talents and flourish.

