Education

The Malleability Of Statistical Perception


Does it matter how effect sizes are communicated to non-academic audiences? Two new studies suggest that it does, illustrating the malleability of statistical perception. One paper examined how educational effects are communicated to teachers; the other examined how they are communicated to the public in the context of college admission test policy preferences.

In a paper published in Educational Researcher by Hugues Lortie-Forgues, Ut Na Sio, and Matthew Inglis, titled “How should educational effects be communicated to teachers?”, the researchers found:

“Research findings regarding the effects of educational interventions—typically reported in units of standard deviations (e.g., Cohen’s d)—are often translated into more intuitive metrics before being communicated to teachers. However, there is no consensus about the most suitable metric, and no study has systematically examined how teachers respond to the different options. We conducted two preregistered studies addressing this issue. We found that teachers have strong preferences concerning effect size metrics in terms of informativeness, understandability, and helpfulness. These preferences challenge current research reporting recommendations. Most importantly, we found that different metrics induce different perceptions of an intervention’s effectiveness—a situation that could cause teachers to have unrealistic expectations about what a given intervention may achieve. Implications for how educational effects should be communicated are discussed.”
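The translation problem the authors describe is easy to make concrete. The sketch below is a hypothetical illustration, not taken from the paper: it converts a standardized effect size d into two metrics of the kind teachers are often shown, Cohen's U3 (the share of control-group students scoring below the average treated student) and a rough "months of additional progress" figure, the latter computed under the strong simplifying assumption that one school year corresponds to about one standard deviation of growth.

```python
# Hypothetical illustration (not from Lortie-Forgues et al.): two common
# "intuitive" translations of a standardized effect size d.
from math import erf, sqrt

def u3_percentile(d: float) -> float:
    """Cohen's U3: fraction of the control group scoring below the
    average treated student, assuming normal outcome distributions."""
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))

def months_of_progress(d: float) -> int:
    """'Months of additional progress' under the simplifying assumption
    that one school year equals roughly one SD of growth."""
    return round(d * 12)

d = 0.20  # a typical magnitude for an educational intervention
print(f"d = {d}: average treated student outperforms "
      f"{u3_percentile(d):.0%} of controls "
      f"(~{months_of_progress(d)} months of additional progress)")
# -> d = 0.2: average treated student outperforms 58% of controls
#    (~2 months of additional progress)
```

The paper's central point is precisely that these representations of the same finding (d = 0.2, the 58th percentile, two months of progress) can leave teachers with quite different impressions of an intervention's effectiveness.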

In a paper published in Collabra: Psychology by a team led by Don C. Zhang, titled “Malleability of statistical perception: Impact of validity presentation on college admission test policy preferences”, the researchers found:

“Research evidence in the social sciences often relies on effect size statistics, which are hard to understand for the public and do not always provide clear information for decision-makers. One area where interpretation of research evidence has profound effects on policy is college admission testing. In this paper, we conducted two experiments testing how different effect size presentations affect validity perception and policy preferences toward standardized admission tests (e.g., ACT, SAT). We found that compared to traditional effect size statistics (e.g., correlation coefficient), participants perceived admission tests to be more predictively valid when the same evidence was presented using an alternative effect size presentation. The perceived validity of the admission test was also positively associated with admission test policies (e.g., test-optional policy) preferences. Our findings show that policy preferences toward admission tests depend on the perception of statistical evidence, which is malleable and depends on how evidence is presented.”
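One well-known alternative presentation of a validity correlation is the Binomial Effect Size Display (Rosenthal and Rubin, 1982); whether it matches the presentation Zhang and colleagues tested is an assumption here, not a claim about the paper. The sketch below re-expresses an illustrative, hypothetical test-to-GPA correlation as a difference in "success rates" between above-median and below-median test scorers.

```python
# Binomial Effect Size Display (BESD): one common way to re-present a
# correlation r for lay audiences. Whether this is the presentation
# tested by Zhang et al. is an assumption, not a claim about the paper.
def besd(r: float) -> tuple[float, float]:
    """'Success rates' implied by r for above-median vs. below-median
    scorers on the predictor (Rosenthal & Rubin, 1982)."""
    return 0.5 + r / 2, 0.5 - r / 2

r = 0.30  # hypothetical test-to-first-year-GPA validity coefficient
high, low = besd(r)
print(f"r = {r}: {high:.0%} of high scorers vs. {low:.0%} of low scorers "
      f"end up above the median first-year GPA")
# -> r = 0.3: 65% of high scorers vs. 35% of low scorers end up above
#    the median first-year GPA
```

A reader told that "the test predicts GPA with r = .3" and a reader told that "65% of high scorers versus 35% of low scorers end up above the median" are looking at the same evidence; that gap in perceived validity is the malleability the study documents.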


