Attending a psychology conference this month, I was struck by an unsettling trend in social-science research. The abstract book revealed that nearly all of the 1,500 posters and talks related to some kind of confirmatory work — using statistical analyses to test an existing hypothesis. Hardly any reported the sort of open-ended exploratory research needed to come up with those hypotheses.
In my view, it’s crucial for the social sciences to build a research culture that balances hypothesis testing with hypothesis generation, valuing exploratory studies alongside confirmatory ones.
To understand the difference between exploratory and confirmatory studies, consider a research group that is interested in links between children who watch violent films and aggressive behaviour. In an exploratory study, the group might ask parents about the children’s aggressive behaviour and collect information about household habits, individual characteristics and media content that could influence the children. The authors would inspect the data in many ways, looking for any patterns. The patterns that emerge, alongside theoretical background knowledge, could then be used to build specific predictions — for instance, that parents’ explanation of violent content might decrease media-induced violence in adolescents, but not younger children.
Confirmatory studies can seek to support or question these hypotheses using a fresh data set. To conduct a confirmatory study properly, best practice dictates that researchers outline their analysis plan before beginning their work, often through preregistration — recording it in a public repository. Preregistration helps to prevent researchers from constructing their hypotheses and analyses after peeking at their data. In the example above, for instance, if a group’s analyses do not confirm the general hypothesis, the researchers might be tempted to tweak the type of analysis they perform until it shows a statistically significant effect for, say, early adolescent boys, after controlling for family income. This broadly condemned practice opens the door to false-positive findings.
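The inflation of false positives described above can be made concrete with a small simulation. This is an illustrative sketch, not an analysis of any real study: the subgroup count, sample sizes and normal-approximation test are all assumptions chosen for simplicity. It generates data with no true effect anywhere, then lets a hypothetical researcher test the same question in 20 subgroups and report the best p-value.

```python
import math
import random

random.seed(42)

def two_sample_p(xs, ys):
    """Two-sided p-value for a difference in means (normal approximation)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # Two-sided tail probability of a standard normal variate.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def flexible_analysis(n_subgroups=20, n_per_group=50):
    """Test a null effect in many subgroups; keep only the smallest p."""
    p_values = []
    for _ in range(n_subgroups):
        # Both groups are drawn from the SAME distribution: no real effect.
        watchers = [random.gauss(0, 1) for _ in range(n_per_group)]
        non_watchers = [random.gauss(0, 1) for _ in range(n_per_group)]
        p_values.append(two_sample_p(watchers, non_watchers))
    return min(p_values)

n_sims = 1000
hits = sum(flexible_analysis() < 0.05 for _ in range(n_sims))
rate = hits / n_sims
print(f"At least one 'significant' subgroup in {rate:.0%} of null data sets")
```

Even though each individual test holds its nominal 5% error rate, letting the analysis roam across 20 subgroups makes a spurious "discovery" the likely outcome on most null data sets (roughly 1 − 0.95²⁰ ≈ 64%), which is exactly why preregistering the analysis plan matters.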
Like many in the social sciences, I was trained to think of hypothesis testing as the main job of a researcher, with exploration hardly worthy of publication. But I changed my mind as I began to lose faith in my own ability to preregister my studies properly. I wholeheartedly believe that preregistration is crucial to transparent science. But often, I would find that I hadn’t done enough exploration to plan my hypothesis-testing analyses properly.
For example, how could a researcher tackling the aggression question know, without exploration, which of many variables — age, gender, socio-economic background and more — might be linked to behavioural change, and how? Similarly, in my own work, I’ve found myself asking: should I have devised my statistical model differently? Would a different analysis bring different conclusions? Did I miss something more interesting in my data? I felt unable to let my data speak for themselves. I should have done more exploration before committing to my confirmatory analysis.
Yet, despite the importance of exploratory studies, they are not valued highly enough. Hundreds of journals promote registered reports — a way to preregister confirmatory research with the journal — yet exploratory science is rarely endorsed. And in the current reproducibility drive, with questions raised about how to make the social sciences more rigorous, robust and reproducible, most solutions have involved improving how we test our hypotheses, with little thought given to exploratory work.
Many researchers are not trained to conduct exploratory studies. What’s more, many feel uneasy about doing such work. One study found that exploratory research is viewed as ‘less scientific’ than confirmatory research (H. K. Collins et al. Organ. Behav. Hum. Decis. Process. 164, 179–191; 2021), probably owing to the emphasis the latter has received.
To help turn the tide, journals should make it clear that they welcome exploratory studies. I know of a handful that already advertise exploratory reports — articles that can include not just exploratory data analyses, but also descriptive studies, or those that add constraints to existing theories. More journals should embrace this idea.
Researchers undertaking exploratory work should know that this doesn’t mean they can be lax about scientific rigour. The criteria that peer reviewers should consider when evaluating exploratory studies are similar to those for confirmatory studies: the rationale for a particular study; its potential value to the field; whether its data are presented clearly; its theoretical relevance; and the transparency of the report. The aim of peer review should be to ascertain that an exploratory study provides well-founded, clearly specified predictions for future confirmatory studies to test.
Institutions must not regard exploratory studies as lower-league research, either. Hypothesis testing, as the core of social-sciences research, is often front and centre in undergraduate courses. But students of the social sciences should also be taught exploratory techniques and theory building throughout their degrees.
When we are unsure about what we are after, we should stop and explore. Instead of committing ourselves to arbitrary specifications, we could let the data talk first. It could be the start of a healthier relationship.
Competing Interests
The author declares no competing interests.