
Management Research Is Fishy, Says New Management Research

Don't believe everything you read in management journals.

An analysis of academic articles on topics like job performance and entrepreneurial mindset reveals that papers were often substantially altered between their completion as dissertations and their publication in peer-reviewed journals.

(Yes, the researchers, who hail from the University of Iowa and Longwood University, recognize the irony in publishing a study about faults in published management research. The paper, "The Chrysalis Effect: How Ugly Initial Results Metamorphosize into Beautiful Articles," is forthcoming in the Journal of Management.)

At the dissertation level, 82 hypotheses were supported for every 100 that were not, meaning researchers' predictions went unsupported by their own data more often than not, a support rate of roughly 45 percent. By the time the papers made it into journals, the ratio had shifted to 194 supported for every 100 unsupported, a support rate of about 66 percent.

Faulty research that finds its way into print can have an impact well beyond the ivory tower. Companies have built hiring screens based on academic research about employee selection, created training programs to align with studies about employee engagement, and split the CEO and chairman roles based on professors' guidance.

"If practitioners can't trust what's coming out of academia, we don't have a reason to exist," says Ernest O'Boyle Jr., an assistant professor of management and organizations at University of Iowa's Henry B. Tippie College of Business and a coauthor of the report. He blames an academic system that ties tenure and pay to publication in elite journals.

"The rewards are all based on the ends, and there's just not much attention paid to the means," he said.

The team tracked changes in 142 dissertations that went on to run in refereed journals over the past 12 years. They found the dissertations using search terms like "workplace deviance" and "entrepreneurial orientation," along with other management-related topics. The papers came from academics at 89 schools, including the University of Maryland, Stanford University, and Florida State University. Their work appeared in 83 journals.

Nearly 90% of the papers dropped or added hypotheses between the two versions. Seventy percent of the added hypotheses were statistically significant, while dropped hypotheses were 1.5 times as likely to lack statistically significant support.

In all, the dissertations tested 1,978 hypotheses while their published journal iterations tested just 978.

Meanwhile, 20% of the academics dropped subjects from their studies, and changes in sample size were more than twice as likely to turn an unsupported dissertation hypothesis into a statistically significant journal hypothesis as the reverse.

O'Boyle and colleagues are running related studies of social work and psychology research with larger sample sizes, and they are finding similar results.

O'Boyle says he'd like to see journal editors require contributors to sign an honor code and post the original versions of their papers online so readers can see what changed. (Just 18% of the articles in this analysis even mentioned the dissertation on which the newer work was based.) He also hopes more researchers will be rewarded for replicating studies, which now offers little benefit in terms of pay or tenure progress.

If rewarding replication is the carrot, there needs to be a stick as well. O'Boyle suggests forcing researchers to retract their findings if they refuse to share the data that would allow others to attempt replication.

In his own experience asking other researchers for data, he says, "There's this pandemic of computer crashes and office fires and lost moving boxes."

