Research Repository

Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

Landy, Justin and Jia, Miaolei and Ding, Isabel and Viganola, Domenico and Tierney, Warren and Dreber, Anna and Johannesson, Magnus and Pfeiffer, Thomas and Ebersole, Charles and Gronau, Quentin and Ly, Alexander and Bergh, Don van den and Marsman, Maarten and Derks, Koen and Wagenmakers, Eric-Jan and Proctor, Andrew and Bartels, Daniel and Bauman, Christopher and Brady, William and Cheung, Felix and Cimpian, Andrei and Dohle, Simone and Donnellan, Brent and Hahn, Adam and Hall, Michael and Jimenez-Leal, William and Johnson, David and Lucas, Richard and Monin, Benoit and Montealegre, Andres and Mullen, Elizabeth and Pang, Jun and Ray, Jennifer and Reinero, Diego and Reynolds, Jesse and Sowden, Walter and Storage, Daniel and Su, Runkun and Tworek, Christina and Bavel, Jay Van and Walco, Daniel and Wills, Julian and Xu, Xiaobing and Yam, Kai Chi and Yang, Xiaoyu and Cunningham, William and Schweinsberg, Martin and Urwitz, Molly and Adamkovic, Matus and Alaei, Ravin and Albers, Casper and Allard, Aurelien and Anderson, Ian and Andreychik, Michael and Babincak, Peter and Baker, Bradley and Banik, Gabriel and Baskin, Ernest and Bavolar, Josef and Berkers, Ruud and Bialek, Michal and Blanke, Joel and Breuer, Johannes and Brizi, Ambra and Brown, Stephanie and Bruhlmann, Florian and Bruns, Henrik and Caldwell, Leigh and Campercy, Jean-Francois and Chan, Eugene and Chang, Yen-Ping and Cheung, Benjamin and Chin, Alycia and Cho, Kit and Columbus, Simon and Conway, Paul and Corretti, Conrad and Craig, Adam and Curran, Paul and Danvers, Alexander and Dawson, Ian and Day, Martin and Dietl, Erik and Doerflinger, Johannes and Domenici, Alice and Dranseika, Vilius and Edelsbrunner, Peter and Edlund, John and Fisher, Matthew and Fung, Anna and Genschow, Oliver and Gnambs, Timo and Goldberg, Matthew and Graf-Vlachy, Lorenz and Hafenbrack, Andrew and Hafenbradl, Sebastian and Hartanto, Andree and Heck, Patrick and Heffner, Joseph and Hilgard, Joseph and Holzmeister, Felix and Horchak, Oleksandr and Huang, Tina and Huffmeier, Joachim and Hughes, Sean and Hussey, Ian and Imhoff, Roland and Jaeger, Bastian and Jamro, Konrad and Johnson, Samuel and Jones, Andrew and Keller, Lucas and Kombeiz, Olga and Krueger, Lacy and Lantian, Anthony and Laplante, Justin and Lazarevic, Ljiljana and Leclerc, Jonathan and Legate, Nicole and Leonhardt, James and Leung, Desmond and Levitan, Carmel and Lin, Hause and Liu, Qinglan and Liuzza, Marco and Locke, Kenneth and Ly, Albert and MacEacheron, Melanie and Madan, Christopher and Manley, Harry and Mari, Silvia and Martoncik, Marcel and McLean, Scott and McPhetres, Jonathon and Mercier, Brett and Michaels, Corinna and Mullarkey, Michael and Musser, Erica and Nalborczyk, Ladislas and Nilsonne, Gustav and Otis, Nicholas and Otner, Sarah and Otto, Philipp and Oviedo-Trespalacios, Oscar and Paruzel-Czachura, Mariola and Pellegrini, Francesco and Pereira, Vitor and Perfecto, Hannah and Pfuhl, Gerit and Phillips, Mark and Plonsky, Ori and Pozzi, Maura and Puric, Danka and Raymond-Barker, Brett and Redman, David and Reynolds, Caleb and Ropovik, Ivan and Roseler, Lukas and Ruessmann, Janna and Ryan, William and Sablaturova, Nika and Schuepfer, Kurt and Schutz, Astrid and Sirota, Miroslav and Stefan, Matthias and Stocks, Eric and Strosser, Garrett and Suchow, Jordan and Szabelska, Anna and Tey, Kian and Tiokhin, Leonid and Troian, Jais and Utesch, Till and Vasquez-Echevarria, Alejandro and Vaughn, Leigh Ann and Verschoor, Mark and Helversen, Bettina von and Wallisch, Pascal and Weissgerber, Sophia and Wichman, Aaron and Woike, Jan and Zezelj, Iris and Zickfeld, Janis and Ahn, Yeonsin and Blaettchen, Philippe and Kang, Xi and Lee, Yoo Jin and Parker, Philip and Parker, Paul and Song, Jamie and Very, May-Anne and Wong, Lynn and Uhlmann, Eric (2019) 'Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.' Psychological Bulletin. ISSN 0033-2909 (In Press)

Text: BUL-2018-1302_R3.pdf - Accepted Version (Download, 2MB)

Abstract

To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.

Item Type: Article
Uncontrolled Keywords: Crowdsourcing, scientific transparency, stimulus sampling, forecasting, conceptual replications, research robustness
Divisions: Faculty of Science and Health > Psychology, Department of
Depositing User: Elements
Date Deposited: 04 Nov 2019 12:11
Last Modified: 04 Nov 2019 12:11
URI: http://repository.essex.ac.uk/id/eprint/25784
