Examining the replicability of online experiments selected by a decision market.

dc.contributor.author: Holzmeister F
dc.contributor.author: Johannesson M
dc.contributor.author: Camerer CF
dc.contributor.author: Chen Y
dc.contributor.author: Ho T-H
dc.contributor.author: Hoogeveen S
dc.contributor.author: Huber J
dc.contributor.author: Imai N
dc.contributor.author: Imai T
dc.contributor.author: Jin L
dc.contributor.author: Kirchler M
dc.contributor.author: Ly A
dc.contributor.author: Mandl B
dc.contributor.author: Manfredi D
dc.contributor.author: Nave G
dc.contributor.author: Nosek BA
dc.contributor.author: Pfeiffer T
dc.contributor.author: Sarafoglou A
dc.contributor.author: Schwaiger R
dc.contributor.author: Wagenmakers E-J
dc.contributor.author: Waldén V
dc.contributor.author: Dreber A
dc.coverage.spatial: England
dc.date.accessioned: 2024-11-27T00:30:41Z
dc.date.available: 2024-11-27T00:30:41Z
dc.date.issued: 2024-11-19
dc.description.abstract: Here we test the feasibility of using decision markets to select studies for replication and provide evidence about the replicability of online experiments. Social scientists (n = 162) traded on the outcome of close replications of 41 systematically selected MTurk social science experiments published in PNAS 2015-2018, knowing that the 12 studies with the lowest and the 12 with the highest final market prices would be selected for replication, along with 2 randomly selected studies. The replication rate, based on the statistical significance indicator, was 83% for the top-12 and 33% for the bottom-12 group. Overall, 54% of the studies were successfully replicated, with replication effect size estimates averaging 45% of the original effect size estimates. The replication rate varied between 54% and 62% for alternative replication indicators. The observed replicability of MTurk experiments is comparable to that of previous systematic replication projects involving laboratory experiments.
dc.description.confidential: false
dc.identifier.author-url: https://www.ncbi.nlm.nih.gov/pubmed/39562799
dc.identifier.citation: Holzmeister F, Johannesson M, Camerer CF, Chen Y, Ho T-H, Hoogeveen S, Huber J, Imai N, Imai T, Jin L, Kirchler M, Ly A, Mandl B, Manfredi D, Nave G, Nosek BA, Pfeiffer T, Sarafoglou A, Schwaiger R, Wagenmakers E-J, Waldén V, Dreber A. (2024). Examining the replicability of online experiments selected by a decision market. Nat Hum Behav.
dc.identifier.doi: 10.1038/s41562-024-02062-9
dc.identifier.eissn: 2397-3374
dc.identifier.elements-type: journal-article
dc.identifier.pii: 10.1038/s41562-024-02062-9
dc.identifier.uri: https://mro.massey.ac.nz/handle/10179/72098
dc.language: eng
dc.publisher: Nature Research
dc.publisher.uri: https://www.nature.com/articles/s41562-024-02062-9
dc.relation.isPartOf: Nat Hum Behav
dc.rights: © 2024 The Author(s)
dc.rights: CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Examining the replicability of online experiments selected by a decision market.
dc.type: Journal article
pubs.elements-id: 492360
pubs.organisational-group: College of Health
Files
Original bundle
Name: 492360 PDF.pdf
Size: 9.97 MB
Format: Adobe Portable Document Format
Description: Published version.pdf