Reproducibility in the absence of selective reporting: An illustration from large-scale brain asymmetry research

Authors: Kong, X.Z., Akudjedu, T.N. et al.

Journal: Human Brain Mapping

Volume: 43

Issue: 1

Pages: 244-254

eISSN: 1097-0193

ISSN: 1065-9471

DOI: 10.1002/hbm.25154

Abstract:

The problem of poor reproducibility of scientific findings has received much attention over recent years, in a variety of fields including psychology and neuroscience. The problem has been partly attributed to publication bias and unwanted practices such as p-hacking. Low statistical power in individual studies is also understood to be an important factor. In a recent multisite collaborative study, we mapped brain anatomical left–right asymmetries for regional measures of surface area and cortical thickness, in 99 MRI datasets from around the world, for a total of over 17,000 participants. In the present study, we revisited these hemispheric effects from the perspective of reproducibility. Within each dataset, we considered that an effect had been reproduced when it matched the meta-analytic effect from the 98 other datasets, in terms of effect direction and significance threshold. In this sense, the results within each dataset were viewed as coming from separate studies in an “ideal publishing environment,” that is, free from selective reporting and p-hacking. We found an average reproducibility rate of 63.2% (SD = 22.9%, min = 22.2%, max = 97.0%). As expected, reproducibility was higher for larger effects and in larger datasets. Reproducibility was not obviously related to the age of participants, scanner field strength, FreeSurfer software version, cortical regional measurement reliability, or regional size. These findings constitute an empirical illustration of reproducibility in the absence of publication bias or p-hacking, when assessing realistic biological effects in heterogeneous neuroscience data, and given typically used sample sizes.
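
The leave-one-out reproducibility criterion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the use of an unweighted mean across the held-out datasets as a stand-in for the meta-analytic effect, and the 0.05 significance threshold are all assumptions for the sketch.

```python
import numpy as np

def reproducibility_rate(effects, pvals, alpha=0.05):
    """Leave-one-out reproducibility sketch (illustrative, not the published method).

    effects: array of shape (n_datasets, n_regions), per-dataset effect estimates.
    pvals:   array of shape (n_datasets, n_regions), per-dataset p-values.

    For each dataset, a regional effect counts as 'reproduced' when it is
    significant within that dataset AND its direction matches the sign of the
    mean effect across the remaining datasets (an unweighted stand-in for the
    meta-analytic effect). Returns one reproducibility rate per dataset.
    """
    n_datasets = effects.shape[0]
    rates = np.empty(n_datasets)
    for i in range(n_datasets):
        # Hold out dataset i; pool the others as the "meta-analytic" reference.
        others = np.delete(effects, i, axis=0)
        meta_sign = np.sign(others.mean(axis=0))
        # Reproduced = same direction as the reference, and significant here.
        reproduced = (np.sign(effects[i]) == meta_sign) & (pvals[i] < alpha)
        rates[i] = reproduced.mean()
    return rates
```

In the study itself, the reference effects came from a formal meta-analysis across the 98 held-out datasets rather than a simple mean, but the leave-one-out logic is the same: each dataset plays the role of an independent replication attempt, free of selective reporting.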

http://eprints.bournemouth.ac.uk/34473/

Source: Scopus

Reproducibility in the absence of selective reporting: An illustration from large-scale brain asymmetry research.

Authors: Kong, X.-Z., ENIGMA Laterality Working Group and Francks, C.

Journal: Hum Brain Mapp

Volume: 43

Issue: 1

Pages: 244-254

eISSN: 1097-0193

DOI: 10.1002/hbm.25154

http://eprints.bournemouth.ac.uk/34473/

Source: PubMed

Reproducibility in the absence of selective reporting: An illustration from large-scale brain asymmetry research

Authors: Kong, X.-Z. and Francks, C.

Journal: HUMAN BRAIN MAPPING

Volume: 43

Issue: 1

Pages: 244-254

eISSN: 1097-0193

ISSN: 1065-9471

DOI: 10.1002/hbm.25154

http://eprints.bournemouth.ac.uk/34473/

Source: Web of Science (Lite)

Reproducibility in the absence of selective reporting: An illustration from large‐scale brain asymmetry research

Authors: Kong, X., ENIGMA Laterality Working Group, Akudjedu, T.N. and Francks, C.

Journal: Human Brain Mapping

Publisher: Wiley-Blackwell

ISSN: 1065-9471

http://eprints.bournemouth.ac.uk/34473/

Source: Manual

Reproducibility in the absence of selective reporting: An illustration from large-scale brain asymmetry research.

Authors: Kong, X.-Z., ENIGMA Laterality Working Group and Francks, C.

Journal: Human brain mapping

Volume: 43

Issue: 1

Pages: 244-254

eISSN: 1097-0193

ISSN: 1065-9471

DOI: 10.1002/hbm.25154

http://eprints.bournemouth.ac.uk/34473/

Source: Europe PubMed Central

Reproducibility in the absence of selective reporting: An illustration from large‐scale brain asymmetry research

Authors: Kong, X., ENIGMA Laterality Working Group, Akudjedu, T.N. and Francks, C.

Journal: Human Brain Mapping

Volume: 43

Issue: 1

Pages: 244-254

ISSN: 1065-9471

http://eprints.bournemouth.ac.uk/34473/

Source: BURO EPrints