Can we shift belief in the 'Law of Small Numbers'?

Authors: Bishop, D.V.M., Thompson, J. and Parker, A.J.

Journal: Royal Society Open Science

Volume: 9

Issue: 3

Pages: 211028

ISSN: 2054-5703

DOI: 10.1098/rsos.211028

Abstract:

'Sample size neglect' is the tendency to underestimate how the variability of mean estimates changes with sample size. We studied 100 participants from science or social science backgrounds to test whether a training task showing different-sized samples of data points (the 'beeswarm' task) can help overcome this bias. The ability to judge whether two samples came from the same population improved with training, and 38% of participants reported that they had learned to wait for larger samples before responding. Before and after training, participants completed a 12-item estimation quiz, including items testing sample size neglect (S-items); bonus payments were given for correct responses. The quiz confirmed sample size neglect: 20% of participants scored zero on S-items, and only two achieved more than 4/6 items correct. Performance on the quiz did not improve after training, regardless of how much learning had occurred on the beeswarm task. Error patterns on the quiz were generally consistent with expectation, though there were some intriguing exceptions that could not readily be explained by sample size neglect. We suggest that training with simulated data may need to be accompanied by explicit instruction to be effective in counteracting sample size neglect more generally.
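The statistical fact underlying the bias can be illustrated with a minimal simulation (not taken from the paper; the sample sizes and trial count below are arbitrary choices for illustration): the spread of sample means shrinks roughly as 1/sqrt(n), which is exactly what 'sample size neglect' underestimates.

```python
import random
import statistics

random.seed(1)

def sd_of_sample_means(n, trials=2000):
    """Standard deviation of the mean of n draws from N(0, 1)."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Theory predicts a standard error of sigma / sqrt(n):
small = sd_of_sample_means(4)    # expected near 1/sqrt(4)  = 0.5
large = sd_of_sample_means(100)  # expected near 1/sqrt(100) = 0.1
print(small, large)
```

Means of small samples scatter far more widely around the population mean than means of large samples, which is why judging whether two small samples come from the same population is so error-prone.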

Sources: PubMed; Web of Science (Lite); Europe PubMed Central