Data-driven studies in face identity processing rely on the quality of the tests and data sets
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: Cortex
Volume: 166
Pages: 348-364
eISSN: 1973-8102
ISSN: 0010-9452
DOI: 10.1016/j.cortex.2023.05.018
Abstract: There is growing interest in how data-driven approaches can help understand individual differences in face identity processing (FIP). However, researchers employ various FIP tests interchangeably, and it is unclear whether these tests 1) measure the same underlying ability/ies and processes (e.g., confirmation of identity match or elimination of identity match), 2) are reliable, and 3) provide consistent performance for individuals across tests online and in the laboratory. Together these factors would influence the outcomes of data-driven analyses. Here, we asked 211 participants to perform eight tests frequently reported in the literature. We used Principal Component Analysis and Agglomerative Clustering to determine factors underpinning performance. Importantly, we examined the reliability of these tests, relationships between them, and quantified participant consistency across tests. Our findings show that participants' performance can be split into two factors (called here confirmation and elimination of an identity match) and that participants cluster according to whether they are strong on one of the factors or equally on both. We found that the reliability of these tests is at best moderate, the correlations between them are weak, and that the consistency in participant performance across tests is low. Developing reliable and valid measures of FIP and consistently scrutinising existing ones will be key for drawing meaningful conclusions from data-driven studies.
https://eprints.bournemouth.ac.uk/38645/
Source: Scopus
Data-driven studies in face identity processing rely on the quality of the tests and data sets.
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: Cortex
Volume: 166
Pages: 348-364
eISSN: 1973-8102
DOI: 10.1016/j.cortex.2023.05.018
https://eprints.bournemouth.ac.uk/38645/
Source: PubMed
Data-driven studies in face identity processing rely on the quality of the tests and data sets
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: CORTEX
Volume: 166
Pages: 348-364
eISSN: 1973-8102
ISSN: 0010-9452
DOI: 10.1016/j.cortex.2023.05.018
https://eprints.bournemouth.ac.uk/38645/
Source: Web of Science (Lite)
Data-driven studies in Face Identity Processing rely on the quality of the tests and data sets
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: Cortex
Publisher: Elsevier
ISSN: 0010-9452
DOI: 10.1016/j.cortex.2023.05.018
https://eprints.bournemouth.ac.uk/38645/
Source: Manual
Data-driven studies in face identity processing rely on the quality of the tests and data sets.
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: Cortex; a journal devoted to the study of the nervous system and behavior
Volume: 166
Pages: 348-364
eISSN: 1973-8102
ISSN: 0010-9452
DOI: 10.1016/j.cortex.2023.05.018
https://eprints.bournemouth.ac.uk/38645/
Source: Europe PubMed Central
Data-driven studies in Face Identity Processing rely on the quality of the tests and data sets
Authors: Bobak, A.K., Jones, A.L., Hilker, Z., Mestry, N., Bate, S. and Hancock, P.J.B.
Journal: Cortex
Volume: 166
Pages: 348-364
Publisher: Elsevier
ISSN: 0010-9452
https://eprints.bournemouth.ac.uk/38645/
Source: BURO EPrints