N170 components of real and computer-generated facial images

Authors: Saul, M., Charles, F. and He, X.

Conference: Neuroadaptive Technology Conference

Dates: 16-18 July 2019

Abstract:

Exposure to computer-generated (CG) stimuli is now pervasive, given the prominence of CG in movies, television and video games. A growing number of scientific studies use CG stimuli in their experiments [2], so it is necessary to know whether our brains process these stimuli in the same way as real stimuli. The N170 component is observed over occipitotemporal brain areas between 140 and 200 ms post-stimulus onset and is typically characterised by a negative peak. For facial stimuli, the N170 is further identified by a stronger amplitude than that elicited by non-facial stimuli [1]. Studies investigating whether N170 activity differs between CG images and real images are scarce and present conflicting results. Because there is no validated measure of ‘computer-generated-ness’ or ‘cartoon-ness’, it is difficult to obtain or create appropriate CG facial stimuli [3]. In this study, we attempt to detect and evaluate any modulation of the N170 by the finer facial cues of static CG stimuli, created by applying a simple cartoon filter, and compare it with the N170 elicited by static real images. The hypothesis is that there will be little to no difference in N170 activity between the two conditions. We obtained face images from the Stirling face database [4] and converted them into CG counterparts. House images, real and CG, were included as a control to demonstrate the face-specific activity of the N170 component. Electroencephalogram (EEG) from 10 participants (1 male, 9 females; mean age 20.5 ± 2.59 years) was recorded at 32 scalp locations following the extended 10-20 system with BrainVision BrainAmp DC amplifiers. The images were presented with a Unity program consisting of 16 blocks. Real and CG images were presented across blocks in a random order, with face and house images randomly interleaved within each block. Each image was presented for 5000 ms, with a jittered interval of 1000-2000 ms between images. N170 activity was quantified as the mean amplitude between 134 and 154 ms post-stimulus at channels P7/P8, PO7/PO8 and PO9/PO10. Statistical analysis was then conducted using a 2 (face vs house) x 2 (real vs CG) x 2 (left vs right hemisphere) repeated-measures analysis of variance (ANOVA). Faces elicited a stronger N170 than houses (face vs house: F = 6.047, p = 0.036). Furthermore, this effect was generally stronger in the right hemisphere than in the left hemisphere (left vs right: F = 5.140, p = 0.049). In contrast, the main effect and interactions involving real vs CG images did not approach significance (real vs CG: F = 0.472, p = 0.509). These results support the hypothesis that CG images, at least to the extent of the cartoon filter used in this study, do not modulate the N170 component differently from real images.
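
As a minimal sketch of the analysis described above, the Python code below shows how the N170 mean amplitude (134-154 ms at P7/P8, PO7/PO8 and PO9/PO10) and the 2 x 2 x 2 repeated-measures ANOVA could be computed with MNE-Python, pandas and statsmodels. The abstract does not specify the analysis software; the file names, event labels (e.g. 'face/real') and choice of libraries here are assumptions made purely for illustration.

    # Hypothetical sketch of the N170 quantification and 2 x 2 x 2 repeated-measures
    # ANOVA; assumes preprocessed, epoched EEG per participant saved in MNE format,
    # with condition labels such as 'face/real', 'face/cg', 'house/real', 'house/cg'.
    import mne
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    LEFT = ['P7', 'PO7', 'PO9']      # left occipitotemporal channels
    RIGHT = ['P8', 'PO8', 'PO10']    # right occipitotemporal channels
    TMIN, TMAX = 0.134, 0.154        # N170 window in seconds (134-154 ms)

    rows = []
    for subject in range(1, 11):                                 # 10 participants
        epochs = mne.read_epochs(f'sub-{subject:02d}-epo.fif')   # assumed file names
        for category in ('face', 'house'):
            for render in ('real', 'cg'):
                evoked = epochs[f'{category}/{render}'].average()
                for hemisphere, chans in (('left', LEFT), ('right', RIGHT)):
                    # Mean amplitude over the N170 window, averaged across channels,
                    # converted from volts to microvolts.
                    amp = (evoked.copy()
                                 .pick(chans)
                                 .crop(TMIN, TMAX)
                                 .data.mean() * 1e6)
                    rows.append(dict(subject=subject, category=category,
                                     render=render, hemisphere=hemisphere,
                                     amplitude=amp))

    df = pd.DataFrame(rows)
    # 2 (face vs house) x 2 (real vs CG) x 2 (hemisphere) repeated-measures ANOVA,
    # one mean amplitude per subject and cell (balanced design required by AnovaRM).
    result = AnovaRM(df, depvar='amplitude', subject='subject',
                     within=['category', 'render', 'hemisphere']).fit()
    print(result)

Each subject contributes exactly one mean amplitude per condition cell, which is what a balanced repeated-measures ANOVA of this kind expects.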

Source: Manual