Functional magnetic resonance imaging data of brain area activity when recognising facial expressions

Data resulting from an experiment that used brain scanning, or functional magnetic resonance imaging (fMRI), to investigate which brain areas are active when recognising facial expressions, and to learn how these areas are connected and how they communicate with each other. The dataset consists of volumetric 3D brain scans, necessarily stored in a specialised, purpose-made file format. It also contains the information necessary for analysing the data, i.e. the stimuli and their onset times, together with participant ratings of the stimuli collected in a behavioural testing session that followed scanning. Our analyses of these data are reported in two papers: (1) Furl N, Henson RN, Friston KJ, Calder AJ. 2013. Top-down control of visual responses to fear by the amygdala. J Neurosci 33:17435-43. (2) Furl N, Henson RN, Friston KJ, Calder AJ. 2015. Network interactions explain sensitivity to dynamic faces in the superior temporal sulcus. Cereb Cortex 25(9):2876-2882.

Although a person's facial identity is immutable, faces are dynamic and undergo complex movements which signal critical social cues (viewpoint, eye gaze, speech movements, expressions of emotion and pain). These movements can confuse automated systems, yet humans recognise moving faces robustly. Our objective is to discover the stimulus information, neural representations and computational mechanisms that the human brain uses when recognising social categories from moving faces. We will use human brain imaging to put an existing theory to the test. This theory proposes that changeable attributes (e.g. expression) and facial identity are recognised separately by two different brain pathways, each in a different part of the temporal lobe of the brain. The evidence we provide might indeed support this theory and fill in many of its gaps. We expect, however, that it will instead support a new, alternative theory, according to which some brain areas can recognise both identities and expressions using unified representations, with one of the two pathways specialised for representing movement. The successful completion of our project will thus provide a new theoretical framework, sufficient to motivate improved automated visual systems and to advance new directions of research on human social perception.
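
For orientation only: the record does not name the imaging file format, but fMRI volumes of this kind are most commonly distributed as NIfTI (.nii/.nii.gz) or Analyze (.hdr/.img) files, both of which the Python package nibabel can read. The sketch below, using an invented file name, shows how a single 4D scan could be inspected once the actual format and naming are confirmed from the attached Readme file:

# Minimal sketch, assuming NIfTI/Analyze volumes; the file name is illustrative only.
import nibabel as nib
import numpy as np

img = nib.load("sub01_run01.nii.gz")   # hypothetical path; see the Readme for real names
data = img.get_fdata()                 # 4D array: x, y, z, time

print("voxel grid and number of volumes:", data.shape)
print("voxel-to-world affine:\n", img.affine)
print("global mean signal of first volume:", float(np.mean(data[..., 0])))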

Functional magnetic resonance imaging (fMRI) data were collected from 18 healthy, right-handed participants (mean age 18 years, 13 female). The experiment used a block design, with 18 main experiment runs and two localizer runs. All blocks lasted 11 s, comprised eight 1375 ms presentations of greyscale stimuli, and were followed by a 1 s inter-block fixation interval. Participants fixated on a grey dot in the centre of the display, overlaying the image, and pressed a key when the dot turned red, which happened on a random one-third of stimulus presentations.

In each localizer run, participants viewed six types of block, each presented six times. Face blocks contained dynamic facial expressions taken from the Amsterdam Dynamic Facial Expression Set (van der Schalk et al., 2011) or the final static frames of the dynamic facial videos, capturing the expression apexes. Eight different identities (four male and four female) changed between neutral and disgusted, fearful, happy, or sad expressions. The eight identities and four expressions appeared in a pseudo-random order, with each of the four expressions appearing twice. Object blocks included eight dynamic objects or the final static frames of the dynamic object videos, shown in a pseudo-random order. Low-level motion blocks consisted of dynamic random-dot pattern videos with motion-defined oriented gratings. These stimuli depicted 50% randomly luminous pixels, which could move horizontally, vertically, or diagonally left or right at one frame per second. Oriented gratings were defined by moving the dots within four strips of pixels in the opposite direction to the rest of the display, but at the same rate. Each motion direction was shown twice per block in a pseudo-random order. There were also corresponding low-level static blocks, composed of the final static frames of the low-level motion videos.

The remaining runs comprised the main experiment. Each main experiment run had 12 blocks, each containing a distinct type of stimulus and presented in a pseudo-random order. Six of the blocks contained faces, using the same four female and four male identities as in the localizer runs. In each block, all faces were either dynamic or static and showed just one of three expressions: disgusted, happy, or fearful. The remaining six blocks were Fourier phase-scrambled versions of each of the six face blocks (dynamic videos were phase-scrambled in three dimensions).

After scanning, participants made speeded categorizations of the emotion expressed in the dynamic and static faces as disgusted, happy, or fearful, and rated their emotional intensity on a 1–9 scale. They also rated on a 1–9 scale the intensity of the motion they perceived in each of the dynamic stimuli. Stimuli were presented for the same duration as in the fMRI experiment, and the next stimulus appeared once the participant had completed a rating. The data collection methodology is also described in detail in the two papers listed amongst the Related Resources and in the attached Readme file.
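
As a worked example of the timing described above (eight 1375 ms presentations per 11 s block, followed by a 1 s fixation interval), the following sketch generates illustrative block and stimulus onset times for a 12-block main experiment run. It assumes blocks follow one another with no extra gaps, which is an assumption; the onset files supplied with the dataset are authoritative:

# Timing sketch only; anything beyond the durations stated above is an assumption,
# and the dataset's own onset files take precedence.
STIM_DUR = 1.375                 # s per stimulus presentation
N_STIM = 8                       # presentations per block
BLOCK_DUR = STIM_DUR * N_STIM    # = 11.0 s, as described
FIXATION = 1.0                   # s inter-block fixation
N_BLOCKS = 12                    # blocks per main experiment run

def block_onsets(n_blocks):
    """Onset (s) of each block, assuming the run starts at t = 0 with no extra gaps."""
    return [i * (BLOCK_DUR + FIXATION) for i in range(n_blocks)]

def stimulus_onsets(block_onset):
    """Onsets (s) of the eight stimulus presentations within one block."""
    return [block_onset + i * STIM_DUR for i in range(N_STIM)]

onsets = block_onsets(N_BLOCKS)
print("block onsets (s):", onsets)
print("nominal run length (s):", onsets[-1] + BLOCK_DUR + FIXATION)  # 144 s
print("stimulus onsets in block 1 (s):", stimulus_onsets(onsets[0]))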

Identifier
DOI https://doi.org/10.5255/UKDA-SN-851780
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=1280c45db6aa679ec2a0125d670903edb03c947b90839c920dc85f87f33acb62
Provenance
Creator Furl, N, MRC Cognition and Brain Sciences Unit, Cambridge
Publisher UK Data Service
Publication Year 2016
Funding Reference ESRC
Rights Nicholas Furl, MRC Cognition and Brain Sciences Unit, Cambridge; The Data Collection is available to any user without the requirement for registration for download/access
OpenAccess true
Representation
Resource Type Numeric
Discipline Psychology; Social and Behavioural Sciences
Spatial Coverage Cambridge, United Kingdom; United Kingdom
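
The Metadata Access entry above is a standard OAI-PMH GetRecord request that returns the DDI 2.5 description of this study as XML. A minimal retrieval sketch, using the third-party requests library (any HTTP client would do):

# Fetch the DDI 2.5 metadata record listed under "Metadata Access".
import requests

URL = ("https://datacatalogue.cessda.eu/oai-pmh/v0/oai"
       "?verb=GetRecord&metadataPrefix=oai_ddi25"
       "&identifier=1280c45db6aa679ec2a0125d670903edb03c947b90839c920dc85f87f33acb62")

response = requests.get(URL, timeout=30)
response.raise_for_status()
print(response.text[:500])   # XML; parse further with xml.etree.ElementTree if needed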