Pupil dilation and microsaccades provide complementary insights into the dynamics of arousal and instantaneous attention during effortful listening

These are the data (and related documents) reported in Contadini-Wright et al. (2023), J Neurosci. See the readme file for more details. The dataset includes the eye-tracking (pupil dilation and gaze position) data reported in the paper.

The experimental session lasted approximately 1.5 hours and comprised three stages:

(1) Threshold estimation: A speech-in-noise reception threshold was first obtained from each participant using the CRM task (see the Threshold estimation subsection below). We used an adaptive procedure to determine the 50%-correct threshold.

(2) Pupil screening: Prior to the main experimental session, we performed a series of brief, basic measures of pupil reactivity (light reflex, dark reflex, etc.), commonly used to assess pupil function. These included measuring pupil responses to a slow, gradual change in screen brightness; to a sudden flashing white screen; to a sudden flashing black screen; and to the sudden presentation of a brief auditory stimulus (a harmonic tone). These measurements were used to confirm normal pupil responsivity (Wang et al., 2018; Bitsios et al., 1996; Loewenfeld, 1999) and to identify outlying participants (none here).

(3) Main experiment: Participants performed two blocks of the CRM task while their ocular data were recorded. In one block ('High load') the signal-to-noise ratio (SNR) was set to the threshold obtained in (1), simulating a difficult listening environment. In the other block ('Low load') the SNR was set to the threshold obtained in (1) plus 10 dB, creating a much easier listening environment (as in McGarrigle et al., 2020). The order of the two blocks was counterbalanced across participants. All experimental tasks were implemented in MATLAB and presented via Psychophysics Toolbox Version 3 (PTB-3).

Threshold estimation: Auditory stimuli were sentences introduced by Messaoud-Galusi, Hazan, & Rosen (2011), a modified version of the CRM ("Coordinate Response Measure") corpus described by Bolia et al. (2000). Sentences in Experiment 1 (including threshold estimation) took the form "Show the dog where the [color] [number] is". Sentences in Experiment 2 (including threshold estimation) took the form "[color] [number] is show the dog where the". The colors that could appear in a target sentence were black, red, white, blue, green and pink. The numbers could be any digit from 1 to 9 except 7, whose bisyllabic phonetic structure makes it easier to identify. Consequently, there were 48 possible color-number combinations. Sentence duration ranged between 1.9 and 2.4 s, with the majority lasting 2.1 s. Sentences were embedded in Gaussian noise, and the overall level of the noise+speech mixture was fixed at ~70 dB SPL. The SNR between the speech and noise was initially set to 20 dB and was adjusted using a one-up-one-down adaptive procedure tracking the 50%-correct threshold. The initial step size was 12 dB SNR and decreased following each reversal (to 8 dB, then 5 dB), down to a minimum step size of 2 dB. The test ended after 7 reversals or after a total of 25 trials and took about 2 minutes to complete. The speech reception threshold was calculated as the mean SNR of the final four reversals. Participants completed 3 runs in total (the first was used as practice). The threshold obtained from the final run was used for the 'High load' condition in the main experiment; the threshold plus 10 dB was used for the 'Low load' condition.
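For concreteness, below is a minimal sketch of the one-up-one-down staircase described above, written in MATLAB (the language the experimental tasks were implemented in). The step schedule (12, 8, 5, then a 2 dB minimum), the stopping rule (7 reversals or 25 trials), and the threshold computation (mean SNR of the final four reversals) follow the text; the simulated logistic listener (trueThresh, slope) is a hypothetical stand-in used only to generate responses, not part of the dataset.

    % One-up-one-down adaptive staircase: a minimal simulation of the
    % procedure described above. trueThresh and slope are illustrative
    % assumptions describing a hypothetical listener.
    rng(1);                                  % reproducible simulation
    trueThresh = -2; slope = 2;              % hypothetical listener (dB)
    snr        = 20;                         % starting SNR, per the text
    steps      = [12 8 5 2]; stepIdx = 1;    % step schedule; 2 dB minimum
    reversals  = []; lastCorrect = NaN;
    for trial = 1:25                         % at most 25 trials
        pCorrect = 1 / (1 + exp(-(snr - trueThresh) / slope));
        correct  = rand < pCorrect;          % simulated response
        if ~isnan(lastCorrect) && correct ~= lastCorrect
            reversals(end+1) = snr;          % record SNR at each reversal
            stepIdx = min(stepIdx + 1, numel(steps));
            if numel(reversals) == 7, break; end   % stop after 7 reversals
        end
        % one-down on correct (harder), one-up on error (easier)
        snr = snr + steps(stepIdx) * (1 - 2 * correct);
        lastCorrect = correct;
    end
    srt = mean(reversals(max(1, end-3):end));    % mean of final 4 reversals
    fprintf('Estimated speech reception threshold: %.1f dB SNR\n', srt);

A one-up-one-down rule converges on the 50%-correct point because, at that SNR, upward and downward steps are equally likely.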
Main task: In the main experiment ('High load' (HL) and 'Low load' (LL) blocks; 15 min total), the same stimuli were used as for threshold estimation, but the SNR was fixed as described above. Each block contained 30 trials. Participants fixated on a black cross presented at the center of the screen (grey background). The structure of each trial is schematized in Figure 1. Trials began with 0.5 s of noise, followed by the onset of the sentence in noise (~2 s long) and then a silent period (3 s). A response display then appeared on the screen and participants logged their responses by selecting the correct color first, then the number, using a mouse. Visual feedback was provided. At the end of each trial, participants were instructed to re-fixate on the cross in anticipation of the next stimulus.

Procedure: Participants sat with their head stabilized on a chinrest in front of a monitor (24-inch BenQ XL2420T; 1920x1080 pixels; 60 Hz refresh rate) in a dimly lit, acoustically shielded room (IAC triple-walled sound-attenuating booth). They were instructed to fixate continuously on a black cross presented at the center of the screen against a grey background. An infrared eye-tracking camera (EyeLink 1000 Desktop Mount, SR Research Ltd.), placed below the monitor at a horizontal distance of 62 cm from the participant, was used to record pupil data. Auditory stimuli were delivered diotically through a Roland Tri-Capture 24-bit/96 kHz soundcard connected to a pair of loudspeakers (Inspire T10 Multimedia Speakers, Creative Labs Inc., California) positioned to the left and right of the eye-tracking camera. The loudness of the auditory stimuli was adjusted to a comfortable listening level for each participant. The standard five-point calibration procedure for the EyeLink system was conducted prior to each experimental block, and participants were instructed to avoid head movement after calibration. During the experiment, the eye-tracker continuously recorded gaze position and pupil diameter binocularly at a sampling rate of 1000 Hz. Participants were instructed to blink naturally and were encouraged to rest their eyes briefly during inter-trial intervals. Prior to each trial, the eye-tracker automatically checked that the participant's eyes were open and fixated appropriately; trials would not start unless this was confirmed. A minimal sketch of trial-level pupil-trace cleaning for recordings of this kind is given below.
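For users re-analyzing the eye-tracking data, the following MATLAB sketch illustrates common trial-level pupil-trace cleaning. The sampling rate (1000 Hz), trial duration (0.5 s noise + ~2 s sentence + 3 s silence) and the 0.5 s pre-sentence baseline window follow the description above; the blink convention (tracker logging zeros during blinks), the 100 ms guard band, the 50 ms smoothing window, and the toy trace itself are illustrative assumptions, not the paper's exact pipeline.

    % Trial-level pupil cleaning: blink interpolation, smoothing, and
    % baseline correction. 'pupil' stands in for one trial's diameter
    % trace in arbitrary Eyelink units.
    fs = 1000;                                % sampling rate (Hz), per the text
    t  = (0:5499) / fs;                       % one 5.5 s trial, per the timeline
    pupil = 1200 + 50 * sin(2*pi*0.3*t);      % toy diameter trace (arb. units)
    pupil(2000:2150) = 0;                     % simulated blink (assumed logged as 0)

    bad = pupil == 0;                                     % flag blink samples
    pad = round(0.1 * fs);                                % assumed 100 ms guard band
    bad = conv(double(bad), ones(1, 2*pad + 1), 'same') > 0;
    idx = 1:numel(pupil);
    pupil(bad) = interp1(idx(~bad), pupil(~bad), idx(bad), 'linear', 'extrap');

    pupil = movmean(pupil, round(0.05 * fs));             % assumed 50 ms smoothing

    baseline = mean(pupil(t < 0.5));                      % pre-sentence noise period
    pupilBc  = (pupil - baseline) / baseline;             % proportional change
    plot(t, pupilBc); xlabel('Time (s)'); ylabel('Pupil (proportion change)');

Expressing each trial as a proportional change from its own pre-sentence baseline removes slow drifts in tonic pupil size, so that traces can be averaged across trials and compared between the High-load and Low-load blocks.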

Identifier
DOI https://doi.org/10.5522/04/22650472.v1
Related Identifier https://ndownloader.figshare.com/files/40208554
Metadata Access https://api.figshare.com/v2/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:figshare.com:article/22650472
Provenance
Creator Chait, Maria; Contadini-Wright, Claudia
Publisher University College London (UCL)
Contributor Figshare
Publication Year 2023
Rights https://creativecommons.org/publicdomain/zero/1.0/
OpenAccess true
Contact researchdatarepository(at)ucl.ac.uk
Representation
Language English
Resource Type Dataset
Discipline Other