This study investigated the influence of individual differences in musical abilities and general musical sophistication on the use of prosodic cues (specifically duration) in spoken word recognition. The goal was to determine whether rhythmic and melodic processing skills or self-reported musical sophistication predict how listeners interpret spoken language in quiet and speech-on-speech masking conditions.
The data presented here were collected as part of the ENRICH project and comprise the following files:
dat_comp.csv: CSV file with gaze fixation data from the Visual World Paradigm experiment.
dat_erpd.csv: CSV file with pupil data from the same Visual World Paradigm experiment.
dat_mus.csv: CSV file with the musical ability measures (CA-BAT and MDT), the Goldsmiths Musical Sophistication Index (Gold-MSI), and the listeners' musical background information.
ENRICH-Mus2.R: R file containing the analysis scripts.
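
As a minimal sketch, the three CSV files could be loaded into R before running ENRICH-Mus2.R as shown below. The file names come from the list above; the column contents and any participant-ID key needed for merging the datasets are not assumed here.

    # Minimal sketch: load the data files with base R.
    # Assumes the CSV files are in the current working directory.
    dat_comp <- read.csv("dat_comp.csv")  # gaze fixation data (Visual World Paradigm)
    dat_erpd <- read.csv("dat_erpd.csv")  # pupil data from the same experiment
    dat_mus  <- read.csv("dat_mus.csv")   # CA-BAT, MDT, Gold-MSI, and musical background

    # Quick structure check before running the analyses in ENRICH-Mus2.R
    str(dat_comp)
    str(dat_erpd)
    str(dat_mus)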