Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking

How to Cite?

Negar Alinaghi and Ioannis Giannopoulos. 2022. Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking. In 2022 Symposium on Eye Tracking Research and Applications (ETRA '22). Association for Computing Machinery, New York, NY, USA, Article 2, 1–7. https://doi.org/10.1145/3517031.3529624

Abstract

Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs to simulate the gaze signal at head level. Using image-processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and the respective frames are then stitched together. Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, with approximately 37% error introduced when saccades are used without accounting for head movements.
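
The following is not the code from main.py, but a minimal sketch of the projection step described above, assuming OpenCV is available. The function name project_fixation and the identity homography are illustrative only.

```python
# Minimal sketch (not the authors' exact implementation): a fixation point,
# given in the pixel coordinates of its frame, is mapped into the
# stitched-image coordinate system using the 3x3 homography matrix produced
# by the stitching algorithm.
import numpy as np
import cv2

def project_fixation(x, y, H):
    """Map a fixation point (x, y) into the stitched image using homography H."""
    pt = np.array([[[x, y]]], dtype=np.float32)   # shape (1, 1, 2), as OpenCV expects
    return cv2.perspectiveTransform(pt, H)[0, 0]  # projected (x, y)

# Illustrative values: the identity homography leaves the point unchanged.
H = np.eye(3, dtype=np.float64)
print(project_fixation(412.0, 230.5, H))          # -> [412.  230.5]
```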

Code and Data

The data folder contains one sample gaze recording file (named gaze_positions_4bwCZ9awAx_unfamiliar.csv) and the corresponding computed fixations (named fixation_4bwCZ9awAx_unfamiliar.csv).

Note: For data- and privacy-protection reasons, the corresponding video recording cannot be shared publicly. The video is therefore published separately on a per-request basis. Check this link to request access.

The gaze positions file contains the following columns:

'gaze_timestamp': the timestamp of the gaze position, starting at 0 (start of recording).

'world_index': number of the frame on the scene camera video

'confidence': a quality measure not yet (as of June 2022) implemented by Pupil Labs; it therefore contains only zeros. If your file does not have this column, create a column with this header and set all values to 0 (see the sketch after this list).

'norm_pos_x': normalized x-position of the gaze

'norm_pos_y': normalized y-position of the gaze 
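
The sketch below shows, under assumptions, how such a gaze file can be loaded and prepared with pandas. The file name matches the sample in the data folder; the frame resolution (1280x720) and the px_x/px_y columns are illustrative and not part of the published format.

```python
# Sketch: load a gaze-positions file, add the 'confidence' column if missing,
# and convert normalized gaze positions to pixel coordinates.
import pandas as pd

gaze = pd.read_csv("gaze_positions_4bwCZ9awAx_unfamiliar.csv")

# If the 'confidence' column is missing, add it with all values set to 0,
# as suggested above.
if "confidence" not in gaze.columns:
    gaze["confidence"] = 0

# Convert normalized positions to pixel coordinates (assumed scene-camera
# resolution; replace with the actual one).
FRAME_W, FRAME_H = 1280, 720
gaze["px_x"] = gaze["norm_pos_x"] * FRAME_W
gaze["px_y"] = (1 - gaze["norm_pos_y"]) * FRAME_H  # image origin is usually top-left

print(gaze[["gaze_timestamp", "world_index", "confidence", "px_x", "px_y"]].head())
```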

The fixations file contains the following columns:

'id': incrementing id starting at 0

'time': the duration of the fixation

'world_index': frame index on the scene camera video related to this fixation

'x_mean': normalized x-position of the fixation

'y_mean': normalized y-position of the fixation

'start_frame': the first frame that contains the fixation point

'end_frame': the last frame that contains the fixation point

'dispersion': the computed dispersion of the fixation

idt.py is the Python implementation of the IDT (dispersion-threshold) algorithm we used for this paper to compute fixations from the gaze positions. If you want to use your own pre-computed fixations (instead of our IDT implementation), make sure that your fixation file contains the columns listed above. In that case, simply run main.py with the video and the fixation CSV file.
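
A small validation sketch (not part of the repository) can help check that a pre-computed fixation file has the columns listed above before running main.py; the file name here is the sample from the data folder.

```python
# Sketch: verify that a fixation CSV contains the columns main.py expects.
import pandas as pd

REQUIRED = {"id", "time", "world_index", "x_mean", "y_mean",
            "start_frame", "end_frame", "dispersion"}

fixations = pd.read_csv("fixation_4bwCZ9awAx_unfamiliar.csv")
missing = REQUIRED - set(fixations.columns)
if missing:
    raise ValueError(f"Fixation file is missing columns: {sorted(missing)}")
print(f"OK: {len(fixations)} fixations with all required columns.")
```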

The fixation file and the video file are the two inputs to main.py, which implements the algorithm we proposed for the saccadic corrections. main.py creates two outputs:

a CSV file containing the fixations with two added columns, transformed_x and transformed_y, which hold the projected x and y coordinates of each fixation.

a CSV file containing the computed saccade length and azimuth based on these newly projected coordinates (see the sketch after this list).
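
main.py already produces the second file; the sketch below only illustrates how length and azimuth can be derived from the projected coordinates in the first output. Only the column names transformed_x and transformed_y come from the description above; the input file name and the azimuth convention are assumptions.

```python
# Sketch: compute saccade length and azimuth between consecutive projected
# fixations (columns transformed_x / transformed_y from the first output).
import numpy as np
import pandas as pd

fix = pd.read_csv("fixations_projected.csv").sort_values("id")

dx = fix["transformed_x"].diff()
dy = fix["transformed_y"].diff()

saccades = pd.DataFrame({
    "from_fixation": fix["id"].shift(1),
    "to_fixation": fix["id"],
    "length": np.hypot(dx, dy),                  # in stitched-image pixels
    "azimuth_deg": np.degrees(np.arctan2(dy, dx)),
}).dropna()

print(saccades.head())
```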

An Ethical Note

The data collected for this study was reviewed by the Pilot Research Ethics Committee at TU Wien. The participants gave written consent for their data to be used for research purposes. We also maintained the transparency of the video recordings in public spaces by wearing a sign indicating that a video recording was in progress.

License

All data is published under the CC-BY 4.0 license. The code is under the MIT license.

Identifier
DOI: https://doi.org/10.48436/gsyh5-vxz65
Related Identifier (IsSupplementedBy): https://doi.org/10.48436/4schr-e5g95
Related Identifier (IsVersionOf): https://doi.org/10.48436/we677-ntp71
Metadata Access: https://researchdata.tuwien.ac.at/oai2d?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:researchdata.tuwien.ac.at:gsyh5-vxz65

Provenance
Creator: Alinaghi, Negar
Publisher: TU Wien
Contributors: Alinaghi, Negar; Giannopoulos, Ioannis
Publication Year: 2024
Rights: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/legalcode); MIT License (https://opensource.org/licenses/MIT)
Open Access: true
Contact: tudata(at)tuwien.ac.at

Representation
Resource Type: Dataset
Version: 1.0.0
Discipline: Other