EMMT (Eyetracked Multi-Modal Translation)

Eyetracked Multi-Modal Translation (EMMT) is a simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios. It contains monocular eye-movement recordings, audio recordings, and 4-electrode wearable electroencephalogram (EEG) data from 43 participants engaged in sight translation supported by an image.

Details about the experiment and the dataset can be found in the README file.

Identifier
PID http://hdl.handle.net/11234/1-4619
Related Identifier https://ufal.mff.cuni.cz/eyetracked-multi-modal-translation
Metadata Access http://lindat.mff.cuni.cz/repository/oai/request?verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:lindat.mff.cuni.cz:11234/1-4619
Provenance
Creator Bhattacharya, Sunit; Kloudová, Věra; Zouhar, Vilém; Bojar, Ondřej
Publisher Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics (UFAL)
Publication Year 2022
Funding Reference info:eu-repo/grantAgreement/EC/H2020/825303
Rights Creative Commons - Attribution 4.0 International (CC BY 4.0); http://creativecommons.org/licenses/by/4.0/; PUB
OpenAccess true
Contact lindat-help(at)ufal.mff.cuni.cz
Representation
Language English; Czech
Resource Type corpus
Format application/zip; application/octet-stream; downloadable_files_count: 1
Discipline Linguistics
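
The Metadata Access URL above is a standard OAI-PMH GetRecord request against the repository's OAI endpoint. As a minimal sketch of how such a request URL is assembled (the endpoint and identifier are taken from the record above; querying the URL should return the record as Dublin Core XML, per the OAI-PMH protocol):

```python
from urllib.parse import urlencode

# OAI-PMH endpoint of the LINDAT/CLARIN repository (from the Metadata Access field)
OAI_ENDPOINT = "http://lindat.mff.cuni.cz/repository/oai/request"

def get_record_url(identifier: str, metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH GetRecord request URL for a repository item.

    GetRecord takes exactly three parameters: the verb itself, the
    metadata format to return (oai_dc = unqualified Dublin Core), and
    the item identifier.
    """
    params = {
        "verb": "GetRecord",
        "metadataPrefix": metadata_prefix,
        "identifier": identifier,
    }
    return f"{OAI_ENDPOINT}?{urlencode(params)}"

# Identifier of the EMMT record in the repository's OAI namespace
url = get_record_url("oai:lindat.mff.cuni.cz:11234/1-4619")
print(url)
```

Note that `urlencode` percent-encodes the colons in the identifier (`%3A`), which is equivalent to the literal URL shown in the record.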