The Automatic Multimodal Robotic Storyteller
This repository contains the full implementation of the Automatic Multimodal Robotic Storyteller for the Pepper robot. The system enables Pepper to perform any written story...
Source code and data for the PhD Thesis "Measuring the Contributions of Visio...
This dataset contains source code and data used in the PhD thesis "Measuring the Contributions of Vision and Text Modalities in Multimodal Transformers". The dataset is split...
MUMIN Annotation Scheme for ANVIL
A multimodal annotation scheme for the tool ANVIL (Kipp 2004) implementing the MUMIN framework. The scheme corresponds to the annotations used in the Danish NOMCO corpus. The...
MUMIN Annotation Schemes for ANVIL and ELAN (2020-11-04)
A multimodal annotation scheme for the tool ANVIL (Kipp 2004) and a multimodal annotation template for the tool ELAN (Sloetjes and Wittenburg, 2008) implementing the MUMIN...
Annotated Route Description
This file set, consisting of a video stream, an audio stream and a multimodal annotation file, is frequently used as a showcase of how to do complex multimodal annotations with the...
Data and materials from: "Setting the tone: Threat bias in multimodal emotion...
Emotional stimuli are preferentially processed; in particular, fearful faces are prioritized in the visual system. Most laboratory studies present emotional faces unimodally, although real-life...
