We presented participants with spectrally degraded speech (4-channel noise-vocoded speech). During short training sessions, listeners heard distorted words or pseudowords that were partially disambiguated by concurrently presented lipread information or text. After each training session, listeners were tested on new degraded auditory words. On all trials, participants typed what they had heard.
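For readers unfamiliar with the manipulation, the following is a minimal sketch of how 4-channel noise vocoding works: the speech signal is split into frequency bands, the amplitude envelope of each band is extracted, and each envelope is used to modulate band-limited noise before the bands are summed. The band edges, filter order, and envelope-extraction method shown here are illustrative assumptions, not the parameters used in this study.

    # Minimal noise-vocoding sketch (Python, NumPy/SciPy).
    # Band edges and filter settings are illustrative assumptions only.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, band_edges=(50, 558, 1161, 2423, 4000)):
        """Replace spectral detail with band-limited noise, keeping only
        the amplitude envelope within each of the four frequency bands."""
        rng = np.random.default_rng(0)
        noise = rng.standard_normal(len(signal))       # broadband carrier noise
        out = np.zeros(len(signal), dtype=float)
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)            # band-limit the speech
            env = np.abs(hilbert(band))                # extract amplitude envelope
            carrier = sosfiltfilt(sos, noise)          # band-limit the noise
            out += env * carrier                       # modulate noise, sum bands
        return out / np.max(np.abs(out))               # normalize peak amplitude

This kind of vocoding discards spectral fine structure while preserving each band's slow amplitude envelope, which is why such speech is hard to understand without training or disambiguating context.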
The included data sets comprise the raw data from 128 participants (Experiment 1) and from the subset of those participants (N = 35) who completed the follow-up experiment, available in spreadsheet format (.csv) or E-Prime 3 format (.emrg3).
RQ: How is auditory perception supported by visual context?