Reading Direct Speech Quotes Increases Theta Phase-locking: Evidence for Theta Tracking of Inner Speech, 2016-2019

Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to the rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than silent reading of indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain’s phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with the sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that the increased theta phase synchrony in direct quote reading was not driven by eye movement patterns and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.

Written communication (e.g., emails, news reports, social media) is a major form of social information exchange in today's world. However, it is sometimes difficult to interpret the intended meaning of a written message without hearing the prosody (the rhythm, stress, and intonation of speech) that is instrumental in understanding the writer's feelings, attitudes, and intentions. For example, a prosody-less "thank you" email can be confusing as to whether the sender is being sincere or sarcastic (Kruger et al., 2005). Emails like these are often misinterpreted as more negative or neutral than intended; such miscommunication can damage social cohesiveness and group identity within organisations and communities, thereby undermining economic performance and societal stability (Byron, 2008). Interestingly, written words may not be entirely "silent" after all. My recent research showed that we mentally (or covertly) simulate speech prosody (or "inner voices") during silent reading of written direct quotations (Mary gasped: "This dress is beautiful!") as if we were hearing someone speak (Yao et al., 2011, 2012). For example, Yao and colleagues (2011) observed that silent reading of direct quotations elicited higher neural activity in voice-selective areas of the auditory cortex than silent reading of meaning-equivalent indirect speech (Mary gasped that the dress was beautiful). Can such covert prosody compensate for the lack of overt speech prosody in written language and thus enhance written communication? To address this question, the proposed project will systematically examine the nature (is covert prosody sound- or action-based?), the mechanisms (what information processing systems are engaged?) and the emotional consequences (does covert prosody induce emotions and thereby influence behaviour?) of covert prosodic processing in the silent reading of written direct quotations. Theoretically motivated by working neural models of "overt" emotional prosodic processing in speech (e.g., Schirmer & Kotz, 2006), the current proposal will probe "where" and "when" in the brain covert prosodic cues of various kinds are mentally simulated and integrated into coherent covert prosodic representations, and how these representations consequently induce emotional responses and aid in inferring the quoted speaker's mental state.
Using complementary neuroimaging techniques, it will localise the neural substrates of the systems engaged in covert emotional prosodic processing (fMRI), specify the time courses of the information processes within these systems (EEG, MEG), and integrate this information into a unified spatio-temporal neural model of covert emotional prosodic processing. The findings of this project have clear implications for the theoretical development of emotional prosody-based social communication, embodied cognition, and speech pragmatics, and will be of interest to all written language users (e.g., communication-based enterprises, social services, and the wider public). This research also has potential impact on early language education and the diagnosis of Parkinson's disease (PD). For example, understanding direct quotations requires the reader to take the quoted speaker's perspective and attribute emotions and mental states to them. A quotation-rich teaching method may therefore effectively enhance children's Theory of Mind ability (the ability to attribute mental states), which is crucial in their cognitive development and social cognition. Moreover, PD patients may struggle to simulate covert emotional prosody due to their motor (articulation) dysfunction. Consequently, they may display difficulty in understanding figurative speech quotations (e.g., they may not detect the sarcasm in She rolled her eyes, grumbling: "What a sunny day!"). This research could thus motivate the development of a low-cost quotation-based diagnostic tool for monitoring PD progression.
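The "theta phase synchrony over trials" reported in the abstract above is conventionally quantified as inter-trial phase coherence (ITC). As a minimal sketch of how such a measure is commonly computed with MNE-Python, assuming hypothetical file and condition names (the dataset's actual analysis pipeline is not described in this record):

```python
# Minimal ITC sketch, assuming MNE-Python; file name and the condition
# label "direct" are hypothetical placeholders, not the study's pipeline.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical epochs time-locked to quote-reading onset (t = 0 s)
epochs = mne.read_epochs("sub-01_quote-onset-epo.fif")

freqs = np.arange(4.0, 8.0, 1.0)   # theta band, 4-7 Hz
n_cycles = freqs / 2.0             # wavelet length scales with frequency

# With return_itc=True, tfr_morlet returns inter-trial phase coherence
# alongside trial-averaged power
power, itc = tfr_morlet(epochs["direct"], freqs=freqs, n_cycles=n_cycles,
                        use_fft=True, return_itc=True)

# Mean theta ITC in the 250-500 ms window reported in the abstract
theta_win = itc.copy().crop(tmin=0.25, tmax=0.50)
print("Mean theta ITC (250-500 ms):", theta_win.data.mean())
```

ITC ranges from 0 (phases random across trials) to 1 (identical phases across trials), so a direct > indirect contrast in this window would indicate stronger phase resetting time-locked to quote onset.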

The studied population included native English speakers from England. They were over 18 years old, right-handed, and had no language, neurological or psychiatric disorders. They were recruited via convenience sampling and consisted predominantly of university students and staff members.

Experiment 1 presented text stimuli on a computer screen and collected behavioural responses (button presses) and EEG signals from the participant's scalp. Participants were seated in a sound-attenuated and electrically shielded room to silently read a series of written stories. The experiment was run in OpenSesame (Mathôt, Schreij, & Theeuwes, 2012). The visual stimuli were presented on a grey background in a 30-pixel Sans font on a 24-inch monitor (120 Hz, 1024 × 768 resolution) approximately 100 cm from the participant. The experiment started with 5 filler trials to familiarise participants with the procedure, after which the remaining 120 critical trials and 55 filler trials were presented in a random order. Each trial began with the trial number for 1000 ms, followed by a fixation dot on the left side of the screen (where the text would start) for 500 ms. The story was then presented in five consecutive segments at the centre of the screen. Participants silently read each segment at their own pace and pressed the DOWN key on a keyboard to continue to the next segment. The first three segments of each story described the story background; the fourth displayed the text preceding the speech quotation (e.g., After checking the upstairs rooms, Gareth bellowed:) and the fifth displayed the speech quotation itself (e.g., “It looks like there is nobody here!”). In about a third of the trials, a simple comprehension question (e.g., Was the house empty?) was presented, which participants answered by pressing the LEFT (‘yes’) or RIGHT (‘no’) key. Answering the question triggered the presentation of the next trial. Participants were given a short break every 20 trials, for 8 breaks in total. The experiment lasted approximately 45-60 min.

EEG and EOG (ocular) activity was recorded with an analog passband of 0.16-100 Hz and digitised at a sampling rate of 512 Hz using a 64-channel Biosemi ActiveTwo system. The 64 scalp electrodes were mounted in an elastic electrode cap according to the international 10/20 system. Six external electrodes were used: two were placed on the bilateral mastoids, two were placed above and below the right eye to measure vertical ocular activity (VEOG), and another two were placed next to the outer canthi of the eyes to record horizontal ocular activity (HEOG). Electrode offset values were kept between -25 mV and 25 mV.

The data collection method in Experiment 2 was identical to that in Experiment 1, except that the experiment was conducted in an eye-tracking lab. An SR Research EyeLink 1000 eye tracker was used, running at a 500 Hz sampling rate. Viewing was binocular but only the right eye was tracked. A chin rest kept the viewing distance constant and prevented strong head movements during reading. Button presses were recorded by the presentation software, and participants' eye movements were recorded by the EyeLink 1000 eye tracker.
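For reference, a minimal sketch of importing a recording like the one described above into MNE-Python follows. The file name, the EXG channel mapping, the trigger code, and the offline mastoid reference are assumptions for illustration; none of these choices is documented in this record.

```python
# Import sketch for a 64-channel Biosemi BDF recording, assuming MNE-Python.
import mne

# Hypothetical raw file; EXG3-EXG6 are assumed to carry VEOG/HEOG as described
raw = mne.io.read_raw_bdf("sub-01_task-reading.bdf",
                          eog=["EXG3", "EXG4", "EXG5", "EXG6"],
                          preload=True)

# Re-reference scalp EEG to the averaged mastoids (EXG1/EXG2); this offline
# reference is an assumption, since Biosemi records reference-free via CMS/DRL
raw.set_eeg_reference(ref_channels=["EXG1", "EXG2"])

# Filter within the recorded analog passband (0.16-100 Hz); the 40 Hz
# low-pass is an illustrative choice for theta-band analyses
raw.filter(l_freq=0.16, h_freq=40.0)

# Epoch around quote-onset triggers; the event code (5, for the fifth story
# segment) is a placeholder, not a documented trigger scheme
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"quote_onset": 5},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
epochs.save("sub-01_quote-onset-epo.fif", overwrite=True)
```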

Identifier
DOI https://doi.org/10.5255/UKDA-SN-854892
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=4c06f6d706beb73d1507cfc2170bc1598c5d1f31d5e30d401bae3f74195753fc
Provenance
Creator Yao, B, University of Manchester
Publisher UK Data Service
Publication Year 2021
Funding Reference Economic and Social Research Council
Rights Bo Yao, University of Manchester; The Data Collection is available to any user without the requirement for registration for download/access.
OpenAccess true
Representation
Resource Type Numeric; Text
Discipline Psychology; Social and Behavioural Sciences
Spatial Coverage England