Reading Direct Speech Quotes Increases Theta Phase-locking: Evidence for Theta Tracking of Inner Speech, 2016-2019
Creator
Yao, B, University of Manchester
Study number / PID
854892 (UKDA)
10.5255/UKDA-SN-854892 (DOI)
Data access
Open
Series
Not available
Abstract
Growing evidence shows that theta-band (4-7Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain’s phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.

Written communication (e.g., emails, news reports, social media) is a major form of social information exchange in today's world. However, it is sometimes difficult to interpret the intended meaning of a written message without hearing prosody (rhythm, stress, and intonation of speech), which is instrumental in understanding the writer's feelings, attitudes, and intentions. For example, a prosody-less "thank you" email can be confusing as to whether the sender is being sincere or sarcastic (Kruger et al., 2005). Emails like these are often misinterpreted as being more negative or neutral than intended; such miscommunications can damage social cohesiveness and group identity within organisations and communities, thereby undermining economic performance and societal stability (Byron, 2008).
Interestingly, written words...
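The "theta phase synchrony over trials" reported in the abstract is conventionally quantified as inter-trial phase coherence (ITPC): the magnitude of the average unit phase vector across trials at each time point. The sketch below is a minimal illustration of that measure, not the deposited analysis pipeline; the function name and the FFT-based band-limited analytic signal are assumptions for the example.

```python
import numpy as np

def itpc(epochs, fs, band=(4.0, 7.0)):
    """Inter-trial phase coherence for one channel.

    epochs : (n_trials, n_samples) array of epoched EEG
    fs     : sampling rate in Hz
    band   : frequency band edges in Hz (theta by default)
    Returns an (n_samples,) array in [0, 1]; 1 means perfect
    phase alignment across trials at that time point.
    """
    n = epochs.shape[1]
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spec = np.fft.fft(epochs, axis=1)
    # Band-limited analytic signal: keep only positive in-band
    # frequencies (doubled), then invert the FFT.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    analytic = np.fft.ifft(2 * spec * mask, axis=1)
    phase = np.angle(analytic)                       # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Toy check: identical trials are perfectly phase-locked.
fs = 512                                             # Hz, as in the study
t = np.arange(int(0.5 * fs)) / fs
trials = np.tile(np.sin(2 * np.pi * 5 * t), (30, 1))
print(itpc(trials, fs).round(2).min())  # → 1.0
```

With trials whose theta phase is random from trial to trial, the same function returns values near zero, which is the contrast the study exploits between direct and indirect quote reading.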
Terminology used is generally based on DDI controlled vocabularies: Time Method, Analysis Unit, Sampling Procedure and Mode of Collection, available at CESSDA Vocabulary Service.
Methodology
Data collection period
07/03/2016 - 31/12/2019
Country
England
Time dimension
Not available
Analysis unit
Individual
Universe
Not available
Sampling procedure
Not available
Kind of data
Numeric
Text
Data collection mode
The studied population included native English speakers from England. They were over 18 years old, right-handed, and had no language, neurological or psychiatric disorders. They were recruited via convenience sampling and consisted predominantly of university students and staff members.

Experiment 1 presented text stimuli on a computer screen and collected behavioural responses (button presses) and EEG signals from the participant's scalp. Participants were seated in a sound-attenuated and electrically shielded room to silently read a series of written stories. The experiment was run in OpenSesame (Mathôt, Schreij, & Theeuwes, 2012). The visual stimuli were presented on a grey background in a 30-pixel Sans font on a 24-inch monitor (120 Hz, 1024 × 768 resolution) approximately 100 cm from the participant.

The experiment started with 5 filler trials to familiarise participants with the procedure, after which the remaining 120 critical trials and 55 filler trials were presented in a random order. Each trial began with the trial number for 1000 ms, followed by a fixation dot on the left side of the screen (where the text would start) for 500 ms. The story was then presented in five consecutive segments at the centre of the screen. Participants silently read each segment in their own time and pressed the DOWN key on a keyboard to continue to the next segment. The first three segments of each story described the story background; the fourth displayed the text preceding the speech quotation (e.g., After checking the upstairs rooms, Gareth bellowed:) and the fifth displayed the speech quotation itself (e.g., “It looks like there is nobody here!”). In about a third of the trials, a simple comprehension question (e.g., Was the house empty?) was presented, which participants answered by pressing the LEFT (‘yes’) or RIGHT (‘no’) keys.
Answering the question triggered the presentation of the next trial. Participants were given a short break every 20 trials, and there were 8 breaks in total. The experiment lasted approximately 45-60 min.

EEG and EOG (ocular) activity was recorded with an analog passband of 0.16-100 Hz and digitised at a sampling rate of 512 Hz using a 64-channel Biosemi Active-Two system. The 64 scalp electrodes were mounted in an elastic electrode cap according to the international 10/20 system. Six external electrodes were used: two were placed on the bilateral mastoids, two were placed above and below the right eye to measure vertical ocular activity (VEOG), and another two were placed next to the outer canthi of the eyes to record horizontal ocular activity (HEOG). Electrode-offset values were kept between -25 mV and 25 mV.

The data collection method in Experiment 2 was identical to that in Experiment 1, except that the experiment was conducted in an eye-tracking lab. An SR Research EyeLink 1000 eye tracker was used, running at a 500 Hz sampling rate. Viewing was binocular but only the right eye was tracked. A chin rest was used to keep the viewing distance constant and to prevent strong head movements during reading. Button presses were recorded by the presentation software and participants' eye movements were recorded by the EyeLink 1000 eye tracker.
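To relate the recording parameters above to the analysis window reported in the abstract: at 512 Hz, epochs are cut around quote-onset triggers and the 250-500 ms post-onset samples are selected. The sketch below illustrates only this bookkeeping on synthetic data; the function names, trigger placement, and epoch window are illustrative assumptions, not the format of the deposited files.

```python
import numpy as np

FS = 512  # Hz, the study's EEG sampling rate

def epoch(continuous, onsets, tmin=-0.2, tmax=0.8):
    """Cut fixed-length epochs from a 1-D continuous signal.

    onsets    : sample indices of (hypothetical) quote-onset triggers
    tmin/tmax : epoch window in seconds relative to each onset
    """
    start = int(tmin * FS)
    stop = int(tmax * FS)
    return np.stack([continuous[o + start:o + stop] for o in onsets])

def window(epochs, t0=0.25, t1=0.50, tmin=-0.2):
    """Select the 250-500 ms post-onset samples from each epoch."""
    i0 = int((t0 - tmin) * FS)
    i1 = int((t1 - tmin) * FS)
    return epochs[:, i0:i1]

sig = np.random.default_rng(1).standard_normal(FS * 60)  # 60 s of fake EEG
onsets = np.arange(5, 55, 2) * FS                        # fake trigger times
ep = epoch(sig, onsets)
print(ep.shape, window(ep).shape)
```

A phase-synchrony measure would then be computed over the first axis (trials) of the windowed array.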
Funding information
Grant number
ES/N002784/1
Access
Publisher
UK Data Service
Publication year
2021
Terms of data access
The Data Collection is available to any user without the requirement for registration for download/access.