Summary information

Study title

Mental simulations of phonological representations are causally linked to silent reading of direct versus indirect speech 2016-2019

Creator

Yao, B, University of Manchester

Study number / PID

854460 (UKDA)

10.5255/UKDA-SN-854460 (DOI)

Data access

Open

Series

Not available

Abstract

In three experiments, this project explored the phonological detail and the causal role of speech simulations in silent reading of tongue twisters in direct-speech, indirect-speech and non-speech sentences. Embodied theories propose that language is understood via mental simulations of sensory states related to perception and action. Given that direct speech (e.g., She says, “It’s a lovely day!”) is perceived as more vivid than indirect speech (e.g., She says (that) it’s a lovely day), recent research shows that, in silent reading, more vivid speech representations are mentally simulated for direct speech than for indirect speech. This ‘simulated’ speech has been found to contain suprasegmental prosodic representations (e.g., speech prosody), but its phonological detail and its causal role in silent reading of direct speech remain unclear. The results demonstrated greater visual tongue-twister effects (phonemic interference) during silent reading (Experiment 1) but not oral reading (Experiment 2) of direct speech as compared to indirect speech and non-speech. The tongue-twister effects in silent reading of direct speech were selectively disrupted by phonological interference (concurrent articulation) as compared to manual interference (finger tapping) (Experiment 3). The results replicated more vivid speech simulations in silent reading of direct speech and extended them to the phonological dimension. Crucially, they demonstrated a causal role of phonological simulations in silent reading of direct speech, at least in tongue-twister reading. The findings are discussed in relation to the multidimensionality and task dependence of mental simulation and its mechanisms.

Written communication (e.g., emails, news reports, social media) is a major form of social information exchange in today's world. However, it is sometimes difficult to interpret the intended meaning of a written message without hearing prosody (rhythm, stress, and intonation of...

Methodology

Data collection period

07/03/2016 - 31/12/2019

Country

United Kingdom

Time dimension

Not available

Analysis unit

Individual

Universe

Not available

Sampling procedure

Not available

Kind of data

Numeric
Text

Data collection mode

The studied population comprises native English speakers, aged 18 and above, living in Greater Manchester. They were recruited via convenience and random sampling.

Experiments 1 and 3 used eye tracking during silent reading of written vignettes in English. The experiments were conducted with an SR Research EyeLink 1000 desk-mounted eye tracker running at a 1000 Hz sampling rate. Stimulus presentation was implemented in EyeTrack 0.7.10m (University of Massachusetts Eyetracking Lab). Participants were seated about 70 cm from an LCD display running at a 60 Hz refresh rate in 1650 × 1050 pixel resolution. Materials were presented in 20 pt Calibri font in black over a light grey background. Line spacing was set to 30 pt so that fixation locations could be unambiguously mapped onto the corresponding line of text.

Funding information

Grant number

ES/N002784/1

Access

Publisher

UK Data Service

Publication year

2020

Terms of data access

The Data Collection is available to any user without the requirement for registration for download/access.

Related publications

Not available