International Centre for Language and Communicative Development: Speech Intonation Induces Enhanced Face Perception in Infants, 2014-2020
Creator
Sirri, L., Manchester Metropolitan University
Linnert, S.
Reid, V., University of Waikato
Parise, E., Lancaster University
Study number / PID
854902 (UKDA)
10.5255/UKDA-SN-854902 (DOI)
Data access
Open
Series
Not available
Abstract
Infants’ preference for faces with direct compared to averted eye gaze, and for infant-directed over adult-directed speech, reflects early sensitivity to social communication. Here, we studied whether infant-directed speech (IDS) could affect the processing of a face with direct gaze in 4-month-olds. In a new ERP paradigm, the word ‘hello’ was uttered either in IDS or adult-directed speech (ADS), followed by an upright or inverted face. We show that the face-specific N290 ERP component was larger when faces were preceded by IDS relative to ADS. Crucially, this effect is specific to upright faces, whereas inverted faces preceded by IDS elicited larger attention-related P1 and Nc components. These results suggest that IDS generates communicative expectations in infants. When such expectations are met by a following social stimulus – an upright face – infants are already prepared to process it. When the stimulus is a non-social one – an inverted face – IDS merely increases general attention.

The International Centre for Language and Communicative Development (LuCiD) will bring about a transformation in our understanding of how children learn to communicate, and deliver the crucial information needed to design effective interventions in child healthcare, communicative development and early years education.
Learning to use language to communicate is hugely important for society. Failure to develop language and communication skills at the right age is a major predictor of educational and social inequality in later life. To tackle this problem, we need to know the answers to a number of questions: How do children learn language from what they see and hear? What do measures of children's brain activity tell us about what they know? And how do differences between children and differences in their environments affect how children learn to talk? Answering these questions is a major challenge for researchers. LuCiD will bring together researchers from a wide range of different backgrounds to...
Terminology used is generally based on DDI controlled vocabularies: Time Method, Analysis Unit, Sampling Procedure and Mode of Collection, available at the CESSDA Vocabulary Service.
Methodology
Data collection period
01/09/2014 - 31/05/2020
Country
United Kingdom
Time dimension
Not available
Analysis unit
Individual
Universe
Not available
Sampling procedure
Not available
Kind of data
Other
Data collection mode
Participants: In Experiment 1, thirty-five infants took part in the study: 18 infants (mean age: 144.78 days; range: 115 to 177 days; 5 female) contributed to the auditory ERP analysis, and 19 infants (mean age: 146.47 days; range: 115 to 177 days; 5 female) contributed to the visual ERP analysis. In Experiment 2, thirty-one infants took part in the study: 18 infants contributed to the auditory ERP analysis (mean age: 135.61 days; range: 117 to 161 days; 5 female) and 18 infants contributed to the visual ERP analysis (mean age: 136.06 days; range: 117 to 162 days; 3 female). In both experiments the majority of the infants were included in both the auditory and visual ERP analyses (Experiment 1: n = 16; Experiment 2: n = 16; see Supplemental Information for analyses of these subsets of participants). However, some infants contributed enough artifact-free segments only in the auditory (Experiment 1: n = 2; Experiment 2: n = 2) or only in the visual (Experiment 1: n = 3; Experiment 2: n = 2) condition. All remaining participants were excluded from the statistical analyses due to an insufficient number of artifact-free trials or technical issues. All infants were born full-term (≥37 weeks of gestation) and healthy, and were recruited from a database of parents from the local area who had expressed an interest in taking part in developmental research studies. Parents were informed about the aim of the study and gave written informed consent before participation. Infants received a book for their participation. The study was conducted in conformity with the Declaration of Helsinki and approved by the University Research Ethics Committee at Lancaster University.

Stimuli: In both experiments, the auditory stimuli were the same as in Senju and Csibra (2008), shared by the senior author: the greeting word “hello” uttered by a female voice in either IDS or ADS. Audio files were digitised and edited with Adobe Audition (CS 5.5) at 16-bit resolution and a 44 kHz sampling rate. The two utterances differed in length (580 ms for ADS, 720 ms for IDS) but primarily differed in pitch and intensity; the mean intensity of the speech was 75 dB for ADS and 85 dB for IDS. Auditory stimuli were delivered through loudspeakers located behind the monitor. Visual stimuli consisted of 9 colour photographs with a white background, portraying white female adult faces with a neutral expression, selected from the NimStim repository (Tottenham et al., 2009). The authors shared the visual stimuli, including instructions as to which faces from their repository could be used in our study and for publication. Each picture measured 355 × 473 pixels. At a viewing distance of 60 cm from a 19-inch CRT monitor, each picture subtended horizontal and vertical visual angles of 16.1° and 21.7°, respectively. In Experiment 2 we used the same pictures, rotated by 180° (examples in Fig. 2).
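As a quick arithmetic check (ours, not part of the deposited materials), the reported visual angles are consistent with the stated viewing distance; a minimal Matlab sketch, with variable names of our own choosing:

    % Hypothetical check of the reported visual angles; not from the archived scripts.
    d = 60;                  % viewing distance in cm
    w = 2*d*tand(16.1/2)     % implied stimulus width on screen,  ~17.0 cm
    h = 2*d*tand(21.7/2)     % implied stimulus height on screen, ~23.0 cm
    % The resulting width/height ratio (~0.74) roughly matches the 355 x 473
    % pixel aspect ratio (~0.75), given rounding of the reported angles.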
Procedure: Infants sat on their parents’ lap throughout the whole experiment. Mothers were instructed not to talk to their infants during the presentation of the stimuli. Each trial consisted of an auditory and a visual stimulus, and the experiment consisted of one block of 108 trials, 54 in each of the ADS and IDS conditions. All stimuli were presented with Matlab® (v. 2014b), using PsychToolBox functions and custom-made scripts. Each trial started with a central dynamic visual attention grabber swirling on a grey background for 2150 ms, after which it froze while the auditory stimulus (“hello”) was played. The attention grabber was centred on the screen. Then the attention grabber disappeared and a face appeared on the screen, with the eyes located in the region previously occupied by the attention grabber. The stimulus onset asynchrony between the auditory and visual stimuli was randomised between 1050 and 1250 ms. The face remained on the screen for 1000 ms. During the inter-trial interval, the grey screen remained blank for a random period varying from 1000 to 1200 ms. To further attract infants’ attention during the experiment, there were 6 different dynamic attention grabbers, changing every 6 trials. The presentation order of the conditions was randomised, and trials were presented for as long as the infant was attentive. If the infant lost interest, an animated spiral and a jingle were presented to reorient attention to the presentation screen. If the infant became fussy, the animated spiral was played again or the experimenter gave a short break and played with the baby. The session ended when the infant was no longer attracted to the screen. The whole experiment lasted approximately 15 minutes and was video-recorded for offline data editing purposes.
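For orientation, the single-trial timing described above can be sketched as follows. This is a minimal, hypothetical reconstruction assuming Psychtoolbox-3 under Matlab (2014b); the file names ('hello_IDS.wav', 'face01.png') are made up, the animated attention grabber and any EEG event triggering are omitted, and the deposited custom scripts may differ.

    % One-trial timing sketch; assumptions as noted above.
    [win, ~]  = Screen('OpenWindow', 0, [128 128 128]);  % grey background
    [snd, fs] = audioread('hello_IDS.wav');              % IDS or ADS token (hypothetical file)
    pa = PsychPortAudio('Open', [], [], 0, fs, size(snd, 2));
    PsychPortAudio('FillBuffer', pa, snd');               % buffer is channels x samples
    faceTex = Screen('MakeTexture', win, imread('face01.png'));

    WaitSecs(2.150);                  % attention grabber swirls for 2150 ms (animation omitted)
    PsychPortAudio('Start', pa);      % grabber freezes while 'hello' plays
    WaitSecs(1.050 + rand * 0.200);   % auditory-visual SOA randomised between 1050 and 1250 ms

    Screen('DrawTexture', win, faceTex);   % face centred on screen (eye position simplified)
    Screen('Flip', win);
    WaitSecs(1.000);                  % face remains for 1000 ms

    Screen('Flip', win);              % back to blank grey screen
    WaitSecs(1.000 + rand * 0.200);   % inter-trial interval, 1000-1200 ms

    PsychPortAudio('Close', pa);
    sca;                              % close the Psychtoolbox window

In the experiment proper this sequence would repeat for up to 108 trials, with the IDS/ADS token, face image and attention grabber varied per trial as described above.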
Funding information
Grant number
ES/L008955/1
Access
Publisher
UK Data Service
Publication year
2021
Terms of data access
The Data Collection is available to any user without the requirement for registration for download/access.