Sonic Enhancement of Virtual Exhibits Data, 2020-2021
Creator
Daly, I, University of Essex
Tymkiw, M, University of Essex
Di Giuseppantonio, P, University of Essex
Al-Taie, I, University of Essex
Williams, D, University of Salford
Study number / PID
855667 (UKDA)
10.5255/UKDA-SN-855667 (DOI)
Data access
Open
Series
Not available
Abstract
We conducted an online experiment to explore how sound influences the interest level, emotional response, and engagement of individuals who view objects within a virtual exhibit. As part of this experiment, we designed a set of different soundscapes, which we presented to participants who viewed museum objects virtually. We then asked participants to report their felt affect and level of engagement with the exhibits.
This dataset contains the data we recorded in these experiments and used in the analysis presented in our paper entitled 'Sonic Enhancement of Virtual Exhibits'.

A common feature of UK museums is their recent embrace of virtual exhibits. However, despite this explosion of virtual exhibits, relatively little attention has been given to how sound may be used to create a more engaging experience for audiences. That is, although museums have certainly included sound as part of virtual exhibits (e.g., through narrated tours or through the addition of music), there is little scholarship about how sound may be harnessed to influence the length of time spent looking at objects within a virtual exhibit; the emotional response audiences have while viewing these objects; or the level of attention or distraction experienced. We conducted a series of experiments to develop a more rigorous understanding of how sound shapes audiences’ experiences in museums.
This project builds on our team’s expertise in sound-tracking and the use of sound effects to influence audience engagement. We also build on our research exploring how sound may be used to modify the affective state of an individual, as well as our research which showed how state-of-the-art neural engineering may be used to dynamically modulate sound and music over time to optimize an “affective trajectory” (a change in felt affect in an audience over time). The project specifically focused on the use of sound effects in the virtual environment, building on our previous studies aimed at understanding how...
Terminology used is generally based on DDI controlled vocabularies: Time Method, Analysis Unit, Sampling Procedure and Mode of Collection, available at the CESSDA Vocabulary Service.
Methodology
Data collection period
01/01/2020 - 01/06/2021
Country
United Kingdom
Time dimension
Not available
Analysis unit
Individual
Universe
Not available
Sampling procedure
Not available
Kind of data
Numeric
Text
Data collection mode
The data collection methodology is described in detail in our paper ‘Sonic Enhancement of Virtual Exhibits’, which is published in PLoS One. We also describe the key details of the data collection methodology here.

In an online experiment, participants were shown a series of six 3D models of individual objects, each of which was paired with a particular soundscape. These pairings were pseudo-randomized across participants, and at least one pairing included a “no sound” condition to act as a control. After encountering each 3D model-soundscape pair, participants were asked to report their current felt affect, engagement, and sense of presence, and to reflect on how these were affected by the model-soundscape pairing. Further information is available in the documentation.
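The study's own randomization code is not part of this record, but the assignment scheme described above can be illustrated with a minimal Python sketch. All names here (MODELS, SOUNDSCAPES, assign_pairings) are hypothetical placeholders, not identifiers from the dataset or paper; the sketch only assumes six objects, six soundscapes, and a "no sound" entry whose inclusion guarantees a control pairing.

```python
import random

# Hypothetical placeholder names for the six 3D museum objects and soundscapes.
MODELS = [f"model_{i}" for i in range(1, 7)]
SOUNDSCAPES = ["soundscape_A", "soundscape_B", "soundscape_C",
               "soundscape_D", "soundscape_E", "no_sound"]  # "no_sound" = control

def assign_pairings(seed=None):
    """Pseudo-randomly pair each model with a soundscape for one participant.

    Because "no_sound" is in the shuffled pool and every soundscape is used
    exactly once, at least one pairing is always the no-sound control.
    """
    rng = random.Random(seed)        # seeding makes the assignment reproducible
    sounds = SOUNDSCAPES[:]
    rng.shuffle(sounds)              # randomize the order across participants
    return list(zip(MODELS, sounds)) # one (model, soundscape) pair per exhibit

# Example: one participant's pairings
pairings = assign_pairings(seed=42)
```

Shuffling the full soundscape pool (rather than sampling with replacement) is one simple way to satisfy the "at least one no-sound pairing" constraint; the actual counterbalancing used in the study may differ.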
Funding information
Grant number
N/A
Access
Publisher
UK Data Service
Publication year
2022
Terms of data access
The Data Collection is available to any user without the requirement for registration for download/access.