Binaural Audio and Sonic Narratives For Cultural Heritage
Conference Paper 14
Presented at the Conference on
Immersive and Interactive Audio
2019 March 27 – 29, York, UK
This paper was peer-reviewed as a complete manuscript for presentation at this conference. This paper is available in the AES E-Library (http://www.aes.org/e-lib), all rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.
ABSTRACT
This paper introduces PlugSonic Soundscape and PlugSonic Sample, two web-based applications for the creation
and experience of binaural interactive audio narratives and soundscapes. The apps are being developed as part
of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation). The apps'
audio processing is based on the Web Audio API and the 3D Tune-In Toolkit. Within the paper, we report on the
implementation, evaluation and future developments. We believe that the idea of a web-based application for 3D
sonic narratives represents a novel contribution to the cultural heritage, digital storytelling and 3D audio technology
domains.
1 INTRODUCTION

A heritage that is everywhere, and relevant to everyday life, is one of the preconditions for genuine sustainability. Currently, there are very few ICT tools to support citizens in their everyday activities to shape cultural heritage and be shaped by it. Existing applications and repositories for heritage dissemination do not foster the creation of heritage communities. Social platforms certainly offer potential to build networks, but they have not been exploited yet for global cultural heritage promotion and integration in people's everyday life [1].

The PLUGGY project (Pluggable Social Platform for Heritage Awareness and Participation) [2] aims to bridge this gap by providing the necessary tools to allow users to share their local knowledge and everyday experience with others, together with the contribution of cultural institutions, building extensive networks around a common area of interest, connecting the past, the present and the future.

Within PLUGGY, several tools are being developed: a Social Platform, a Curatorial Tool, and 4 separate 'pluggable' applications, to demonstrate the platform's potential and kick-start applications for the after-project life. These applications focus on various aspects of digital heritage, including Virtual (VR) and Augmented Reality (AR), Geolocation, Gamification and Sonic Narratives. The latter, called PlugSonic, is the focus of the current paper, which will look in particular at the web-based binaural audio features of the application.

Sonic narratives are generally based on music features (e.g. timbre, pitch-melody, tempo, etc.) [3], and are often not interactive (i.e. simple audio playback). The addition of spatial attributes (e.g. placement of sound sources on a full 360° sphere, and at different distances),
and most of all the addition of interactivity (e.g. to navigate soundscapes moving around in the acoustic virtual environment), are features which have not been widely explored until now - for example, [4] explored spatial sonic narratives, but exploited simple 2-dimensional audio panning techniques. The idea of developing and evaluating a web-based application for the creation and experience of 3D Sonic Narratives does indeed represent a novel contribution to both the digital heritage and audio technology domains.

1.1 Binaural spatialisation

The aim of binaural spatialisation is to provide the listener (through standard headphones) with the impression that sound is positioned in a specific location in three-dimensional space. The 3D characteristics of the sound can be captured during recording with special hardware, or simulated in post-production via spatialisation techniques. The theories at the basis of the binaural spatialisation technique are not particularly recent, and the first binaural recording dates back to the end of the 19th century [5]. However, it is only within the last twenty years that the increase in the calculation power of personal computers has enabled an accurate real-time simulation of a three-dimensional sound field over headphones.

Several tools currently exist for performing binaural spatialisation; to mention just a few, Anaglyph [6], IRCAM Spat [7], the IEM binaural audio open library [8], and the 3D Tune-In Toolkit [9]. Very few of these, though, are implemented within a web-based application, and therefore available on multiple platforms through a simple browser.

1.2 Web-based spatial audio

In 2011, with the specification and release of the Web Audio (WAA) [10] and WebGL (WGL) [11] application programming interfaces (API), the World Wide Web Consortium (W3C) and the Mozilla Foundation set the basis for the development of modern web applications. As stated in the introduction to the WAA, the specification of a high-level Javascript (JS) API was necessary to satisfy the demand for audio and video processing capabilities to implement "sophisticated web-based games or interactive applications".

The WAA uses modular routing, built around the AudioNode class. Source, destination and processing nodes allow for the creation of complex signal processing chains. The API also allows for the spatialisation of sound through equal power panning and HRTF (Head Related Transfer Function) convolution algorithms. For headphone-based applications, the equal power algorithm "can only give the impression of sounds located between the ears" [12] while, as reported in [13], the HRTF algorithm implemented in Google Chrome and Mozilla Firefox embeds only one set of HRTFs, from the IRCAM Listen database [14]. [13] discusses the limitations and potential drawbacks for users and developers of having only one choice of HRTFs (e.g. in-head localisation, inaccurate lateralisation, poor elevation perception and front-back confusion) and presents work to implement a BinauralFIRNode class. This class extends the WAA, allowing custom HRTFs to be imported as FIR filters.

Another limitation of the WAA is in the method used for the simulation of distance; being based on attenuation only, it does not account for frequency-domain effects of wave propagation.

Even with its limitations, the WAA is having a great impact on web-based applications and research on spatial audio, audio narratives, games and immersive content broadcasting. Pike et al. [12] developed an object-based and binaural rendering player with head-tracking, while Radio France's nouvOson website [15] broadcasts 5.1 surround and binaural audio.

The WAA has also been used to develop higher-order Ambisonics sound processing. Google Omnitone [16] implements decoding and binaural rendering up to the third order. In [17] Politis and Poirier-Quinot present JSAmbisonics, a library that uses the WAA for interactive spatial sound processing on the web. This work is interesting because it supports Ambisonics of any order.

The INVISO project [18] focused on the development of a web-based interface for "designing and experiencing rich and dynamic sonic virtual realities" suitable for both experts and novices. Here, the WGL was used to design a 3D interface for the creation of sound objects and navigation of the environment, and the WAA for the audio rendering. The project presents interesting features like the multi-cone sound object to model complex emanation patterns; the control over sources' elevation; and the possibility to define trajectories for moving sources, or sound zones in which sounds are not spatialised, to create ambient sounds.
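To make the WAA routing and spatialisation model discussed above more concrete, the following minimal sketch (illustrative only, not taken from the PlugSonic source code; the file URL, source positions and looping behaviour are assumptions) builds a chain in which a buffer source is spatialised by a PannerNode using the HRTF panning model, attenuated by a per-source GainNode, and mixed into a master GainNode connected to the AudioContext destination:

// Minimal sketch of WAA modular routing for one spatialised source.
const ctx = new AudioContext();
const masterGain = ctx.createGain();          // master volume
masterGain.connect(ctx.destination);          // destination = speakers/headphones

async function addSource(url, x, y, z) {
  const response = await fetch(url);          // placeholder URL
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;

  const panner = ctx.createPanner();
  panner.panningModel = 'HRTF';               // binaural rendering; 'equalpower' is the alternative
  panner.distanceModel = 'inverse';           // distance is simulated by attenuation only
  panner.setPosition(x, y, z);                // legacy setter; positionX/Y/Z AudioParams also exist

  const gain = ctx.createGain();              // per-source volume

  source.connect(panner).connect(gain).connect(masterGain);
  source.start();
  return { source, panner, gain };
}

// Moving the listener (e.g. ctx.listener.setPosition(lx, ly, lz)) re-renders
// the binaural scene relative to the new position.

Switching panningModel to 'equalpower' selects the simpler panning algorithm mentioned above, while the distanceModel setting exposes the attenuation-only distance simulation noted as a limitation.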
1.3 Sonic Narratives for Cultural Heritage

Looking at the state of the art in this area, it can be noticed how research is delving into solutions to make cultural heritage immersive, adopting AR, VR and spatial audio; engaging, using personalisation and emotional storytelling; adaptive, exploiting context-awareness and location-awareness; interactive, using the paradigm of dramas; or open and inclusive, developing content for people with impairments and/or difficulties.

In [19] Ardissono et al. give an exhaustive review of digital storytelling and multi-media content delivery with a focus on cultural heritage. Here we limit ourselves to those projects that use exclusively or mainly audio to design novel types of experiences.

The LISTEN project [20] investigated audio augmented environments and user adaptation technologies. This involved the development of ListenSpace [21] - a graphical authoring tool used to represent the real space and the sound sources' positions - as well as the implementation of a domain ontology [22] for an exhibition and the use of context-awareness to adapt to the user's interests. The main limitations, from the content creators' perspective, can be seen in the system's software (server-based processing) and hardware (antennas or infrared cameras for head-tracking) requirements, and in the necessity for custom development for each exhibition.

In the CHESS project [23] the focus was on personalisation, profiling first-time visitors [24] to change the narration style, and on interaction, delivering the story through voice narration and adapting the visit using web browser based applications.

Interaction and context-awareness (based on geolocation) were explored in [25] with mobile urban dramas (in which the user becomes the main character of a story). The project used a multimedia style (audio, video, images, animations) and was implemented to run on mobile web browsers, using XML to describe the content. Here, the advantage of multi-platform flexibility was limited by the need for specific knowledge about the content metadata structure, or for consultancy from the researchers for app implementation and web services.

The EMOTIVE project [26] is working on tools to support cultural and creative industries in producing narratives that exploit emotional storytelling. The ARCHES project [27], instead, focuses on inclusivity, searching for ways to design for people with difficulties and/or disabilities.

2 IMPLEMENTATION

In the next paragraphs we describe in detail the design criteria, state of development and evaluation of two web applications: PlugSonic Sample and PlugSonic Soundscape. The apps, which will be integrated into the PLUGGY social platform as a part of the platform's curatorial tool, are developed to manage all the audio content necessary to create virtual exhibitions; enhance on-line and/or on-site visits to museums, monuments and archaeological sites; and share tangible and intangible cultural heritage. The social platform and the pluggable apps will make use of standard (mono/stereo) sound files to be used in voice descriptions, audio narratives or sound accompaniment to the platform's exhibitions, as well as to create interactive and explorable 3D audio narratives and soundscapes. Therefore, we designed the Sample app to edit sound files and apply audio effects, and the Soundscape app to create and experience spatialised soundscapes. In this way PLUGGY users, whether institutions or citizens, are given all the necessary instruments, without the need for specific devices, external tools (software and/or hardware), specialised knowledge or resources. "Everyone, alone or collectively, has the right to benefit from the cultural heritage and to contribute towards its enrichment" [28]. Therefore, differently from all the tools described in the previous paragraphs, in our project, with its focus on inclusivity and participation, we are developing intuitive and immediate tools, so that anyone can use them to have an impact on cultural heritage.

2.1 General Implementation

Both the Sample and the Soundscape app use established web development technologies and libraries. The HTML and CSS languages have been used to create the web pages and define their appearance, while JavaScript (JS) makes the apps dynamic and interactive. To build the user interface (UI) we used ReactJS [29]. The main reason behind the use of React is its efficiency in managing the UI's rendering. To manage the app's state, and therefore the apps' UI, we used the Redux library [30]. The Redux API allows us to define the whole app state as a single JS object, update the state as the user interacts with the app, and trigger the re-rendering process when a state change occurs.
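As an illustration of this pattern, the following minimal sketch (not the actual PlugSonic code; the state shape and action types are assumptions) keeps the whole app state in a single object, updates it through dispatched actions, and notifies subscribers, such as React components, so that they can re-render:

// Minimal Redux sketch; state shape and action types are hypothetical.
import { createStore } from 'redux';

const initialState = {
  listener: { x: 0, y: 0 },
  sources: [],              // e.g. { name, position, volume, enabled }
  masterVolume: 1.0,
};

function soundscapeReducer(state = initialState, action) {
  switch (action.type) {
    case 'SET_LISTENER_POSITION':                       // hypothetical action
      return { ...state, listener: action.position };
    case 'SET_MASTER_VOLUME':                           // hypothetical action
      return { ...state, masterVolume: action.value };
    default:
      return state;
  }
}

const store = createStore(soundscapeReducer);

// React components subscribe to the store (e.g. via react-redux) and
// re-render whenever the relevant part of the state changes.
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'SET_LISTENER_POSITION', position: { x: 1, y: 2 } });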
Both apps use a responsive design to scale the UI to different devices and screen sizes, even if, at the moment of writing, the UI is being redesigned and is not fully adaptive to tablet and smartphone use.

Since the processing is performed locally, the playback is not affected by lag or network communication delays, nor altered by audio compression.

2.3 PlugSonic Soundscape
Fig. 1: PlugSonic Sample user interface. In this example the user has selected part of a partially played-back file.
and gain node. The master volume gain node is finally connected to the AudioContext destination node, which represents the actual audio-rendering device (speakers or headphones).

In terms of user experience, Soundscape is designed to be divided into two versions, which we called Create and Experience. The Create version is used for authoring and gives full control over the soundscape settings. The Experience version is used for navigation and gives limited control. At the moment of writing, the prototype includes only the Create app, but a mobile (iOS) version of the Experience app is currently being developed. With reference to Figure 3, we describe the app UI and controls:

1. Sound files in WAV or MP3 format can be added either locally, using a drag and drop area, or from Dropbox, using the file URL.

2. List of loaded sources with volume control. Each source can be activated/deactivated. Sources are deleted using the Delete Selected button.

3. Master volume control.

4. Soundscape environment, which shows the listener and source positions. The listener can be moved using the mouse, the arrow keys or, for touch screens, the Touch Arrow Controls described in item 9. Sources can be moved using mouse or touch. This section includes the button to play/stop the soundscape and two buttons to reset the listener/source positions.

5. Controls for the soundscape's shape (rectangular
or round) and size (width and height).

6. When a sound source is selected, this section appears and allows the user to set the sound source's Reach and Fade-In/Fade-Out times. The Reach controls the size of a circular area around the source: the source starts and stops playing as the listener steps in or out of the area (a minimal sketch of one way to implement this behaviour is given after this list).

7. Listener settings: high-performance mode on/off and head circumference.

8. Buttons to Import and Export soundscapes. The buttons allow the user to save/load a soundscape as a JS Object Notation (JSON) file in two formats: metadata only, or metadata and sound data, with the second version allowing for offline use.

9. Toggle to show/hide the touch-enabled arrow controls.
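As mentioned in item 6, a source plays only while the listener is within its reach area, with configurable fade times. The following sketch shows one way such behaviour could be realised with Web Audio API gain ramps (an illustrative assumption, not PlugSonic's actual implementation; the fields of the source object, the units and the choice of linear ramps are hypothetical):

// Hypothetical reach/fade handling: 'source' wraps a GainNode plus metadata;
// field names, units (seconds for fades, soundscape units for distances) and
// the use of linear ramps are assumptions for illustration only.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function updateReach(ctx, source, listenerPos) {
  const inside = distance(listenerPos, source.position) <= source.reach;
  if (inside === source.wasInside) return;       // no boundary crossing
  source.wasInside = inside;

  const gainParam = source.gain.gain;            // AudioParam of the source's GainNode
  const now = ctx.currentTime;
  gainParam.cancelScheduledValues(now);
  gainParam.setValueAtTime(gainParam.value, now);
  if (inside) {
    gainParam.linearRampToValueAtTime(source.volume, now + source.fadeIn);   // fade in
  } else {
    gainParam.linearRampToValueAtTime(0, now + source.fadeOut);              // fade out
  }
}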
3 PERFORMANCE

To check compatibility we tested the apps on both macOS and Windows using the following browsers: Google Chrome, Mozilla Firefox, Microsoft Edge and macOS Safari. The Soundscape app has also been tested for loading time and CPU/memory requirements. For these tests we used a Lenovo ThinkPad (Intel i7-7700HQ @ 2.80 GHz, 16 GB RAM) running Windows 10. Table 1 shows the average and standard deviation of the loading times (in ms) when using Imperial College's wi-fi network. The quantities were calculated over five runs using Google Chrome. It can be noticed how a good portion of the loading time (more than two seconds) is spent on scripting. This is due to the initialisation of the binaural processor, which includes the retrieval of the many HRIR WAV files. To test CPU and memory requirements against the number of sources we used a 30-second white noise excerpt. Figure 4 shows the CPU percentage when rendering the soundscape in several listener conditions: stationary, rotating, and moving in a circle (with and without the performance mode option). For each condition the lines end at the maximum number of sources the browser was able to render without sound and graphical performance degradation. We can notice a linear relation between the number of sources and the CPU percentage, with a considerable difference between stationary and moving conditions. The graph also shows how the performance mode allows for the rendering of up to 35 sources. Figure 5 shows how memory can be a major limiting factor for the rendering process, being directly proportional to the number of sources.

Table 1: PlugSonic Soundscape loading time in ms

      Load  Script  Render  Paint  Other  Idle   Total
Avg.  1.6   2382.2  13.8    2.6    246.4  782.4  3429.2
SD    0.6   179.7   0.84    1.3    12.1   136.9  270.9

Fig. 4: PlugSonic Soundscape CPU use as a function of the number of sound sources

Fig. 5: PlugSonic Soundscape memory use as a function of the number of sound sources

4 EVALUATION

We invited sound experts to test PlugSonic and answer questions about their experience using the prototype. Questions were a mix of open-ended and multiple choice, aiming at understanding which features they were missing and whether, and how, usability principles were violated, respectively.

4.1 Method

We created an online survey with Google Forms and sent it out to sound experts by email. The first section of the survey aimed at collecting demographic information and an example scenario in which participants would create an audio experience, together with the tools they currently use. The second section consisted of links to the Sample and the Soundscape apps with short instruction videos and example sound files. Participants were given the task of using the apps for the scenario they had described in the first section, or of using the provided example sound files, to create a soundscape and then export it and share it with the authors. In the third section, we asked open questions about their likes and dislikes with regard to both apps, to what extent they were able to create the soundscape, and whether there were any features they were missing. The fourth section consisted of statements of usability principles, to assess to what extent they were violated. These statements considered visibility of system status, match between system and the real world, user control and freedom, consistency and standards, recognition, diagnosis and recovery from errors, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetics and minimalist design, help and documentation, skills support, pleasurable and respectful interaction, and privacy. Participants rated the degree to which they agree or disagree with the statements using a 5-point Likert scale.

4.2 Results

4.2.1 Participants

We evaluated PlugSonic with 8 sound experts, of which 5 were males, 2 females, and one not revealed. Their job descriptions included director of a musical instruments company, computer science researcher with audio editing experience in storytelling, museum special
effects engineer, student and musician, researcher in acoustics and psychoacoustics, and a student in audiology with music processing experience. A total of 4 participants had 4 to 8 years of experience working with audio, and 3 participants had 20 to 30 years of experience.

Participants were accustomed to using audio for product demos, exhibitions and installations. Examples of tools and technologies they currently use are Audacity, Adobe Premiere, digital audio workstations (including Adobe Audition, GarageBand, Logic, Cubase, Reaper), Ambisonics and Max/MSP.

Complete results for each application and question are listed at the end of the paper in Tables 2 and 3.

4.2.2 Evaluation of PlugSonic Sample

Almost all participants found the Sample app easy to use; one participant preferred to use his own editing tool, but did not justify the choice. Specifically, participants liked the ease of applying commands such as fade-in and fade-out and cutting and pasting. They further liked the graphical interface, with the sound file changing colour as it is played, the simple audio effects, and the fades represented on the waveform. Overall, the Sample app was considered useful as a quick editing tool.

However, a number of limitations were reported which require further improvements in the user interface and functionalities. For the user interface, P6 found it hard to differentiate between the selected area and the area that has already been played. P1 and P5 could not easily drag a selection encompassing the start or end of the audio, making it hard to create a fade in/out.

In terms of features, P2 would like to have undo and redo commands. Participants also requested a time reference on the horizontal axis, to be able to move the pointer around easily. P4 would like to work on multiple sound files at once. Moreover, P4 and P6 requested more effects and filters, such as warping or reversing audio, an editable frequency-domain graph to apply equalisation, and reverb settings such as dry and wet mix. Other features were requested by P1, who would like to use keyboard shortcuts and loop audio, while P8 and P2 highlighted how, when loading a new sound, filters are not applied automatically to it. Finally, other issues were found with performance: disabling effects stopped sound processing momentarily, and there were glitches when adjusting the parameters.

4.2.3 Evaluation of PlugSonic Soundscape

When asked to what extent they were able to create a soundscape, 62.5% of participants answered 3 out of 5, and 37.5% answered 4 out of 5. They judged Soundscape easy to use and generally performing well. P3 and P7 liked the spatial rendering when moving the sounds and the listener around. P4 valued the Dropbox integration.

However, limitations were reported, which mainly concerned the user interface and minor glitches. For example, P2 and P4 had difficulties differentiating the sources, since they all had the same colour. There were also issues when moving the sources. When P3 selected a source to move it, the source would automatically go to the outer edge of the bounding box. Also, P6 could not drag the sources inside the box unless the mouse was moved to the bottom of the screen first. P6 and P8 could not move the listener with the arrows. Finally, when exporting the soundscape, it was unclear whether it was done successfully.

In terms of functionality, P1 wanted to import stereo files and have level meters to judge the overall volume. P2 would like to see a timeline to create timed sounds, for example to create a dialogue. Moreover, P8 would like to see how far into the loop the playback was. P2 would also want to record the sound while moving the listener around. P4 was missing a control for source elevation. And for P5, the navigation method forces you to reduce an immersive frontal experience to a top view.

5 CONCLUSIONS and FUTURE WORK

In this paper we presented the design criteria and the state of development of two web applications, PlugSonic Sample and PlugSonic Soundscape, implemented as part of the PLUGGY project. The project aims at developing a social platform and several web apps (AR, 3D Audio, Geolocation, Gamification) to provide users with the necessary tools to shape cultural heritage, both as curators and as visitors of virtual or augmented exhibitions. The two apps presented here were developed in order to be integrated in the social platform and allow users to edit sound files and to create and experience binaural soundscapes. The main objective was to design simple and effective tools that could be quickly understood and used even by inexperienced users. Being web-based, the two apps help to deliver a smooth experience due to the familiarity most people
have with web browsers and the fact that there is no need to install any additional software.

After describing the technology used and the features implemented so far, we presented the results of some performance tests. The tests focused on browser compatibility, loading time and CPU/RAM requirements, highlighting how, even if the performance is already good, there is margin for improvement with regard to the initialisation time and memory requirements.

We also conducted an evaluation of the two prototypes with experts in the fields of cultural heritage and/or audio. The initial results were positive both in terms of usability (the apps were perceived as quick and responsive) and functionality (editing options and immersiveness). However, the evaluation suggested that further improvements are necessary in terms of user interface (UI), flexibility and compatibility, and showed how there is interest and potential to extend the apps with desirable features.

The results of the early evaluation are being used to revise further development of the apps and extend the set of features. The PLUGGY project will end in December 2019, and the PlugSonic apps' development until then will include the redesign of the UI and complete integration with the social platform and curatorial tool. Building on the experience with the PlugSonic apps, we also aim at making more extensive use of the 3D Tune-In Toolkit (e.g. HRTF selection, Binaural Room Impulse Response reverberation), which will be integrated in a web-based open research tool.

6 ACKNOWLEDGMENT

This work was supported by the PLUGGY project (https://www.pluggy-project.eu/), European Union's Horizon 2020 research and innovation programme, under grant agreement No 726765.

References

[1] Lim, V., Frangakis, N., Tanco, L. M., and Picinali, L., "PLUGGY: A Pluggable Social Platform for Cultural Heritage Awareness and Participation," in Advances in Digital Cultural Heritage, pp. 117–129, Springer, 2018.

[2] "PLUGGY project," https://www.pluggy-project.eu/, Accessed: 17-09-2018.

[3] Delle Monache, S., Rocchesso, D., Qi, J., Buechley, L., De Götzen, A., and Cestaro, D., "Paper mechanisms for sonic interaction," in Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, pp. 61–68, ACM, 2012.

[4] "Sonic Storytelling: Designing Musical Spaces," https://adage.com/article/on-design/sonic-storytelling-designing-musical-spaces/138028/, Accessed: 17-09-2018.

[5] Collins, P., "Theatrophone: the 19th-century iPod," New Scientist, 197(2638), pp. 44–45, 2008.

[6] Poirier-Quinot, D. and Katz, B. F., "The Anaglyph Binaural Audio Engine," in Audio Engineering Society Convention 144, Audio Engineering Society, 2018.

[7] Carpentier, T., Noisternig, M., and Warusfel, O., "Twenty years of Ircam Spat: looking back, looking forward," in 41st International Computer Music Conference (ICMC), pp. 270–277, 2015.

[8] Musil, T., Noisternig, M., and Höldrich, R., "A library for realtime 3d binaural sound reproduction in pure data (pd)," in Proc. Int. Conf. on Digital Audio Effects (DAFX-05), Madrid, Spain, 2005.

[9] Cuevas-Rodriguez, M., Gonzalez-Toledo, D., de La Rubia-Buestas, E., Garre, C., Molina-Tanco, L., Reyes-Lecuona, A., Poirier-Quinot, D., and Picinali, L., "An open-source audio renderer for 3D audio with hearing loss and hearing aid simulations," in Audio Engineering Society Convention 142, Audio Engineering Society, 2017.

[10] "WebAudio API," https://www.w3.org/TR/webaudio/, Accessed: 17-09-2018.

[11] "WebGL," https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API, Accessed: 17-09-2018.

[12] Pike, C., Taylour, P., and Melchior, F., "Delivering Object-Based 3D Audio Using The Web Audio API And The Audio Definition Model," in Proceedings of the 1st Web Audio Conference, 2015.

[13] Carpentier, T., "Binaural synthesis with the Web Audio API," in 1st Web Audio Conference (WAC), 2015.
[15] Dejardin, H. and Ronciere, E., "nouvOson website: how a public radio broadcaster makes immersive audio accessible to the general public," in Audio Engineering Society Conference: 57th International Conference: The Future of Audio Entertainment Technology – Cinema, Television and the Internet, Audio Engineering Society, 2015.

[16] "Google Omnitone," https://googlechrome.github.io/omnitone/, Accessed: 17-09-2018.

[17] Politis, A. and Poirier-Quinot, D., "JSAmbisonics: A Web Audio library for interactive spatial sound processing on the web," in Interactive Audio Systems Symposium, 2016.

[18] Çamcı, A., Lee, K., Roberts, C. J., and Forbes, A. G., "INVISO: A Cross-platform User Interface for Creating Virtual Sonic Environments," in Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 507–518, ACM, 2017.

[19] Ardissono, L., Kuflik, T., and Petrelli, D., "Personalization in cultural heritage: the road travelled and the one ahead," User Modeling and User-Adapted Interaction, 22(1-2), pp. 73–99, 2012.

[20] Zimmermann, A. and Lorenz, A., "LISTEN: a user-adaptive audio-augmented museum guide," User Modeling and User-Adapted Interaction, 18(5), pp. 389–416, 2008.

[21] Delerue, O. and Warusfel, O., "Authoring of virtual sound scenes in the context of the Listen project," in Audio Engineering Society Conference: 22nd International Conference: Virtual, Synthetic, and Entertainment Audio, Audio Engineering Society, 2002.

[22] Zimmermann, A., Lorenz, A., and Birlinghoven, S., "Listen: Contextualized presentation for audio-augmented environments," in Proceedings of the 11th Workshop on Adaptivity and User Modeling in Interactive Systems, pp. 351–357, 2003.

[23] Vayanou, M., Katifori, A., Karvounis, M., Kourtis, V., Kyriakidi, M., Roussou, M., Tsangaris, M., Ioannidis, Y., Balet, O., Prados, T., et al., "Authoring personalized interactive museum stories," in

[24] Pujol, L., Katifori, A., Vayanou, M., Roussou, M., Karvounis, M., Kyriakidi, M., Eleftheratou, S., and Ioannidis, Y., "From Personalization to Adaptivity: Creating Immersive Visits through Interactive Digital Storytelling at the Acropolis Museum," 2013.

[25] Hansen, F. A., Kortbek, K. J., and Grønbæk, K., "Mobile Urban Drama: interactive storytelling in real world environments," New Review of Hypermedia and Multimedia, 18(1-2), pp. 63–89, 2012.

[26] "Emotive," https://emotiveproject.eu/, Accessed: 17-09-2018.

[27] "Arches," https://www.arches-project.eu/, Accessed: 17-09-2018.

[28] "Faro Convention," https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/199, Accessed: 17-09-2018.

[29] "ReactJS," https://reactjs.org/, Accessed: 17-09-2018.

[30] "Redux," https://redux.js.org/, Accessed: 17-09-2018.

[31] "Plugsonic Sample," http://plugsonic.pluggy.eu/sample, Accessed: 17-09-2018.

[32] "Wavesurfer.js," https://wavesurfer-js.org/, Accessed: 17-09-2018.

[33] "Plugsonic Soundscape," http://plugsonic.pluggy.eu/soundscape, Accessed: 17-09-2018.

[34] "Redux-Saga," https://redux-saga.js.org/, Accessed: 17-09-2018.

[35] "3D Tune-In Toolkit," https://github.com/3DTune-In, Accessed: 17-09-2018.

[36] "Emscripten," https://kripken.github.io/emscripten-site/index.html, Accessed: 17-09-2018.
Table 2: Usability statement ratings for PlugSonic Sample. Percentages are given as: Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree.

The Tool always kept me informed about what is going on through appropriate feedback within reasonable time: 0% / 0% / 42.9% / 42.9% / 14.3%
The Tool speaks my language, with words, phrases and concepts familiar to me. The information appear in a natural and logical order: 0% / 0% / 14.3% / 57.1% / 28.6%
I feel the Tool supports undo and redo: 14.3% / 14.3% / 42.9% / 14.3% / 14.3%
I never have to wonder whether different words, situations, or actions mean the same thing: 0% / 0% / 0% / 28.6% / 71.4%
The Tool helps me to recognise, diagnose and recover from errors: 0% / 42.9% / 42.9% / 14.3% / 0%
The Tool prevents a problem from occurring in the first place: 0% / 0% / 71.4% / 28.6% / 0%
I feel I had to remember information from one part of the dialogue to another: 14.3% / 42.9% / 28.6% / 14.3% / 0%
The Tool supports flexibility and efficiency of use. It allow me to tailor frequent actions: 28.6% / 14.3% / 14.3% / 42.9% / 0%
The Tool contains information which is irrelevant or rarely needed: 28.6% / 71.4% / 0% / 0% / 0%
Help is easy to search, focused on the user's task, list concrete steps to be carried out, and not too large: 0% / 14.3% / 71.4% / 14.3% / 0%
The Tool support, extend, supplement, or enhance my skills, background knowledge, and expertise - not replace them: 0% / 0% / 42.9% / 57.1% / 0%
The Tool would enhance the quality of my work-life. The design is aesthetically pleasing with artistic as well as functional value: 14.3% / 42.9% / 14.3% / 14.3% / 14.3%
I feel the Tool would help me in protecting personal or private information belonging to me or my clients: 14.3% / 28.6% / 57.1% / 0% / 0%
Table 3: Usability statement ratings for PlugSonic Soundscape. Percentages are given as: Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree.

You were able to create to a great extent the soundscape: 0% / 0% / 62.5% / 37.5% / 0%
The Tool always keep me informed about what is going on through appropriate feedback within reasonable time: 0% / 12.5% / 37.5% / 50% / 0%
The Tool speaks my language, with words, phrases and concepts familiar to me. The information appear in a natural and logical order: 0% / 0% / 12.5% / 62.5% / 25%
I feel the Tool supports undo and redo: 25.0% / 37.5% / 37.5% / 0% / 0%
I never have to wonder whether different words, situations, or actions mean the same thing: 12.5% / 0% / 25% / 62.5% / 0%
The Tool helps me to recognise, diagnose and recover from errors: 0.0% / 37.5% / 50% / 12.5% / 0%
The Tool prevents a problem from occurring in the first place: 0% / 50% / 37.5% / 12.5% / 0%
I feel I had to remember information from one part of the dialogue to another: 25% / 62.5% / 12.5% / 0% / 0%
The Tool supports flexibility and efficiency of use. It allow me to tailor frequent actions: 12.5% / 12.5% / 37.5% / 25% / 12.5%
The Tool contains information which is irrelevant or rarely needed: 37.5% / 50% / 12.5% / 0% / 0%
Help is easy to search, focused on the user's task, list concrete steps to be carried out, and not too large: 14.3% / 0% / 42.9% / 28.6% / 14.3%
The Tool support, extend, supplement, or enhance my skills, background knowledge, and expertise - not replace them: 0% / 0% / 50% / 37.5% / 12.5%
The Tool would enhance the quality of my work-life. The design is aesthetically pleasing with artistic as well as functional value: 0% / 37.5% / 25% / 25% / 12.5%
I feel the Tool would help me in protecting personal or private information belonging to me or my clients: 42.9% / 14.3% / 42.9% / 0% / 0%