Impact of Different Visualizations in Graphical Interpolators
ABSTRACT
Graphical interpolation systems provide a simple mechanism for the control of sound synthesis systems by providing a level of abstraction above the parameters of the synthesis engine, allowing users to explore different sounds without awareness of the synthesis details. While a number of graphical interpolator systems have been developed over many years, with a variety of user-interface designs, few have been subject to user evaluations. We present the testing and evaluation of alternative visualizations for a graphical interpolator, in order to establish whether the visual feedback provided through the interface aids the navigation and identification of sounds with the system. The testing took the form of comparing the users' mouse traces, showing the journey they made through the interpolated sound space when different visual interfaces were used. Sixteen participants took part and a summary of the results is presented, showing that the visuals provide users with additional cues that lead to better interaction with the interpolators.

CCS CONCEPTS
• Human-centered computing~Visualization design and evaluation methods • Human-centered computing~Usability testing • Human-centered computing~Empirical studies in HCI

1 Introduction
The fundamental problem when designing sounds with a synthesizer is how to configure the often large number of synthesizer parameters to create a certain audio output, i.e. how to translate sonic intent into parameter values. Although having direct access to every parameter (one-to-one mapping) gives fine control over the sound, it can also result in a very complex sound design process. Alternatively, it is possible to map a smaller number of control parameters to a larger number of synthesizer parameters (few-to-many mapping) in order to simplify the process. Particular states of the synthesis parameters ("presets") are associated with different control values and then, as these control values are changed, new synthesizer parameter values are generated by interpolating between the values of the presets. This provides a mechanism for exploring a defined sound space, constrained by the preset sounds and the changes of the control parameters.
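As a concrete illustration of such preset interpolation, the sketch below derives a full synthesizer parameter set from a 2-D control position using inverse-distance weighting. The systems discussed in this paper each use their own interpolation model, so this weighting scheme and all names here are illustrative assumptions only.

```python
import numpy as np

def interpolate_presets(cursor, preset_positions, preset_params, power=2.0, eps=1e-9):
    """Few-to-many mapping: a 2-D cursor position drives a full synth parameter set."""
    distances = np.linalg.norm(preset_positions - cursor, axis=1)
    weights = 1.0 / (distances + eps) ** power   # nearer presets dominate
    weights /= weights.sum()                     # normalise to a convex combination
    return weights @ preset_params               # interpolated parameter vector

# Four presets at the corners of a unit square, each holding eight parameters.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
params = np.random.default_rng(0).random((4, 8))
print(interpolate_presets(np.array([0.25, 0.75]), positions, params))
```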
A number of such interpolation systems have been proposed previously and these can be categorized based on whether the control mechanism is via some form of graphical interface or some other medium. Those that are of interest here are those that offer a graphical representation that allows the control of a visual model. An open question is whether such visuals aid users in reaching their intended sonic output, or if the visual elements distract from this intention. Moreover, if the visuals do aid the process, how much do they help, and what visual cues will best serve the user when using the interface for sound design tasks?

2 Graphical Interpolation Framework
In order to evaluate different visualizations, an interpolation system was needed that permitted the visual representation for the interpolation model to be modified, whilst leaving all other aspects the same. The previously created graphical interpolation framework [9] was used to facilitate comparative user testing, where only the interpolator's visual representation was changed.
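The framework's internals are documented in [9] and are not reproduced in this excerpt; the hypothetical sketch below illustrates only the separation it exploits, where the visual layer can be swapped while the interpolation model underneath stays fixed. The three views mirror the interfaces evaluated later (no visualization, preset locations only, full nodes); all class and method names are assumptions.

```python
from abc import ABC, abstractmethod

class InterpolatorView(ABC):
    """Visual layer only; the interpolation model underneath is unchanged."""
    @abstractmethod
    def draw(self, preset_positions):
        ...

class BlankView(InterpolatorView):        # Interface 1: no visualization
    def draw(self, preset_positions):
        pass                              # nothing drawn; users navigate by ear

class LocationsView(InterpolatorView):    # Interface 2: preset locations only
    def draw(self, preset_positions):
        for x, y in preset_positions:
            print(f"marker at ({x:.2f}, {y:.2f})")

class FullNodesView(InterpolatorView):    # Interface 3: full node visualization
    def draw(self, preset_positions):
        for x, y in preset_positions:
            print(f"node with interpolation extent at ({x:.2f}, {y:.2f})")

LocationsView().draw([(0.2, 0.8), (0.7, 0.3)])
```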
The user testing took the form of a sound design task with a subtractive synthesizer, where the participants were asked to match a given sound which had a fixed target location on the interpolator, unknown to the participants. Each interpolator interface was populated with different preset sounds, with all of the presets being created from the same base patch, generating some sonic commonalities between them. However, the starting sounds for each interpolation space interface were different.

Figure 2: Interpolation Space Created with Max nodes Object
The layout of the nodes and the target location within the interpolation space was the same in each case, but, so that this was not obvious to the participants, the interface was rotated through 90° clockwise for each interpolator. To simulate a real sound design scenario, the participants were given only three opportunities to hear the target sound before commencing the test and none after that. Participants therefore had
to retain an idea of the required sound in their “mind’s ear”. All
participants completed the same sound design task with each
interface, but each interface produced different sonic outputs from
different presets. To minimise bias through learning, the order in
which the interfaces were presented was randomised. Each test
lasted a maximum of ten minutes with participants able to stop the
test early if they felt the task had been completed by pressing a
button to register their estimated target location. All of the users'
interactions with the interfaces were recorded for analysis.
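Since the trace analysis reported below depends on these recordings, a minimal sketch of the kind of interaction logging this implies is given here; the CSV format and all names are assumptions for illustration, not the study's actual tooling.

```python
import csv, time

class InteractionLogger:
    """Minimal logger for timestamped cursor events, so that mouse traces
    can be reconstructed and analysed after each trial."""
    def __init__(self, path):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["t", "x", "y", "event"])
        self.t0 = time.monotonic()

    def log(self, x, y, event="move"):
        # Store time relative to trial start alongside the cursor position.
        self.writer.writerow([time.monotonic() - self.t0, x, y, event])

    def close(self):
        self.file.close()
```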
Significance was confirmed (F(1, 15) = 3.132, p = 0.05) with a repeated-measures ANOVA. This indicates that additional visual cues on the interface encourage wider exploration of the interpolation space, even though the output and goal of the test was sonically (not visually) based.
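For readers wishing to run a comparable analysis, the following is a minimal sketch of a repeated-measures ANOVA in Python with statsmodels. The excerpt does not state the dependent measure or raw data, so the "exploration" values and column names are placeholders; note that a two-level within-subject factor with sixteen participants is what yields the F(1, 15) degrees of freedom reported above.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one exploration measure per participant
# per condition (visual cues absent vs. present).
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "participant": np.repeat(np.arange(16), 2),
    "visuals":     ["absent", "present"] * 16,
    "exploration": rng.random(32),  # replace with the measured values
})
res = AnovaRM(data, depvar="exploration",
              subject="participant", within=["visuals"]).fit()
print(res.summary())
```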
The locations actually selected by participants as their target sounds were also plotted to see if there were any trends resulting from the different interfaces. Figure 7 shows the selected target locations for all the participants, for Interfaces 1–3, left to right.
Figure 7: Participants Selected Target Locations by Interface
The results in Figure 7 show that for Interface 1 (no visualization) there is a fairly wide dispersal of the locations selected as the target. From the correct target location, the standard distance deviation (SDD) was calculated as 0.300 units for Interface 1. For Interface 2 (locations only), there was a tighter distribution of target locations, with an SDD of 0.267 units. Finally, for Interface 3 (full nodes) there is an even stronger localisation, with the SDD reducing still further to 0.187 units. This indicates that as the interface provides more visual detail, it improves the users' ability to identify the intended target.
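The SDD values above can be recomputed from the plotted locations. A minimal sketch follows, computing the standard distance deviation about the known target location (as the text implies) in the spirit of the spatial statistics in [10]; the sample coordinates are hypothetical.

```python
import numpy as np

def standard_distance_deviation(points, centre):
    """Standard distance deviation of 2-D points about a fixed centre,
    here the true target location rather than the mean centre."""
    points = np.asarray(points, dtype=float)
    sq_dist = np.sum((points - centre) ** 2, axis=1)  # squared distances to centre
    return np.sqrt(sq_dist.mean())

# Example with hypothetical selected-target locations for one interface:
selected = [(0.42, 0.58), (0.55, 0.40), (0.48, 0.52), (0.60, 0.45)]
print(standard_distance_deviation(selected, centre=np.array([0.5, 0.5])))
```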
Comparing by ear the user-selected targets with the actual target sounds showed that in all cases there were sonic differences, but as the selected locations got closer to the true location on the interpolator, as expected these became less distinguishable.
5 Discussion
The testing undertaken indicates that users tend to follow three phases when finding a sound with a graphical interpolator system (exploration, localisation and refinement):

1. Fast space exploration to identify areas of sonic interest
2. Localisation on regions of interest, occasionally checking that other areas do not produce sonically better results
3. Refinement and fine-tuning in a localised area to find the ideal result

In the first phase the users make large, fast moves as they explore the space. During the second phase the speed tends to reduce as they localise on specific areas of interest. In this phase, though, confirmatory moves have been observed, when the user quickly checks that there are no other areas that may produce better results. These tend to be done at a moderate speed, often in multiple directions. In the final phase the user refines the sound with small, slow movements as they hone in on a desired location. These phases appear to be present regardless of the visual display that is presented, with similar phases being observed with all three of the interfaces tested.
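These descriptions suggest that recorded traces could be segmented into the three phases automatically by movement speed. The sketch below is a hypothetical illustration of such a labelling; the speed thresholds are illustrative assumptions, not values derived from the study.

```python
import numpy as np

def label_phases(xy, t, fast=0.5, slow=0.1):
    """Label each mouse-trace step as exploration / localisation / refinement
    by speed alone, a simplification of the behaviour described above."""
    xy, t = np.asarray(xy, float), np.asarray(t, float)
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.diff(t)
    return np.where(speed > fast, "exploration",
                    np.where(speed > slow, "localisation", "refinement"))

# Example on a short synthetic trace sampled at 10 Hz:
trace = [(0.0, 0.0), (0.4, 0.3), (0.5, 0.35), (0.52, 0.36), (0.521, 0.361)]
times = [0.0, 0.1, 0.2, 0.3, 0.4]
print(label_phases(trace, times))  # large fast moves first, tiny slow moves last
```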
From examination of the mouse traces, the visual feedback presented by the different interfaces does appear to affect how users interact with the systems. When no visualization is provided, the users were effectively moving "blind" and tended to just make random movements within the space initially. When the preset locations were provided, although the users were not aware of where or how the interpolation was being performed, the provided visual locations encouraged the users to investigate these points and so explore the defining locations. The full interface not only shows the location of the defining sounds, but also indicates regions of interest to the users (node intersections for this interpolation model), where there may be interesting sounds. This seems to focus users' exploration on these areas of interest and results in the users getting closer to the target location and so "better" sonic results.

6 Conclusions
The identification of three distinct phases of use during the testing of the graphical interpolators is of significant interest, as it suggests that users interact with the interfaces differently at different stages during their journey through the interpolation space. Better understanding of user behaviour with these systems will in future work allow the design of new interfaces that provide users with visuals that further facilitate the different phases of a sound design task.

A number of different visual models have been previously presented for graphical interpolators [1-8], each of these using very different visualizations. Given the suggested importance of the visual feedback provided by each interface, it will be important in future work to evaluate the suitability and relative merits of each through further user testing.

ACKNOWLEDGMENTS
We would like to thank all the participants from Bournemouth University and the University of Southampton who took part in the user testing of the systems presented.

REFERENCES
[1] J.J. van Wijk & C.W. van Overveld (2003). Preset based interaction with high dimensional parameter spaces. In Data Visualization 2003 (pp. 391-406). Springer US. DOI: https://doi.org/10.1007/978-1-4615-1177-9_27
[2] J.F. Allouis (1982). The SYTER project: Sound processor design and software overview. In Proceedings of the 1982 International Computer Music Conference (ICMC). Ann Arbor, MI: Michigan Publishing, University of Michigan Library.
[3] M. Spain & R. Polfreman (2001). Interpolator: a two-dimensional graphical interpolation system for the simultaneous control of digital signal processing parameters. Organised Sound, 6(2), 147-151. DOI: http://dx.doi.org/10.1017/S1355771801002114
[4] R. Bencina (2005). The metasurface: applying natural neighbour interpolation to two-to-many mapping. In Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME) (pp. 101-104). National University of Singapore.
[5] O. Larkin (2007). INT.LIB – A graphical preset interpolator for Max MSP. In Proceedings of the International Computer Music Conference (ICMC), 2007.
[6] nodes. Max Reference, Cycling '74, 2019.
[7] C. Drioli, P. Polotti, D. Rocchesso, S. Delle Monache, K. Adiloglu, R. Annies & K. Obermayer (2009). Auditory representations as landmarks in the sound design space. In Proceedings of the Sound and Music Computing Conference.
[8] M. Marier (2012). Designing mappings for musical interfaces using preset interpolation. In Proceedings of the Conference on New Interfaces for Musical Expression (NIME 2012).
[9] D. Gibson & R. Polfreman (2019). A framework for the development and evaluation of graphical interpolation for synthesizer parameter mappings. In Proceedings of the Sound and Music Computing Conference, 2019. DOI: https://doi.org/10.5281/zenodo.3249366
[10] A. Mitchell (2005). The ESRI Guide to GIS Analysis, Volume 2: Spatial Measurements & Statistics. ESRI Press.