
Sound Design

Sonification involves the field of sound design, which is the craft of developing sounds to meet a variety of needs. Sound design is common in the video game and media industries, which require the creation of sound effects, human sounds, ambiance, and dialogue. Data sonification involves mapping data values to sound parameters, and we often have to think like designers about how the data is communicated. If we are mapping amplitude, for example, should the sound get louder or quieter? How loud or quiet should the sound become? What is the range of amplitude changes? Should the changes include a threshold where there is more non-linear movement (bursts of amplitude) to highlight extreme ranges in the data? How will listeners perceive the sound in relation to the underlying data?
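
To make those questions concrete, here is a minimal sketch of one possible amplitude mapping (in Python; the function names and the 0.8 threshold are my own illustrative choices, not from this post). The mapping is linear across most of the normalized data range and steepens above the threshold, so extreme values produce audible bursts of amplitude.

```python
# Hypothetical sketch: mapping normalized data values (0.0-1.0) to amplitude.
# The names and the 0.8 threshold are illustrative assumptions.

def data_to_amplitude(value, min_db=-40.0, max_db=0.0, threshold=0.8):
    """Map a normalized data value to a gain in decibels.

    Below `threshold` the mapping is linear; above it the slope steepens
    so that extreme values produce audible "bursts" of amplitude.
    """
    if value < threshold:
        # Linear segment covering most of the data range (75% of the dynamic range).
        db = min_db + (max_db - min_db) * (value / threshold) * 0.75
    else:
        # Steeper segment: the top 20% of the data uses the last 25% of
        # the dynamic range, emphasizing extremes.
        frac = (value - threshold) / (1.0 - threshold)
        db = min_db + (max_db - min_db) * (0.75 + 0.25 * frac)
    return db

def db_to_linear(db):
    """Convert decibels to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

# Example: a small data series already scaled to 0-1.
series = [0.1, 0.4, 0.7, 0.85, 0.95]
gains = [db_to_linear(data_to_amplitude(v)) for v in series]
print(gains)
```

Whether the mapping should be linear, stepped, or thresholded like this is itself a design decision, and the answer depends on what the data needs to communicate.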

These questions are part of the design, and we often have to blend audio concepts and sound parameters with field-specific knowledge into a cohesive sound that broadly communicates the model, phenomenon, or experiment.

Because there is a lot to sound design, let's unpack the process through one example: simulating the movement of a static recording in space. I won't use any fancy tools for the example, just basic audio effects, automated parameters, and human perception. As a sound moves away from us in space, we hear it as further away. But what has changed about the sound? Our perception of it has changed, based upon how the sound interacts with the environment: the sound becomes quieter, less bright, and more diffuse. By changing various parameters of the sound over time, we can begin to simulate the sound moving away from us, because our ears and brains draw on lived experience to signal that a sound is moving in space.

Some aspects of sound design important to this example are:

  • When a sound becomes less bright, we perceive it as further away.
  • The balance of direct versus reflected sound alters the perceived distance.
  • When a sound becomes quieter, it recedes into the background.

The designed audio example uses a close-microphone recording of human feet walking on concrete. The original recording doesn't change distance; it simply captures feet on concrete (listen to Sound A).

To move the sound, we have to automate multiple parameters over time: volume, EQ (changing the brightness of the sound over time), various reverbs (reflections of sound in a space, i.e. the filtered delays of a sound), and EQs of the reverb. Specifically, I will automate the sound's gain (perceived volume), alter the cutoff frequency of a low-pass filter, adjust the amount of pre-fader bus sends to various reverb units, and adjust the gain of the reverb units (listen to Sound B).
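
As a rough illustration of that kind of automation (a sketch, not the actual session described above), the following Python code assumes a mono 16-bit WAV file and ramps the gain down while lowering the cutoff of a simple one-pole low-pass filter over the length of the recording. The reverb sends and reverb EQ are omitted for brevity, and the file name and parameter ranges are placeholders.

```python
# Hypothetical sketch: automating gain and a low-pass filter cutoff over time
# to push a static recording "away" from the listener. Reverb sends are omitted;
# the filename "footsteps.wav" and the parameter ranges are assumptions.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("footsteps.wav")          # mono, 16-bit WAV assumed
x = x.astype(np.float64) / 32768.0             # normalize to -1.0..1.0
n = len(x)
t = np.linspace(0.0, 1.0, n)                   # normalized time: 0 = start, 1 = end

# Gain automation: ramp from 0 dB down to -30 dB as the sound "recedes".
gain = 10.0 ** ((t * -30.0) / 20.0)

# Cutoff automation: brightness falls from 12 kHz to 1 kHz over the file.
cutoff = 12000.0 + t * (1000.0 - 12000.0)

# One-pole low-pass filter with a per-sample (time-varying) coefficient.
alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
y = np.zeros(n)
state = 0.0
for i in range(n):
    state = state + alpha[i] * (x[i] - state)  # low-pass: smooth toward the input
    y[i] = state * gain[i]                     # apply the gain envelope

wavfile.write("footsteps_receding.wav", fs, (y * 32767.0).astype(np.int16))
```

In a DAW, the same idea is drawn as automation curves on the track's fader, filter plugin, and send knobs rather than computed per sample, but the perceptual result is the same: the sound gets quieter, duller, and (with the reverb sends) more diffuse over time.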

Media

Sound A: Recording of human footsteps on concrete.

Sound B: Sound design of Sound A, a recording of human footsteps on concrete, to simulate the feet moving away from the listener in space.


Learn More

Learn more about the parameters of sound.

by Jon Bellona
