As I work on data sonifications and look ahead to fitting these sounds within contextual audio displays, I recognized the need for an outlined structure of what the final result might look and sound like. Drawing on experience creating audio displays in my Data Sonification course, mixing dialogue and sound effects in music and media, and reading about successful projects that create "data narratives" (Siu et al., 2022), I wanted to generate a prototype of scripted text and sonifications.
I am no oceanographer or science educator, so my idea for creating an audio prototype was mainly about creating a block-content structure for constructive conversation and collective feedback. For example, a structure could have intro text followed by lesson sound 1, then text introducing a second concept followed by lesson sound 2. This could set up the data explanation, which could finally be followed by the data sonification playback, and so on.
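To make the idea concrete, the block-content structure above can be sketched as an ordered list of alternating narration and sonification blocks. This is a minimal illustration only; the block labels and the `Block` type are hypothetical placeholders, not the actual prototype content.

```python
from dataclasses import dataclass

@dataclass
class Block:
    kind: str   # "narration" or "sonification" (hypothetical categories)
    label: str  # placeholder description of the block's content

# Ordered structure from the example above: intro text, lesson sound 1,
# second-concept text, lesson sound 2, data explanation, then the
# data sonification playback.
display = [
    Block("narration", "intro text"),
    Block("sonification", "lesson sound 1"),
    Block("narration", "second concept text"),
    Block("sonification", "lesson sound 2"),
    Block("narration", "data explanation"),
    Block("sonification", "data sonification playback"),
]

# Narration and sonification blocks alternate through the display,
# which is the pattern the prototype structure follows.
kinds = [b.kind for b in display]
```

A sketch like this is only a conversation starter: the point is that the ordering itself is an editable artifact that collaborators can rearrange before any audio is produced.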
Identifying a possible audio display structure through an audio prototype helped me identify types of sonic content and streamline my workflow: I targeted specific sonifications and sound chunks to embed within narration tracks. The process helped me work through the core lessons, and provided a starting point for analyzing the scripted context, the flow of the audio display, and the occurrence and breadth of the sonifications.
For the carbon net flux ocean lesson that I prototyped, I ended up creating four minutes of material across four separate, ordered tracks. The four media files are included below. The material consisted of a narrator and twelve sonifications, four data sonifications and eight short contextual sonifications, embedded within the narration. And while I didn't include natural environment sounds as bed tracks, the prototype could start a larger conversation around selecting ambient ocean sounds and determining the length and number of sonifications needed for each data nugget.
Two quick notes on the media files below: the text content may contain errors (it's a prototype, after all), and I did a quick mix without mastering the audio. Headphones are best for listening.
Siu, Alexa, Gene S-H Kim, Sile O’Modhrain, and Sean Follmer. “Supporting Accessible Data Visualization Through Audio Data Narratives.” In CHI Conference on Human Factors in Computing Systems, 1–19. New Orleans LA USA: ACM, 2022. https://doi.org/10.1145/3491102.3517678.
by Jon Bellona