Sonic Foundations Survey
Earlier in 2022, as I was working on data sonifications for several of the OOI Data Nuggets, I was becoming overwhelmed by design variation. I would create a frequency-mapped version; a filter-mapped version; a synthesized and a sample-based sonification; a sonification with all three streams of data; three sonifications with separate data streams that…
Earcons as Sonification Bookends
Another takeaway from the co-design session with teachers from the Perkins School for the Blind involved the use of earcons and auditory icons with devices they use in the classroom. The main tool that struck me was LabQuest, a thermometer that sonifies temperature in real time, and that produces two beeps to signal the turning on…
Spearcons as X-axis markers in Sonifications
From early sonification development with lead-PI Amy Bower and our co-design session at Perkins School for the Blind, we discovered that sounds for location and context are essential. I started including spearcons, sped-up speech as auditory icons, to indicate x-axis ticks in the data as part of sonification prototypes.* Beyond contextual cues, there is some…
Collected Definitions of Sonification
Having taught classes on sonification prior to the NSF pilot, I have encountered many working definitions for sonification. The differences in language around the term matter, and placing definitions side-by-side reveals some of their nuances and shadings. The complexity and nuance are important, so I thought it would be a good idea to…
Using Audio Markers as Sonification Exploration Prototype
While reading “Rich Screen Reader Experiences for Accessible Data Visualization” by Zong et al., two items related to data sonification struck me. First, the literature review and the study’s co-design experience amplified the message that screen reader users desire “an overview,” followed by user exploration as part of “information-seeking goals” (Zong et al. 2022). Even though…
Kyma: Sound Design Workstation
As part of the NSF pilot, I am designing many of the data sonifications using Kyma by Symbolic Sound. Symbolic Sound was founded and is owned by Carla Scaletti and Kurt Hebel. I like to describe Kyma as a sound design workstation, a recombinant sound language, a live-performance machine, a data sonification toolkit, and a…
Audification
One of the most direct ways to sonify a data stream is audification, which is “a direct translation of a data waveform to the audible domain” (Kramer 1994). Turning data into a waveform consists of treating data points as amplitude values in an audio signal. The direct correlation of data-as-audio signal has advantages…
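To make the idea concrete, here is a minimal sketch of audification in Python using only the standard library: each data point becomes one audio sample, normalized to the 16-bit PCM range and written to a WAV file. The function name, the normalization to [-1, 1], and the 44.1 kHz sample rate are illustrative assumptions, not a fixed standard.

```python
# Minimal audification sketch: data points are treated directly as
# amplitude samples in an audio signal. All names and parameter
# choices here are illustrative assumptions.
import math
import struct
import wave

def audify(data, path="audified.wav", sample_rate=44100):
    """Map a 1-D data series directly to audio samples in a mono WAV file."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    # Rescale each data point into [-1.0, 1.0], then to signed 16-bit PCM.
    samples = [int(((2 * (x - lo) / span) - 1) * 32767) for x in data]
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(struct.pack(f"<{len(samples)}h", *samples))
    return len(samples)

# Example: audify one second of a synthetic 220 Hz oscillation standing
# in for a real data stream; a real dataset would simply replace `data`.
data = [math.sin(2 * math.pi * 220 * t / 44100) for t in range(44100)]
audify(data)
```

Because every data point maps to one sample, a dataset needs many points (tens of thousands per second at this rate) before audification yields more than a brief click; slower datasets are often looped or resampled first.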
Engaging with University of Oregon students
In January and February 2022, the entire team gave a presentation to and heard from students in Jon’s MUS 479/579 Data Sonification course at the University of Oregon. The Data Sonification course involves undergraduate and graduate students and explores data sonification broadly, including authentic datasets, sonification methods, auditory display, accessibility, data mapping, and even grant…
Co-Design Session at Perkins
In February 2022, our team held a co-design session with two teachers from Perkins School for the Blind. While many great topics and items were discussed, we had a few key takeaways from the session, included below. Students use silence. The space of silence can become as important as the sounds themselves…
Inclusive Design: Including Graph Divisions in Sonification
I held a basic assumption that sonifications should be composed only of “non-speech sound” (Sterne and Akiyama 2012), based on a few definitions of, and experiences listening to, sonification. So, I made every effort to avoid the use of speech in sonification. While working on the initial grant with PI Amy Bower, however, I questioned…