Posted on 22nd October 2012
In an article published by the statistics magazine Significance, Nick Bearman and his colleague Ethan Brown have considered the importance of sound on our ability to interpret data.
In everyday life we continually use our ears to discern data. Even a process as simple as making a cup of tea will involve listening and reacting to the increasing rumble of a boiling kettle and the changing pitch of flowing water as the cup fills.
Instruments such as the parking sensor on a modern car use the frequency of beeping noises to depict distance, and the Geiger counter represents radiation levels with audible clicks – a method which remains popular a century after its invention.
Why then, the authors ask, should we not also use our ears to interpret more complex data?
The article argues that we process auditory information differently to visual information and that this difference can give us new insights into our analysis of data; our ears can distinguish pitch, rhythm, loudness, spatial location and complex timbres and can do so with incredible resolution in time. Messrs Brown and Bearman believe that this ability can reveal patterns that are hidden from our eyes.
Both researchers specialise in Sonification, an emerging field of research that is the audio equivalent of visualisation. It explores the opportunities for using our powerful auditory sense to understand the world’s variation and uncertainty.
They believe that Sonification can be used to communicate the general shape and structure of data through the audio equivalent of scatterplots and boxplots. They also hope that it can help to explore complex structures in time series – becoming sound that varies over time, also known by another term: music.
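The basic idea behind such an audio plot can be sketched in a few lines of code. The example below is a minimal illustration, not the authors' actual method: each value in a data series is mapped linearly onto a pitch, so that a rising trend becomes a rising sequence of tones, and the result can be written out as a short audio file. The function names (`value_to_frequency`, `render_wav`) and the chosen frequency range are hypothetical choices for this sketch.

```python
# A minimal sonification sketch (illustrative only): map each value in a
# data series to a pitch, so rising data sounds like rising tones.
import math
import struct
import wave

def value_to_frequency(values, fmin=220.0, fmax=880.0):
    """Linearly rescale data values onto a frequency range in Hz."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat series
    return [fmin + (v - lo) / span * (fmax - fmin) for v in values]

def render_wav(freqs, path, note_seconds=0.2, rate=8000):
    """Write each frequency as a short sine-wave tone to a mono WAV file."""
    frames = bytearray()
    for f in freqs:
        for i in range(int(note_seconds * rate)):
            sample = int(20000 * math.sin(2 * math.pi * f * i / rate))
            frames += struct.pack('<h', sample)  # 16-bit little-endian PCM
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# An upward trend in the data becomes an ascending series of pitches.
freqs = value_to_frequency([1, 2, 3, 5, 8])
render_wav(freqs, 'trend.wav')
```

Playing the resulting file lets the listener hear the shape of the series directly, which is the simplest form of the "audio scatterplot" idea described above.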