I'm a huge fan of the practice of sonification. As a musician and science lover, I find it captivating; so anytime I catch it in the news I get very excited.
What is sonification?
"Sonification is the use of non-speech audio to extract information from data. It represents the sound analogue to graphical visualization. The method is applied in several disciplines from economy to medicine to physics."
How is it useful?
I found a wonderful explanation of the potential of this technology in the same paper quoted above, published back in 2005.
"Sonification offers the chance to detect structures in the data sets that have been hidden to the methods applied so far. Data analysis through sonification might especially be useful for displaying results depending on multiple parameters and/or belonging to higher space-time dimensions."
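At its simplest, sonification is parameter mapping: some dimension of the data drives some dimension of sound. As a toy illustration (my own sketch, not code from any of the papers quoted here; the function name and frequency range are arbitrary choices), here is a Python snippet that maps a data series to a sequence of pure tones:

```python
import numpy as np

def sonify(data, f_low=220.0, f_high=880.0, note_dur=0.25, rate=44100):
    """Map each data point to a short pure tone: low values -> low pitch."""
    d = np.asarray(data, dtype=float)
    # Normalize the series to [0, 1], then scale into the chosen pitch band.
    norm = (d - d.min()) / (d.max() - d.min())
    freqs = f_low + norm * (f_high - f_low)
    t = np.arange(int(note_dur * rate)) / rate
    # One sine tone per data point, concatenated into a single audio buffer.
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# A rising-then-falling series becomes a rising-then-falling melody.
audio = sonify([1, 3, 7, 12, 8, 4, 2])
```

Feed `audio` to any sound library (or write it to a WAV file) and the data plays back as a melody; the ear picks out the peak at the fourth note immediately.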
Here is the relevant current event:
News out of the University of Michigan applauds the new sonification efforts of doctoral student Robert Alexander.
"Listen to solar storm activity in new sonification video"
"The researcher who created it is Robert Alexander, a University of Michigan design science doctoral student. Alexander is a composer with a NASA fellowship to study how representing information as sound could aid in data mining."
-Nicole Casal Moore University of Michigan
Video Credit: University of Michigan / Robert Alexander
Released by the University of Michigan; found via Space.com.
I really don't want to put the team down, but I think the presentation of information in this article is misleading. The end product is both fascinating and useful, yes. The article, however, seems to suggest that sonification as applied to current science is a novel thing.
The author quotes Jim Raines, a lead mission operations engineer in U-M's Space Physics Research Lab.
"Robert is giving us another research tool," Raines said. "We're used to looking at wiggly-line plots and graphs, but humans are very good at hearing things. We wonder if there's a way to find things in the data that are difficult to see."
This kind of talk is heavily dumbed down, which seems strange to me. Human pattern-seeking tendencies do indeed let us spot some anomalies more quickly than automated data crunching can.
Alexander's methodology, or perhaps the application of the process to new subject matter, may be a novel development, but sonification has been around at least since the invention of the Geiger counter, as the author of the article mentions.
In this video from late 2010, "ptolemy2000" shows us an animation set to the sonification of an LHC collision at CERN. It's a very exciting crescendo. All the sound is data translation, except for the filming of the control center at the beginning (which is a little high-pitched).
"Sonifying the LHC at CERN"
Video Credit: CERN/LHC/ ptolemy2000
Wonderful, isn't it?
In a paper released in 2010, CERN describes at length the sonification used for data analysis in experiments with the Large Hadron Collider (LHC).
"...[The particle beam] oscillation frequencies range from a few tens of Hz to a few kHz, therefore they are audible without any further processing. With this sonification, many details of beam dynamics can be monitored by listening, in parallel to standard observation usually done in the frequency domain by performing real-time Fourier analysis of the beam signals." 
(20 Hz to 20 kHz is the audible frequency range of an undamaged human ear.)
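The "real-time Fourier analysis" mentioned in the quote is the standard way to watch those oscillations in the frequency domain, in parallel with listening. A rough sketch of the idea in Python (my own toy example; the synthetic 440 Hz oscillation stands in for a real beam frequency):

```python
import numpy as np

rate = 8000                       # samples per second
t = np.arange(rate) / rate        # one second of signal
# Synthetic "beam position" readout: a 440 Hz oscillation buried in noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(rate)

# Frequency-domain monitoring: a real-input Fourier transform of the signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
peak = freqs[np.argmax(spectrum)]  # dominant oscillation frequency, in Hz
```

Because the oscillation already sits in the audible band, the same samples could be sent straight to a loudspeaker: the FFT display and the listener's ear are monitoring the very same structure.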
They later explain the setup as experienced by the scientists monitoring the audio signal. Personally, this blows my mind.
This is a visualization for the following description:
Scheme of the sonification:
"The listener is virtually placed in the center, where the beams collide. The two read-out chambers are situated to the left and right hand side. The strings in the center of the read-out chamber are shorter and higher pitched. The tracks start playing in the point of collision and evolve simultaneously to the left and right hand side. The volume of the sounds represents the charge deposit of the electrons. And finally, in order to not confuse tracks that are close to each other, they sound slightly different - as different instruments of an orchestra. If more electrons are hitting wires within a short time, the sounds overlap and auditory grouping happens. Thus we hear a continuous and coherent sound for each track rather than single tones for each single hit."
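To make that mapping concrete, here is a toy rendering of the scheme in Python (entirely my own sketch, not the CERN team's code; the frequency range, hit parameters, and overtone trick are illustrative choices): each hit becomes a tone whose pitch follows radial position, whose volume follows charge deposit, and whose timbre depends on the track.

```python
import numpy as np

RATE = 44100  # audio sample rate

def hit_tone(radial_pos, charge, track_id, dur=0.2):
    """Render one detector hit as a tone.

    radial_pos in [0, 1]: 1 = chamber center (shorter "string" -> higher
    pitch, per the description above); charge sets the volume; track_id
    picks a slightly different timbre so nearby tracks stay distinct.
    """
    freq = 200.0 + radial_pos * 800.0            # central hits sound higher
    t = np.arange(int(dur * RATE)) / RATE
    tone = np.sin(2 * np.pi * freq * t)
    # Track-dependent overtone gives each track its own "instrument".
    tone += 0.3 * np.sin(2 * np.pi * freq * (2 + track_id % 3) * t)
    return charge * tone / np.abs(tone).max()    # volume encodes charge deposit

# Two simultaneous hits from different tracks: summing them is the
# overlap that produces the "auditory grouping" the paper describes.
a = hit_tone(0.9, charge=1.0, track_id=0)
b = hit_tone(0.4, charge=0.5, track_id=1)
mixed = a + b
```

In the real system many hits per track overlap in time, so each track fuses into one continuous voice rather than a string of beeps, exactly the grouping effect the quote describes.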
Related to sonification is the processing of "infrasound" into an audible signal. Infrasound is genuine acoustic signal; its frequencies simply fall below the range we can hear. As with the techniques shown above, the data is processed so that we can hear what we otherwise couldn't.
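The simplest version of that processing is pure speed-up, sometimes called audification: replay the samples faster, and every frequency scales by the same factor. A quick Python sketch (my own; the 10 Hz tone and 50x factor are arbitrary choices) shows a sub-audible 10 Hz signal landing at an audible 500 Hz:

```python
import numpy as np

rate = 1000                         # Hz: plenty to record a 10 Hz tone
t = np.arange(60 * rate) / rate     # 60 seconds of "infrasound" recording
infra = np.sin(2 * np.pi * 10 * t)  # 10 Hz, below the ~20 Hz hearing floor

# Audification by speed-up: keep the samples, declare a 50x higher
# playback rate. All frequencies scale by 50, so 10 Hz -> 500 Hz, and
# the 60 s recording compresses into 1.2 s of audible sound.
speedup = 50
fast_rate = rate * speedup

# Confirm the shift: dominant frequency at the new playback rate.
spec = np.abs(np.fft.rfft(infra))
freqs = np.fft.rfftfreq(len(infra), d=1 / fast_rate)
peak = freqs[np.argmax(spec)]       # 500.0 Hz
```

This is the trick behind hearing earthquakes and solar wind data: the recordings are hours or days long, so speeding them up both shifts them into the audible band and compresses them to a listenable length.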
I've written about this technique a couple times before. Once on infrasound resonating the eyeball to create visual artifacts reported as ghosts:
[ Neil DeGrasse Tyson and Vic Tandy on Ghosts and Infrasound ]
And another bit, more recently, here:
[ Audio: That's Not Thunder? The Seismic Waves of Japan's 9.0 Earthquake ]
Alberto de Campo, Natascha Hormann, Harald Markum, Willibald Plessas, & Bianka Sengi (2005). Sonification of lattice data: The spectrum of the Dirac operator across the deconfinement transition. Proceedings of Science, PoS: LAT2005-152.
Katharina Vogt, Robert Höldrich, David Pirrò, Martin Rumori, Stefan Rossegger, Werner Riegler, & Matevž Tadel (2010). A Sonic Time Projection Chamber: Sonified Particle Detection at CERN. International Conference on Auditory Display. ISBN: 0-9670904-3-1.