
Sound sense

Birdsong generation by Mathematica and Matlab (left) and Rasmus Ekman's Coagula Industrial-Strength Color-Note Organ (right). Background painting: Naumann's 1905 'Naturgeschichte der Vögel Mitteleuropas', Hamburg University Faculty of Biology public domain collection.

Ever since Al Jolson first spoke in The Jazz Singer, few have doubted that sound adds a valuable dimension to visual media. But, given the seamless integration of sound and vision in entertainment, it's perhaps odd that sound as a channel for practical data remains largely limited to end conditions: computerised bleeps, such as the 'battery low' warning on a mobile phone, that tell us only when some state has been reached. Yet there are familiar examples of sound used to monitor data: the sinister rattle of the Geiger counter (a convenient accident of the physics of the detector); the telephone speaking clock; or the audible output of a hospital ECG machine. This is data sonification.

Unsurprisingly, the possibilities for sonification have been mined most thoroughly in applications for the visually impaired. Talking digital gadgets - clocks, thermometers, barometers - are well-established hardware technology for the home, but it is less well known that scientific equivalents exist. A classic example was the ULTRA system (Universal Laboratory Training and Research Aid) devised for blind science students by professors David Lunney and Robert Morrison. ULTRA, a data acquisition computer, could be interfaced to give speech readouts from laboratory instruments such as pH probes and resistance thermometers.

Location devices (ultrasonic 'sonar canes' of varying sophistication) also lead into some interesting research territory. It rather dates me that I remember Dr Leslie Kay's 'Sonic Torch' being featured on BBC TV's Tomorrow's World when I was at school. Still going strong as KASPA - Kay's Advanced Spatial Perception Aid Technology - this is the classic sonar approach, using a bat-like frequency sweep to return detailed textural information. Another sonar device, the Sonic Pathfinder by Perceptual Alternatives, Melbourne, uses a headset with multiple transducers to give both forward and sideways detection, along with microprocessor analysis to prioritise the audible warning for the most immediate hazard. Sonar, given its steep learning curve, hasn't yet achieved the popularity of low-tech approaches such as the long cane and the guide dog, but advances in computing and cognitive science may lead to equivalents that are more intuitive to users. According to Dr Robert Massof's 2003 summary paper, Auditory Assistive Devices for the Blind, many blind people perceive their environment in terms of 'auditory flow fields' - the way sound is modulated by the surroundings. Sonified spatial information based on this model could involve software-assigned virtual objects: for instance, bleeping beacons, with filtering to enhance the 'head-related transfer functions' (i.e. the effects of the head and external ears) that help listeners localise sounds.

Another approach to location-finding, image sonification, is the basis of The vOICe, developed by Dutch physicist Dr Peter Meijer (the typography is to emphasise 'Oh I See'). It works on input from a spectacle-mounted digital camera, or even a mobile phone camera. The software sweeps the image with a vertical scan line, and sonifies features in the scan by representing vertical position as pitch, horizontal position as time within the sweep, and brightness as volume. As with sonar, this takes serious learning for anything more than simple geometrical objects, although, like all learning, it's partly unconscious. One wearer, after several months, reported a sudden experience of seeing - literally - depth in the kitchen sink and around her house; the possibility that neural plasticity can evoke this spontaneous synaesthesia (cross-talk between hearing and vision) is one intriguing aspect of Meijer's work.
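To make the mapping concrete, here is a toy sketch in Python (assuming NumPy). It is not The vOICe's actual implementation - just an illustration of the column-by-column scan Meijer describes, with row position mapped to pitch and pixel brightness to loudness; the frequency band and sweep time are my own arbitrary choices.

```python
import numpy as np

def sonify_image(img, sweep_time=1.0, fs=22050, f_low=200.0, f_high=3000.0):
    """Toy vOICe-style scan: sweep the image left to right, one column per
    time slice; row position sets pitch, pixel brightness sets loudness.

    img: 2D greyscale array with row 0 at the top and values in [0, 1].
    Returns a mono signal normalised to [-1, 1].
    """
    rows, cols = img.shape
    samples_per_col = int(fs * sweep_time / cols)
    t = np.arange(samples_per_col) / fs
    freqs = np.linspace(f_high, f_low, rows)             # top rows -> higher pitch
    tones = np.sin(2 * np.pi * np.outer(freqs, t))       # one sinusoid per row
    slices = [img[:, c] @ tones for c in range(cols)]    # brightness-weighted mix
    signal = np.concatenate(slices)
    peak = np.abs(signal).max()
    return signal / peak if peak else signal
```

A bright diagonal stripe, for example, comes out as a tone gliding steadily in pitch across the one-second sweep.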

Dr Meijer's support website, www.seeingwithsound.com, is worth visiting for its Java demonstration of The vOICe. Beyond the main application, The vOICe can be used to demonstrate various auditory effects such as Shepard tones, the illusion of an ever-ascending scale. Another feature to play with is the sonification of (x,y) function graphs: that is, sounding a tone where time is the x-axis and pitch the y-axis. You can equally do this with some mathematics packages. In Mathematica, this is done with Play[f, {t, 0, tmax}] - a direct analogue of its 2D graph plotting function Plot[f, {x, xmin, xmax}]. Matlab has a similar construct, sound(y, Fs), where y is the vector of function values and Fs an optional sample frequency. To a blind user, however, such output isn't very informative in isolation. As you can hear with the demo of The vOICe, it's easy enough to get a qualitative impression of, say, a function being sinusoidal. But if you can't see the axes, there's no indication of the actual values.
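For readers with neither package to hand, the same x-as-time, y-as-pitch mapping is easy to imitate. The sketch below is only a rough analogue of Play and sound, written against NumPy and the standard-library wave module; the frequency band, duration and output filename are arbitrary choices of mine.

```python
import wave
import numpy as np

def sonify_function(f, x_min, x_max, duration=3.0, fs=44100,
                    f_low=220.0, f_high=880.0, path="graph.wav"):
    """Sonify y = f(x): x becomes time, y becomes pitch, written out as a WAV."""
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    x = np.linspace(x_min, x_max, t.size)
    y = f(x)
    span = np.ptp(y)
    y_norm = (y - y.min()) / span if span else np.zeros_like(y)
    freq = f_low + (f_high - f_low) * y_norm        # map y onto a pitch band
    phase = 2.0 * np.pi * np.cumsum(freq) / fs      # integrate frequency to phase
    samples = (np.sin(phase) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(fs)
        w.writeframes(samples.tobytes())

# A sine curve is heard as a tone that smoothly rises and falls in pitch.
sonify_function(np.sin, 0.0, 4.0 * np.pi)
```

As noted above, the resulting tone conveys the shape of the curve but nothing about the actual axis values.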

However, many development tools enable blind users to read quantitative information from sonified graphs. For instance, Joshua Miele of the Smith-Kettlewell Eye Research Institute, San Francisco, has written a Matlab braille support and sonification toolbox, SKDTools, for blind engineers and scientists (see www.ski.org/skdtools/ for more detail). The y value of a function is represented as pitch, but there's the option of a 'discrete mode', using fixed-frequency steps that can be counted by ear. Additionally, there are tones for axis ticks, noise bursts for x-axis crossings, and high-pass and low-pass noise to signify high and low out-of-range data. The Java-based Sonification Sandbox, a project of the Psychology Department at the Georgia Institute of Technology, offers similar functions, but as part of a more general toolkit for mapping imported .csv data (Excel exports, for instance) to multiple audio parameters for export in MIDI format. For example, an auditory graph can be made more comprehensible by overlaying its pitch profile f(t) with a drumbeat whose interval represents the slope f'(t) and whose pitch represents the curvature f''(t).
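As an illustration of the 'discrete mode' idea - my own sketch, not code from SKDTools or the Sonification Sandbox, with the step count and base note chosen arbitrarily - the snippet below quantises a curve onto fixed semitone steps so a listener can count the jumps rather than judge a continuous glide.

```python
import numpy as np

def discrete_pitch_steps(y, n_steps=12, f_base=261.63):
    """Quantise y onto n_steps fixed pitches spaced one semitone apart.

    Equal-sized jumps in the data become countable jumps in pitch, which is
    easier to read by ear than a continuously varying tone.
    """
    y = np.asarray(y, dtype=float)
    span = np.ptp(y)
    if span:
        levels = np.floor((y - y.min()) / span * (n_steps - 1)).astype(int)
    else:
        levels = np.zeros(y.shape, dtype=int)
    return f_base * 2.0 ** (levels / 12.0)           # equal-tempered semitones

# A straight ramp from 0 to 1 becomes a twelve-note chromatic 'staircase'.
print(discrete_pitch_steps(np.linspace(0.0, 1.0, 12)))
```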

The musical approach
Music, as in the Sonification Sandbox, is a central approach to sonic representation of data. Among computer-savvy musicians, this has a long history. In some cases, it's a form of steganography: a few years back came the discovery of an apparent demonic face in the spectrogram of Windowlicker, a track by the techno musician Aphex Twin (Richard David James). When viewed with the proper logarithmic scale, it turned out to be an intentional portrait of Aphex Twin himself. Other artists use scientific data, rather than hidden images, as their source. Some notable examples are Life Music, a sonification of protein data by John Dunn and Mary Anne Clark; Bob L Sturm's Music from the Ocean, using records from deep-water buoys in the Pacific; and Marty Quinn's sonifications of a variety of phenomena, such as the 1994 Northridge, California, earthquake retold musically in his Seismic Sonata. (Dunn is an artist expert in MIDI, Clark a biologist, and Sturm and Quinn scientist-musicians with an interest in bridging the gap between arts and sciences.)

But musical sonification, particularly of derived data, is a powerful concept outside art. We're used to hearing subtle distinctions in multi-instrument arrangements, and this is potentially a means to access equally subtle hidden characteristics in data sets. A good example is the Penn State University heart rate sonification project reported by Felix Grant in the previous issue of Scientific Computing World. By assigning derived data, such as running means of the inter-beat interval, to suitable MIDI channels, this project attempts to replace the laborious reading of ECG traces with quickly accessible diagnostic 'music': a siren-like oscillation for sleep apnoea, a 'tinkling' timbre for the abnormal intervals of ectopic beats, and so on. An arts-science crossover runs right up to the most powerful computational efforts in this field. For instance, the Cray-powered 'Rolls Royce bulldozer' of sound synthesis, DIASS (Digital Instrument for Additive Sound Synthesis), finds twin roles as a precision tool for the sonification of scientific data and 'the most flexible instrument currently available to composers of experimental music'.
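The Penn State project's actual channel assignments aren't reproduced here, but the general recipe - derive a secondary series such as a running mean of the inter-beat interval, then map it onto MIDI note numbers - can be sketched as follows. This is a hypothetical illustration in Python with NumPy; the window length and note range are my own assumptions.

```python
import numpy as np

def ibi_running_mean_to_midi(beat_times, window=8, note_low=48, note_high=84):
    """Derive a running mean of inter-beat intervals and map it to MIDI notes.

    beat_times: times of successive heartbeats in seconds.
    Shorter intervals (a faster heart rate) come out as higher notes, so an
    oscillating rate would be heard as a siren-like sweep up and down.
    """
    ibi = np.diff(np.asarray(beat_times, dtype=float))    # inter-beat intervals
    running_mean = np.convolve(ibi, np.ones(window) / window, mode="valid")
    span = np.ptp(running_mean)
    if span:
        norm = (running_mean - running_mean.min()) / span
    else:
        norm = np.zeros_like(running_mean)
    notes = note_high - norm * (note_high - note_low)     # short interval -> high note
    return np.round(notes).astype(int)
```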

It shouldn't be imagined that sonification involves merely passive conversion of data to sound. The Neuroinformatics Group, Bielefeld University, is one of a number of teams exploring active multidimensional data mining by sound. The more complex techniques include Data Sonograms and Particle Trajectory Sonification. The former is analogous to a seismic charge that propagates and excites data points to make sounds, the latter to shooting a whooshing 'comet' into the data set and listening to how its path is affected by what it encounters.
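Purely as a toy illustration of the 'comet' idea - the force law and the mapping here are my own assumptions, not the Bielefeld group's formulation - the following Python sketch shoots a particle into a point cloud and records its speed over time, a trace that could then be turned into pitch with something like the sonify_function sketch given earlier.

```python
import numpy as np

def comet_speed_trace(data, start, velocity, steps=2000, dt=0.01):
    """Shoot a 'comet' into a data set and record how fast it is moving.

    Every data point pulls on the particle with a softened inverse-square
    attraction; dense regions make it swing and accelerate, so the resulting
    speed trace 'whooshes' where the data are clustered.
    """
    data = np.asarray(data, dtype=float)
    pos = np.array(start, dtype=float)
    vel = np.array(velocity, dtype=float)
    speeds = np.empty(steps)
    for i in range(steps):
        diff = data - pos
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 0.1   # softening term
        vel += (diff / dist**3).sum(axis=0) * dt
        pos += vel * dt
        speeds[i] = np.linalg.norm(vel)
    return speeds

# A tight cluster produces a rising whoosh as the comet falls in and speeds up,
# then a falling pitch as it climbs back out the other side.
trace = comet_speed_trace(np.random.randn(200, 2), start=[5.0, 0.0], velocity=[0.0, 0.5])
```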

At its science-fictional extreme, sonification leads to the idea of human augmentation. In 1998, the Associated Press news agency reported on 'the borg', a group of MIT graduate students, headed by Leonard (Lenny) Foner, pioneering wearable computers. Foner's particular invention is the Visor, a system that sonifies radiation detected by a head-mounted Zeiss MMS-1 spectrometer with a range of 350-1150nm (human vision spans roughly 400-700nm). The Visor has utilitarian possibilities, such as melanoma screening and seeing through camouflage. Nevertheless, its main use is sensory extension: a novel and intuitive means to extract more information from the ordinary ('Hey, my lawn looks okay, but it sounds funny today - maybe it's sick'). In both intent and implementation, the Visor has a great deal in common with Eye-Borg.

Sonification, it seems, remains the eccentric artistic relation in the family of data display methods. Given sound's importance in human culture, it's hard to say why. Perhaps vision is, ultimately, our dominant sense, and the dominance of visual displays merely reflects that. Or perhaps there's a subtle selection bias that leads to interfaces being designed mostly by 'techies' whose strongest card is visuo-spatial ability. I don't know; but the possibilities of sound are there to be explored. As Al Jolson promised, you ain't heard nothin' yet.

References

The International Community for Auditory Display www.icad.org/
Its Sonification Report: Status of the Field and Research Agenda, although written in 1997, gives a good overview of the field, and the online proceedings of the annual International Conference on Auditory Display contain a wealth of papers on data sonification.

Design Rhythmics Sonification Research Lab www.quinnarts.com/srl/index.html

Bob L Sturm www.composerscientist.com/



Late in 2004, the regional media in Devon, UK, covered the story of Neil Harbisson, a student at Dartington College of Arts who has no colour vision due to the disorder achromatopsia. He asked Adam Montandon, a lecturer with an interest in cyborg technology, if a device could be created to enable him to perceive colour. The result of their collaboration was Eye-Borg, a sonification system that uses a laptop PC and camera-earpiece headset to convert hue to musical pitch. Developed in association with the Plymouth company HMC Entertainment Systems, Eye-Borg has since won the Europrix Top Talent media award. Neil found it so natural to use that he wears it all day, and has begun painting in colour. Having obtained unprecedented dispensation to wear it for his passport photo, he is, in a sense, officially a cyborg.

Neil Harbisson using the Eye-Borg colour sonifier against the background of one of his new colour paintings

A cynic might write this off as a quirky regional invention story, as I did at first. Hand-held colour identifiers for the blind or visually impaired have been around for a decade or so. For instance, CareTec's ColorTest won the Winston Gordon Award from the Canadian National Institute for the Blind in 1993, and there are at least half-a-dozen others such as the Brytech Color Teller and the Cobolt Talking Colour Detector. However, the unusual technical concept interested me.

Unlike the more common contact devices, which use an LED reflection system, the camera-based Eye-Borg can sense the colour of remote objects. Commercial colour identifiers use a speech-chip interface; Eye-Borg instead maps the visible spectrum to the twelve semitones of a musical octave. In part, this caters to Neil's own preference as a musician. But it also has roots in historical schemes for mapping colour to pitch, such as those of Arcimboldo (better known for his composite portraits of people made of natural objects such as fish and fruit), Kepler, Newton and Helmholtz. Furthermore, Eye-Borg's interface has encouraged its integration with Neil's vision as an intuitive 'extra sense', a recurring concept in the field of sonification even outside the niche market for visually impaired users.
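The precise mapping Montandon and Harbisson used isn't spelled out here, but the underlying idea - dividing the hue circle into twelve bins, one per semitone of an equal-tempered octave - is simple to sketch in Python. The base note (middle C) and the bin boundaries below are my own assumptions, not Eye-Borg's actual calibration.

```python
def hue_to_semitone(hue_degrees, f_base=261.63):
    """Map a hue angle (0-360 degrees, e.g. from an HSV camera pixel) onto one
    of twelve equal-tempered semitones in the octave starting at f_base.
    Returns (semitone index, frequency in Hz)."""
    step = int(hue_degrees % 360 // 30)        # twelve bins of 30 degrees each
    return step, f_base * 2 ** (step / 12)

# Red (hue 0) sounds the base note; cyan (hue 180) comes out six semitones
# higher - a tritone above it.
print(hue_to_semitone(0), hue_to_semitone(180))
```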

