

The digital magazine of InfoVis.net

by Juan C. Dürsteler [message nº 37]

The use of sound to represent information is not widespread, but work on it continues.

Auralisation is the modification of sound so that it adopts the acoustic characteristics of a specific room or space, for example a concert hall or a jazz club.

In this context we'll speak of information auralisation as information visualisation through sound. This apparent contradiction disappears if we take into account that information visualisation can be defined as the formation of a mental model of a certain data set, not necessarily in a visual way (see the glossary).

The use of sound to represent data is not widespread. One of the fields where some work has been done is the auralisation of software. Some researchers propose at least three reasons for doing it:

  • Not all people are predominantly visual. The auralisation of programs provides another "point of view" that can be particularly appropriate for people more sensitive to sound than to images. Moreover, some types of information are difficult to represent graphically.
  • Listening can be done passively. You don't need to pay close attention to the sound of a normal program execution to notice that something exceptional has happened when the tone of the associated sound changes suddenly. Listening can also run in parallel with visualisation.
  • Sound has an inherent temporal content, just as the execution of programs has.

There are further reasons. While vision cannot separate the colours that make up white light, hearing lets us distinguish the different frequencies and tones mixed into a sound wave: we can identify the trumpets and the violins within a single musical piece, for example. This makes sound a good candidate for representing parallel programs, where the execution of each processor can be rendered as a different instrument playing its own part.
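The processor-as-instrument idea can be sketched in a few lines. The following is a minimal illustration, not any published system: the trace format, the per-processor base pitches and the per-event pitch offsets are all hypothetical choices made for the example.

```python
# Hypothetical trace format: (timestamp, processor_id, event) tuples.
# Each processor gets its own "instrument" (a base pitch) and each
# event type a pitch offset, so concurrent execution sounds polyphonic.

BASE_PITCH = {0: 261.63, 1: 329.63, 2: 392.00}  # C4, E4, G4 per processor
EVENT_OFFSET = {"compute": 1.0, "send": 1.25, "recv": 1.5, "idle": 0.5}

def auralise(trace):
    """Map a program trace to a score: (time, processor, frequency) notes."""
    score = []
    for t, proc, event in sorted(trace):
        freq = BASE_PITCH[proc] * EVENT_OFFSET[event]
        score.append((t, proc, round(freq, 2)))
    return score

trace = [
    (0.0, 0, "compute"), (0.0, 1, "compute"),
    (0.5, 1, "send"), (0.5, 2, "recv"),
    (1.0, 0, "idle"),
]
for note in auralise(trace):
    print(note)
```

Feeding the score to a synthesizer is left out; the point is only that two processors active at the same timestamp become two simultaneous notes on distinct pitches, which the ear separates effortlessly.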

Nevertheless, the conclusions emerging from the relatively scarce existing studies indicate that the sonification of data cannot substitute for visualisation, only complement it.

Nowadays the only data auralisation widely available and integrated into our daily experience is that of "earcons", or audible (ear) icons.

Earcons are brief fragments of sound, typically musical, that allow a machine to send a non-verbal, audible message to the user. This message provides information about the state of a device, particularly a computer: the appearance of a program error, for example, or the end of a user session or a process. They were first proposed by Meera Blattner in 1989. Almost every telephone incorporates different earcons associated with the numbers, or even the ring, and most operating systems today offer a wide range of earcons that can be associated with different system events.
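In essence an earcon system is just a table from events to short motifs plus a way to render them. The sketch below assumes everything: the event names, the motifs and the sample rate are illustrative, not taken from any real operating system.

```python
import math

# Hypothetical earcon registry: each system event maps to a short motif,
# given as (frequency in Hz, duration in seconds) pairs.
EARCONS = {
    "error":        [(880.0, 0.15), (440.0, 0.30)],                  # falling interval
    "session_end":  [(523.25, 0.10), (392.00, 0.10), (261.63, 0.20)],
    "process_done": [(440.0, 0.10), (660.0, 0.20)],                  # rising interval
}

def synthesize(motif, sample_rate=8000):
    """Render a motif as raw sine-wave samples in [-1.0, 1.0]."""
    samples = []
    for freq, duration in motif:
        n = int(sample_rate * duration)
        for i in range(n):
            samples.append(math.sin(2 * math.pi * freq * i / sample_rate))
    return samples

samples = synthesize(EARCONS["error"])
print(len(samples))  # 0.45 s of audio at 8 kHz
```

A real implementation would hand the samples to an audio device; the design point is that the falling interval for "error" and the rising one for "process_done" can be recognised passively, without looking at the screen.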

The integration of sound into the human-machine interface is part of the so-called multimodal user interface, which tries to achieve communication with devices through several senses at the same time, among them vision, sound and touch (haptic interfaces). Although still at a preliminary stage, the coming proliferation of wireless devices and electronic gadgets offers it a wide range of applications.

In Japan, for instance, there is a multimillion-dollar market for personalised ring tones that, instead of being monophonic as until now, incorporate polyphonic sounds. This offers enhanced realism, and the possibilities for designers of personalised ring tones have multiplied (see the sonify.org website).

Sound is an important part of our interaction with the world, one we shouldn't forget when it comes to complementing the visualisation of information.

© Copyright InfoVis.net 2000-2018