Yesterday I visited the Barbican for their Brain Waves Weekender, a range of exhibitions fusing art with neuroscience. Along with a kid-friendly dissection of a jelly brain and an invitation to knit a neuron, the event featured two sound-related demonstrations.
The first was Music of the Mind – a performance by Finn Peters, Prof Mark d’Inverno, Dr Mick Grierson and Dr Matthew Yee-King of Goldsmiths, University of London, made using ‘brain–computer interfaces’: EEG headsets usually sold for gaming, coupled with custom software that translates brain activity into sound. It's an interesting idea, though the music itself was a little too avant-garde for my taste. You can see an example of the project here.
The second was a Sonic Tour of the Brain by Guerilla Science, a playlist of about twenty minutes exploring the different sounds relating to the structure and functions of the brain. Two tracks featured the actual sounds of the brain firing – electroencephalograms, or EEGs – detected by attaching electrodes to the scalp. (You can also listen to the sounds produced by the ears themselves, called otoacoustic emissions; an interesting article about them can be found here.) Another track demonstrated what a sentence would sound like to individuals with cochlear implants of varying numbers of channels, something I had experienced before during a class with a visiting lecturer from the UCL Ear Institute. The Mosquito Frequency track plays tones of 10kHz, 13kHz and 17kHz, to emphasise the loss of high-frequency perception with age. 17kHz was silence to me, but I can still hear 16kHz (I tested myself in the lab last week; you can take an online test here). The tour also talks about binaural localisation, which I will explain:
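If you want to try a rough version of the mosquito-frequency test yourself, tones like those on the track are easy to synthesise. Here is a minimal sketch using NumPy (the sample rate, amplitude and one-second duration are my own choices, not details from the track):

```python
import numpy as np

def pure_tone(freq_hz, duration_s=1.0, sample_rate=44100, amplitude=0.5):
    """Generate a pure sine tone as a float array in [-1, 1]."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# The three frequencies played on the Mosquito Frequency track.
tones = {f: pure_tone(f) for f in (10_000, 13_000, 17_000)}
```

The arrays can be written to a WAV file or played back with any audio library; note that a 44.1kHz sample rate comfortably covers 17kHz, but cheap speakers and headphones often roll off well below that, so a silent tone may say more about your equipment than your ears.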
Sound travelling from one side of the head reaches the closer ear first, the difference being called the inter-aural time difference (or ITD). If the sound is a pure tone, the time difference is detected as a difference in phase, known as the inter-aural phase difference (IPD). The inter-aural intensity difference (IID) occurs as a result of shadowing, since one ear is shaded by the head while the other is fully exposed to the arriving sound. The strength of the shadow depends on the size of the wavelength in comparison to the size of the head (see diagram below). Because of the distance between the ears, the phase difference becomes ambiguous above around 1500Hz – a full cycle of the tone fits within the maximum inter-aural delay – so localisation errors start to be made at this frequency. An interesting book on this subject is Jens Blauert’s Spatial Hearing.
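The numbers above can be sketched with Woodworth's classic spherical-head approximation of the ITD. This is only a back-of-the-envelope model – the head radius below is an assumed typical value, not a measurement from the tour:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
HEAD_RADIUS = 0.0875     # m; an assumed typical adult head radius

def itd_woodworth(azimuth_deg):
    """Woodworth's spherical-head approximation of the inter-aural
    time difference: ITD = a * (theta + sin(theta)) / c, where
    azimuth 0 is straight ahead and 90 is directly to one side."""
    theta = np.radians(azimuth_deg)
    return HEAD_RADIUS * (theta + np.sin(theta)) / SPEED_OF_SOUND

# A source directly to one side gives the maximum ITD, about 0.65 ms.
itd_max = itd_woodworth(90)

# The IPD becomes ambiguous once one full period of the tone fits
# inside that maximum delay, which lands close to 1500 Hz.
f_ambiguous = 1 / itd_max
```

Plugging in the numbers, the frequency at which a whole cycle fits inside the maximum inter-aural delay comes out close to the 1500Hz figure mentioned above, which is why pure-tone phase cues stop being reliable around there.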