Audio Experience Design
Imperial College London


Virtual Audio:
Adapting Technology to People

How technology can be adapted to each of our unique differences to make virtual audio indistinguishable from real-life sound

Personalising audio using artificial intelligence

Everyone receives sound differently because of the shape and size of our heads and ears, meaning we each have a personal audio ‘filter’, called a Head-Related Transfer Function (HRTF).

We've built a custom lab to measure these filters as part of the SONICOM project. Volunteers have microphones placed in their ears while a series of sounds is played from various angles around them.

However, this process takes time and requires expensive equipment.

The aim is to collect enough data to train artificial intelligence to predict HRTFs from, for example, just a picture of someone's ears, without having to physically measure them.
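As a rough illustration of that aim, the sketch below fits a simple linear model mapping a few hypothetical ear-shape features (the kind that might be extracted from a photo) to HRTF magnitudes at a handful of frequencies. All numbers are synthetic, and a real system would use a far richer model; this only shows the learn-to-predict idea in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: geometric ear features (e.g. pinna height
# and width extracted from photos) paired with measured HRTF magnitudes
# at a few frequencies. Every value here is synthetic.
n_subjects, n_features, n_freqs = 50, 3, 4
ear_features = rng.normal(size=(n_subjects, n_features))
true_weights = rng.normal(size=(n_features, n_freqs))
hrtf_mags = ear_features @ true_weights \
    + 0.01 * rng.normal(size=(n_subjects, n_freqs))

# Fit a linear predictor by least squares: the simplest possible
# stand-in for the AI models the project aims to train.
weights, *_ = np.linalg.lstsq(ear_features, hrtf_mags, rcond=None)

# Predict an unseen listener's HRTF magnitudes from their ear features,
# with no physical measurement needed.
new_ear = rng.normal(size=(1, n_features))
predicted = new_ear @ weights
print(predicted.shape)  # (1, 4)
```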

By personalising audio using your HRTF, we can make it seem as realistic as possible.
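Computationally, "using your HRTF" means filtering a sound through your personal left- and right-ear responses. The minimal sketch below does this by convolving a mono signal with a pair of head-related impulse responses (the time-domain form of the HRTF); the three-tap responses are made up for illustration, whereas real HRIRs are measured per person and per direction.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialise a mono signal by convolving it with a listener's
    head-related impulse responses. All inputs are 1-D arrays;
    returns a (samples, 2) stereo array for headphone playback."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Toy example: a short click and made-up 3-tap impulse responses.
click = np.zeros(8)
click[0] = 1.0
out = render_binaural(click,
                      np.array([0.9, 0.1, 0.0]),   # hypothetical left ear
                      np.array([0.2, 0.5, 0.3]))   # hypothetical right ear
print(out.shape)  # (10, 2)
```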

Learn more about SONICOM

Experience immersive audio

The videos below are demonstrations of immersive audio: how we can create 3D sound experiences through just a pair of headphones.

As you listen, the video will guide you from mono to stereo to 3D sound that can be manipulated to sound as if it is coming from different directions and distances, or even from within different kinds of rooms.
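The simplest of those manipulations is placing a mono sound left or right of the listener. The sketch below does this with a constant-power level pan, a crude stand-in for the full interaural level, time, and spectral cues the HRTF provides in the demos; the function name and angle convention are our own for illustration.

```python
import numpy as np

def pan_ild(mono, azimuth_deg):
    """Place a mono signal left/right using a simple interaural level
    difference (constant-power pan). azimuth_deg runs from -90 (hard
    left) to +90 (hard right); returns a (samples, 2) stereo array."""
    theta = np.radians((azimuth_deg + 90) / 2)  # map -90..90 to 0..90 deg
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=1)

# A 440 Hz tone at 48 kHz, panned to two positions.
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
centre = pan_ild(tone, 0)       # equal level in both ears
hard_left = pan_ild(tone, -90)  # right channel silent
```

Unlike this level-only pan, a full HRTF also shifts arrival times and colours the spectrum differently at each ear, which is what makes the headphone demos sound truly three-dimensional.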

Be sure to wear headphones to experience them properly.

Immersive audio example: speech

Immersive audio example: music

Get your own audio filter measured

Interested in having your own HRTF measured at our custom Turret Lab in South Kensington, London, and contributing to our research?

Register your details below to be contacted when spaces become available in the autumn!