As Chief Technical Officer, Rob Reng approaches all things tech with a zeal matched only by his obsession with music. Rob not only writes electronic music but also professionally scores advertisements. His passion for music, and for technology that sparks positive change, makes him the perfect person to explain exactly how IRIS technology works to improve sound quality and guide our brain into Active Listening.
The basics of recorded sound
Audio waves repeat cyclically, like ripples on a pond, and ‘phase’ describes how far along the cycle a wave is. During audio recording, the phase gets ‘locked’ in place, which makes the sound we hear seem flat and lifeless.
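As an illustrative sketch (not IRIS's own code), the snippet below builds two sine waves a quarter-cycle apart and reads their phase off the Fourier transform; the sample rate and frequency are arbitrary values chosen for the example:

```python
import numpy as np

# Two sine waves with the same frequency and loudness, offset by a
# quarter of a cycle: 'phase' is exactly this how-far-along-the-cycle offset.
sample_rate = 1000                             # samples per second (illustrative)
t = np.arange(sample_rate) / sample_rate       # one second of time stamps
freq = 5                                       # a 5 Hz wave: 5 cycles per second

wave_a = np.sin(2 * np.pi * freq * t)               # phase 0
wave_b = np.sin(2 * np.pi * freq * t + np.pi / 2)   # a quarter-cycle ahead

# The Fourier transform splits a wave into magnitude (how loud each
# frequency is) and phase (how far along its cycle that frequency is).
phase_a = np.angle(np.fft.rfft(wave_a)[freq])
phase_b = np.angle(np.fft.rfft(wave_b)[freq])

print(round(phase_b - phase_a, 3))   # prints 1.571, i.e. pi/2: a quarter cycle
```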
“While the quality of headphones or speakers you listen through can make a big difference, it will still pale in comparison to the experience of live music, simply due to the richness of texture offered when sound is bouncing around a room before it reaches your ears. You’re stuck listening to the exact phase of waves captured in the recording environment. The IRIS audio technology is able to add phase back in to create that much more natural sound,” explains Rob.
How sound has ‘gone bad’
One of the most amazing developments of the 21st century was our sudden ability to access an increasingly wide range of audio content, anywhere and at any time, through streaming. However, this unparalleled access has come at the cost of sound quality. This is because the algorithms most digital audio compression schemes use take into account only a single phase dimension of the signal while disregarding everything else, eliminating much of the phase information present in the original waveform.
“It’s extremely frustrating that at the same time as the world has been granted more equal access to greater audio content, from music to meditation, it has come at the expense of quality,” notes Rob. “[In addition to compression issues] the bit rate at which sound waves are sampled can further degrade the sound quality, making whatever you’re listening to seem artificial rather than rich and natural, like you’re listening live.”
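The codec math itself is beyond the scope of this piece, but a generic sketch can show why throwing phase away matters: if you keep only the magnitude of each frequency component and discard the phase, the frequency content is unchanged yet the waveform itself becomes something quite different. (The random signal below is just a stand-in for a slice of audio.)

```python
import numpy as np

# Illustrative sketch, not any real codec: discard phase, keep magnitude.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)          # stand-in for a slice of audio

spectrum = np.fft.rfft(signal)
magnitude_only = np.abs(spectrum)           # phase discarded (set to zero)
rebuilt = np.fft.irfft(magnitude_only, n=len(signal))

# Same energy at every frequency, but a very different waveform:
same_spectrum = np.allclose(np.abs(np.fft.rfft(rebuilt)), magnitude_only)
changed_waveform = not np.allclose(rebuilt, signal)
print(same_spectrum, changed_waveform)      # prints: True True
```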
What IRIS audio technology does differently
The IRIS audio technology works by using an algorithm that resynthesizes an entire phase dimension, thereby restoring phase dynamics to the signal. However, that only tackles half the problem.
“The IRIS algorithm is addressing a fundamental flaw in all recorded sound, in that the phase information in the room where the sound was originally recorded is ‘locked’ at the point where the microphones were placed,” explains Rob.
When you encounter music in a live setting, the sound waves bounce around the room, off objects and other people, and arrive at your ears in an infinitely detailed mesh of sound information. Your brain then detects minute differences in timing between the sound waves arriving at each ear and builds a full directional sound ensemble.
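The timing cue described above is known as the interaural time difference. The sketch below (an illustration under simplified assumptions, with made-up numbers) delays the same burst of sound by a fraction of a millisecond between the two ears and recovers that delay by cross-correlation, the same kind of comparison the brain is thought to perform:

```python
import numpy as np

sample_rate = 48_000                 # an assumed, typical rate in Hz
rng = np.random.default_rng(1)
sound = rng.standard_normal(2048)    # stand-in for a burst of sound

delay_samples = 20                   # ~0.42 ms: the sound hits the left ear first
left = np.concatenate([sound, np.zeros(delay_samples)])
right = np.concatenate([np.zeros(delay_samples), sound])

# The lag that maximises the cross-correlation between the two ear
# signals is the interaural time difference.
corr = np.correlate(right, left, mode="full")
itd_samples = np.argmax(corr) - (len(left) - 1)
itd_ms = 1000 * itd_samples / sample_rate

print(itd_samples)   # prints 20: the right ear lags the left by 20 samples
```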
Until now, this was not replicable by recorded sound.
The IRIS algorithm breaks an audio stream down into all its fundamental constituent parts, and then recovers the ‘phase’ information for each part. IRIS then delivers this phase information to the ears in pieces. It is important to note that IRIS does not play ‘a finished piece’; IRIS asks the listener to construct an ensemble from the ingredients we provide. This process is called ‘Active Listening.’
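IRIS's actual algorithm is its own, so the sketch below only illustrates the general idea of "constituent parts plus phase" using a plain Fourier decomposition: any slice of audio can be described as a set of sinusoids, each with a magnitude (how much of that ingredient) and a phase (where it is in its cycle), and with both pieces of information the original waveform can be reassembled exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
audio = rng.standard_normal(512)             # stand-in for a slice of audio

spectrum = np.fft.rfft(audio)
magnitudes = np.abs(spectrum)                # how much of each ingredient
phases = np.angle(spectrum)                  # where each one is in its cycle

# With BOTH magnitude and phase, the pieces reassemble into the
# original waveform exactly (to floating-point precision).
rebuilt = np.fft.irfft(magnitudes * np.exp(1j * phases), n=len(audio))
print(np.allclose(rebuilt, audio))           # prints: True
```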
What is Active Listening?
“Think of it like a banana smoothie,” suggests Rob. “We can identify all the ingredients and quantities that make it up. We give all the ingredients to each smoothie maker, and each constructs their own interpretation, with its own unique flavour, according to how they go about making the recipe.
“This is why IRIS enables you to Listen Well—your brain is interpreting and constructing the sound you hear.”
How to try Active Listening
You can check out our demo player to try IRIS for yourself—and then please, let us know what you think! If you’re interested in Listening Well to the songs you already love, you can download the free IRIS app.