Audience's earSmart voice processor uses two microphones that work like human ears, enabling the chip to analyze sound in much the same way as the human brain, which when processing audio decides what to remove, what to keep and what to enhance.
Using computational auditory scene analysis (CASA), the processor characterizes, groups and processes complex mixtures of sound, which means it can home in on the human voice and filter out background sounds, even when those sounds are overpoweringly loud. The system on a chip also suppresses echoes and other irritating audio effects that degrade call quality.
The chip then automatically equalizes and adjusts voice volume so a person can hear and talk naturally in any environment.
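Audience's actual algorithms are proprietary, but the general idea of separating voice from background noise can be illustrated with a toy single-channel "spectral gate": estimate a noise floor from frames assumed to contain no speech, then attenuate frequency bins that fall below that floor. The sketch below (all names and parameters are hypothetical, and this is far simpler than CASA) uses NumPy:

```python
import numpy as np

def spectral_gate(signal, frame_len=256, noise_frames=4, factor=1.5):
    """Toy single-channel noise suppressor (illustrative only).

    Estimates a per-bin noise floor from the first few frames, assumed
    to be speech-free, then zeroes frequency bins whose magnitude falls
    below factor * noise floor. Real processors use far more
    sophisticated multi-microphone scene analysis.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spectra = [np.fft.rfft(f) for f in frames]
    # Noise floor: mean magnitude per bin over the leading frames
    noise_floor = np.mean([np.abs(s) for s in spectra[:noise_frames]], axis=0)
    out = []
    for s in spectra:
        mag, phase = np.abs(s), np.angle(s)
        gain = (mag > factor * noise_floor).astype(float)  # hard binary gate
        out.append(np.fft.irfft(gain * mag * np.exp(1j * phase), n=frame_len))
    return np.concatenate(out)
```

A loud tone buried in low-level noise survives the gate, while noise-only stretches are heavily attenuated; a production system would use overlapping windows and soft gains to avoid the audible artifacts this hard gate introduces.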
The technology is currently integrated into more than 60 mobile handsets, including Huawei's new quad-core Ascend D, but the firm, which is in the process of going public, is also involved in almost every new device architecture, form factor, operating system (OS), and network.