OtoSense: the next level in sound-based IoT

It sounds (pardon the pun) as if the IoT may really be taking off as an important diagnostic and repair tool.

I wrote a while ago about the Auguscope, a hand-held device that monitors equipment’s sounds and diagnoses possible problems based on abnormalities, which makes it a great way to begin an incremental approach to the IoT.

Now NPR reports on a local (Cambridge) firm, OtoSense, that is expanding on this concept on the software end. Its tagline is “First software platform turning real-time machine sounds and vibrations into actionable meaning at the edge.”

Love the platform’s origins: it grows out of founder Sebastien Christian’s research on deafness (as I wrote in my earlier post, I view suddenly being able to interpret things’ sounds as a variation on how the IoT eliminates what I’ve called “Collective Blindness,” our past inability to monitor things before the IoT’s advent):

“[Christian] … is a quantum physicist and neuroscientist who spent much of his career studying deaf children. He modeled how human hearing works. And then he realized, hey, I could use this model to help other deaf things, like, say, almost all machines.”

(aside: I see this as another important application of my favorite IoT question: learning to automatically ask “who else can use this data?” How does that apply to YOUR work? But I digress).

According to Technology Review, the company is concentrating primarily on analyzing car sounds from IoT detectors on the vehicle at this point (it is working with a number of car manufacturers), although it believes the concept can be applied to a wide range of sound-emitting machinery:

“… OtoSense is working with major automakers on software that could give cars their own sense of hearing to diagnose themselves before any problem gets too expensive. The technology could also help human-driven and automated vehicles stay safe, for example by listening for emergency sirens or sounds indicating road surface quality.

OtoSense has developed machine-learning software that can be trained to identify specific noises, including subtle changes in an engine or a vehicle’s brakes. French automaker PSA Group, owner of brands including Citroen and Peugeot, is testing a version of the software trained using thousands of sounds from its different vehicle models.

Under a project dubbed AudioHound, OtoSense has developed a prototype tablet app that a technician or even car owner could use to record audio for automated diagnosis, says Guillaume Catusseau, who works on vehicle noise in PSA’s R&D department.”
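Neither article describes OtoSense’s actual pipeline, but the general shape of “training software to identify specific noises” is well established. Here’s an illustrative sketch, assuming labeled clips stored as sounds/<label>/<clip>.wav and the librosa and scikit-learn libraries, of training a simple classifier on summarized audio features:

```python
# Illustrative sketch, not OtoSense's pipeline: train a classifier on
# labeled machine sounds. Assumes clips stored as sounds/<label>/<clip>.wav
# and that librosa and scikit-learn are installed.
from pathlib import Path

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path, n_mfcc=13):
    """Summarize one clip as the mean and spread of its MFCC coefficients."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

X, labels = [], []
for wav in Path("sounds").glob("*/*.wav"):  # hypothetical directory layout
    X.append(clip_features(wav))
    labels.append(wav.parent.name)          # e.g. "brake_squeal", "normal_idle"

X_train, X_test, y_train, y_test = train_test_split(np.array(X), labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```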

According to NPR, the company is working to apply the same approach to a wide range of other types of machines, from assembly lines to DIY drills. As always with the IoT, handling massive amounts of data will be a challenge, so the company emphasizes edge processing (a rough sketch of the idea follows).
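To make the edge-processing point concrete, here is a minimal sketch, with hypothetical frame sizes and thresholds of my own choosing, of how a device might score each audio frame locally and transmit only rare anomalous events instead of streaming raw audio upstream:

```python
# Hypothetical edge-processing sketch (not OtoSense's design): score each
# audio frame on the device and upload only flagged events, so the network
# carries a few bytes per anomaly instead of a continuous raw-audio stream.
import numpy as np

FRAME_SAMPLES = 1024   # samples per frame (assumed)
Z_THRESHOLD = 3.0      # flag frames more than 3 standard deviations from baseline

def frame_energy(frame):
    """Root-mean-square energy: a feature cheap enough for a small edge CPU."""
    return float(np.sqrt(np.mean(np.square(frame, dtype=np.float64))))

def monitor(frames, baseline_mean, baseline_std, send):
    """frames yields raw audio arrays; send() uploads only anomalous events."""
    for i, frame in enumerate(frames):
        z = (frame_energy(frame) - baseline_mean) / baseline_std
        if abs(z) > Z_THRESHOLD:
            send({"frame": i, "z_score": round(z, 2)})   # tiny payload
```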

OtoSense has a “design factory” on its site, where potential customers answer a variety of questions about the sounds they must monitor (such as whether the software will be used indoors or out, whether it is to detect anomalies, etc.) that allow the company to choose the appropriate version of the program.

TechCrunch did a great article on the concept, which underscores that making sound detection truly precise will take a lot of time and refinement, in part because (guess what) sounds from a variety of sources are often mingled, so the relevant ones must be identified and isolated:

“We have loads of audio data, but lack critical labels. In the case of deep learning models, ‘black box’ problems make it hard to determine why an acoustical anomaly was flagged in the first place. We are still working the kinks out of real-time machine learning at the edge. And sounds often come packaged with more noise than signal, limiting the features that can be extracted from audio data.”
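That label-scarcity problem is why acoustic monitoring often starts unsupervised: model what “normal” sounds like and flag deviations, without ever naming the fault. Here’s a rough sketch of that idea (my illustration, not anything from the article), using scikit-learn’s IsolationForest on spectrogram frames, with synthetic noise standing in for real recordings:

```python
# Unsupervised anomaly detection on audio: an illustrative sketch of
# working without labels. Fit a model of "normal" machine sound, then
# flag frames that deviate from it. Synthetic noise stands in for audio.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import IsolationForest

def spectral_frames(audio, fs):
    """Describe each short time slice of audio by its frequency content."""
    _, _, spec = spectrogram(audio, fs=fs, nperseg=512)
    return np.log1p(spec).T        # rows = time slices, cols = frequency bins

fs = 16_000
normal_audio = np.random.randn(fs * 30)    # stand-in: 30 s of healthy machine sound
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(spectral_frames(normal_audio, fs))

incoming = np.random.randn(fs * 5)         # stand-in: 5 s of new audio
flags = model.predict(spectral_frames(incoming, fs))  # -1 marks anomalous frames
print(f"{(flags == -1).sum()} anomalous frames out of {len(flags)}")
```

Note that an approach like this only says “something sounds off,” which is exactly the “black box” interpretability problem the quote mentions.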

In part, as with other forms of pattern recognition such as voice recognition, this is because it will require accumulating huge datasets:

“Behind many of the greatest breakthroughs in machine learning lies a painstakingly assembled dataset: ImageNet for object recognition and things like the Linguistic Data Consortium and GOOG-411 in the case of speech recognition. But finding an adequate dataset to juxtapose the sound of a car-door shutting and a bedroom-door shutting is quite challenging.

“’Deep learning can do a lot if you build the model correctly, you just need a lot of machine data,’ says Scott Stephenson, CEO of Deepgram, a startup helping companies search through their audio data. ‘Speech recognition 15 years ago wasn’t that great without datasets.’

“Crowdsourced labeling of dogs and cats on Amazon Mechanical Turk is one thing. Collecting 100,000 sounds of ball bearings and labeling the loose ones is something entirely different.

“And while these problems plague even single-purpose acoustical classifiers, the holy grail of the space is a generalizable tool for identifying all sounds, not simply building a model to differentiate the sounds of those doors.

…”A lack of source separation can further complicate matters. This is one that even humans struggle with. If you’ve ever tried to pick out a single table conversation at a loud restaurant, you have an appreciation for how difficult it can be to make sense of overlapping sounds.
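For what it’s worth, “blind source separation” techniques exist for exactly this cocktail-party problem when multiple microphones are available. Here’s an illustrative sketch (mine, not the article’s) that uses scikit-learn’s FastICA to unmix two synthetic signals recorded by two simulated microphones:

```python
# Blind source separation sketch: recover two mixed signals from two
# "microphones" with independent component analysis (FastICA).
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
engine = np.sign(np.sin(2 * np.pi * 50 * t))   # stand-in: engine buzz
siren = np.sin(2 * np.pi * 5 * t**2)           # stand-in: rising siren
sources = np.c_[engine, siren]                 # shape: (samples, 2 sources)

mixing = np.array([[1.0, 0.5],                 # each mic hears a different blend
                   [0.4, 1.0]])
mics = sources @ mixing.T                      # shape: (samples, 2 microphones)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mics)            # approximately the original
                                               # sources, up to order and scale
```

ICA needs at least as many microphones as sources and statistically independent signals, which hints at why separating arbitrary real-world sounds from a single channel is so much harder.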

Bottom line: there’s still a lot of theoretical and product-specific testing to be done before IoT-based sound detection becomes a reliable diagnostic tool for predictive maintenance, but clearly there’s precedent for the concept, and the potential payoff is great!

LOL: as the NPR story pointed out, this science may owe its origins to two MIT grads of an earlier era, “Click” and “Clack” of Car Talk, who frequently got listeners to contribute their own hilarious descriptions of the sounds they heard from their malfunctioning cars. BRTTTTphssssBRTTTT…..
