Why IoT Engineers Need Compulsory Sensitivity Training on Privacy & Security

Posted on 4th April 2018 in AI, data, Essential Truths, Internet of Things, privacy, security

OK, you may say I’m over-sensitive, but a headline today from Google’s blog that others may chuckle about (“Noodle on this: Machine learning that can identify ramen by shop”) left me profoundly worried about some engineers’ tone-deaf insensitivity to growing public concern about privacy and security.

This is not going to be pleasant for many readers, but bear with me — IMHO, it’s important to the IoT’s survival.

As I’ve written before, I learned during my work on corporate crisis management in the ’80s and ’90s that there’s an all-too-frequent gulf between the public and engineers on fear. Engineers, as left-brained and logical as they come (or, in Myers-Briggs lingo, ISTJs: “logical, detached and detailed,” the polar opposite of ENFPs such as me: “caring, creative, quick and impulsive”), are ideally suited for the precision needs of their profession — but often (not always, I’ll admit) clueless about how the rest of us respond to things such as the Russian disruption of our sacred political institutions via Facebook, or any of the numerous violations of personal privacy and security that have taken place through IoT devices lacking basic protections.

The situation is bad and getting worse: in one Pew poll, no more than 16% of Americans felt their information was being protected by any of a wide range of institutions, from companies to government.

Engineers are quick to dismiss the resulting fear because it isn’t logical. But, as I’ve written before, the fact that fear isn’t logical doesn’t mean it isn’t very real for many people, or that it can’t cloud their thought processes and decision-making.

Even worse, fear is cumulative and can ensnare good companies as well as bad. After a while, all the privacy and security violations get conflated in the public’s mind.

Exhibit A for this insensitivity? The despicable memo from Facebook VP Andrew Bosworth:

“Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.”

Eventually he apologized, begrudgingly, as did Mark Zuckerberg, but IMHO that was just face-saving. Why didn’t anyone at Facebook demand an immediate retraction, and why did some at Facebook get mad not at Bosworth but at whoever leaked the memo? They and the corporate culture are as guilty as Bosworth in my mind.

So why do I bring up the story about identifying the source of your ramen using AI, which was surely written totally innocently by a Google engineer who thought it would be a cute example of how AI can be applied to a wide range of subjects? It’s because I read it (with my antennae admittedly sharpened by all the recent abuses) as something that might have been funny several years ago but should have gone unpublished now, in light of all the fears about privacy and security. Think of how a lot of the people I try to counsel on technology fears would read this fun little project: you mean they can and will now find out where I get my noodles? What the hell else do they know about me, and who will they give that information to???

Again, I’m quite willing to admit I may be over-reacting because of my own horror about the nonchalance on privacy and security, but I don’t think so.

That’s why I’ll conclude this screed with a call for all IoT engineers to undergo mandatory privacy and security training on a continuing basis. The risk of losing consumer confidence in their products and services is simply too great for them to be let off the hook because “that’s not their job.” If you do IoT, privacy and security are part of the job description.

End of sermon. Go about your business.


Great Podcast Discussion of #IoT Strategy With Old Friend Jason Daniels

Right after I submitted my final manuscript for The Future Is Smart, I had a chance to spend an hour with old friend Jason Daniels (we collaborated on a series of “21st Century Homeland Security Tips You Won’t Hear From Officials” videos back when I was a homeland security theorist) on his “Studio @ 50 Oliver” podcast.

We covered just about every topic I hit in the book, with a heavy emphasis on the attitude shifts (“IoT Essential Truths”) needed to really capitalize on the IoT, and the bleeding-edge concept I introduce at the end of the book, the “Circular Corporation”: departments and individuals (even including your supply chain, distribution network and customers, if you choose) in a continuous, circular management style revolving around a shared real-time IoT hub. Hope you’ll enjoy it!

IoT Design Manifesto 1.0: great starting point for your IoT strategy & products!

Late in the process of writing my forthcoming IoT strategy book, The Future Is Smart, I happened on the “IoT Design Manifesto 1.0” site. I wish I’d found it earlier so I could have featured it more prominently in the book.

The reason is that the manifesto is the product of a collaborative process involving both product designers and IoT thought leaders such as the great Rob van Kranenburg (bear in mind that the original team of participants designed it to be dynamic and iterative, so it will doubtless change over time). As I’ve written ad nauseam, I think of the IoT as inherently collaborative: sharing data rather than hoarding it can lead to synergistic benefits, and collaborative approaches such as smart cities get their strength from an evolving mix of individual actions that grows progressively more valuable.

From the names, I suspect most of the Manifesto’s authors are European. That’s important, since Europeans seem, on the whole, more concerned about IoT privacy and security than their American counterparts; witness the EU-driven “privacy by design” concept, which makes privacy a priority from the beginning of the design process.

At any rate, I was impressed that the manifesto combines both philosophical and economic priorities, and does so in a way that should maximize the benefits and minimize the problems.

I’m going to take the liberty of including the entire manifesto, with my side comments:

  1. WE DON’T BELIEVE THE HYPE. We pledge to be skeptical of the cult of the new — just slapping the Internet onto a product isn’t the answer. Monetizing only through connectivity rarely guarantees sustainable commercial success.
    (Comment: this is like my “just because you can do it doesn’t mean you should” warning: if making a product “smart” doesn’t add real value, why do it?)*
  2. WE DESIGN USEFUL THINGS. Value comes from products that are purposeful. Our commitment is to design products that have a meaningful impact on people’s lives; IoT technologies are merely tools to enable that.
    (Comment: see number 1!)
  3. WE AIM FOR THE WIN-WIN-WIN. A complex web of stakeholders is forming around IoT products: from users, to businesses, and everyone in between. We design so that there is a win for everybody in this elaborate exchange.
    (Comment: This is a big one in my mind, and relates to my IoT Essential Truth #2 — share data, don’t hoard it — when you share IoT data, even with competitors in some cases [think of IFTTT “recipes”], you can create services that benefit customers, companies, and even the greater good, such as reducing global warming.)
  4. WE KEEP EVERYONE AND EVERYTHING SECURE. With connectivity comes the potential for external security threats executed through the product itself, which comes with serious consequences. We are committed to protecting our users from these dangers, whatever they may be.
    (Comment: Amen! As I’ve written ad nauseam, protecting privacy and security must be THE highest IoT priority — see next post below!)
  5. WE BUILD AND PROMOTE A CULTURE OF PRIVACY. Equally severe threats can also come from within. Trust is violated when personal  information gathered by the product is handled carelessly. We build and promote a culture of integrity where the norm is to handle data with care.
    (Comment: See 4!)
  6. WE ARE DELIBERATE ABOUT WHAT DATA WE COLLECT. This is not the business of hoarding data; we only collect data that serves the utility of the product and service. Therefore, identifying what those data points are must be conscientious and deliberate.
    (Comment: this is a delicate issue, because you may find that data that wasn’t originally valuable becomes so as new correlations and links are established. However, collecting data willy-nilly and depositing it in an unstructured “data lake” for possible later use is asking for trouble if your security is breached.)
  7. WE MAKE THE PARTIES ASSOCIATED WITH AN IOT PRODUCT EXPLICIT. IoT products are uniquely connected, making the flow of information among stakeholders open and fluid. This results in a complex, ambiguous, and invisible network. Our responsibility is to make the dynamics among those parties more visible and understandable to everyone.
    (Comment: see what I wrote in the last post, where I recommended companies spell out their privacy and usage policies completely and in plain language)
  8. WE EMPOWER USERS TO BE THE MASTERS OF THEIR OWN DOMAIN. Users often do not have control over their role within the network of stakeholders surrounding an IoT product. We believe that users should be empowered to set the boundaries of how their data is accessed and how they are engaged with via the product.
    (Comment: consistent with prior points, make sure any permissions are explicit and opt-in rather than opt-out, to protect both users and yourself. Rather avoid lawsuits? Thought so…)
  9. WE DESIGN THINGS FOR THEIR LIFETIME. Currently physical products and digital services tend to be built to have different lifespans. In an IoT product features are codependent, so lifespans need to be aligned. We design products and their services to be bound as a single, durable entity.
    (Comment: consistent with the emerging circular economy concept, this can be a win-win-win for you, your customer and the environment. Products that don’t become obsolete quickly but can be upgraded, whether in hardware or software, will delight customers and build their loyalty [remember that if you continue to meet their needs and desires, there’s less incentive for customers to check out competitors and possibly be wooed away!]. Products that you enhance over time, and particularly those you market as services instead of selling, will also stay out of landfills and reduce your production costs.)
  10. IN THE END, WE ARE HUMAN BEINGS. Design is an impactful act. With our work, we have the power to affect relationships between people and technology, as well as among people.  We don’t use this influence to only make profits or create robot overlords; instead, it is our responsibility to use design to help people, communities, and societies  thrive.
    (Comment: yay, designers!)
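Point 6, in particular, translates directly into code. Here’s a minimal sketch of deliberate data collection, with hypothetical field names of my own invention (no real product implied): keep an explicit allowlist of the fields that serve the product’s utility, and drop everything else before it ever leaves the device.

```python
# Illustrative data-minimization filter: only fields on an explicit
# allowlist (those that serve the product's utility) are transmitted.
# All field names here are hypothetical examples, not from a real product.

ALLOWED_FIELDS = {"device_id", "firmware_version", "battery_level"}

def minimize(payload: dict) -> dict:
    """Strip any field not deliberately chosen for collection."""
    return {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}

raw = {
    "device_id": "abc123",
    "battery_level": 87,
    "gps_position": (42.36, -71.06),   # never needed, so never collected
    "wifi_ssid": "HomeNetwork",        # ditto
}
print(minimize(raw))  # only device_id and battery_level survive
```

The point of the allowlist (rather than a blocklist) is that any new data point a sensor starts emitting is dropped by default until someone consciously and deliberately decides it serves the product.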

I’ve personally signed onto the Manifesto and hope to contribute in the future (I’d like something explicit about the environment in it, but who knows), and I urge you to do the same. More important, why start from scratch to come up with your own product design guidelines when you can capitalize on the hard work that’s gone into the Manifesto as a starting point and modify it for your own unique needs?


*BTW: I was contemptuous of the first IoT electric toothbrush I wrote about, but I’ve since talked to a leader in the field who convinced me it could actually revolutionize the practice of dentistry for the better by providing objective proof that a patient had brushed frequently and correctly. My bad!

“The House That Spied on Me”: Finally Objective Info on IoT Privacy (or Lack Thereof)

Posted on 25th February 2018 in data, Essential Truths, Internet of Things, privacy, security, smart home

Pardon a political analogy: just as the recent indictment of 13 Russians in the horrific bot campaign to undermine our democracy (you may surmise my position on this! The WIRED article about it is a must-read!) finally provided objective information on the plot, so too Kashmir Hill’s and Surya Mattu’s excruciatingly detailed “The House That Spied on Me” finally provides objective information on the critical question of how much personal data IoT device manufacturers are actually compiling from our smart home devices.

This is critical, because we’ve previously had to rely on anecdotal evidence such as the Houston baby-cam scandal, and that’s not adequate for sound government policy making and/or advice to other companies on how to handle the privacy/security issue.

Last year, Hill (who wrote one of the first articles on the danger when she was at Forbes) added just about every smart home device you can imagine to her apartment (I won’t repeat the list: I blush easily…). Then her colleague, Mattu, monitored the devices’ outflow using a special router he created, to which she connected all the devices:

“… I am basically Kashmir’s sentient home. Kashmir wanted to know what it would be like to live in a smart home and I wanted to find out what the digital emissions from that home would reveal about her. Cybersecurity wasn’t my focus. … Privacy was. What could I tell about the patterns of her and her family’s life by passively gathering the data trails from her belongings? How often were the devices talking? Could I tell what the people inside were doing on an hourly basis based on what I saw?”

The answer was: a lot (I couldn’t paste the chart recording the numbers here, so check the article for the full report)!

As Mattu pointed out, the device gave him access to precisely the data about Hill’s apartment that Comcast could collect and sell because of a 2017 law allowing ISPs to sell customers’ internet usage data without their consent — including the smart device data. The various devices sent data constantly, sometimes even when they weren’t being used! In fact, there hasn’t been a single hour since the router was installed in December without at least some devices sending data — even if no one was at home!
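To picture the kind of passive analysis Mattu did, here’s a toy sketch (my own illustration, not his actual code; the device names and records are invented): even with every payload encrypted, simply tallying traffic per device per hour reveals the household’s routine.

```python
from collections import defaultdict

# Toy version of passive metadata analysis: given (hour, device) records
# observed at the router, tally traffic per device per hour. Payload
# contents are irrelevant; timing alone sketches the household's routine.
# Device names and records are invented for illustration.

records = [
    (7, "toothbrush"), (7, "coffee_maker"), (8, "echo"),
    (12, "echo"), (22, "toothbrush"), (23, "sleep_tracker"),
    (3, "echo"),  # chatter even while everyone is asleep
]

activity = defaultdict(lambda: defaultdict(int))
for hour, device in records:
    activity[device][hour] += 1

# Hours the toothbrush phoned home ~= when someone brushed their teeth.
brushing_hours = sorted(activity["toothbrush"])
print(brushing_hours)  # [7, 22]
```

No decryption, no content inspection: the mere existence and timing of the packets is the privacy leak.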

BTW: Hill, despite her expertise and manufacturers’ claims of ease-of-setup, found configuring all of the devices, and especially making them work together, was a nightmare. Among other tidbits about how difficult it was: she had to download 14 different apps!  The system also directly violated her privacy, uploading a video of her walking around the apartment nude that was recorded by the Withings Home Wi-Fi Security (ahem…) Camera with Air Quality Sensors. Fortunately the offending video was encrypted. Small comfort.

Hill came to realize how convoluted privacy and security can become with a smart home:

“The whole episode reinforced something that was already bothering me: Getting a smart home means that everyone who lives or comes inside it is part of your personal panopticon, something which may not be obvious to them because they don’t expect everyday objects to have spying abilities. One of the gadgets—the Eight Sleep Tracker—seemed aware of this, and as a privacy-protective gesture, required the email address of the person I sleep with to request his permission to show me sleep reports from his side of the bed. But it’s weird to tell a gadget who you are having sex with as a way to protect privacy, especially when that gadget is monitoring the noise levels in your bedroom.”

Mattu reminds us that, even though most of the data was encrypted, even the most basic digital exhaust gives trained experts valuable clues from which to build digital profiles of us, whether to target ads or for more nefarious purposes:

“It turns out that how we interact with our computers and smartphones is very valuable information, both to intelligence agencies and the advertising industry. What websites do I visit? How long do I actually spend reading an article? How long do I spend on Instagram? What do I use maps for? The data packets that help answer these questions are the basic unit of the data economy, and many more of them will be sent by people living in a smart home.”

Given the concerns about whether Amazon, Google, and Apple are constantly monitoring you through your smart speaker (remember when an Echo was subpoenaed in a murder case?), Mattu reported that:

“… the Echo and Echo Dot … were in constant communication with Amazon’s servers, sending a request every couple of minutes to http://spectrum.s3.amazonaws.com/kindle-wifi/wifistub-echo.html. Even without the “Alexa” wake word, and even when the microphone is turned off, the Echo is frequently checking in with Amazon, confirming it is online and looking for updates. Amazon did not respond to an inquiry about why the Echo talks to Amazon’s servers so much more frequently than other connected devices.”

Even the seemingly most insignificant data can be important:

“I was able to pick up a bunch of insights into the Hill household—what time they wake up, when they turn their lights on and off, when their child wakes up and falls asleep—but the weirdest one for me personally was knowing when Kashmir brushes her teeth. Her Philips Sonicare Connected toothbrush notifies the app when it’s being used, sending a distinctive digital fingerprint to the router. While not necessarily the most sensitive information, it made me imagine the next iteration of insurance incentives: Use a smart toothbrush and get dental insurance at a discount!”

Lest you laugh at that, a dean at the BU Dental School told me much the same thing: that the digital evidence from a smart brush (a Colgate one, in this case) could actually revolutionize dentistry, not only letting your dentist know how well, or not, you brushed, but perhaps lowering your dental insurance premium or affecting the amount your dentist was reimbursed. Who woulda thunk it?

Summing up (there’s a lot of additional important info in the story, especially about the perfidious Vizio smart TV, which had such a company-weighted privacy policy that the FTC actually forced Vizio to turn off the “feature” and pay reparations, so do read the whole article), Hill concluded:

“I thought the house would take care of me but instead everything in it now had the power to ask me to do things. Ultimately, I’m not going to warn you against making everything in your home smart because of the privacy risks, although there are quite a few. I’m going to warn you against a smart home because living in it is annoying as hell.”

In addition to making privacy and security a priority, there is another simple and essential step smart home (and Quantified Self) device companies must take.

When you open the box for the first time, the first thing you see must be a prominently displayed privacy and security policy, written in plain (and I mean really plain) English and printed in large, bold type. It should make clear that any data sharing is opt-in and that you have the right to refuse, and it should emphasize the need for detailed, unique passwords (no, 1-2-3-4 or the ever-popular “password” is not enough).

Just to make certain the point is made, it needs to appear at the very beginning of the set-up app as well. Yes, you should also include the detailed legalese in agate type, but the critical points must be made in the basic statement, which needs to be reviewed not just by the lawyers but also by a panel of laypeople, who must also carry out the steps to make sure they’re really easily understood and acted on. This is not just a suggestion: you absolutely must do it, or you risk major penalties and public fury.
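To make the opt-in point concrete, here’s a minimal sketch of what those first-run checks might look like (the setting names and password rules are illustrative choices of mine, not any legal or security standard): sharing defaults to off, and trivially guessable passwords are rejected.

```python
# Illustrative first-run checks: data sharing is opt-in (defaults to
# False) and trivially guessable passwords are rejected. The rules and
# names here are examples only, not a legal or security standard.

COMMON_PASSWORDS = {"password", "1234", "12345678", "qwerty"}

def default_settings() -> dict:
    # Every sharing option starts disabled; the user must opt in.
    return {"share_usage_data": False, "share_with_partners": False}

def password_acceptable(pw: str) -> bool:
    return len(pw) >= 12 and pw.lower() not in COMMON_PASSWORDS

settings = default_settings()
assert not any(settings.values())        # nothing shared until opted in
print(password_acceptable("1234"))       # False
print(password_acceptable("correct-horse-battery-staple"))  # True
```

The key design choice is that the default state is the private one: doing nothing shares nothing.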


Clearly, this article gives us the first objective evidence that there’s a lot more to do to assure privacy and security for smart homes (and that there’s also a heck of a lot of room for improvement on how the devices play together!), reaffirming my judgement that the first IoT Essential Truth remains “make privacy and security your highest priority.” If this doesn’t get the focus it deserves, we may lose all the benefits of the IoT because of legitimate public and corporate concern that their secrets are at risk. N.B.!

Mycroft Brings Open-Source Revolution to Home Assistants

Brilliant!  Crowd-funded (even better!) Mycroft brings the rich potential of open-source to the growing field of digital home assistants.   I suspect it won’t be long until it claims a major part of the field, because the Mycroft platform can evolve and grow exponentially by capitalizing on the contributions of many, many people, not unlike the way IFTTT has with its crowd-sourced smart home “recipes.”

According to a fascinating ZDNet interview with its developer, Joshua Montgomery, his motivation was not profit per se, but to create a general AI intelligence system that would transform a start-up space he was re-developing:

“He wanted to create the type of artificial intelligence platform that ‘if you spoke to it when you walked in the room, it could control the music, control the lights, the doors’ and more.”


Montgomery wanted to do this through an open-source voice control system, but there wasn’t an open-source equivalent to Siri or Alexa. After building the natural-language, open-source AI system to fill that need (tag line: “An Artificial Intelligence for Everyone”), he decided to build a “reference device,” as the reporter terms it (gotta love that technospeak; in other words, a hardware device that could demonstrate the system). That in turn led to a crowdfunding campaign on Kickstarter and Backerkit to fund the home hub, which is based on that old chestnut of the IoT, the Raspberry Pi. The result is a squat, cute unit (it looks like a smiley face) with a high-quality speaker.

Most important, when the development team is done with the AI platform, Mycroft will release all of the Mycroft AI code under GPLv3, inviting the open-source community to capitalize and improve on it. That will place Mycroft squarely in the open-source heritage of Linux and Mozilla.

Among other benefits, Mycroft will use natural language processing to activate a wide range of online services, from Netflix to Pandora, as well as control your smart home devices.
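At the heart of any such assistant, open-source or not, is routing an utterance to the right skill. Here’s a deliberately simplified keyword-matching sketch of the idea (this is not Mycroft’s actual skill API, just an illustration of the concept):

```python
# Deliberately simplified intent routing, in the spirit of an open
# voice assistant: each skill registers keywords, and an utterance is
# routed to the skill whose keywords it best matches. This is NOT
# Mycroft's real skill API; skill names and keywords are invented.

SKILLS = {
    "play_music":  {"play", "music", "song"},
    "lights_on":   {"turn", "on", "lights"},
    "lights_off":  {"turn", "off", "lights"},
}

def route(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the skill with the largest keyword overlap.
    return max(SKILLS, key=lambda s: len(SKILLS[s] & words))

print(route("please play some music"))   # play_music
print(route("turn the lights off"))      # lights_off
```

The open-source payoff is exactly here: anyone can add a new entry to the skill registry, which is why a community-driven platform can grow far faster than any one company’s roadmap.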

Mycroft illustrates one of my favorite IoT Essential Truths: we need to share data, not hoard it. I don’t care how brilliant your engineers are: they are only a tiny percentage of the world’s population, with only a limited range of personal experience (especially if they’re callow millennials) and interests. When you go open source and throw your data open to the world, the progress will be greater, as will the benefits — to you and humanity.

Human Side of IoT: Local Startup Empowers Forgotten Shop Floor Workers!

Let’s not forget: human workers can and must still play a role in the IoT!

Sure, the vast majority of IoT attention focuses on large-scale precision and automated manufacturing (Industrie 4.0, as it’s known in Germany, or the Industrial Internet here). However, an ingenious local startup, Tulip, is bringing IoT tools to the workbench and shop floor, empowering individual industrial engineers to create no-code/low-code apps that can really revolutionize things in the factory. Yes, many jobs will be replaced by IoT tech, but with Tulip, others will be “enabled”: workers will still be there to make decisions, and they’ll be empowered as never before.

Um, I’m thinking superhuman factory Transformers, LOL!

The Tulip IoT gateway allows anyone to add sensors, tools, cameras and even “pick-to-light bins” (I’d never heard that bit of shop lingo, but they looked cool in the video) to the work station without writing a line of code, thanks to the company’s support for a diverse range of factory-floor device drivers. Tulip claims to “fill the gap between rigid back-end manufacturing IT systems and the dynamic operations taking place on the shop floor.”

Rony Kubat, the young MIT grad who co-founded the company, is on a mission “to revolutionize manufacturing software,” as he says, because the people who actually play a hands-on role in product design and production on the shop floor have been ignored in the IoT, and many processes, such as training, are still paper-based:

“Manufacturing software needs to evolve. Legacy applications neglect the human side of manufacturing and therefore suffer from low adoption. The use of custom, expensive-to-maintain, in-house solutions is rampant. The inability of existing solutions to address the needs of people on the shop floor is driving the proliferation of paper-based workflows and the use of word processing, spreadsheet and presentation applications as the mainstay of manufacturing operations. Tulip aims to change all this through our intuitive, people-centric platform. Our system makes it easy for manufacturers to connect hands-on work processes, machines and backend IT systems through flexible self-serve manufacturing apps”.

While automation on factory floors continues to grow, manufacturers often find their hands-on workforce left behind, using paper and legacy technology. Manufacturers are seeing an enormous need to empower their workforce with intuitive digital tools. Tulip is a solution to this problem. Front-line engineers create flexible shop-floor apps that connect workers, machines and existing IT systems. These apps guide shop-floor operations, enabling real-time data collection and making that data useful to workers on factory floors. Tulip’s IoT gateway integrates the devices, sensors and machines on the shop floor, making it easy to monitor and interact with previously siloed data streams (you got me there: I HATE siloed data). The platform’s self-serve analytics engine lets manufacturers turn this data into actionable insights, supporting continuous process improvement.

The company has grown quickly, and has dozens of customers in fields as varied as medical devices, pharma, and aerospace. The results are dramatic and quite varied:

  • Quality: A Deloitte analysis of Tulip’s use at Jabil, a global contract manufacturer, found that production yield increased by more than 10 percent and that quality issues in manual assembly were reduced by 60 percent in the initial four weeks of operation.
  • Training: Other customers reduced the time to train new operators by 90 percent in a highly complicated, customized and regulated biopharmaceutical training situation: “Previously, the only way to train new operators was to walk them repeatedly through all the steps with an experienced operator and a process engineer. Tulip quickly deployed its software along with IoT gateways for the machines and devices on the process, and managed to cut training time almost by half.”
  • Time to Market: Tulip reduced a major athletic apparel maker’s time to market by 50% for hundreds of new product variations. That required constantly evaluating the impact of dozens of different quality drivers to isolate defects’ root causes, across both manual and automated platforms. Before Tulip, it could take weeks of analysis until a process was ready for production. According to the quality engineer on the project, “I used Tulip’s apps to communicate quality issues to upstream operators in real-time. This feedback loop enabled the operators to take immediate corrective action and prevent additional defects from occurring.”

Similar to my friends at Mendix, the no-code/low-code aspect of Tulip’s Manufacturing App Platform lets process engineers without programming backgrounds create shop floor apps through interactive step-by-step work instructions. “The apps give you access through our cloud to an abundance of information and real-time analytics that can help you measure and fine-tune your manufacturing operations,” Tulip Co-Founder Natan Linder says (the whiz-kid is also chairman of 3-D printer startup Formlabs). 

Linder looked at analytics apps that let users create apps through simple tools and thought: why not provide the same kind of tools for training technicians on standard operating procedures, or for building product, or for tracking quality defects? “This is a self-service tool that a process or quality engineer can use to build apps. They can create sophisticated workflows without writing code…. Our cloud authoring environment basically allows you to just drag and drop and connect all the different faucets and links to create a sophisticated app in minutes, and deploy it to the floor, without writing code,” he says. Tulip enables sharing appropriate real-time analytics with each team member no matter where they are, and lets each set up personal alerts for the data that’s relevant to them.

IMHO, this is a perfect example of my IoT “Essential Truth” of “empowering every worker with real-time data.”  Rather than senior management parceling out (as they saw fit) the little amount of historical data that was available in the past, now workers can share (critical verb) that data instantly and combine it with the horse sense that can only be gained by those actually doing the work for years. Miracles will follow!

Writ large, the benefits of empowering shop floor workers are potentially huge. According to the UK Telegraph, output can increase 8 to 9 percent while cutting costs approximately 7 to 8 percent. The same research estimates that industrial companies “could see as much as a 300 basis point boost to their bottom line.”

Examples of the relevant shop-floor analytics include:

  • “Show real-time metrics from the shop floor
  • Report trends in your operations
  • Send customized alerts based on user defined triggers
  • Inform key stakeholders with relevant data”
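The “customized alerts based on user defined triggers” item is the easiest to picture in code. Here’s a minimal sketch (the metric names, thresholds and readings are invented for illustration, not taken from Tulip’s platform):

```python
# Minimal sketch of user-defined alert triggers on shop-floor metrics.
# Metric names, thresholds, and readings are invented for illustration;
# this is not Tulip's actual platform, just the idea behind it.

triggers = [
    {"metric": "defect_rate", "above": 0.05, "message": "Defect rate high"},
    {"metric": "cycle_time",  "above": 90.0, "message": "Cycle time slipping"},
]

def check(readings: dict) -> list:
    """Return alert messages for every trigger its reading exceeds."""
    return [t["message"] for t in triggers
            if readings.get(t["metric"], 0) > t["above"]]

readings = {"defect_rate": 0.08, "cycle_time": 75.0}
print(check(readings))  # ['Defect rate high']
```

The point of letting users define the triggers, rather than management or IT, is exactly the bottom-up empowerment the post argues for: the person closest to the process decides what counts as an anomaly.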

IDC Analyst John Santagate neatly sums up the argument for empowering workers through the IoT thusly:

“With all of the talk and concern around the risk of losing the human element in manufacturing, due to the increasing use of robotics, it is refreshing to see a company focus on improving the work that is still done by human hands.  We typically hear the value proposition of deploying robots and automation of improvements to efficiency, quality, and consistency.  But what if you could achieve these improvements to your manufacturing process by simply applying analytics and technology to the human effort?  This is exactly what they are working on at Tulip.  

“Data analytics is typically thought about at the machine level. Manufacturers measure things such as throughput, efficiency, and quality by applying sensors to their manufacturing equipment, capturing the data signals, and conducting analytics.  The analytics provide an understanding of the health of the manufacturing process and enable them to make any necessary changes to improve the process.  Often, such efforts are top down driven.  Management drives these projects in order to improve the performance of the business.  An alternative approach is to enable the production floor to proactively identify improvement opportunities and take action, a bottom-up approach. For this self-service approach to succeed shop-floor engineers need a flexible platform such as Tulip’s, that allows them to replace paper-based processes with technology and build the applications that enable them to manage hands-on processes.  The real time analytics and visibility of hands-on manufacturing processes from Tulip’s platform puts the opportunity to identify improvement opportunities directly in the hands of people engaged in the work cells.

“Digital transformation in manufacturing is about leveraging advanced digital technology to improve how a company operates.  But, as the manufacturing industry focuses on digital transformation it must not forget the value of the human element.  Indeed, we don’t often think about digital transformation in relation to human effort, but this is exactly the sort of thinking that can deliver some of the early wins in digital transformation. “ 

Well said, and thanks to Tulip for addressing a critical and often overlooked aspect of the IoT!

I’m reminded of my old friend Steve Clay-Young, who managed the BAC’s shop in Boston, and first alerted me to the “National Homeworkshop Guild,” which Popular Science started in the Depression and which then played a critical part in the war effort. Craftsmen who belonged all got plans and turned out quality products on their home lathes.  I can definitely see a rebirth of the concept as the cost of 3-D printers from Kubat’s other startup, Formlabs, drops, and we can have the kind of home (or at least locally-based) production that Eric Drexler dreamed of in his great Engines of Creation (which threw in another transformational production technology, nanotech).

I’m clearing space in my own workshop so I can begin production on IoT/nanotech/3-D printed products. Move over, GE.

OtoSense: the next level in sound-based IoT

It sounds (pardon the pun) as if the IoT may really be taking off as an important diagnostic repair tool.

I wrote a while ago about the Auguscope, which represents a great way to begin an incremental approach to the IoT because it’s a hand-held device to monitor equipment’s sounds and diagnose possible problems based on abnormalities.

Now NPR reports on a local (Cambridge) firm, OtoSense, that is expanding on this concept on the software end. Its tagline is “First software platform turning real-time machine sounds and vibrations into actionable meaning at the edge.”

Love the platform’s origins: it grows out of founder Sebastien Christian’s research on deafness (as I wrote in my earlier post, I view suddenly being able to interpret things’ sounds as a variation on how the IoT eliminates the “Collective Blindness” that I’ve used to describe our inability to monitor things before the IoT’s advent):

“[Christian] … is a quantum physicist and neuroscientist who spent much of his career studying deaf children. He modeled how human hearing works. And then he realized, hey, I could use this model to help other deaf things, like, say, almost all machines.”

(aside: I see this as another important application of my favorite IoT question: learning to automatically ask “who else can use this data?” How does that apply to YOUR work? But I digress).

According to Technology Review, the company is concentrating primarily on analyzing car sounds from IoT detectors on the vehicle at this point (working with a number of car manufacturers), although they believe the concept can be applied to a wide range of sound-emitting machinery:

“… OtoSense is working with major automakers on software that could give cars their own sense of hearing to diagnose themselves before any problem gets too expensive. The technology could also help human-driven and automated vehicles stay safe, for example by listening for emergency sirens or sounds indicating road surface quality.

OtoSense has developed machine-learning software that can be trained to identify specific noises, including subtle changes in an engine or a vehicle’s brakes. French automaker PSA Group, owner of brands including Citroen and Peugeot, is testing a version of the software trained using thousands of sounds from its different vehicle models.

Under a project dubbed AudioHound, OtoSense has developed a prototype tablet app that a technician or even car owner could use to record audio for automated diagnosis, says Guillaume Catusseau, who works on vehicle noise in PSA’s R&D department.”

According to NPR, the company is working to apply the same approach to a wide range of other types of machines, from assembly lines to DIY drills. As always with IoT data, handling massive amounts of data will be a challenge, so they will emphasize edge processing.
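The edge-processing idea is easy to sketch: extract a few cheap acoustic features on the device itself and send only the anomalies upstream, rather than streaming raw audio to the cloud. Here is a minimal, hypothetical illustration in Python; the features, thresholds, and simulated fault are my own assumptions for demonstration, not OtoSense’s actual algorithms:

```python
import numpy as np

FRAME = 1024  # samples per analysis frame

def frame_features(frame):
    """RMS energy and (normalized) spectral centroid of one audio frame."""
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return rms, centroid

def detect_anomalies(signal, baseline_rms, threshold=3.0):
    """Flag frames whose energy deviates sharply from a learned baseline.

    Only the indices of anomalous frames would be sent upstream,
    keeping most of the processing (and the data) at the edge.
    """
    alerts = []
    for i in range(0, len(signal) - FRAME + 1, FRAME):
        rms, _ = frame_features(signal[i:i + FRAME])
        if rms > threshold * baseline_rms:
            alerts.append(i // FRAME)
    return alerts

# Simulated healthy machine hum, with a brief loud fault injected
t = np.arange(8 * FRAME)
signal = 0.1 * np.sin(0.02 * t)
signal[3 * FRAME:4 * FRAME] += 1.0  # injected fault burst

baseline = 0.1 / np.sqrt(2)  # RMS of the healthy hum
print(detect_anomalies(signal, baseline))  # frame 3 is flagged
```

A real system would of course use far richer features and a trained model, but the shape of the pipeline (featurize locally, transmit only events) is the same.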

OtoSense has a “design factory” on its site, where potential customers answer a variety of questions about the sounds they must monitor (such as whether the software will be used indoors or out, whether it is to detect anomalies, etc.) that will allow the company to choose the appropriate version of the program.

TechCrunch did a great article on the concept, which underscores that really making sound detection precise will take a lot of time and refinement, in part because (guess what) sounds from a variety of sources are often mingled, so the relevant ones must be determined and isolated:

“We have loads of audio data, but lack critical labels. In the case of deep learning models, ‘black box’ problems make it hard to determine why an acoustical anomaly was flagged in the first place. We are still working the kinks out of real-time machine learning at the edge. And sounds often come packaged with more noise than signal, limiting the features that can be extracted from audio data.”

In part, as with other forms of pattern recognition such as voice, this is because it will require accumulating huge data files:

“Behind many of the greatest breakthroughs in machine learning lies a painstakingly assembled dataset: ImageNet for object recognition, and things like the Linguistic Data Consortium and GOOG-411 in the case of speech recognition. But finding an adequate dataset to juxtapose the sound of a car-door shutting and a bedroom-door shutting is quite challenging.

“’Deep learning can do a lot if you build the model correctly, you just need a lot of machine data,’ says Scott Stephenson, CEO of Deepgram, a startup helping companies search through their audio data. ‘Speech recognition 15 years ago wasn’t that great without datasets.’

“Crowdsourced labeling of dogs and cats on Amazon Mechanical Turk is one thing. Collecting 100,000 sounds of ball bearings and labeling the loose ones is something entirely different.

“And while these problems plague even single-purpose acoustical classifiers, the holy grail of the space is a generalizable tool for identifying all sounds, not simply building a model to differentiate the sounds of those doors.

…”A lack of source separation can further complicate matters. This is one that even humans struggle with. If you’ve ever tried to pick out a single table conversation at a loud restaurant, you have an appreciation for how difficult it can be to make sense of overlapping sounds.

Bottom line: there’s still a lot of theoretical and product-specific testing that must be done before IoT-based sound detection will be an infallible diagnostic tool for predictive maintenance, but clearly there’s precedent for the concept, and the potential payoffs are great!

LOL: as the NPR story pointed out, this science may owe its origins to two MIT grads of an earlier era, “Click” and “Clack” of Car Talk, who frequently got listeners to contribute their own hilarious descriptions of the sounds they heard from their malfunctioning cars.   BRTTTTphssssBRTTTT…..

A Vision for Dynamic and Lower-Cost Aging in Cities Through “SmartAging”

I’ve been giving a lot of thought recently to how my vision of IoT-based “SmartAging” through a combination of:

  • Quantified Self health apps and devices to improve seniors’ health and turn their health care into more of a partnership with their doctors
  • and smart home devices that would make it easier to manage their homes and “age in place” rather than being institutionalized

could meld with the exciting developments in smart city devices and strategy.  I believe the results could make seniors happier and healthier, reduce the burdens on city budgets of growing aging populations, and spur unprecedented creativity and innovation on these issues. Here’s my vision of how the two might come together. I’d welcome your thoughts on the concept!

 

A Vision for Dynamic and Lower-Cost Aging in Cities Through “SmartAging”

It’s clear business as usual in dealing with aging in America won’t work anymore.  10,000 baby boomers a day retire and draw Social Security. Between now and 2050, seniors will be the fastest growing segment of the population.  How can we stretch government programs and private resources so seniors won’t be sickly and live in abject poverty, yet millennials won’t be bankrupted either?

As someone in that category, this is of more than passing interest to me! 

I propose a new approach to aging in cities, marrying advanced but affordable personal technology, new ways of thinking about aging, and hybrid formal and ad hoc public-private partnerships, which can deal with at least part of the aging issue. Carving out some seniors from needing services through self-reliance and enhancing their well-being would allow focusing scarce resources on the most vulnerable remaining seniors. 

The approach is made possible not only by the plummeting cost and increasing power of personal technology but also the exciting new forms of collaboration it has made possible.

The proposal’s basis is the Internet of Things (IoT).  There is already a growing range of IoT wearable devices that track health indicators such as heart rate and promote fitness, and of IoT “smart home” devices that control lighting, heat, and other systems. The framework visualized here would easily integrate these devices, but they can be expensive, so it is designed so that seniors could benefit from the project without having to buy the dedicated devices.

This proposal does not attempt to be an all-encompassing solution to every issue of aging, but instead will create a robust, open platform that government agencies, companies, civic groups, and individuals can build upon to reduce burdens on individual seniors, improve their health and quality of life, and cut the cost of and need for some government services. Even better, the same platform and technologies can be used to enhance the lives of others throughout the life spectrum as well, increasing its value and versatility.

The proposal is for two complementary projects to create the basis for a later, more ambitious one.

Each would be valuable in its own right and perhaps reach differing portions of the senior population. Combined, they would provide seniors and their families with a wealth of real-time information to improve health, mobility, and quality of life, while cutting their living costs and reducing social isolation.  The result would be mutually beneficial public-private partnerships that would, one hopes, improve not only seniors’ lives but also their feeling of connectedness to the broader community. Rather than treat seniors as passive recipients of services, the projects would empower them to be as self-reliant as possible given their varying circumstances. Both would build on the Lifeline program in Massachusetts (and similar ones elsewhere) that gives low-income residents basic Internet service at low cost.

Locally, Boston already has a record of achievement in internet-based services to connect seniors with others, starting with the simple and tremendously effective SnowCrew program that Joe Porcelli launched in the Jamaica Plain neighborhood. This later expanded nationwide into the NextDoor site and app, which could easily be used by participants in the program.

The first project would capitalize on the widespread popularity of the new digital “home assistants,” such as the Amazon Echo and Google Home.  One version of the Echo can be bought for as little as $49, with bulk buying also possible.  A critical advantage of these devices, rather than home monitoring devices specifically for seniors, is that they are mainstream, benefit from the “network effects” phenomenon that means each becomes more valuable as more are in use, and don’t stigmatize the users or shout I’M ELDERLY. A person who is in their 50s could buy one now, use it for routine household needs, and then add additional age-related functions (see below) as they age, amortizing the cost.

The most important thing to remember about these devices regarding aging is the fact that they are voice-activated, so they would be especially attractive to seniors who are tech-averse or simply unable to navigate complex devices. The user simply speaks a command to activate the device.

The Echo (one presumes a variation on the same theme will soon be the case with the Google Home, Apple’s “HomePod,” and other devices that might enter the space in the future) gets its power from “skills,” or apps, that are developed by third-party developers. They give it the power, via voice, to deliver a wide range of content on every topic under the sun.  Several already released “skills” give an idea of how this might work:

  • Ask My Buddy helps users in an emergency: it can send phone calls or text messages to up to five contacts. A user would say, “Alexa, ask my buddy Bob to send help,” and Bob would get an alert to check in on his friend.
  • Linked thermostats can raise or lower the temperature a precise amount, and lights can also be turned on or off or adjusted for specific needs.
  • Marvee can keep seniors in touch with their families and lessen social isolation.
  • The Fitbit skill allows the user who also has a Fitbit to trace their physical activity, encouraging fitness.

Again looking to Boston for precedent, related apps include Children’s Hospital’s own app and its Kids’ MD app. Imagine how helpful it could be if the gerontology departments of hospitals provided similar “skills” for seniors!

Most important to making this service work would be to capitalize on the growing number of city-based open-data programs that release a variety of important real-time databases, which independent developers mash up to create “skills” such as real-time transit apps.  The author was a consultant to the District of Columbia in 2008 when it began this data-based “smart city” approach with the Apps for Democracy contest, which has spawned similar projects worldwide since then.  When real-time city data is released, the result is almost magic: individuals and groups see different value in the same data, and develop new services that use it in a variety of ways at no expense to taxpayers.

The key to this half of the pilot programs would be creating a working relationship with local Meetups (such as those already created in various cities for Alexa programmers) to stage one or more high-visibility hackathons. Programmers from major public and social service institutions serving seniors, colleges and universities, and others with an interest in the subject could come together to create “skills” based on the local public data feeds to serve seniors’ needs, such as:

  • health
  • nutrition
  • mobility
  • city services
  • overcoming social isolation (one might ask how a technological program could help with this need; the City of Barcelona, generally acknowledged as the world’s “smartest” city, is circulating an RFP right now with that goal and already has a “smart” program for seniors who need immediate help to call for it).

“Skills” are proliferating at a dizzying rate, and ones developed for one city can be easily adapted for localized use elsewhere.

Such a project would have no direct costs, but the city and/or a non-profit might negotiate lower bulk-buying rates for the devices, especially the lower-priced ($59 list) Amazon Dot, similar to the contract between the Japan Post Group, IBM, and Apple to buy 5 million iPads and equip them with senior-friendly apps from IBM, which the Post Group would then furnish to Japanese seniors. Conceivably, the Dots bought this way might come preloaded with the localized and senior-friendly “skills.”

The second component of a prototype SmartAging city program would make the wide range of real-time, location-based data released by various cities usable by joining the 100+ cities worldwide that have already joined the “Things Network,” which creates free citywide data networks specifically for Internet of Things use.

The concept uses a technology called LoRaWAN, which is low-cost (the 10 units used in Amsterdam, each with a signal range of about 6 miles, cost only $12,000 total, and much cheaper ones will be released soon) and was deployed and operative in less than a month!  The cost and difficulty of linking an entire city has plummeted as more cities join, and the global project is inherently collaborative.

With Things Network, entire cities would be converted into Internet of Things laboratories, empowering anyone (city agencies, companies, educational institutions, non-profits, individuals) to experiment with offering new services that would use the no-cost data-sharing network.  In cities that already host Things Networks, availability of the networks has spawned a wide range of novel local services.  For example, in Dunblane, Scotland, the team is developing a Things Network-based alarm system for people with dementia.  Even better, as the rapid spread of citywide open-data programs and the resulting open-source apps has illustrated, a neat app or service created in one city could easily be copied and enhanced elsewhere: virtuous imitation!
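One practical constraint worth knowing: LoRaWAN uplinks are tiny (typically a few dozen bytes per message), so services built on the Things Network pack sensor readings into compact binary payloads rather than verbose JSON. A hypothetical sketch of such a payload format; the field layout here is my own illustration, not a Things Network standard:

```python
import struct

def encode_reading(temp_c, battery_pct):
    """Pack a sensor reading into 3 bytes for a LoRaWAN uplink.

    Temperature is sent as a signed 16-bit int in tenths of a degree,
    battery as a whole percentage (one unsigned byte): a hypothetical
    payload format, but typical of the tiny frames LoRa's low
    bandwidth demands.
    """
    return struct.pack(">hB", int(round(temp_c * 10)), battery_pct)

def decode_reading(payload):
    """Unpack the 3-byte payload back into engineering units."""
    raw_temp, battery = struct.unpack(">hB", payload)
    return raw_temp / 10.0, battery

payload = encode_reading(-4.2, 87)
print(len(payload), decode_reading(payload))  # 3 bytes; round-trips to (-4.2, 87)
```

On the network side, a matching decoder runs once per application, so every “skill” or service built on the same feed sees clean numbers rather than raw bytes.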

The critical component of the prototype programs would be to hold one or more hackathons once the network was in place.  The same range of participants would be invited, and since the Things Network could also serve a wide range of other public/private uses for all age groups and demographics, more developers and subject matter experts might participate in the hackathon, increasing the chances of more robust and multi-purpose applications resulting.

These citywide networks could eventually become the heart of ambitious two-way services for seniors based on real-time data, similar to those in Bolzano, Italy.

The Internet of Things and smart cities will become widespread soon simply because of lowering costs and greater versatility, whether this prototype project for seniors happens or not. The suggestions above would make sure that the IoT serves the public interest by harnessing IoT data to improve seniors’ health, reduce their social isolation, and make them more self-sufficient. It will reduce the burden on traditional government services to seniors while unlocking creative new services we can’t even visualize today to enhance the aging process.

#IoT Sensor Breakthroughs When Lives Are On the Line!

One of my unchanging principles is always to look to situations where there’s a lot at stake — especially human lives — for breakthroughs in difficult issues.

Exhibit A of this principle for the IoT is sensor design, where needing to frequently service or recharge critical sensors that detect battlefield conditions can put soldiers’ lives at stake (yes, as long-time readers know, this is particularly of interest to me because my Army officer son was wounded in Iraq).

FedTech reports encouraging research at DARPA on how to create sensors that have ultra-low power requirements, can lie dormant for long periods of time and yet are exquisitely sensitive to critical changes in conditions (such as vehicle or troop movements) that might put soldiers at risk in battlefield conditions.

The N-ZERO (Near Zero Power RF and Sensor Operations) program is a three-year initiative to create new, low-energy battlefield sensors, particularly for use at forward operating bases where conditions can change quickly and soldiers are constantly at risk — especially if they have to service the sensors:

“State-of-the-art military sensors rely on “active electronics” to detect vibration, light, sound or other signals for situational awareness and to inform tactical planning and action. That means the sensors constantly consume power, with much of that power spent processing what often turns out to be irrelevant data. This power consumption limits sensors’ useful lifetimes to a few weeks or months with even the best batteries and has slowed the development of new sensor technologies and capabilities. The chronic need to service or redeploy power-depleted sensors is not only costly and time-consuming but also increases warfighter exposure to danger.”

“… [the project has] the goal of developing the technological foundation for persistent, event-driven sensing capabilities in which the sensor can remain dormant, with near-zero power consumption, until awakened by an external trigger or stimulus. Examples of relevant stimuli are acoustic signatures of particular vehicle types or radio signatures of specific communications protocols. If successful, the program could extend the lifetime of remotely deployed communications and environmental sensors—also known as unattended ground sensors (UGS)—from weeks or months to years.”

A key goal is a 20-fold battery size reduction while still having the sensor last longer.

What cost-conscious pipeline operator, large agribusiness, or “smart city” transportation director wouldn’t be interested in that kind of product as well?

According to Signal, the three-phase project is ahead of its targets. In the first phase, which ended in December, the DARPA team created “zero-power receivers that can detect very weak signals — less than 70 decibel-milliwatt radio-frequency (RF) transmissions, a measure that is better than originally expected.” This is critical to the military (and would have huge benefits for business as well, since monitoring frequently must be 24/7, but reporting background data, as opposed to significant changes, would both deplete batteries and require processing huge volumes of meaningless data). Accordingly, a key goal would be to create “… radio receivers that are continuously alert for friendly radio transmissions, but with near zero power consumption when transmissions are not present.” A target is “exploitation of the energy in the signal signature itself to detect and discriminate the events of interest while rejecting noise and interference. This requires the development of passive or event-powered sensors and signal-processing circuitry. The successful development of these techniques and components could enable deployments of sensors that can remain ‘off’ (that is, in a state that does not consume battery power), yet alert for detecting signatures of interest, resulting in greatly extended durations of operation.”
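The event-driven principle itself is simple to illustrate: compare the energy in a target signature band against a learned noise floor, and only “wake” the power-hungry electronics when the signature clearly stands out. A toy sketch in Python; the 100 Hz “vehicle signature,” margin, and simulated signals are all my own assumptions for demonstration, not N-ZERO’s actual design:

```python
import numpy as np

FS = 1000                # sample rate, Hz
TARGET_BAND = (95, 105)  # hypothetical acoustic signature band, Hz

def band_energy(frame, fs, lo, hi):
    """Energy of one frame restricted to a frequency band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

def should_wake(frame, noise_floor, margin=10.0):
    """Event-driven trigger: wake only when in-band energy clearly
    exceeds the noise floor, so the main electronics can stay dormant
    (near zero power) the rest of the time."""
    return band_energy(frame, FS, *TARGET_BAND) > margin * noise_floor

t = np.arange(FS) / FS
quiet = 0.01 * np.random.default_rng(0).standard_normal(FS)
vehicle = quiet + 0.5 * np.sin(2 * np.pi * 100 * t)  # 100 Hz signature

floor = band_energy(quiet, FS, *TARGET_BAND)
print(should_wake(quiet, floor), should_wake(vehicle, floor))  # False True
```

DARPA’s real trick, of course, is doing this discrimination with passive or event-powered circuitry rather than a processor running an FFT, but the decision logic being offloaded is the same.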

The “exploitation of … energy in the signal signature itself” sounds reminiscent of the University of Washington research I’ve reported on in the past that would harness ambient backscatter to allow battery-free wireless transmission, another key potential advance in IoT sensor networks.

The following phases of N-ZERO will each take a year.

Let’s hope that the project is an overall success, and that the end products will also be commercialized. I’ve always felt sensor cost and power needs were potential IoT Achilles’ heels, so that would be a major boost!

Hippo: IoT-based paradigm shift from passive to active insurance companies

I’m a big advocate of incremental IoT strategies for existing companies that want to test the waters first (check out my recent webinar with Mendix on this approach). However, I’m enough of a rabble-rouser to also applaud those who jump right in with paradigm-busting IoT (and big data) startups.

Enter, stage left, a nimble (LOL) new home insurance company: Hippo!

IMHO, Hippo’s important both in its own right and as a harbinger of other startups that will exploit the IoT and big data to break with years of tradition in the insurance industry as a whole, no longer sitting passively by to pay out claims when something bad happens, but seizing the initiative to reduce risk, which is what insurance started out to do.

After all, when a Mr. B. Franklin (I’ll tell you: plunk that guy down in 2017 and he’d create a start-up addressing an unmet need within a week!) and his fellow firefighters launched the Philadelphia Contributionship in 1752, one of the first things they did was to send out appraisers to determine the risk of a house burning and suggest ways to make it safer.

Left to right: Eyal Navon, CTO and cofounder, and Assaf Wand, CEO and cofounder, of Hippo

In fact, there’s actually a term for this kind of web-based insurance, coined by McKinsey: “insuretech” (practicing what McKinsey preached, one of Hippo’s founders had been there; what intrigued the founders about insurance as a target was that it’s a huge industry, hasn’t really innovated for years, and didn’t focus on the customer experience).

I talked recently to two key staffers, Head of Product Aviad Pinkovezky and Head of Marketing, Growth and Product Innovation Jason White.  They outlined a radically new strategy “with focused attention on loss reduction”:

  • sell directly to consumers instead of using agents
  • cut out legacy coverage leftovers (such as fur coats, silverware, and stock certificates in a home safe) and instead cover laptops, water leaks, etc.
  • leverage data to inform customers about appliances they own that might be more likely to cause problems, and communicate with them on a continuous basis about steps, such as cleaning gutters, that could reduce problems.

According to Pinkovezky, the current companies “are reactive, responding to something that takes place. Consumer-to-company interaction is non-continuous, with almost nothing between paying premiums and filing a claim.  Hippo wants to build much more of a continuous relationship, providing value added,” such as an IoT-based water-leak detection device that new customers receive.
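That kind of continuous, proactive monitoring is straightforward to sketch. The thresholds and debounce logic below are purely illustrative assumptions on my part, not Hippo’s actual device logic; the point is the shift from reacting to a claim to alerting before damage mounts:

```python
from dataclasses import dataclass, field

@dataclass
class LeakMonitor:
    """Continuous monitoring in the spirit of Hippo's leak detector:
    alert only after several consecutive wet readings, to avoid false
    alarms from a single noisy sample. (All values are illustrative.)
    """
    wet_threshold: float = 0.8   # normalized moisture level
    required_hits: int = 3       # consecutive readings before alerting
    _streak: int = field(default=0, init=False)

    def ingest(self, moisture):
        """Feed one sensor reading; return True when an alert should fire."""
        if moisture >= self.wet_threshold:
            self._streak += 1
        else:
            self._streak = 0
        return self._streak >= self.required_hits

m = LeakMonitor()
readings = [0.1, 0.9, 0.2, 0.85, 0.9, 0.95, 0.1]
print([m.ingest(r) for r in readings])  # fires on the third wet reading in a row
```

The same pattern (threshold plus debounce, then notify) generalizes to the appliance-failure warnings and gutter-cleaning nudges Hippo describes.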

At the same time, White said that the company is still somewhat limited in what it can do to reduce risk, because so much of the risk doesn’t come from factors such as theft (data speaks: he said thefts actually account for a small share of claims) but from one that, measured by frequency and amount of damage (according to their analysis), is beyond their control: weather. As I pointed out, that’s probably going to constitute more of a risk in the foreseeable future due to global warming.

Hippo also plans a high-tech, high-touch strategy that would couple technology with the human touch that’s needed in a stressful situation such as a house fire or flood. According to Forbes:

“The company acknowledges that its customers rely on Hippo to protect their largest assets, and that insurance claims often derive from stressful experiences. In light of this, Hippo offers comprehensive, compassionate concierge services to help home owners find hotels when a home becomes unlivable, and to supervise repair contractors when damage occurs.”

While offering new services, the company has firm roots in the non-insuretech world, because its policies are owned and covered by Topa, which was founded more than 30 years ago.

Bottom line: if you’re casting about for an IoT-based startup opportunity, you’d do well to use the lens McKinsey applied to insurance: look for an industry that’s tradition-bound and tends to react to change rather than initiate it (REMEMBER: a key element of the IoT paradigm shift is that, for the first time, we can pierce “Collective Blindness” and really see inside things to gauge how they are working [or not]; the challenge is to capitalize on that new-found data).
