“The House That Spied on Me”: Finally Objective Info on IoT Privacy (or Lack Thereof)

Posted on 25th February 2018 in data, Essential Truths, Internet of Things, privacy, security, smart home

Pardon a political analogy. Just as the recent indictment of 13 Russians in the horrific bot campaign to undermine our democracy (you may surmise my position on this! The WIRED article about it is a must-read!) finally provided objective information on the plot, so too Kashmir Hill’s and Surya Mattu’s excruciatingly detailed “The House That Spied on Me” finally provides objective information on the critical question of how much personal data IoT device manufacturers are actually compiling from our smart home devices.

This is critical, because we’ve previously had to rely on anecdotal evidence such as the Houston baby-cam scandal, and that’s not an adequate basis for sound government policy making or for advising companies on how to handle the privacy/security issue.

Last year, Hill (who wrote one of the first articles on the danger when she was at Forbes) added just about every smart home device you can imagine to her apartment (I won’t repeat the list: I blush easily…). Then her colleague, Mattu, monitored the devices’ data outflow using a special router he created, to which she connected all the devices:

“… I am basically Kashmir’s sentient home. Kashmir wanted to know what it would be like to live in a smart home and I wanted to find out what the digital emissions from that home would reveal about her. Cybersecurity wasn’t my focus. … Privacy was. What could I tell about the patterns of her and her family’s life by passively gathering the data trails from her belongings? How often were the devices talking? Could I tell what the people inside were doing on an hourly basis based on what I saw?”

The answer was: a lot! (I couldn’t paste the chart recording the numbers here, so check the article for the full report.)

As Mattu pointed out, with the router he had access to precisely the data about Hill’s apartment that Comcast could collect and sell, thanks to a 2017 law allowing ISPs to sell customers’ internet usage data without their consent — including the smart device data. The various devices sent data constantly — sometimes even when they weren’t being used! In fact, there hasn’t been a single hour since the router was installed in December when at least some devices haven’t sent data — even if no one was at home!
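Mattu’s point can be sketched in a few lines of Python: with nothing but passive traffic metadata, timestamps alone reveal a household’s routine. The device names and log records below are hypothetical, purely for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical router log: (timestamp, device, bytes sent) is the only
# metadata a passive observer needs; payloads are never inspected.
traffic = [
    ("2017-12-04 06:12", "echo-dot", 412),
    ("2017-12-04 06:14", "echo-dot", 398),
    ("2017-12-04 06:31", "smart-toothbrush", 120),
    ("2017-12-04 23:02", "sleep-tracker", 2048),
]

# Bucket traffic by device and hour of day to expose daily patterns.
activity = defaultdict(set)
for ts, device, _ in traffic:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    activity[device].add(hour)

# A toothbrush phoning home at 6 a.m. says someone just woke up.
print(dict(activity))
```

Even this toy aggregation shows why encrypted payloads are small comfort: the pattern of chatter is itself the disclosure.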

BTW: Hill, despite her expertise and manufacturers’ claims of ease of setup, found that configuring all of the devices, and especially making them work together, was a nightmare. Among other tidbits about how difficult it was: she had to download 14 different apps! The system also directly violated her privacy, uploading a video of her walking around the apartment nude, recorded by the Withings Home Wi-Fi Security (ahem…) Camera with Air Quality Sensors. Fortunately the offending video was encrypted. Small comfort.

Hill came to realize how convoluted privacy and security can become with a smart home:

“The whole episode reinforced something that was already bothering me: Getting a smart home means that everyone who lives or comes inside it is part of your personal panopticon, something which may not be obvious to them because they don’t expect everyday objects to have spying abilities. One of the gadgets—the Eight Sleep Tracker—seemed aware of this, and as a privacy-protective gesture, required the email address of the person I sleep with to request his permission to show me sleep reports from his side of the bed. But it’s weird to tell a gadget who you are having sex with as a way to protect privacy, especially when that gadget is monitoring the noise levels in your bedroom.”

Mattu reminds us that, even though most of the data was encrypted, even the most basic digital exhaust gives trained experts valuable clues from which to build digital profiles of us, whether to target us with ads or for more nefarious purposes:

“It turns out that how we interact with our computers and smartphones is very valuable information, both to intelligence agencies and the advertising industry. What websites do I visit? How long do I actually spend reading an article? How long do I spend on Instagram? What do I use maps for? The data packets that help answer these questions are the basic unit of the data economy, and many more of them will be sent by people living in a smart home.”

Given the concerns about whether Amazon, Google, and Apple are constantly monitoring you through your smart speaker (remember when an Echo was subpoenaed in a murder case?), Mattu reported that:

“… the Echo and Echo Dot … were in constant communication with Amazon’s servers, sending a request every couple of minutes to http://spectrum.s3.amazonaws.com/kindle-wifi/wifistub-echo.html. Even without the “Alexa” wake word, and even when the microphone is turned off, the Echo is frequently checking in with Amazon, confirming it is online and looking for updates. Amazon did not respond to an inquiry about why the Echo talks to Amazon’s servers so much more frequently than other connected devices.”

Even the seemingly most insignificant data can be important:

“I was able to pick up a bunch of insights into the Hill household—what time they wake up, when they turn their lights on and off, when their child wakes up and falls asleep—but the weirdest one for me personally was knowing when Kashmir brushes her teeth. Her Philips Sonicare Connected toothbrush notifies the app when it’s being used, sending a distinctive digital fingerprint to the router. While not necessarily the most sensitive information, it made me imagine the next iteration of insurance incentives: Use a smart toothbrush and get dental insurance at a discount!”

Lest you laugh at that, a dean at the BU Dental School told me much the same thing: that the digital evidence from a smart brush (a Colgate one, in that case) could actually revolutionize dentistry, not only letting your dentist know how well, or how poorly, you brushed, but perhaps lowering your dental insurance premium or affecting the amount your dentist was reimbursed. Who woulda thunk it?

Summing up (there’s a lot of additional important info in the story, especially about the perfidious Vizio Smart TV, which had such a company-weighted privacy policy that the FTC actually forced the company to turn off the “feature” and pay reparations, so do read the whole article), Hill concluded:

“I thought the house would take care of me but instead everything in it now had the power to ask me to do things. Ultimately, I’m not going to warn you against making everything in your home smart because of the privacy risks, although there are quite a few. I’m going to warn you against a smart home because living in it is annoying as hell.”

In addition to making privacy and security a priority, there is another simple and essential step smart home (and Quantified Self) device companies must take.

When you open the box for the first time, the first thing you see must be a prominently displayed privacy and security policy, written in plain (and I mean really plain) English and printed in large, bold type. It should make clear that any data sharing is opt-in and that you have the right to refuse, and it should emphasize the need for long, unique passwords (no, “1-2-3-4” and the ever-popular “password” are not enough).

Just to make certain the point is made, it needs to appear at the very beginning of the set-up app as well. Yes, you should also include the detailed legalese in agate type, but the critical points must be made in the basic statement, which needs to be reviewed not just by the lawyers but also by a panel of laypeople, who must actually carry out the setup steps to make sure they’re easily understood and acted on. This is not just a suggestion. You absolutely must do it or you risk major penalties and public fury.


Clearly, this article gives us the first objective evidence that there’s a lot more to do to assure privacy and security for smart homes (and that there’s also a heck of a lot of room for improvement in how the devices play together!), reaffirming my judgment that the first IoT Essential Truth remains “make privacy and security your highest priority.” If this doesn’t get the focus it deserves, we may lose all the benefits of the IoT because of legitimate public and corporate concern that their secrets are at risk. N.B.!


Remember: The IoT Is Primarily About Small Data, Not Big

Posted on 16th March 2015 in data, Internet of Things, M2M, management, manufacturing, open data

In one of my fav examples of how the IoT can actually save lives, sensors on only eight preemies’ incubators at Toronto’s Hospital for Sick Children yield an eye-popping 90 million data points a day! If all 90 million data points were relayed on to the “data pool,” the docs would be drowning in data, not saving sick preemies.

Enter “small data.”

Writing in Forbes, Mike Kavis has a worthwhile reminder that the essence of much of the Internet of Things isn’t big data, but small. By that, he means:

a dataset that contains very specific attributes. Small data is used to determine current states and conditions or may be generated by analyzing larger data sets.

“When we talk about smart devices being deployed on wind turbines, small packages, on valves and pipes, or attached to drones, we are talking about collecting small datasets. Small data tell us about location, temperature, wetness, pressure, vibration, or even whether an item has been opened or not. Sensors give us small datasets in real time that we ingest into big data sets which provide a historical view.”

Usually, instead of aggregating ALL of the data from all of the sensors (think about what that would mean for GE’s Durathon battery plant, where 10,000 sensors dot the assembly line!), the data is first analyzed at “the edge,” i.e., at or near the point where it is collected. Then only the data that deviates from the norm (i.e., is significant) is passed on to the centralized databases and processing. That’s why I’m so excited about Egburt and its “fog computing” sensors.
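Here’s a minimal sketch of that edge-filtering idea (not Egburt’s actual API, just the general pattern): the edge node keeps a running mean per sensor and forwards only readings that deviate significantly from it.

```python
# A toy edge ("fog") filter: forward only significant deviations to the cloud.
class EdgeFilter:
    def __init__(self, threshold=0.2):
        self.threshold = threshold  # relative deviation that counts as "significant"
        self.mean = None
        self.count = 0

    def observe(self, reading):
        """Return the reading if it should go to the cloud, else None."""
        if self.mean is None:
            self.mean, self.count = reading, 1
            return reading  # first reading establishes the baseline
        significant = abs(reading - self.mean) > self.threshold * abs(self.mean)
        # Update the running mean either way, so the baseline tracks reality.
        self.count += 1
        self.mean += (reading - self.mean) / self.count
        return reading if significant else None

edge = EdgeFilter()
readings = [100, 101, 99, 100, 140, 100]
forwarded = [r for r in readings if edge.observe(r) is not None]
print(forwarded)  # → [100, 140]: only the baseline and the spike leave the edge
```

Four of six readings never leave the edge, which is exactly the point: the centralized systems see the signal, not the noise.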

As with sooo many aspects of the IoT, it’s the real-time aspect of small data that makes it so valuable, and so different from past practices, where much of the potential data was never collected at all, or, if it was, was only collected, analyzed and acted upon historically. Hence the “Collective Blindness” I’ve written about before, which limited our decision-making abilities in the past. Again, Kavis:

“Small data can trigger events based on what is happening now. Those events can be merged with behavioral or trending information derived from machine learning algorithms run against big data datasets.”

As examples of the interplay of small and large data, he cites:

  • real-time data from wind turbines that is used immediately to adjust the blades for maximum efficiency. The relevant data is then passed along to the data lake, “..where machine-learning algorithms begin to understand patterns. These patterns can reveal performance of certain mechanisms based on their historical maintenance record, like how wind and weather conditions effect wear and tear on various components, and what the life expectancy is of a particular part.”
  • medicine containers with smart labels. “Small data can be used to determine where the medicine is located, its remaining shelf life, if the seal of the bottle has been broken, and the current temperature conditions in an effort to prevent spoilage. Big data can be used to look at this information over time to examine root cause analysis of why drugs are expiring or spoiling. Is it due to a certain shipping company or a certain retailer? Are there re-occurring patterns that can point to problems in the supply chain that can help determine how to minimize these events?”

Big data is often irrelevant in IoT systems’ functioning: all that’s needed is the real-time small data to trigger an action:

“In many instances, knowing the current state of a handful of attributes is all that is required to trigger a desired event. Are the patient’s blood sugar levels too high? Are the containers in the refrigerated truck at the optimal temperature? Does the soil have the right mixture of nutrients? Is the valve leaking?”
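The pattern Kavis describes is just a handful of current-state checks, with no big-data analytics in the loop. A toy version, with illustrative field names and thresholds:

```python
# Hypothetical small-data rules: each checks one current-state attribute.
RULES = [
    ("blood_sugar_mgdl", lambda v: v > 180, "alert clinician"),
    ("truck_temp_c",     lambda v: not (2 <= v <= 8), "adjust refrigeration"),
    ("valve_leak_rate",  lambda v: v > 0.0, "dispatch maintenance"),
]

def actions(state):
    """Return the actions triggered by the current state snapshot."""
    return [action for field, bad, action in RULES
            if field in state and bad(state[field])]

snapshot = {"blood_sugar_mgdl": 210, "truck_temp_c": 5, "valve_leak_rate": 0.0}
print(actions(snapshot))  # → ['alert clinician']
```

The historical big-data side would tune these thresholds over time; the real-time small-data side just evaluates them.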

In a future post, I’ll address the growing role of data scientists in the IoT — and the need to educate workers on all levels on how to deal effectively with data. For now, just remember that E.F. Schumacher was right: “small is beautiful.”

 


Management Challenge: Lifeguards in the IoT Data Lake

In their November Harvard Business Review cover story, “How Smart, Connected Products Are Transforming Competition,” PTC CEO Jim Heppelmann and Professor Michael Porter make a critical strategic point about the Internet of Things that’s obscured by just focusing on IoT technology: “…What makes smart, connected products fundamentally different is not the internet, but the changing nature of the ‘things.’”

In the past, “things” were largely inscrutable. We couldn’t peer inside massive assembly line machinery or inside cars once they left the factory, forcing companies to base much of both strategy and daily operations on inferences about these things and their behavior from limited data (data which was also often gathered only after the fact).

Now that lack of information is being removed. The Internet of Things creates two unprecedented opportunities regarding data about things:

  • data will be available instantly, as it is generated by the things
  • it can also be shared instantly by everyone who needs it.

This real-time knowledge of things presents both real opportunities and significant management challenges.

Each opportunity carries with it the challenge of crafting new policies on how to manage access to the vast new amounts of data and the forms in which it can be accessed.

For example: with the Internet of Things we will be able to bring about optimal manufacturing efficiency as well as unprecedented integration of supply chains and distribution networks. Why? Because we will now be able to “see” inside assembly line machinery, and the various parts of the assembly line will be able to automatically regulate each other without human intervention (M2M) to optimize each other’s efficiency, and/or workers will be able to fine-tune their operation based on this data.

Equally important, because of the second new opportunity, the exact same assembly line data can also be shared in real time with supply chain and distribution network partners. Each of them can use the data to trigger their own processes to optimize their efficiency and integration with the factory and its production schedule.
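That fan-out of one real-time reading to many consumers is essentially publish/subscribe. A bare-bones sketch, with hypothetical subscriber roles and event format:

```python
# A minimal publish/subscribe bus: one assembly-line reading fans out to
# every subscriber in real time, answering "who else can use this data?"
class DataBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, reading):
        for handler in self.subscribers:
            handler(reading)

bus = DataBus()
log = []

# Factory floor, supplier, and distributor all consume the same event.
bus.subscribe(lambda r: log.append(("floor", r["units"])))
bus.subscribe(lambda r: log.append(("supplier", r["units"])))
bus.subscribe(lambda r: log.append(("distributor", r["units"])))

bus.publish({"machine": "press-4", "units": 118})
print(log)  # all three see the same event at the same moment
```

Nothing here is sequential or top-down: adding a new consumer is one `subscribe` call, not a request up the management chain.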

But that possibility also creates a challenge for management.

When data was hard to get, limited in scope, and largely gathered historically rather than in the moment, what data was available flowed in a linear, top-down fashion. Senior management had first access, and then passed on to individual departments only what it decided was relevant. Departments had no chance to examine the raw data simultaneously, hold round-table discussions of its significance, and improve decision-making. Everything was sequential. Relevant real-time data that workers on the factory floor could have used to do their jobs better almost never reached them.

That all potentially changes with the IoT – but will it, or will the old tight control of data remain?

Managers must learn to ask a new question that’s so contrary to old top-down control of information: who else can use this data?

To answer that question they will have to consider the concept of a “data lake” created by the IoT.

“In broad terms, data lakes are marketed as enterprise wide data management platforms for analyzing disparate sources of data in its native format,” Nick Heudecker, research director at Gartner, says. “The idea is simple: instead of placing data in a purpose-built data store, you move it into a data lake in its original format. This eliminates the upfront costs of data ingestion, like transformation. Once data is placed into the lake, it’s available for analysis by everyone in the organization.”

Essentially, data that has been collected and stored in a data lake repository remains in the state in which it was gathered and is available to anyone, versus being structured, tagged with metadata, and restricted in access.
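In code terms, the distinction Heudecker draws is schema-on-read versus schema-on-write. A toy sketch (the event fields and source names are made up):

```python
import json

lake = []  # stand-in for an object store such as S3 or HDFS

def ingest(raw_event: str):
    """Store the event verbatim: no upfront transformation or schema."""
    lake.append(raw_event)

ingest(json.dumps({"src": "turbine-7", "rpm": 1450}))
ingest(json.dumps({"src": "vending-12", "sold": "cola"}))

# Schema is applied only at read time, by whichever team needs the data.
turbine_events = [e for e in map(json.loads, lake) if e["src"].startswith("turbine")]
print(turbine_events)  # → [{'src': 'turbine-7', 'rpm': 1450}]
```

Because nothing was discarded or reshaped at ingestion, the marketing team and the maintenance team can later ask entirely different questions of the same raw events.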

That is a critical distinction and can make the data far more valuable, because the volume and variety will allow more cross-fertilization and serendipitous discovery.

At the same time, it’s also possible to “drown” in so much data, so C-level management must create new, deft policies – to serve as lifeguards, as it were. They must govern data lake access if we are to, on one hand, avoid drowning due to the sheer volume of data, and, on the other, to capitalize on its full value:

  • Senior management must resist the temptation to analyze the data first and then pass on only what they deem of value. They too will have a crack at the analysis, but the value of real-time data lies in getting it while it can still be acted on in the moment, rather than only in historical analyses (BTW, that’s not to say historical analysis won’t have value going forward: it will still provide valuable perspective).
  • There will need to be limits to data access, but they must be commonsense ones. For example, production line workers won’t need access to marketing data, just real-time data from the factory floor.
  • Perhaps most important, access shouldn’t be limited based on pre-conceptions of what might be relevant to a given function or department. For example, a prototype vending machine uses Near Field Communication to learn customers’ preferences over time, then offers them special deals based on those choices. However, by thinking inclusively about data from the machine, rather than just limiting access to the marketing department, the company shared the real-time information with its distribution network, so trucks were automatically rerouted to resupply machines that were running low due to factors such as summer heat.
  • Similarly, they will have to relax arbitrary boundaries between departments to encourage mutually-beneficial collaboration. When multiple departments not only share but also get to discuss the same data set, undoubtedly synergies will emerge among them (such as the vending machine ones) that no one department could have discovered on its own.
  • They will need to challenge their analytics software suppliers to create new software and dashboards specifically designed to make such a wide range of data easily digested and actionable.

Make no mistake about it: the simple creation of vast data lakes won’t automatically cure companies’ varied problems. But C-level managers who realize that giving up control over data flow allows real-time sharing of data to create possibilities that were impossible to visualize in the past will make data lakes safe, navigable – and profitable.


I’ll be on “Game Changer” Radio Today @ 3 EST Talking About IoT

Huzzah! I’ll be a guest on Bonnie Graham’s “Coffee Break With Game Changers” show on SAP Radio, live today @ 3 PM, to discuss the Internet of Things.

Other guests will include David Jonker, senior director of Big Data Initiatives at SAP, and Ira Berk, vice president of Solutions Go-to-Market at SAP, who has global responsibility for the IoT infrastructure and middleware portfolio.

Among other topics that I hope to get to during the discussion:

  • The “Collective Blindness” meme that I raised recently — and how the IoT removes it.
  • The difficult shift companies will need to make from past practices, where information was a zero-sum game, where hoarding information led to profit, to one where sharing information is the key. Who else can use this information?
  • How the IoT can bring about an unprecedented era of “Precision Manufacturing,” which will not only optimize assembly line efficiency and eliminate waste, but also integrate the supply chain and distribution network.
  • The sheer quantity of data from the IoT threatens to overwhelm us. As much as possible, we need to migrate to “fog computing,” where as much data as possible is processed at the edge, with only the most relevant data passing to the cloud (given the SAP guys’ titles, I assume this will be of big interest to them).
  • The rise of IFTTT.com, which means device manufacturers don’t have to come up with every great way to use their devices: use open standards, publish the APIs to IFTTT, and let the crowd create inventive “recipes” for the devices.
  • Safety and security aren’t the other guy’s problem: EVERY device manufacturer must build in robust security and privacy protections from the beginning. Lack of public trust can undermine everyone in the field.
  • We can cut the cost of seniors’ care and improve their well-being through “smart aging,” which brings together Quantified Self fitness devices that improve their care and make health care a doctor-patient partnership, and “smart home” devices that automate home functions and make them easier to manage.

Hope you can listen in. The show will be archived if you can’t make it for the live broadcast.


GE & Accenture provide detailed picture of current IoT strategy & deployment

I’ll admit it: until I began writing the “Managing the Internet of Things Revolution” guide to Internet of Things strategy for SAP, I was preoccupied with the IoT’s gee-whiz potential for radical transformation: self-driving cars, medical care in which patients would be full partners with their doctors, products that customers could customize after purchase.

Then I came to realize that this potential for revolution might be encouraging executives to hold off until the IoT was fully developed, and, in the process, to ignore the low-hanging fruit: a wide range of ways the IoT could dramatically increase the efficiency of current operations, giving them a chance to experiment with limited, less-expensive IoT projects that would pay off rapidly and give them the confidence and understanding needed to launch more dramatic IoT projects in the near future.

This is crucially important for IoT strategy: instead of waiting for a radical transformation (which can be scary), view the IoT as a continuum, beginning with small, relatively low-cost steps that will feed into more dramatic steps in the future.

Now there’s a great new study from GE and Accenture, “Industrial Internet Insights Report for 2015,” that documents that many companies are in the early stages of implementing just such an incremental approach, with special emphasis on the necessary first step, launching Big Data analytics — and that they are already realizing tangible benefits. It is drawn from a survey of companies in the US, China, India, France, Germany, the UK, and South Africa.

The report is important, so I’ll review it at length.

Understandably, it was skewed toward the industries where GE applies its flavor of the IoT (the “Industrial Internet”): aviation, health care, transportation, power generation, manufacturing, and mining, but I suspect the findings also apply to other segments of the economy.

The summary underscores a “sense of urgency” to launch IoT initiatives:

“The vast majority (of respondents) believe that Big Data analytics has the power to dramatically alter the competitive landscape of industries just within the next year, and are investing accordingly…” (my emphasis).

84% said Big Data analytics “has the power to shift the competitive landscape for my industry” within just the next year, and 93% said they feared new competitors would enter the field to leverage data. Wow: talk about short-term priorities!

It’s clear the authors believe the transformation will begin with Big Data initiatives, which, IMHO, companies should be starting anyway to better analyze the growing volume of data from conventional sources. 73% of the companies are already investing more than 20% of their overall tech budget in Big Data analytics — and some spend more than 30%! 80 to 90% said Big Data analytics was either the company’s top priority or at least in the top 3.

One eye-opening finding was that 53% of respondents said their board of directors was pushing the IoT initiatives. Probably makes sense, in that boards are expected to provide necessary perspective on the company’s long-term health.

GE and Accenture present a 4-step process to capitalize on the IoT:

  1. Start with the exponential growth in data volumes
  2. Add the additional data volume from the IoT
  3. Add growing analytics capability
  4. and, to add urgency, factor in “the context of industries where equipment itself or patient outcomes are at the heart of the business” where the ability to monitor equipment or monitor patient services can have significant economic impact and in some cases literally save lives [nothing like throwing the fear of God into the mix to motivate skeptics!].

For many companies, after implementing Big Data software, the next step toward realizing immediate IoT benefits is installing sensors to monitor the status of operating assets, enabling “predictive maintenance,” which cuts downtime and reduces maintenance costs (the report cites some impressive statistics: “…saving up to 12 percent over scheduled repairs, reducing overall maintenance costs up to 30 percent, and eliminating breakdowns up to 70 percent.” What company, no matter its stance on the IoT, wouldn’t want those benefits?). The report cites companies in health care, energy and transportation that are already realizing benefits in this area.
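A toy illustration of the predictive-maintenance idea (not GE’s actual algorithm): flag an asset for service based on the trend of a reading, before any hard limit is actually crossed. The window size, limit, and vibration values are all illustrative.

```python
# Predict service need from the recent trend of vibration readings,
# using a naive linear projection of where the readings are headed.
def needs_service(vibration_history, window=5, limit=0.8):
    recent = vibration_history[-window:]
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)  # avg change per reading
    projected = recent[-1] + trend * window               # project one window ahead
    return projected > limit

healthy = [0.30, 0.31, 0.30, 0.32, 0.31]
wearing = [0.30, 0.38, 0.47, 0.55, 0.64]
print(needs_service(healthy), needs_service(wearing))  # → False True
```

The wearing asset is flagged while its current reading (0.64) is still under the limit; acting on the trend, rather than waiting for the breakdown, is what converts scheduled repairs into predictive ones.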

Music to my ears was the emphasis on breaking down data-sharing barriers between departments, the first time I’ve seen substantiation of my IoT “Essential Truth” that, instead of hoarding data — whether between the company and supply-chain partners or within the company itself — the IoT requires asking “who else can use this data?” The report notes that “System barriers between departments prevent collection and correlation of data for maximum impact” (my emphasis), and goes on to say:

“All in all, only about one-third of companies (36 percent) have adopted Big Data analytics across the enterprise. More prevalent are initiatives in a single operations area (16 percent) or in multiple but disparate areas (47 percent)…. The lack of an enterprise-wide analytics vision and operating model often results in pockets of unconnected analytics capabilities, redundant initiatives and, perhaps most important, limited returns on analytics investments.”

Most of the companies surveyed are moving toward centralization of data management to break down the silos. 49% plan to appoint a chief analytics officer to run the operation, and most will hire skilled data analysts or partner with outside experts (insert Accenture here, LOL…).

The GE/Accenture report also stressed that companies hoping to profit from the IoT must create end-to-end security. To do that, it recommended a strategy that includes:
  1. assess risks and consequences
  2. develop objectives and goals
  3. enforce security throughout the supply chain
  4. use mitigation devices specifically designed for Industrial Control Systems
  5. establish strong corporate buy-in and governance

For the longer term, the report also mentioned a consistent theme of mine: that companies must begin to think about dramatic new business models, such as substituting value-added services for traditional sales of products such as jet engines. This is a big emphasis with GE. The report also underscores another issue I’ve stressed in the “Essential Truths,” i.e., partnering, as the mighty GE has done with startups Quirky and Electric Imp:

“Think of the partnering taking place among farm equipment, fertilizer, and seed companies and weather services, and the suppliers needed to provide IT, telecom, sensors, analytics and other products and services. Ask: ‘Which companies are also trying to reach my customers and my customers’ customers? What other products and services will talk to mine, and who will make, operate and service them? What capabilities and information does my company have that they need? How can we use this ecosystem to extend the reach and scope of our products and services through the Industrial Internet?'”

While the GE/Accenture report dwelt only on large corporations, I suspect that many of the same findings would apply to small-to-medium businesses as well, and that the falling prices of sensors and IoT platforms will mean more smart companies in this category will begin to launch incremental IoT strategies to first optimize their current operations and then make more radical changes.

Read it, or be left in the dust!


PS: as an added bonus, the report includes a link to the GE “Industrial Internet Evaluator,” a neat tool I hadn’t seen before. It invites readers to “see how others in your field are leveraging Big Data analytics for connecting assets, monitoring, analyzing, predicting and optimizing for business success.” Check it out!
