Will some smart home device makers ever grow souls??

(Please cut me a little slack on this post, which drips with sarcasm: these latest examples of some smart home device makers’ contempt for, or obliviousness to, customers’ privacy and security shoved me over the edge!)

Once upon a time two smart boys in their dorm room thought up a new service that really made a new technology hum. When they turned it into a tiny company, they even adopted a cute motto: “don’t be evil.” Neat!

Then their little service got very, very big and very, very profitable. The motto? It kinda withered away. Last year it was even dropped from the company’s code of conduct.

Which, conveniently, allowed that once tiny company to produce this abomination: the Google Nest Guard (the alarm, keypad, and motion sensor portion of Nest’s Secure home protection system) featuring a mic.

Oh, did I point out that Nest didn’t mention the mic’s presence? No, that fact only emerged when it announced the Guard’s integration with Google’s Assistant voice device (Sample command: “OK, Google, surveil my family.”) and Business Insider ferreted out the mic’s presence:

“The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.”

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Oh. All is forgiven. It was just an “error on our part.”

Except (how can I say this politely?) that’s utter baloney. It’s not as if the mic just sorta got there on its own: no engineer suggested adding it by accident, and no executive reviewing the design conveniently overlooked it.

Nope, that mic was there intentionally, and Google is so morally corrupt and/or amoral that it simply chose not to tell the public.

And, while we’re at it, let’s not heap all the opprobrium on Google. Amazon subsidiary Ring actually let its employees view videos shot with its doorbell device:

“These videos were unencrypted, and could be easily downloaded and shared. The team was also given a database that linked each video to the Ring customer it belonged to.”

As I’ve said many times before, my perspective on privacy and security is informed by my prior work in corporate crisis management, which taught me that far too many engineers (I have many friends in the profession, but if the shoe fits, wear it) are simply oblivious to privacy and security issues, viewing them as something to be handled through bolt-on protections after the fun part of product design is done. In fact, in adding the prior link, I came across something I wrote last year quoting the Google blog — which contained nary a mention of privacy concerns — about an aspect of AI that would allow identification of which shop a batch of ramen came from. Funny, huh? No: scary.

Another lesson I drew from my past was the phenomenon of guilt by association, which is incredibly rampant right now: people conflate issues as diverse as smart home privacy violations, Russian election tampering, some men’s inability to find dates (I kid you not, and the result may be lethal for some women), the so-called “deep state,” etc., etc. The engineers I know tend to dismiss these wacky ideas because they aren’t logical. But the fact that the fears aren’t logical doesn’t mean they aren’t very, very real to those who embrace them.

That means that even those companies whose smart home devices DO contain robust privacy protections risk people rejecting their devices as well. Trust me on this one: I work every day with rational people who reject the cloud and all the services that could enrich their lives due to their fear of privacy and security violations.

That’s why responsible IoT companies must get involved in groups such as the Internet of Things Association and the IMC, which are working on collaborative strategies to deal with these issues.

Let’s not forget that these gaffes come at a time when regulators and elected officials are showing much more interest in regulating and/or even breaking up the Silicon Valley behemoths. You’d kinda think they’d be on their best behavior, not doing stupid things that just draw more criticism.

I’m fed up, and I won’t shut up. Write me if you have feasible suggestions to deal with the problem.

IMPORTANT POSTSCRIPT!

I just discovered a Verge piece from last month to the effect that Google is belatedly getting religion about personal privacy, even — and this wins big points in my book — putting its privacy policies in plain English (yes!) rather than legalese. Here’s a long excerpt from the article. If they follow through, I’ll be the first to praise them and withdraw my criticism of Google, although not of the industry as a whole:

“So today, as Google announced that it’s going to sell a device that’s not all that different from the Facebook Portal, whose most every review wondered whether you should really invite a Facebook camera into your home, Google also decided to publicly take ownership for privacy going forward.
As we discovered in our interview with Google Nest leader Rishi Chandra, Google has created a set of plain-English privacy commitments. And while Google didn’t actually share them during today’s Google I/O keynote, they’re now available for you to read on the web.
Here’s the high-level overview:
We’ll explain our sensors and how they work. The technical specifications for our connected home devices will list all audio, video, and environmental and activity sensors—whether enabled or not. And you can find the types of data these sensors collect and how that data is used in various features in our dedicated help center page.
We’ll explain how your video footage, audio recordings, and home environment sensor readings are used to offer helpful features and services, and our commitment for how we’ll keep this data separate from advertising and ad personalization.
We’ll explain how you can control and manage your data, such as providing you with the ability to access, review, and delete audio and video stored with your Google Account at any time.
But the full document gets way more specific than that. And remarkably, a number of the promises aren’t the typical wishy-washy legalese you might expect. Some are totally unambiguous. Some of them go against the grain, like how Nest won’t let you turn off the recording light on your camera anymore because it wants to assure you!
‘Your home is a special place. It’s where you get to decide who you invite in. It‘s the place for sharing family recipes and watching babies take first steps. You want to trust the things you bring into your home. And we’re committed to earning that trust,’ Google says.”

Maybe somebody’s listening!

5G Raises the Stakes for IoT Security

Last week’s international political news was a dramatic reminder of how inextricably linked technology progress (in this case, 5G infrastructure) and high-stakes global intrigue and even warfare have become.

The speed-up in deployment of 5G networks in the US and worldwide can dramatically increase both the IoT’s benefits (with reduced latency we’ll get a significant increase in the volume of rich, near-real-time data, enabling autonomous vehicles and other hard-to-imagine advances) and its dangers (the possibility of China, Russia, or someone else launching a cyber attack through a “back door” that could cripple our critical infrastructure). That puts the IoT right in the middle of a very tense global diplomatic and technical battle, with the outcome potentially having a big impact on the IoT’s near-term growth.

The US government’s indictment of Huawei (coming on the heels of an as-yet uncorroborated Bloomberg story that Chinese agents had planted chips on Supermicro motherboards used in Apple and Amazon servers, chips that would allow “back-door” attacks not just on the devices but on entire networks), plus a little-noticed story about yet another Chinese manufacturer of cheap IoT devices whose firmware could let a bad actor install malware, are just the latest reminders that IoT privacy and security must be designed in from the beginning, using what the EU calls “privacy by design.”

Don’t forget that we’ve already had a very real preview of exactly how dangerous this can be: the 2016 DDoS attack on Internet infrastructure company Dyn, which used inadequately protected IoT devices as its Trojan horses. Much of the Internet was crippled for several hours.

It also means, as I wrote in The Future Is Smart and elsewhere, that it’s not enough to design privacy protections into your own products and services. I learned during my years doing corporate crisis management that there’s an irrational but nonetheless compelling guilt-by-association phenomenon: if the public and companies lose confidence in the IoT because of an attack aimed at anyone, even the irresponsible companies that don’t worry about security, that loss of confidence can destroy trust in all IoT. Is that fair? No, but that doesn’t make it any less of a reality. That’s why it’s critical that you take an active role in supporting enlightened federal policy on both 5G infrastructure and IoT regulation (especially privacy and security regulations that are performance-based rather than prescriptive, which might restrict innovation), as well as joining industry organizations working on the privacy and security issues, such as the IMC and the Internet of Things Association.

In The Future Is Smart I wrote that, counterintuitively, privacy and security can’t be bolted on after you’ve done the sexy part of designing cool new features for your IoT device or service. This news makes that even more the case. What’s required is a mind-set in which you think of privacy and security from the very beginning and then visualize the process after its initial sale as cyclical and never-ending: you must constantly monitor emerging threats and then upgrade firmware and software protections.
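
To make that never-ending cycle concrete, here is a minimal sketch, in Python, of the one non-negotiable step in any over-the-air update: verify the vendor’s signature before installing anything. The key and the flashing step are placeholders I made up, not any vendor’s actual code.

    # Verify-before-flash, sketched with the "cryptography" library's Ed25519 API.
    # VENDOR_PUBKEY is a placeholder; a real device would ship with the vendor's
    # 32-byte public key baked into read-only storage at the factory.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    VENDOR_PUBKEY = bytes(32)  # placeholder key material

    def apply_update(firmware: bytes, signature: bytes) -> bool:
        try:
            Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY).verify(signature, firmware)
        except InvalidSignature:
            return False      # unsigned or tampered image: reject, log, alert
        # flash(firmware)     # device-specific, out of scope for this sketch
        return True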


“All of Us”: THE model for IoT privacy and security!

Pardon me in advance: this will be long, but I think the topic merits it!

One of my fav bits of strategic folk wisdom (in fact, a consistent theme in my Data Dynamite book on the open data paradigm shift) is, when you face a new problem, to think of another organization that might have one similar to yours, but which suffers from it to the nth degree (in some cases, even a matter of literal life-or-death!).

The reasoning: the severity of their situation has likely led these organizations to explore radical, innovative solutions that can guide you and shorten the process. In the case of the IoT, that would include jet turbine manufacturers and off-shore oil rigs, for example.

I raise that point because of the ever-present problem of IoT privacy and security. I’ve consistently criticized many companies’ lack of seriousness and ingenuity on the issue, and warned that this could result not only in disaster for those companies, but also for the industry in general, due to guilt-by-association.

This is even more of an issue since the May roll-out of the EU’s General Data Protection Regulation (GDPR), based on the presumption of an individual right to privacy.

Now, I have exciting confirmation — from the actions of an organization with just such a high-stakes privacy and security challenge — that it is possible to design an imaginative and effective process that alerts the public to the high stakes while both reassuring them and enrolling them as full participants.

Informed consent at its best!

It’s the NIH-funded All of Us, a bold effort to recruit 1 million or more people of every age, sex, race, home state, and state of health nationwide to speed medical research, especially toward the goal of “personalized medicine.” The researchers hope that, “By taking into account individual differences in lifestyle, environment, and biology, researchers will uncover paths toward delivering precision medicine.”

All of Us should be of great interest to IoT practitioners, starting with the fact that it might just save our own lives by leading to creation of new medicines (hope you’ll join me in signing up!). In addition, it parallels the IoT in allowing unprecedented degrees of precision in individuals’ care, just as the IoT does with manufacturing, operating data, etc.:

“Precision medicine is an approach to disease treatment and prevention that seeks to maximize effectiveness by taking into account individual variability in genes, environment, and lifestyle. Precision medicine seeks to redefine our understanding of disease onset and progression, treatment response, and health outcomes through the more precise measurement of molecular, environmental, and behavioral factors that contribute to health and disease. This understanding will lead to more accurate diagnoses, more rational disease prevention strategies, better treatment selection, and the development of novel therapies. Coincident with advancing the science of medicine is a changing culture of medical practice and medical research that engages individuals as active partners – not just as patients or research subjects. We believe the combination of a highly engaged population and rich biological, health, behavioral, and environmental data will usher in a new and more effective era of American healthcare.” (my emphasis added)


But what really struck me about All of Us’s relevance to the IoT is the absolutely critical need to do everything possible to assure the confidentiality of participants’ data, starting with HIPAA protections and extending to the recognition that public confidence in the program would be destroyed if the data were stolen or otherwise compromised. As Katie Rush, who heads the project’s communications team, told me, “We felt it was important for people to have a solid understanding of what participation in the program entails—so that through the consent process, they were fully informed.”

What the All of Us staff designed was, in my estimation (and I’ve been in or around medical communication for forty years), the gold standard for such processes, and a great model for effective IoT informed consent:

  • you can’t ignore it and still participate in the program: you must sign the consent form.
  • you also can’t short-circuit the process: the site said at the beginning that the process would take 18-30 minutes (to which I said yeah, sure — I was just going to sign the form and get going), and it really did, because you had to complete each step or you couldn’t join — the site was designed so no shortcuts were allowed:
    • first, there’s an easy-to-follow, attractive short animation about that section of the program
    • then you have to answer some basic questions to demonstrate that you understand the implications.
    • then you have to give your consent to that portion of the program
    • the same process is repeated for each component of the program.
  • all of the steps, and all of the key provisions, are explained in clear, simple English, not legalese. To wit:
    • “Personal information, like your name, address, and other things that easily identify participants will be removed from all data.
    • Samples—also without any names on them—are stored in a secure biobank”
    • “We require All of Us Research Program partner organizations to show that they can meet strict data security standards before they may collect, transfer, or store information from participants.
    • We encrypt all participant data. We also remove obvious identifiers from data used for research. This means names, addresses, and other identifying information is separate from the health information.
    • We require researchers seeking access to All of Us Research Program data to first register with the program, take our ethics training, and agree to a code of conduct for responsible data use.
    • We make data available on a secure platform—the All of Us research portal—and track the activity of all researchers who use it.
    • We enlist independent reviewers to check our plans and test our systems on an ongoing basis to make sure we have effective security controls in place, responsive to emerging threats.”

The site emphasizes that everything possible will be done to protect your privacy and anonymity, but it is also frank that there is no way of removing all risk, and your final consent requires acknowledging that you understand those limits:

“We are working with top privacy experts and using highly-advanced security tools to keep your data safe. We have several steps in place to protect your data. First, the data we collect from you will be stored on computers with extra security protection. A special team will have clearance to process and track your data. We will limit who is allowed to see information that could directly identify you, like your name or social security number. In the unlikely event of a data breach, we will notify you. You are our partner, and your privacy will always be our top priority.”

The process is thorough, easy to understand, and assures that those who actually sign up know exactly what’s expected from them, what will be done to protect them, and that they may still have some risk.
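
For IoT designers who want to copy the pattern, the gating logic is cheap to express in code. Here is a toy sketch in Python; the module names and quiz questions are invented stand-ins, not All of Us’s actual content:

    # Stepwise informed consent: every module must be explained, understood,
    # and consented to, in order. Failing the comprehension check or declining
    # consent halts enrollment -- no shortcuts.
    MODULES = [
        ("data collection", "Will your name be removed from shared data? (y/n) ", "y"),
        ("biobank samples", "Are stored samples labeled with your name? (y/n) ", "n"),
    ]

    def enroll() -> bool:
        for name, quiz, correct in MODULES:
            print(f"--- {name}: watch the short explainer before continuing ---")
            if input(quiz).strip().lower() != correct:
                print("Not quite -- please review that module and try again.")
                return False
            if input(f"Do you consent to the {name} component? (y/n) ").strip().lower() != "y":
                return False
        return True  # every component explained, understood, and consented to

    if __name__ == "__main__":
        print("Enrolled!" if enroll() else "Enrollment not completed.")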

Why can’t we expect that all IoT product manufacturers will give us a streamlined version of the same process? 


I will be developing consulting services to advise companies that want to develop common-sense, effective, easy-to-implement IoT privacy and security measures. Write me if you’d like to know more.

Why IoT Engineers Need Compulsory Sensitivity Training on Privacy & Security

Posted on 4th April 2018 in AI, data, Essential Truths, Internet of Things, privacy, security

OK, you may say I’m over-sensitive, but a headline today from Google’s blog that others may chuckle about (“Noodle on this: Machine learning that can identify ramen by shop“) left me profoundly worried about some engineers’ tone-deaf insensitivity to growing public concern about privacy and security.

This is not going to be pleasant for many readers, but bear with me — IMHO, it’s important to the IoT’s survival.

As I’ve written before, I learned during my work on corporate crisis management in the ’80s and ’90s that there’s an all-too-frequent gulf between the public and engineers on fear. Engineers, as left-brained and logical as they come (or, in Myers-Briggs lingo, ISTJs: “logical, detached and detailed,” the polar opposite of ENFPs such as me: “caring, creative, quick and impulsive”), are ideally suited for the precision needs of their profession — but often (not always, I’ll admit…) clueless about how the rest of us respond to things such as the Russian disruption of our sacred political institutions via Facebook, or any of the numerous violations of personal privacy and security that have taken place with IoT devices lacking basic protections.

The situation is bad and getting worse. In one Pew poll, 16% or fewer of Americans felt that a wide range of institutions, from companies to government, were protecting their information.

Engineers are quick to dismiss the resulting fear because it isn’t logical. But, as I’ve written before, the fact that fear isn’t logical doesn’t mean it isn’t very real for many people, clouding their thought processes and decision-making.

Even worse, it’s cumulative and can ensnare good companies as well as bad.  After a while, all the privacy and security violations get conflated in their minds.

Exhibit A for this insensitivity? The despicable memo from Facebook VP Andrew Bosworth:

“Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.”

Eventually he begrudgingly apologized, as did Mark Zuckerberg, but IMHO that was just face-saving. Why didn’t anyone at Facebook demand a retraction immediately, and why did some at Facebook get mad not at Bosworth but at whoever leaked the memo? They and the corporate culture are as guilty as Bosworth in my mind.

So why do I bring up the story about identifying the source of your ramen using AI, which was surely written totally innocently by a Google engineer who thought it would be a cute example of how AI can be applied to a wide range of subjects? Because I read it — with my antennae admittedly sharpened by all the recent abuses — as something that might have been funny several years ago but should never have been published now, in light of all the fears about privacy and security. Think of this fun little project the way a lot of the people I counsel on technology fears every day would: you mean they now can, and will, find out where I get my noodles? What the hell else do they know about me, and who will they give that information to???

Again, I’m quite willing to admit I may be over-reacting because of my own horror about the nonchalance on privacy and security, but I don’t think so.

That’s why I’ll conclude this screed with a call for all IoT engineers to undergo mandatory privacy and security training on a continuing basis. The risk of losing consumer confidence in their products and services is simply too great for them to get off the hook because that’s not their job. If you do IoT, privacy and security is part of the job description.

End of sermon. Go about your business.



“The House That Spied on Me”: Finally Objective Info on IoT Privacy (or Lack Thereof)

Posted on 25th February 2018 in data, Essential Truths, Internet of Things, privacy, security, smart home

Pardon a political analogy: just as the recent indictment of 13 Russians in the horrific bot campaign to undermine our democracy (you may surmise my position on this! The WIRED article about it is a must-read!) finally provided objective information on the plot, so too Kashmir Hill and Surya Mattu’s excruciatingly detailed “The House That Spied on Me” finally provides objective information on the critical question of how much personal data IoT device manufacturers are actually compiling from our smart home devices.

This is critical, because we’ve previously had to rely on anecdotal evidence such as the Houston baby-cam scandal, and that’s not adequate for sound government policy making and/or advice to other companies on how to handle the privacy/security issue.

Last year, Hill (who wrote one of the first articles on the danger when she was at Forbes) added just about every smart home device you can imagine to her apartment (I won’t repeat the list: I blush easily…). Then her colleague, Mattu, monitored the devices’ outflow using a special router he created, to which she connected all the devices:

“… I am basically Kashmir’s sentient home. Kashmir wanted to know what it would be like to live in a smart home and I wanted to find out what the digital emissions from that home would reveal about her. Cybersecurity wasn’t my focus. … Privacy was. What could I tell about the patterns of her and her family’s life by passively gathering the data trails from her belongings? How often were the devices talking? Could I tell what the people inside were doing on an hourly basis based on what I saw?”

The answer was: a lot (I couldn’t paste the chart recording the numbers here, so check the article for the full report)!

As Mattu pointed out, the device gave him access to precisely the data about Hill’s apartment that Comcast could collect and sell, thanks to a 2017 law allowing ISPs to sell customers’ internet usage data without their consent — including the smart device data. The various devices sent data constantly, sometimes even when they weren’t being used! In fact, there wasn’t a single hour since the router was installed in December when at least some devices didn’t send data — even if no one was at home!
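
You don’t need Mattu’s custom router to get a feel for this. Here is a bare-bones sketch of the same passive-metadata approach using the scapy library (my illustration, not Mattu’s code); it assumes you run it with root privileges on a machine positioned to see the traffic, such as one acting as the Wi-Fi access point:

    # Passively tally how chatty each device is, keyed by source MAC address.
    # No decryption involved: the sheer volume and timing of traffic is itself
    # revealing, which is exactly Mattu's point.
    from collections import Counter
    from scapy.all import Ether, sniff

    chatter = Counter()

    def tally(pkt):
        if Ether in pkt:
            chatter[pkt[Ether].src] += 1

    sniff(prn=tally, store=False, timeout=300)  # listen for five minutes
    for mac, count in chatter.most_common():
        print(f"{mac}  {count} packets")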

BTW: Hill, despite her expertise and manufacturers’ claims of ease-of-setup, found configuring all of the devices, and especially making them work together, was a nightmare. Among other tidbits about how difficult it was: she had to download 14 different apps!  The system also directly violated her privacy, uploading a video of her walking around the apartment nude that was recorded by the Withings Home Wi-Fi Security (ahem…) Camera with Air Quality Sensors. Fortunately the offending video was encrypted. Small comfort.

Hill came to realize how convoluted privacy and security can become with a smart home:

“The whole episode reinforced something that was already bothering me: Getting a smart home means that everyone who lives or comes inside it is part of your personal panopticon, something which may not be obvious to them because they don’t expect everyday objects to have spying abilities. One of the gadgets—the Eight Sleep Tracker—seemed aware of this, and as a privacy-protective gesture, required the email address of the person I sleep with to request his permission to show me sleep reports from his side of the bed. But it’s weird to tell a gadget who you are having sex with as a way to protect privacy, especially when that gadget is monitoring the noise levels in your bedroom.”

Mattu reminds us that, even though most of the data was encrypted, even the most basic digital exhaust can give trained experts valuable clues for building digital profiles of us, whether to target us with ads or for more nefarious purposes:

“It turns out that how we interact with our computers and smartphones is very valuable information, both to intelligence agencies and the advertising industry. What websites do I visit? How long do I actually spend reading an article? How long do I spend on Instagram? What do I use maps for? The data packets that help answer these questions are the basic unit of the data economy, and many more of them will be sent by people living in a smart home.”

Given the concerns about whether Amazon, Google, and Apple are constantly monitoring you through your smart speaker (remember when an Echo was subpoenaed in a murder case?), Mattu reported that:

“… the Echo and Echo Dot … were in constant communication with Amazon’s servers, sending a request every couple of minutes to http://spectrum.s3.amazonaws.com/kindle-wifi/wifistub-echo.html. Even without the “Alexa” wake word, and even when the microphone is turned off, the Echo is frequently checking in with Amazon, confirming it is online and looking for updates. Amazon did not respond to an inquiry about why the Echo talks to Amazon’s servers so much more frequently than other connected devices.”

Even the seemingly most insignificant data can be important:

“I was able to pick up a bunch of insights into the Hill household—what time they wake up, when they turn their lights on and off, when their child wakes up and falls asleep—but the weirdest one for me personally was knowing when Kashmir brushes her teeth. Her Philips Sonicare Connected toothbrush notifies the app when it’s being used, sending a distinctive digital fingerprint to the router. While not necessarily the most sensitive information, it made me imagine the next iteration of insurance incentives: Use a smart toothbrush and get dental insurance at a discount!”

Lest you laugh at that, a dean at the BU Dental School told me much the same thing: the digital evidence from a smart brush, a Colgate one in this case, could actually revolutionize dentistry, not only letting your dentist know how well (or not) you brushed, but perhaps lowering your dental insurance premium or affecting the amount your dentist is reimbursed. Who woulda thunk it?

Summing up (there’s a lot of additional important info in the story, especially about the perfidious Vizio smart TV, whose privacy policy was so company-weighted that the FTC actually forced Vizio to turn off the “feature” and pay restitution, so do read the whole article), Hill concluded:

“I thought the house would take care of me but instead everything in it now had the power to ask me to do things. Ultimately, I’m not going to warn you against making everything in your home smart because of the privacy risks, although there are quite a few. I’m going to warn you against a smart home because living in it is annoying as hell.”

In addition to making privacy and security a priority, there is another simple and essential step smart home (and Quantified Self) device companies must take.

When you open the box for the first time, the first thing you should see must be a prominently displayed privacy and security policy, written in plain (and I mean really plain) English and printed in large, bold type. It should make clear that any data sharing is opt-in and that you have the right to refuse, and it should emphasize the need for strong, unique passwords (no, 1-2-3-4 and the ever-popular “password” are not enough).

Just to make certain the point is made, it needs to be at the very beginning of the set-up app as well. Yes, you should also include the detailed legalese in agate type, but the critical points must be made in the basic statement, which needs to be reviewed not just by the lawyers but also by a panel of laypeople, who must actually carry out the steps to make sure they’re easily understood and acted on. This is not just a suggestion: you absolutely must do it, or you risk major penalties and public fury.
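
In code, those rules are trivial to enforce. Here is a minimal sketch in Python (mine, not any manufacturer’s; the password blacklist is a tiny illustrative sample) of a setup flow that defaults data sharing to off and refuses weak or factory-default passwords:

    # Setup-flow sketch: data sharing is opt-in (default off), and setup
    # cannot complete with a short, common, or factory-default password.
    COMMON_PASSWORDS = {"password", "1234", "12345678", "admin", "default"}

    def complete_setup(password: str, share_data: bool = False) -> dict:
        if len(password) < 12 or password.lower() in COMMON_PASSWORDS:
            raise ValueError("Pick a long, unique password, not a factory default.")
        return {
            "data_sharing_opt_in": share_data,  # stays False unless the user says yes
            "password_set": True,
        }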


Clearly, this article gives us the first objective evidence that there’s a lot more to do to assure privacy and security for smart homes (and that there’s also a heck of a lot of room for improvement on how the devices play together!), reaffirming my judgement that the first IoT Essential Truth remains “make privacy and security your highest priority.” If this doesn’t get the focus it deserves, we may lose all the benefits of the IoT because of legitimate public and corporate concern that their secrets are at risk. N.B.!


More Blockchain Synergies With IoT: Supply Chain Optimization

The more I learn about blockchain’s possible uses — this time for supply chains — the more convinced I am that it is absolutely essential to full development of the IoT’s potential.

I recently raved about blockchain’s potential to perhaps solve the IoT’s growing security and privacy challenges. Since then, I’ve discovered that it can also further streamline and optimize the supply chain, another step toward the precision that I think is such a hallmark of the IoT.

As I’ve written before, the ability to instantly share real-time data about your assembly line’s status, inventories, etc. with your supply chain (something we could never do before) can lead to unprecedented integration of the supply chain and factory, much of it on an M2M basis without any human intervention. It seems to me that the blockchain can be the perfect mechanism to bring about this synchronization.

A brief reminder: paradoxically, it’s precisely because blockchain entries (blocks) are shared and distributed (vs. centralized) that the ledger is secure without a trusted intermediary such as a bank: no single participant can change an entry after it’s posted.
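
For the technically curious, here is a toy sketch of that tamper-evidence in Python. It is a teaching toy only (no consensus protocol, no signatures): each block records the hash of its predecessor, so editing any posted entry breaks every later link, and the other participants’ copies of the chain expose the change.

    import hashlib, json

    def make_block(entry: dict, prev_hash: str) -> dict:
        body = json.dumps({"entry": entry, "prev": prev_hash}, sort_keys=True)
        return {"entry": entry, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def verify(chain: list) -> bool:
        for i, blk in enumerate(chain):
            prev = chain[i - 1]["hash"] if i else ""
            if blk["prev"] != prev or blk != make_block(blk["entry"], blk["prev"]):
                return False
        return True

    # A two-entry supply-chain ledger: pallet A17 moves twice.
    chain = [make_block({"pallet": "A17", "to": "warehouse 3"}, "")]
    chain.append(make_block({"pallet": "A17", "to": "store 9"}, chain[0]["hash"]))
    assert verify(chain)

    chain[0]["entry"]["to"] = "warehouse 4"  # someone tries to rewrite history...
    assert not verify(chain)                 # ...and every honest copy catches it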

Complementing the IBM video I included in my last post on the subject, here’s one that I think succinctly summarizes blockchain’s benefits:

A recent LoadDelivered article detailed a number of the benefits of building your supply chain around blockchain, paralleling the security benefits I mentioned in my prior post (with some great links for more details):

  • “Recording the quantity and transfer of assets – like pallets, trailers, containers, etc. – as they move between supply chain nodes (Talking Logistics)
  • Tracking purchase orders, change orders, receipts, shipment notifications, or other trade-related documents
  • Assigning or verifying certifications or certain properties of physical products; for example determining if a food product is organic or fair trade (Provenance)
  • Linking physical goods to serial numbers, bar codes, digital tags like RFID, etc.
  • Sharing information about manufacturing process, assembly, delivery, and maintenance of products with suppliers and vendors.”

That kind of information, derived from real-time IoT sensor data, should be irresistible to companies compared to the relative inefficiency of today’s supply chain.

The article goes on to list a variety of benefits:

  • “Enhanced Transparency. Documenting a product’s journey across the supply chain reveals its true origin and touchpoints, which increases trust and helps eliminate the bias found in today’s opaque supply chains. Manufacturers can also reduce recalls by sharing logs with OEMs and regulators (Talking Logistics).
  • Greater Scalability. Virtually any number of participants, accessing from any number of touchpoints, is possible (Forbes).
  • Better Security. A shared, indelible ledger with codified rules could potentially eliminate the audits required by internal systems and processes (Spend Matters).
  • Increased Innovation. Opportunities abound to create new, specialized uses for the technology as a result of the decentralized architecture.”

Note that the advantages aren’t all hard numbers: blockchain also enables marketing innovations, similar to the way the IoT allows companies to begin marketing their products as services because of real-time data from products in the field. Applied to the supply chain (food products, for example), manufacturers could gain a marketing advantage by offering objective, tamper-proof documentation of a product’s organic or non-GMO origins. Who would have thought that a technology whose primary goal is increasing operating efficiency could have these other, creative benefits as well?

Applying  blockchain to the supply chain is getting serious attention, including a pilot program in the Port of Rotterdam, Europe’s largest.  IBM, Intel, Cisco and Accenture are among the blue-chip members of Hyperledger, a new open source Linux Foundation collaboration to further develop blockchain. Again, it’s the open source, decentralized aspect of blockchain that makes it so effective.

Logistics expert Adrian Gonzalez is perhaps the most bullish on blockchain’s potential to revolutionize supply chains:

“the peer-to-peer, decentralized architecture of blockchain has the potential to trigger a new wave of innovation in how supply chain applications are developed, deployed, and used … (becoming) the new operating system for Supply Chain Operating Networks.”

It’s also another reminder of the paradoxical wisdom of one of my IoT “Essential Truths,” that we must learn to ask “who else could share this information” rather than hoarding it as in the past. It is the very fact that blockchain data is shared that means it can’t be tampered with by a single actor.

What particularly intrigues me about widespread use of blockchain at the heart of companies’ operations and fueled by real-time data from IoT sensors and other devices is that it would ensure that privacy and security, which I otherwise fear would always be an afterthought, would instead be inextricably linked with achieving efficiency gains. That would make companies eager to embrace the blockchain, assuring their attention to privacy and security as part of the deal. That would be a definite win-win.

Blockchain must definitely be on your radar in 2017.


Lo and behold, right after I posted this, came news that Walmart, the logistics savant, is testing blockchain for supply chain management!



When Philips’s Hue Bulbs Are Attacked, IoT Security Becomes Even Bigger Issue

OK, what will it take to make security (and privacy) job #1 for the IoT industry?

The recent Mirai DDoS attack should have been enough to get IoT device companies to increase their security and privacy efforts.

Now we hear that the Hue bulbs from Philips, a global electronics and IoT leader that DOES emphasize security and doesn’t cut corners, have been the focus of a potentially devastating attack (um, just wonderin’: how does triggering mass epileptic seizures through your light bulbs grab you?).

Since it’s abundantly clear that the US president-elect would rather cut regulations than add needed ones (he just announced that, for every new regulation, two must be cut), the burden of improving IoT security will lie squarely on the shoulders of the industry itself. BTW: kudos in parting to outgoing FTC Chair Edith Ramirez, who made intelligent, workable IoT regulations, developed in collaboration with self-help efforts by the industry, a priority. Will we be up to the security challenge, or, as I’ve warned before, will security and privacy lapses totally undermine the IoT in its adolescence by destroying the public and corporate confidence and trust that are so crucial in this particular industry?

Count me among the dubious.

Here’s what happened in this truly scary episode, which for the first time presages an IoT hack targeting an entire city, by exploiting what might otherwise be a smart city/smart grid virtue: a large installed base of smart bulbs, all within communication distance of each other. The weapons? An off-the-shelf drone and a USB stick (the same team found that a car will also do nicely as an attack vector). Fortunately, the perpetrators in this case were a group of white-hat hackers from the Weizmann Institute of Science in Israel and Dalhousie University in Canada, who reported the vulnerability to Philips so it could implement additional protections, which the company did.

Here’s what they wrote about their plan of attack:

“In this paper we describe a new type of threat in which adjacent IoT devices will infect each other with a worm that will spread explosively over large areas in a kind of nuclear chain reaction (my emphasis), provided that the density of compatible IoT devices exceeds a certain critical mass. In particular, we developed and verified such an infection using the popular Philips Hue smart lamps as a platform.

“The worm spreads by jumping directly from one lamp to its neighbors, using only their built-in ZigBee wireless connectivity and their physical proximity. The attack can start by plugging in a single infected bulb anywhere in the city, and then catastrophically spread everywhere within minutes, enabling the attacker to turn all the city lights on or off, permanently brick them, or exploit them in a massive DDOS attack (my emphasis). To demonstrate the risks involved, we use results from percolation theory to estimate the critical mass of installed devices for a typical city such as Paris whose area is about 105 square kilometers: The chain reaction will fizzle if there are fewer than about 15,000 randomly located smart lights in the whole city, but will spread everywhere when the number exceeds this critical mass (which had almost certainly been surpassed already) (my emphasis).

“To make such an attack possible, we had to find a way to remotely yank already installed lamps from their current networks, and to perform over-the-air firmware updates. We overcame the first problem by discovering and exploiting a major bug in the implementation of the Touchlink part of the ZigBee Light Link protocol, which is supposed to stop such attempts with a proximity test. To solve the second problem, we developed a new version of a side channel attack to extract the global AES-CCM key that Philips uses to encrypt and authenticate new firmware. We used only readily available equipment costing a few hundred dollars, and managed to find this key without seeing any actual updates. This demonstrates once again how difficult it is to get security right even for a large company that uses standard cryptographic techniques to protect a major product.”
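
Their critical-mass claim is easy to sanity-check with a crude Monte Carlo sketch. Everything below is my own back-of-the-envelope assumption rather than the researchers’ model, most importantly the effective bulb-to-bulb hop range, which I’ve set at 100 meters: scatter bulbs at random over 105 square kilometers, link any pair within hop range, and watch the largest connected cluster jump from tiny to city-spanning right around 15,000 bulbs.

    # Continuum-percolation sketch: at what bulb count does one connected
    # cluster span the city? Assumes a square city of 105 km^2 and a 100 m
    # bulb-to-bulb radio range (both assumptions, not measurements).
    import numpy as np
    from scipy.spatial import cKDTree

    def largest_cluster_fraction(n_bulbs, area_km2=105.0, hop_m=100.0, seed=0):
        rng = np.random.default_rng(seed)
        side = np.sqrt(area_km2) * 1000.0   # model the city as a square, in meters
        pts = rng.uniform(0.0, side, size=(n_bulbs, 2))
        parent = list(range(n_bulbs))       # union-find over bulb-to-bulb radio links
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for i, j in cKDTree(pts).query_pairs(hop_m):
            parent[find(i)] = find(j)
        counts = np.bincount([find(i) for i in range(n_bulbs)])
        return counts.max() / n_bulbs

    for n in (5_000, 15_000, 30_000):
        print(n, round(largest_cluster_fraction(n), 2))  # watch the jump past ~15,000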

Again, this wasn’t one of those fly-by-night Chinese manufacturers of low-end IoT devices, but Philips, a major, respected, and vigilant corporation.

As for the possible results? It could:

  •  jam WiFi connections
  • disturb the electric grid
  • brick devices making entire critical systems inoperable
  • and, as I mentioned before, cause mass epileptic seizures.

As for the specifics, according to TechHive, the researchers installed Hue bulbs in several offices in an office building in the Israeli city of Beer Sheva. In a nice flair for the ironic, the building housed several computer security firms and the Israeli Computer Emergency Response Team. They attached the attack kit on the USB stick to a drone and flew it toward the building from 350 meters away. When they reached the building, they took over the bulbs and made them flash the SOS signal in Morse code.

The researchers “were able to bypass any prohibitions against remote access of the networked light bulbs, and then install malicious firmware. At that point the researchers were able to block further wireless updates, which apparently made the infection irreversible. ‘There is no other method of reprogramming these [infected] devices without full disassemble (which is not feasible). Any old stock would also need to be recalled, as any devices with vulnerable firmware can be infected as soon as power is applied.’”

Worst of all, the attack targeted ZigBee, one of the most robust and widely used IoT protocols, a favorite because ZigBee networks tend to be cheaper and simpler than WiFi or Bluetooth.

The attack points up one of the critical ambiguities about the IoT. On one hand, the fact that it allows networking of devices leads to “network effects,” where each device becomes more valuable because of the synergies with other IoT devices. On the other hand, that same networking and use of open standards means that penetrating one device can mean ultimately penetrating millions and compounding the damage.


I’m hoping against hope that when Trump’s team tries to implement cyber-warfare protections, they’ll extend the scope to include the IoT because of this specific threat. If they do, they’ll realize that you can’t just say yes to cyber-security and no to regulations. In the messy world of actually governing, rather than issuing categorical dictums, you sometimes have to embrace ambiguity.

What do you think?



Don’t Say I Didn’t Warn You: One of Largest Botnet Attacks Ever Due to Lax IoT Security

Don’t say I didn’t warn you about how privacy and security had to be THE highest priority for any IoT device.

On September 19th, Chris Rezendes and I were the guests on a Harvard Business Review webinar on IoT privacy and security. I once again was blunt that:

  • you can’t wait until you’ve designed your cool new IoT device before you begin to add in privacy and security protections. Start on Day 1!
  • sensors are particularly vulnerable, since they’re usually designed for minimum cost, installed, and forgotten.
  • as with the Target hack, hackers will try to exploit the least protected part of the system.
  • privacy and security protections must be iterative, because the threats are constantly changing.
  • responsible companies have as much to lose as the irresponsible, because the result of shortcomings could be held against the IoT in general.

The very next day, all hell broke loose. Hackers used the Mirai malware to launch one of the largest distributed denial-of-service attacks ever, on security blogger Brian Krebs (BTW, the bad guys failed, thanks to valiant work by the good guys here in Cambridge, at Akamai!).


The threat was so bad that DHS’s National Cyber Awareness System sent out the first bulletin I ever remember getting from them dealing specifically with IoT devices. As it warned, “IoT devices are particularly susceptible to malware, so protecting these devices and connected hardware is critical to protect systems and networks.”  By way of further explanation, DHS showed how ridiculously simple the attacks were because of inadequate protection:

“The Mirai bot uses a short list of 62 common default usernames and passwords to scan for vulnerable devices. Because many IoT devices are unsecured or weakly secured, this short dictionary allows the bot to access hundreds of thousands of devices. The purported Mirai author claimed that over 380,000 IoT devices  (my emphasis) were enslaved by the Mirai malware in the attack on Krebs’ website.”

A later attack in France during September using Mirai resulted in the largest DDoS attack ever.

The IoT devices affected in the latest Mirai incidents were primarily home routers, network-enabled cameras, and digital video recorders. Mirai malware source code was published online at the end of September, opening the door to more widespread use of the code to create other DDoS attacks.

How’d they do it?

Via a feature of the malware that detects and attacks consumer IoT devices that have only default, sometimes hardwired, passwords and usernames (or, as Dark Reading put it in an apocalyptic sub-head, “Mirai malware could signal the beginning of new trend in using Internet of Things devices as bots for DDoS attacks”).

To place the blame closer to home (well, more accurately, in the home!) you and I, if we bought cheap smart thermostats or baby monitors with minimal or no privacy protections and didn’t bother to set up custom passwords, may have unwittingly participated in the attack. Got your attention yet?
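
Want to know whether your own gear could be conscripted? A first-pass audit is almost embarrassingly simple; here is a sketch (the credential list samples the kind of factory defaults Mirai tried, and the device inventory is hypothetical; substitute your own):

    # Flag any device still running factory-default credentials.
    # DEFAULTS samples the sort of username/password pairs in Mirai's
    # dictionary; DEVICES is a hypothetical inventory -- fill in your own.
    DEFAULTS = {("admin", "admin"), ("admin", "password"),
                ("root", "root"), ("root", "12345"), ("root", "xc3511")}

    DEVICES = {
        "baby-cam":   ("admin", "admin"),
        "thermostat": ("dave", "aLongUniquePassphrase!"),
    }

    for name, creds in DEVICES.items():
        verdict = "STILL ON FACTORY DEFAULTS, change it now!" if creds in DEFAULTS else "ok"
        print(f"{name}: {verdict}")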


No responsible IoT inventor or company can deny it any longer: the entire industry is at risk unless corporate users and the general public can be confident that privacy and security are baked in and continuously upgraded. Please watch the HBR webinar if you haven’t already, and pledge to make IoT privacy and security Job #1!



PS: According to the DHS bulletin:

“In early October, Krebs on Security reported on a separate malware family responsible for other IoT botnet attacks. This other malware, whose source code is not yet public, is named Bashlite. This malware also infects systems through default usernames and passwords. Level 3 Communications, a security firm, indicated that the Bashlite botnet may have about one million (my emphasis) enslaved IoT devices.”

BTW: thanks to my friend Bob Weisberg for reminding me to give this situation its due!


Zoe: perhaps even better than Echo as IoT killer device?

Zoe smart home hub

I’ve raved before about Echo, Amazon’s increasingly versatile smart home hub, primarily because it is voice activated, and thus can be used by anyone, regardless of tech smarts — or whether their hands are full of stuff.  As I’ve mentioned, voice control makes it a natural for my “SmartAging” concept to help improve seniors’ health and allow them to manage their homes, because you don’t have to understand the underlying technology — just talk.

Now there’s a challenger on the horizon: start-up Zoe, which offers many of Echo’s uses, but with an important difference that’s increasingly relevant as IoT security and privacy challenges mount: your data will remain securely in your home. Or, as their slogan goes:

“So far, smart home meant high convenience, no privacy, or privacy, but no fun. We are empowering you to have both.”

You can still get in on Zoe’s Indiegogo campaign with a $249 contribution, which will get you a hub and an extra “voice drop” to use in another room, or the base level, $169, for a single room. Looks kinda cool to me, especially with the easily changed “Art Covers” and backlight coloring (the Che Guevara one looks appropriate for a revolutionary product)… The product will ship in late 2016.

Don’t get me wrong: I love Echo and will be getting mine soon, but there is that creepy factor, given government officials’ fascination with the potential of tapping into smart home data as part of their surveillance. Remember what US Director of National Intelligence James Clapper said: “In the future, intelligence services might use the [internet of things] for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.” Consider, then, that Echo sits there on your kitchen counter, potentially hacked and hoovering up all of your kitchen chit-chat to relay directly to the spooks. Wouldn’t you rather that data remained totally under your control?

In addition to storing the data on site rather than in the cloud, Zoe also touts advanced voice recognition, so it can learn IFTTT-style “recipes” or be operated by apps. She comes with 1,500 built-in voice commands, and if you stump her (and only if you choose, preserving that in-house-only option), web-based Advanced Voice Recognition steps in, with a cloud-based voice recognition system. Her recognition capabilities will grow over time. Zoe will work with WiFi, Bluetooth, Z-Wave, and other standards.

The company will ship the developers’ kit in six months. It will be open source.

Not being cloud-based means Zoe loses to Echo on two important counts. For many people, the ability to order things from Amazon simply by speaking may be more important than security concerns. Also, I notice it doesn’t mention any speakers, so it may lack the ability to serve as a music source (obviously it wouldn’t work with Amazon Music or Apple Music if it isn’t cloud-connected, but it would at least be nice to be able to use it to play your own collection). Advantage to Echo on that one.

At least this means there’s competition in the field (and, BTW, I’d love to see Apple swoop in and make THE voice-activated device!)


BTW: Thanks to good buddy Bob Weisberg for the tip about Zoe! Follow him!



IoT’s Future Makes iPhone Privacy Case Even More Important

Yesterday’s NYT had the most thoughtful piece I’ve seen about the long-term implications of the FBI’s attempts to get Apple to add a “backdoor” to the iPhone that would allow the agency to examine the data on the phone of terrorist Syed Farook, who, along with his wife, killed 14 late last year.

The growth and potential impact of the Internet of Things on our lives will only make the significance of this landmark case greater over time, and I stand totally with Apple CEO Tim Cook (“this is not a poll, this is about the future”) on what I think is a decision that every thinking person concerned about the growing role of technology in our lives should support. It’s that important!

First, my standard disclaimer about Apple, i.e., that I work part-time at the Apple Store, but know as much as you do about Apple’s decision-making process and have zero impact on it.  Now for a couple of other personal considerations to establish my bona fides on the issue:

  1. I’m pretty certain I was the first person to suggest (via a Boston Globe op-ed, “Fight Terrorism With Palm Pilots,” two weeks or so after 9/11) that early mobiles could be used to help the public report possible threats and/or respond to terrorism. Several years later I wrote the first primitive app on the subject for first-generation PDAs (“Terrorism Survival Planner”), and did consulting work for both the Department of Homeland Security and the CTIA on how first-generation smart phones could be used as part of terrorism prevention.
    I take this possibility seriously, support creative use of smartphones in terrorism preparation and response, and also realize that cellphone contents can not only help document cases but possibly prevent future ones.
  2. As I’ve said before, I used to do corporate crisis management consulting, so I understand how fear can cloud people’s judgment on issues of this sort.
  3. I’m also proud to come from a 300+ year line of attorneys, most particularly my younger brother, Charles, who had an award-winning career defending indigent clients on appeal, including many where it might have been tempting to have abridged their civil rights because of the heinous nature of the crimes they were accused of committing.

I like to think of myself as a civil libertarian as well, because I’ve seen too many instances where civil liberties were abridged for one extremely unlikeable person, only to have that serve as precedent for future cases where good people were swallowed up and unjustly convicted  (yea, Innocence Project!).

And this case comes right on the heels of my recent blog posts about how federal authorities such as James Clapper were already taking far too much (IMHO) interest in obtaining a treasure trove of data from our home IoT devices.

All in all, there’s a very real threat that the general public may become rightly paranoid about the potential threats to their privacy from cell phones and IoT devices and toss ’em in the trash can. 


That’s all by way of introduction to Farhad Manjoo’s excellent piece in the Times exploring the subtleties of Apple’s decision to fight the feds (see Tim Cook’s ABC interview here) — with plenty of emphasis on how it would affect confidence in the IoT.

As his lede said:

“To understand what’s at stake in the battle between Apple and the F.B.I. over cracking open a terrorist’s smartphone, it helps to be able to predict the future of the tech industry.”

Manjoo went on to detail the path we’re heading down, in which the IoT will play an increasingly prominent role (hmm: in my ardor for Amazon’s Echo, I’d totally ignored the potential for the feds or bad guys or both [sometimes in our history they’ve sadly been one and the same; for more details, consider one J. Edgar Hoover…] to use that unobtrusive little cylinder on your kitchen counter to easily monitor everything you and your family say! Chilling, non?).

Read and weep:

“Consider all the technologies we think we want — not just better and more useful phones, but cars that drive themselves, smart assistants you control through voice or household appliances that you can monitor and manage from afar. Many will have cameras, microphones and sensors gathering more data, and an ever more sophisticated mining effort to make sense of it all. Everyday devices will be recording and analyzing your every utterance and action.

“This gets to why tech companies, not to mention we users, should fear the repercussions of the Apple case. Law enforcement officials and their supporters argue that when armed with a valid court order, the cops should never be locked out of any device that might be important in an investigation.

“But if Apple is forced to break its own security to get inside a phone that it had promised users was inviolable, the supposed safety of the always-watching future starts to fall apart. If every device can monitor you, and if they can all be tapped by law enforcement officials under court order, can anyone ever have a truly private conversation? Are we building a world in which there’s no longer any room for keeping secrets?” (my emphasis)

Ominously, he went on to quote Prof. Neil Richards, an expert prognosticator on the growing threats to privacy from our growing dependence on personal technology:

“’This case can’t be a one-time deal,’ said Neil Richards, a professor at the Washington University School of Law. ‘This is about the future.’

“Mr. Richards is the author of “Intellectual Privacy,” a book that examines the dangers of a society in which technology and law conspire to eliminate the possibility of thinking without fear of surveillance. He argues that intellectual creativity depends on a baseline measure of privacy, and that privacy is being eroded by cameras, microphones and sensors we’re all voluntarily surrounding ourselves with.

“’If we care about free expression, we have to care about the ways in which we come up with interesting things to say in the first place,’ he said. ‘And if we are always monitored, always watched, always recorded, we’re going to be much more reluctant to experiment with controversial, eccentric, weird, ‘deviant’ ideas — and most of the ideas that we care about deeply were once highly controversial.’”

Manjoo also points out that laws on these issues often lag years behind technology (see what Rep. Ted Lieu, one of only four Representatives to have studied computer science, said about the issue).

Christopher Soghoian, the ACLU’s chief technologist, brings it home squarely to the IoT’s future:

“’What we really need for the Internet of Things to not turn into the Internet of Surveillance is a clear ruling that says that the companies we’re inviting into our homes and bedrooms cannot be conscripted to turn their products into roving bugs for the F.B.I.,’ he said.”

Indeed, and, as I’ve said before, it behooves IoT companies to both build in tough privacy and security protections themselves, and become actively involved in coalitions such as the Online Trust Alliance.

The whole article is great, and I strongly urge you to read it in full.

IMHO, this case is a call to arms for the IoT industry, and the hottest places in hell will be reserved for those who continue to sit at their laptops planning their latest cool app and/or device, without becoming involved in collaborative efforts to find detailed solutions that preserve our personal privacy and civil liberties on one hand, and, on the other, realize there’s a legitimate need to use the same technology to catch bad guys and protect us. It will take years, and it will require really, really hard work.


Oh, and it will also take the wisdom of Solomon for the courts to judge these issues. Sorry to be a partisan, but please feel free to let Sen. McConnell know how you feel about his unilateral decision to keep the Supreme Court deadlocked on this and other crucial issues for well over a year. Yes, even King Solomon couldn’t get past the Senate this year…
