“All of Us”: THE model for IoT privacy and security!

Pardon me in advance: this will be long, but I think the topic merits it!

One of my fav bits of strategic folk wisdom (in fact, a consistent theme in my Data Dynamite book on the open data paradigm shift) is, when you face a new problem, to think of another organization that might have one similar to yours, but which suffers from it to the nth degree (in some cases, even a matter of literal life-or-death!).

That’s on the likelihood that the severity of their situation would have led these organizations to already explore radical and innovative solutions that might guide you and shorten the process. In the case of the IoT, that would include jet turbine manufacturers and off-shore oil rigs, for example.

I raise that point because of the ever-present problem of IoT privacy and security. I’ve consistently criticized many companies’ lack of serious attention and ingenuity on this front, and warned that this could result not only in disaster for those companies, but also for the industry in general due to guilt-by-association.

This is even more of an issue since the May roll-out of the EU’s General Data Protection Regulation (GDPR), based on the presumption of an individual right to privacy.

Now, I have exciting confirmation — from the actions of an organization with just such a high-stakes privacy and security challenge — that it is possible to design an imaginative and effective process that alerts the public to the high stakes while both reassuring them and enrolling them as active participants.

Informed consent at its best!

It’s the NIH-funded All of Us, a bold effort to recruit 1 million or more people of every age, sex, race, home state, and state of health nationwide to speed medical research, especially toward the goal of “personalized medicine.” The researchers hope that, “By taking into account individual differences in lifestyle, environment, and biology, researchers will uncover paths toward delivering precision medicine.”

All of Us should be of great interest to IoT practitioners, starting with the fact that it might just save our own lives by leading to creation of new medicines (hope you’ll join me in signing up!). In addition, it parallels the IoT in allowing unprecedented degrees of precision in individuals’ care, just as the IoT does with manufacturing, operating data, etc.:

“Precision medicine is an approach to disease treatment and prevention that seeks to maximize effectiveness by taking into account individual variability in genes, environment, and lifestyle. Precision medicine seeks to redefine our understanding of disease onset and progression, treatment response, and health outcomes through the more precise measurement of molecular, environmental, and behavioral factors that contribute to health and disease. This understanding will lead to more accurate diagnoses, more rational disease prevention strategies, better treatment selection, and the development of novel therapies. Coincident with advancing the science of medicine is a changing culture of medical practice and medical research that engages individuals as active partners – not just as patients or research subjects. We believe the combination of a highly engaged population and rich biological, health, behavioral, and environmental data will usher in a new and more effective era of American healthcare.” (my emphasis added)


But what really struck me about All of Us’s relevance to IoT is the absolutely critical need to do everything possible to assure the confidentiality of participants’ data, starting with HIPAA protections and extending to the fact that it would absolutely destroy public confidence in the program if the data were to be stolen or otherwise compromised. As Katie Rush, who heads the project’s communications team, told me, “We felt it was important for people to have a solid understanding of what participation in the program entails—so that through the consent process, they were fully informed.”

What the All of Us staff designed was, in my estimation (and I’ve been in or around medical communication for forty years), the gold standard for such processes, and a great model for effective IoT informed consent:

  • you can’t ignore it and still participate in the program: you must sign the consent form.
  • you also can’t short-circuit the process: it said at the beginning that the process would take 18-30 minutes (to which I said yeah, sure — I was just going to sign the form and get going), and it really did, because you had to do each step or you couldn’t join — the site was designed so no shortcuts were allowed:
    • first, there’s an easy-to-follow, attractive short animation about that section of the program
    • then you have to answer some basic questions to demonstrate that you understand the implications.
    • then you have to give your consent to that portion of the program
    • the same process is repeated for each component of the program.
  • all of the steps, and all of the key provisions, are explained in clear, simple English, not legalese. To wit:
    • “Personal information, like your name, address, and other things that easily identify participants will be removed from all data.
    • Samples—also without any names on them—are stored in a secure biobank”
    • “We require All of Us Research Program partner organizations to show that they can meet strict data security standards before they may collect, transfer, or store information from participants.
    • We encrypt all participant data. We also remove obvious identifiers from data used for research. This means names, addresses, and other identifying information is separate from the health information.
    • We require researchers seeking access to All of Us Research Program data to first register with the program, take our ethics training, and agree to a code of conduct for responsible data use.
    • We make data available on a secure platform—the All of Us research portal—and track the activity of all researchers who use it.
    • We enlist independent reviewers to check our plans and test our systems on an ongoing basis to make sure we have effective security controls in place, responsive to emerging threats.”
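The identifier-stripping the program describes (separating names and addresses from the health data) can be sketched in a few lines of Python. The field names, the salted-hash token, and the two-table split below are my own illustrative assumptions, not the program’s actual pipeline:

```python
import hashlib

# Fields treated as direct identifiers in this toy example (a hypothetical
# list, not the program's actual data dictionary).
DIRECT_IDENTIFIERS = {"name", "address", "ssn", "email"}

def deidentify(record, salt):
    """Split a participant record into a research view (identifiers removed,
    participant keyed by a salted one-way hash) and the stripped identifiers,
    which would be stored separately under tighter access controls."""
    identifiers = {k: v for k, v in record.items() if k in DIRECT_IDENTIFIERS}
    research_view = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Pseudonymous key: stable for linking datasets, but not reversible
    # without the separately guarded salt and identifier table.
    token = hashlib.sha256((salt + record["ssn"]).encode()).hexdigest()[:16]
    research_view["participant_token"] = token
    return research_view, identifiers

record = {"name": "Jane Doe", "ssn": "000-00-0000", "address": "1 Main St",
          "heart_rate": 72, "steps": 9500}
research, held_back = deidentify(record, salt="program-secret")
```

The point is the separation of concerns: researchers see health data keyed only by a token, while the identifier table lives elsewhere under stricter controls.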

The site emphasizes that everything possible will be done to protect your privacy and anonymity, but it is also frank that there is no way of removing all risk, and your final consent requires acknowledging that you understand those limits:

“We are working with top privacy experts and using highly-advanced security tools to keep your data safe. We have several steps in place to protect your data. First, the data we collect from you will be stored on computers with extra security protection. A special team will have clearance to process and track your data. We will limit who is allowed to see information that could directly identify you, like your name or social security number. In the unlikely event of a data breach, we will notify you. You are our partner, and your privacy will always be our top priority.”

The process is thorough, easy to understand, and ensures that those who actually sign up know exactly what’s expected of them, what will be done to protect them, and that they may still face some risk.
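For IoT practitioners, the gated flow is the part worth copying: each section’s animation, comprehension quiz, and consent must be completed in order before the next unlocks. A minimal sketch of that logic (section names and quiz thresholds are my own invention, not the site’s):

```python
# Each section must be watched, quizzed, and consented to, in order.
SECTIONS = ["overview", "data_sharing", "biosamples"]

class ConsentFlow:
    def __init__(self):
        self.completed = []   # sections fully consented, in order

    def complete_section(self, section, watched_animation, quiz_score, consented):
        # Enforce ordering: you cannot jump ahead.
        expected = SECTIONS[len(self.completed)]
        if section != expected:
            raise ValueError(f"must complete {expected!r} first")
        # Enforce each step within the section.
        if not watched_animation:
            raise ValueError("watch the animation first")
        if quiz_score < 1.0:   # comprehension questions must be answered correctly
            raise ValueError("comprehension quiz not passed")
        if not consented:
            raise ValueError("explicit consent required")
        self.completed.append(section)

    @property
    def enrolled(self):
        return self.completed == SECTIONS
```

Skipping ahead, skimming, or leaving a quiz unanswered simply raises an error; only someone who walked every step ends up enrolled. That’s the “no shortcuts” design in code form.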

Why can’t we expect that all IoT product manufacturers will give us a streamlined version of the same process? 


I will be developing consulting services to advise companies that want to develop common-sense, effective, easy-to-implement IoT privacy and security measures. Write me if you’d like to know more.

Great Podcast Discussion of #IoT Strategy With Old Friend Jason Daniels

Right after I submitted my final manuscript for The Future is Smart I had a chance to spend an hour with old friend Jason Daniels (we collaborated on a series of “21st Century Homeland Security Tips You Won’t Hear From Officials” videos back when I was a homeland security theorist) on his “Studio @ 50 Oliver” podcast.

We covered just about every topic I hit in the book, with a heavy emphasis on the attitude shifts (“IoT Essential Truths”) needed to really capitalize on the IoT, and the bleeding-edge concept I introduce at the end of the book, the “Circular Corporation”: departments and individuals (even including your supply chain, distribution network and customers, if you choose) in a continuous, circular management style revolving around a shared real-time IoT hub. Hope you’ll enjoy it!

IoT Design Manifesto 1.0: great starting point for your IoT strategy & products!

Late in the process of writing my forthcoming IoT strategy book, The Future Is Smart, I happened on the “IoT Design Manifesto 1.0” site. I wish I’d found it earlier so I could have featured it more prominently in the book.

The reason is that the manifesto is the product (bear in mind that the original team of participants designed it to be dynamic and iterative, so it will doubtlessly change over time) of a collaborative process involving both product designers and IoT thought leaders such as the great Rob van Kranenburg. As I’ve written ad nauseam, I think of the IoT as inherently collaborative, since sharing data rather than hoarding it can lead to synergistic benefits, and collaborative approaches such as smart cities get their strength from an evolving mishmash of individual actions that gets progressively more valuable.

From the names, I suspect most of the Manifesto’s authors are European. That’s important, since Europeans seem to be more concerned, on the whole, about IoT privacy and security than their American counterparts, witness the EU-driven “privacy by design” concept, which makes privacy a priority from the beginning of the design process.

At any rate, I was impressed that the manifesto combines both philosophical and economic priorities, and does so in a way that should maximize the benefits and minimize the problems.

I’m going to take the liberty of including the entire manifesto, with my side comments:

  1. WE DON’T BELIEVE THE HYPE. We pledge to be skeptical of the cult of the new — just slapping the Internet onto a product isn’t the answer. Monetizing only through connectivity rarely guarantees sustainable commercial success.
    (Comment: this is like my “just because you can do it doesn’t mean you should” warning: if making a product “smart” doesn’t add real value, why do it?)*
  2. WE DESIGN USEFUL THINGS. Value comes from products that are purposeful. Our commitment is to design products that have a meaningful impact on people’s lives; IoT technologies are merely tools to enable that.
    (Comment: see number 1!)
  3. WE AIM FOR THE WIN-WIN-WIN. A complex web of stakeholders is forming around IoT products: from users, to businesses, and everyone in between. We design so that there is a win for everybody in this elaborate exchange.
    (Comment: This is a big one in my mind, and relates to my IoT Essential Truth #2 — share data, don’t hoard it — when you share IoT data, even with competitors in some cases [think of IFTTT “recipes”], you can create services that benefit customers, companies, and even the greater good, such as reducing global warming.)
  4. WE KEEP EVERYONE AND EVERYTHING SECURE. With connectivity comes the potential for external security threats executed through the product itself, which comes with serious consequences. We are committed to protecting our users from these dangers, whatever they may be.
    (Comment: Amen! As I’ve written ad nauseam, protecting privacy and security must be THE highest IoT priority — see next post below!)
  5. WE BUILD AND PROMOTE A CULTURE OF PRIVACY. Equally severe threats can also come from within. Trust is violated when personal  information gathered by the product is handled carelessly. We build and promote a culture of integrity where the norm is to handle data with care.
    (Comment:See 4!).
  6. WE ARE DELIBERATE ABOUT WHAT DATA WE COLLECT. This is not the business of hoarding data; we only collect data that serves the utility of the product and service. Therefore, identifying what those data points are must be conscientious and deliberate.
    (Comment: this is a delicate issue, because you may find data that wasn’t originally valuable becomes so as new correlations and links are established. However, just collecting data willy-nilly and depositing it in an unstructured “data lake” for possible use later is asking for trouble if your security is breached.)
  7. WE MAKE THE PARTIES ASSOCIATED WITH AN IOT PRODUCT EXPLICIT. IoT products are uniquely connected, making the flow of information among stakeholders open and fluid. This results in a complex, ambiguous, and invisible network. Our responsibility is to make the dynamics among those parties more visible and understandable to everyone.
    (Comment: see what I wrote in the last post, where I recommended companies spell out their privacy and usage policies in plain language and completely).
  8. WE EMPOWER USERS TO BE THE MASTERS OF THEIR OWN DOMAIN. Users often do not have control over their role within the network of stakeholders surrounding an IoT product. We believe that users should be empowered to set the boundaries of how their data is accessed and how they are engaged with via the product.
    (Comment: consistent with prior points, make sure that any permissions are explicit and opt-in rather than opt-out to protect users — and yourself (rather avoid lawsuits? Thought so…))
  9. WE DESIGN THINGS FOR THEIR LIFETIME. Currently physical products and digital services tend to be built to have different lifespans. In an IoT product features are codependent, so lifespans need to be aligned. We design products and their services to be bound as a single, durable entity.
    (Comment: consistent with the emerging circular economy concept, this can be a win-win-win for you, your customer, and the environment. Products that don’t become obsolete quickly but can be upgraded, whether in hardware or software, will delight customers and build their loyalty [remember that if you continue to meet their needs and desires, there’s less incentive for customers to check out competitors and possibly be wooed away!]. Products that you enhance over time, and particularly those you market as services instead of selling, will also stay out of landfills and reduce your production costs.)
  10. IN THE END, WE ARE HUMAN BEINGS. Design is an impactful act. With our work, we have the power to affect relationships between people and technology, as well as among people.  We don’t use this influence to only make profits or create robot overlords; instead, it is our responsibility to use design to help people, communities, and societies  thrive.
    (Comment: yea, designers!!)

I’ve personally signed onto the Manifesto, and do hope to contribute in the future (would like something explicit about the environment in it, but who knows) and urge you to do the same. More important, why start from scratch to come up with your own product design guidelines, when you can capitalize on the hard work that’s gone into the Manifesto as a starting point and modify it for your own unique needs?


*BTW: I was contemptuous of the first IoT electric toothbrush I wrote about, but have since talked to a leader in the field who convinced me that it could actually revolutionize the practice of dentistry for the better by providing objective proof that a patient had brushed frequently and correctly. My bad!

Mycroft Brings Open-Source Revolution to Home Assistants

Brilliant!  Crowd-funded (even better!) Mycroft brings the rich potential of open-source to the growing field of digital home assistants.   I suspect it won’t be long until it claims a major part of the field, because the Mycroft platform can evolve and grow exponentially by capitalizing on the contributions of many, many people, not unlike the way IFTTT has with its crowd-sourced smart home “recipes.”

According to a fascinating ZDNet interview with its developer, Joshua Montgomery, his motivation was not profit per se, but to create a general AI system that would transform a start-up space he was re-developing:

“He wanted to create the type of artificial intelligence platform that ‘if you spoke to it when you walked in the room, it could control the music, control the lights, the doors’ and more.”

                         Mycroft

Montgomery wanted to do this through an open-source voice control system, but there wasn’t an open-source equivalent to Siri or Alexa. After building the natural-language, open-source AI system to fill that need (tag line: “An Artificial Intelligence for Everyone”), he decided to build a “reference device,” as the reporter terms it (gotta love that techno-speak; in other words, a hardware device that could demonstrate the system). That in turn led to a crowdfunding campaign on Kickstarter and Backerkit to fund the home hub, which is based on the old chestnut of the IoT, the Raspberry Pi. The result is a squat, cute (looks like a smiley face) unit with a high-quality speaker.

Most important, when the development team is done with the AI platform, Mycroft will release all of the Mycroft AI code under GPL V3, inviting the open-source community to capitalize and improve on it.  That will place Mycroft squarely in the open-source heritage of Linux and Mozilla.

Among other benefits, Mycroft will use natural language processing to activate a wide range of online services, from Netflix to Pandora, as well as control your smart home devices.
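The basic dispatch idea behind such a system (match a spoken utterance to an intent, then run the corresponding action) can be illustrated in a few lines of pure Python. To be clear, this toy is NOT Mycroft’s actual skill API, which uses much richer intent parsers; it only shows the underlying pattern:

```python
# Toy keyword-based intent dispatch: each "skill" registers the keywords it
# answers to, and the first skill whose keywords all appear in the utterance
# handles it.
SKILLS = {
    ("play", "music"):  lambda utt: "streaming music",
    ("turn", "lights"): lambda utt: "toggling lights",
    ("lock", "door"):   lambda utt: "locking door",
}

def handle_utterance(utterance):
    """Return the response of the first skill whose keywords all appear."""
    words = set(utterance.lower().split())
    for keywords, action in SKILLS.items():
        if words.issuperset(keywords):
            return action(utterance)
    return "sorry, I didn't catch that"
```

The open-source advantage is that anyone can add an entry to that table; multiply this toy by thousands of contributors and you get the breadth of a community-built assistant.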

Mycroft illustrates one of my favorite IoT Essential Truths: we need to share data, not hoard it. I don’t care how brilliant your engineers are: they are only a tiny percentage of the world population, with only a limited amount of personal experience (especially if they’re callow millennials) and interests. When you go open source and throw your data open to the world, the progress will be greater as will be the benefits — to you and humanity.

A Vision for Dynamic and Lower-Cost Aging in Cities Through “SmartAging”

I’ve been giving a lot of thought recently to how my vision of IoT-based “SmartAging” through a combination of:

  • Quantified Self health apps and devices to improve seniors’ health and turn their health care into more of a partnership with their doctors
  • and smart home devices that would make it easier to manage their homes and “age in place” rather than being institutionalized

could meld with the exciting developments in smart city devices and strategy.  I believe the results could make seniors happier and healthier, reduce the burdens on city budgets of growing aging populations, and spur unprecedented creativity and innovation on these issues. Here’s my vision of how the two might come together. I’d welcome your thoughts on the concept!

 

A Vision for Dynamic and Lower-Cost Aging in Cities Through “SmartAging”

It’s clear business as usual in dealing with aging in America won’t work anymore.  10,000 baby boomers a day retire and draw Social Security. Between now and 2050, seniors will be the fastest growing segment of the population.  How can we stretch government programs and private resources so seniors won’t be sickly and live in abject poverty, yet millennials won’t be bankrupted either?

As someone in that category, this is of more than passing interest to me! 

I propose a new approach to aging in cities, marrying advanced but affordable personal technology, new ways of thinking about aging, and hybrid formal and ad hoc public-private partnerships, which can deal with at least part of the aging issue. Carving out some seniors from needing services through self-reliance and enhancing their well-being would allow focusing scarce resources on the most vulnerable remaining seniors. 

The approach is made possible not only by the plummeting cost and increasing power of personal technology but also the exciting new forms of collaboration it has made possible.

The proposal’s basis is the Internet of Things (IoT). There is already a growing range of IoT wearable devices that track health indicators such as heart rate and promote fitness, as well as IoT “smart home” devices controlling lighting, heat, and other systems. The framework visualized here would easily integrate these devices, but they can be expensive, so it is designed so that seniors could benefit from the project without having to buy the dedicated devices.

This proposal does not attempt to be an all-encompassing solution to every issue of aging, but instead will create a robust, open platform that government agencies, companies, civic groups, and individuals can build upon to reduce burdens on individual seniors, improve their health and quality of life, and cut the cost of and need for some government services. Even better, the same platform and technologies can be used to enhance the lives of others throughout the life spectrum as well, increasing its value and versatility.

The proposal is for two complementary projects to create a basis for a later, more ambitious one.

Each would be valuable in its own right and perhaps reach differing portions of the senior population. Combined, they would provide seniors and their families with a wealth of real-time information to improve health, mobility, and quality of life, while cutting their living costs and reducing social isolation. The result would be mutually beneficial public-private partnerships that would, one hopes, improve not only seniors’ lives, but also their feeling of connectedness to the broader community. Rather than treat seniors as passive recipients of services, it would empower them to be as self-reliant as possible given their varying circumstances. Both projects would be based on the Lifeline program in Massachusetts (and similar ones elsewhere) that gives low-income residents basic Internet service at low cost.

Locally, Boston already has a record of achievement in internet-based services to connect seniors with others, starting with the simple and tremendously effective SnowCrew program that Joe Porcelli launched in the Jamaica Plain neighborhood. This later expanded nationwide into the NextDoor site and app, which could easily be used by participants in the program.

The first project would capitalize on the widespread popularity of the new digital “home assistants,” such as the Amazon Echo and Google Home.  One version of the Echo can be bought for as little as $49, with bulk buying also possible.  A critical advantage of these devices, rather than home monitoring devices specifically for seniors, is that they are mainstream, benefit from the “network effects” phenomenon that means each becomes more valuable as more are in use, and don’t stigmatize the users or shout I’M ELDERLY. A person who is in their 50s could buy one now, use it for routine household needs, and then add additional age-related functions (see below) as they age, amortizing the cost.

The most important thing to remember about these devices regarding aging is the fact that they are voice-activated, so they would be especially attractive to seniors who are tech-averse or simply unable to navigate complex devices. The user simply speaks a command to activate the device.

The Echo (one presumes a variation on the same theme will soon be the case with Google Home, Apple’s forthcoming HomePod, and other devices that might enter the space in the future) gets its power from “skills,” or apps, developed by third-party developers. They give it the power, via voice, to deliver a wide range of content on every topic under the sun. Several already-released “skills” give an idea of how this might work:

  • Ask My Buddy helps users in an emergency by sending phone calls or text messages to up to five contacts. A user would say, “Alexa, ask my buddy Bob to send help,” and Bob would get an alert to check in on his friend.
  • Linked thermostats can raise or lower the temperature a precise amount, and lights can also be turned on or off or adjusted for specific needs.
  • Marvee can keep seniors in touch with their families and lessen social isolation.
  • The Fitbit skill allows the user who also has a Fitbit to trace their physical activity, encouraging fitness.

Again looking to Boston for precedent, related apps include the Children’s Hospital and Kids’ MD apps. Imagine how helpful it could be if hospitals’ gerontology departments provided similar “skills” for seniors!
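To make the “skills” idea concrete, here is a simplified sketch of how a voice request might be routed to an action. The request and response shapes below are invented for illustration; the real Alexa Skills Kit uses a richer JSON format, and an actual emergency-alert skill would call out to a messaging service:

```python
# Hypothetical emergency contacts for the "Ask My Buddy"-style intent.
CONTACTS = {"Bob": "+1-555-0100"}

def handle_request(request):
    """Route a simplified voice-request dict to the right action."""
    intent = request.get("intent")
    if intent == "AskMyBuddy":
        buddy = request["slots"]["buddy"]
        number = CONTACTS.get(buddy)
        if number is None:
            return {"speech": f"I don't have a contact named {buddy}."}
        # A real skill would trigger an SMS or call via a messaging API here.
        return {"speech": f"Alerting {buddy} at {number} to check on you."}
    if intent == "SetTemperature":
        degrees = request["slots"]["degrees"]
        return {"speech": f"Setting the thermostat to {degrees} degrees."}
    return {"speech": "Sorry, I can't help with that yet."}
```

The senior never sees any of this, of course: they just speak, and the skill answers.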

Most important to making this service work would be to capitalize on the growing number of city-based open-data programs that release a variety of important real-time databases, which independent developers mash up to create “skills” such as real-time transit apps. The author was a consultant to the District of Columbia in 2008 when it began this data-based “smart city” approach with the Apps for Democracy contest, which has spawned similar projects worldwide. When real-time city data is released, the result is almost magic: individuals and groups see different value in the same data, and develop new services that use it in a variety of ways at no expense to taxpayers.

The key to this half of the pilot programs would be creating a working relationship with local Meetups, such as those already created in various cities for Alexa programmers, to stage one or more high-visibility hackathons. Programmers from major public and social service institutions serving seniors, colleges and universities, and others with an interest in the subject could come together to create “skills” based on the local public data feeds to serve seniors’ needs, such as:

  • health
  • nutrition
  • mobility
  • city services
  • overcoming social isolation (one might ask how a technological program could help with this need. The City of Barcelona, generally acknowledged as the world’s “smartest” city, is circulating an RFP right now with that goal, and already has a “smart” program for seniors who need immediate help to call for it).

“Skills” are proliferating at a dizzying rate, and ones developed for one city can be easily adapted for localized use elsewhere.

Such a project would have no direct costs, but the city and/or a non-profit might negotiate lower bulk-buying rates for the devices, especially the lower-priced ($59 list) Amazon Dot, similar to the contract among the Japan Post Group, IBM, and Apple to buy 5 million iPads and equip them with senior-friendly apps from IBM, which the Post Group would then furnish to Japanese seniors. Conceivably, the Dots bought this way might come preloaded with the localized and senior-friendly “skills.”

The second component of a prototype SmartAging city program would make the wide range of real-time, location-based data released by various cities usable by joining the 100+ cities worldwide in the “Things Network,” which creates free citywide data networks specifically for Internet of Things use.

The network uses a technology called LoRaWAN, which is remarkably low-cost (the 10 units used in Amsterdam, each with a signal range of about 6 miles, cost only $12,000 total — much cheaper ones will be released soon) and was deployed and operative in less than a month! The cost and difficulty of linking an entire city has plummeted as more cities join, and the global project is inherently collaborative.

With the Things Network, entire cities would be converted into Internet of Things laboratories, empowering anyone (city agencies, companies, educational institutions, non-profits, individuals) to experiment with offering new services that would use the no-cost data-sharing network. In cities that already host Things Networks, availability of the networks has spawned a wide range of novel local services. For example, in Dunblane, Scotland, the team is developing a Things Network-based alarm system for people with dementia. Even better, as the rapid spread of citywide open data programs and the resulting open-source apps has illustrated, a neat app or service created in one city could easily be copied and enhanced elsewhere — virtuous imitation!

The critical component of the prototype programs would be to hold one or more hackathons once the network was in place.  The same range of participants would be invited, and since the Things Network could also serve a wide range of other public/private uses for all age groups and demographics, more developers and subject matter experts might participate in the hackathon, increasing the chances of more robust and multi-purpose applications resulting.

These citywide networks could eventually become the heart of ambitious two-way services for seniors based on real-time data, similar to those in Bolzano, Italy.

The Internet of Things and smart cities will become widespread soon simply because of lowering costs and greater versatility, whether this prototype project for seniors happens or not. The suggestions above would make sure that the IoT serves the public interest by harnessing IoT data to improve seniors’ health, reduce their social isolation, and make them more self-sufficient. They would reduce the burden on traditional government services to seniors while unlocking creative new services we can’t even visualize today to enhance the aging process.

ThingWorx Analytics Video: microcosm of why IoT is so transformative!

I’ll speak at PTC’s LiveWorx lollapalooza later this month (ooh: act quickly and I can get you a $300 registration discount: use code EDUCATE300) on my IoT-based Circular Company meme, so I’ve been devouring everything I can about ThingWorx to prepare.

Came across a nifty 6:09 vid about one component of ThingWorx, its Analytics feature. It seems to me this video sez it all about how you can launch an incremental IoT strategy (a recent focus of mine, given my webinar with Mendix) that will begin to pay immediate benefits and can serve as the basis for more ambitious transformation later, especially because you’ll already have analytical tools such as ThingWorx Analytics installed.

What caught my eye was that Flowserve, the pump giant involved in this case, could retrofit existing pumps with sensors from National Instruments — crucial for two reasons:

  • you may have major investments in existing, durable machinery: hard to justify scrapping it just to take advantage of the IoT
  • relatively few high-end, high-cost machines and devices have been redesigned from the ground up to incorporate IoT monitoring and operations.

Note the screen grab: each of these sensors takes 30,000 readings per second. How’s that for real-time data?  PTC refers to this as part of the “volume, velocity and variety challenge of data” with the IoT.

As a microcosm of the IoT’s benefits, this example shows how easy it is to use those massive amounts of data and how they can be used to improve understanding and performance.

There are three major components:

  • ThingWatcher:
    This is the most critical component, because it sifts through the incredible amount of data from the edge, learns what constitutes normal performance for that sensor (creating “pop-up learning flags”), and then monitors its future performance for anomalies and, as the sample video shows, delivers real-time alerts to users (without requiring human monitoring) so they can make adjustments and/or order repairs.
  • ThingPredictor:
    For the all-important new function of predictive maintenance, two different types of ThingPredictor indicators pop up if anomalies are detected, predicting how long it may be until failure and allowing plenty of time for less-costly, anticipatory repairs. Because the specific deviation is identified in advance, repair crews will have the needed part with them, rather than having to make an additional trip back to pick up parts.

    If you ask for standard predictive scoring, you don't specify which performance features to include, and you get back a simple predictive score. However, you can also specify several key features to evaluate and get a more detailed (and probably more helpful) answer. For example, “if you indicate an important feature count of three, the causal scoring output will include the three most influential features for each record and the percentage weights of each feature’s influence on the score.”

  • ThingOptimizer:
    Finally, you can use “ThingOptimizer” to do some what-if calculations to decide which possible “levers,” as ThingWorx calls the key variables, could change the projections to either maximize a positive factor or minimize the negatives. “Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.” It reminds me of how the introduction of VisiCalc allowed users, for the first time, to play around with variables to see which would have the best results.
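The ThingWatcher pattern described above (learn a baseline of normal behavior, then flag deviations in the real-time stream) can be sketched in a few lines. This is my own toy illustration using a simple z-score test, not PTC's actual algorithm:

```python
from statistics import mean, stdev

def learn_baseline(readings):
    """Learn what 'normal' looks like from a training window of readings."""
    return mean(readings), stdev(readings)

def is_anomaly(value, baseline_mean, baseline_std, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - baseline_mean) > threshold * baseline_std

# Train on readings from a healthy pump...
normal = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]
mu, sigma = learn_baseline(normal)

# ...then monitor new readings as they arrive.
print(is_anomaly(10.1, mu, sigma))  # False: within the normal band
print(is_anomaly(14.0, mu, sigma))  # True: likely a real deviation
```

A production system would use far more sophisticated models, but the shape is the same: train on normal data once, then score each new reading in real time, with no human monitoring required.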
Best of all, as the video illustrates, ThingWorx Analytics would facilitate the kind of “Circular Company” I’ll address in my speech, because the exact same real-time data could simultaneously be used by operating personnel to fine tune operations and catch a problem in time for predictive maintenance, and by senior management to get an instant overview of how operations are going at all the installations. Same data, many uses.
Bottom line: a robust IoT platform could be the key to an incremental strategy to begin by improving daily operations and reducing maintenance problems, and also be the underpinning for more radical transformation as your IoT strategy becomes more advanced!  See you at LiveWorx!

Surprising Benefits of Combining IoT and Blockchain (they go beyond economic ones!)

One final effort to work this blockchain obsession out of my system so I can get on to some exciting other IoT news!

I couldn’t resist summarizing for you the key points in “Blockchain: the solution for transparency in product supply chains,” a white paper from Project Provenance Ltd., a London-based collective (“Our common goal is to deliver meaningful change to commerce through open and accessible information about products and supply chains.”).

If you’ve followed any of the controversies over products such as “blood diamonds” or fish caught by Asian slaves and sold by US supermarkets, you know supply chains are not only an economic issue but also sometimes a vital social (and sometimes environmental) one. As the white paper warns:

“The choices we make in the marketplace determine which business practices thrive. From a diamond in a mine to a tree in a forest, it is the deepest darkest ends of supply chains that damage so much of the planet and its livelihood.”

Yikes!

Now blockchain can make doing the right thing easier and more profitable:

“Provenance enables every physical product to come with a digital ‘passport’ that proves authenticity (Is this product what it claims to be?) and origin (Where does this product come from?), creating an auditable record of the journey behind all physical products. The potential benefits for businesses, as well as for society and the environment, are hard to overstate: preventing the selling of fake goods, as well as the problem of ‘double spending’ of certifications present in current systems. The Decentralized Application (Dapp) proposed in this paper is still in development and we welcome businesses and standards organizations to join our consortium and collaborate on this new approach to understanding our material world.”

I also love Provenance’s work with blockchain because it demonstrates one of my IoT “Essential Truths,” namely, that we must share data rather than hoard it.  The exact same real-time data that can help streamline the supply chain to get fish to our stores quicker and with less waste can also mean that the people catching it are treated fairly. How cool is that?  Or, as Benjamin Herzberg, Program Lead, Private Sector Engagement for Good Governance at the World Bank Institute, puts it in the quote that begins the paper, “Now, in the hyper-connected and ever-evolving world, transparency is the new power.”

While I won’t summarize the entire paper, I do recommend that you read it, especially if blockchain is still new to you, because it gives a very detailed explanation of each blockchain component.

Instead, let’s jump in with the economic benefits of a blockchain and IoT-enabled supply chain, since most companies won’t consider it, no matter what the social benefits, if it doesn’t help the bottom line. The list is long, and impressive:

  • “Interoperable: A modular, interoperable platform that eliminates the possibility of double spending
  • Auditable: An auditable record that can be inspected and used by companies, standards organizations, regulators, and customers alike
  • Cost-efficient:  A solution to drastically reduce costs by eliminating the need for ‘handling companies’ to be audited
  • Real-time and agile:  A fast and highly accessible sign-up means quick deployment
  • Public: The openness of the platform enables innovation and could achieve bottom-up transparency in supply chains instead of burdensome top-down audits
  • Guaranteed continuity:  The elimination of any central operator ensures inclusiveness and longevity” (my emphasis)

Applying it to a specific need, such as documenting that a food that claims to be organic really is, blockchain is much more efficient and economical than cumbersome current systems, which usually rely on some third party monitoring and observing the process.  As I’ve mentioned before, the exquisite paradox of blockchain-based systems is that they are secure and trustworthy specifically because no one individual or program controls them: it’s done through a distributed system where all the players may, in fact, distrust each other:

“The blockchain removes the need for a trusted central organization that operates and maintains this system. Using blockchains as a shared and secure platform, we are able to see not only the final state (which mimics the real world in assigning the materials for a given product under the ownership of the final customer), but crucially, we are able to overcome the weaknesses of current systems by allowing one to securely audit all transactions that brought this state of being into effect; i.e., to inspect the uninterrupted chain of custody from the raw materials to the end sale.

“The blockchain also gives us an unprecedented level of certainty over the fidelity of the information. We can be sure that all transfers of ownership were explicitly authorized by their relevant controllers without having to trust the behavior or competence of an incumbent processor. Interested parties may also audit the production and manufacturing avatars and verify that their “on-chain” persona accurately reflects reality.”
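To see why such an audit trail is trustworthy, here is a toy hash-chained custody ledger (my own sketch, not Provenance's implementation): each record embeds the hash of its predecessor, so retroactively altering any transfer invalidates every block after it:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a custody record, chained to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    block = {"record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def audit(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "genesis"
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        body_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != body_hash:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"item": "tuna-042", "from": "boat", "to": "processor"})
add_block(chain, {"item": "tuna-042", "from": "processor", "to": "retailer"})
print(audit(chain))  # True: uninterrupted chain of custody

chain[0]["record"]["from"] = "unlicensed-boat"  # retroactive tampering...
print(audit(chain))  # ...is immediately detectable: False
```

This is exactly why no trusted central operator is needed: every participant can run the audit themselves against the shared copy of the chain.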

The white paper concludes by also citing an additional benefit that I’ve mentioned before: facilitating the switch to an environmentally-sound “circular economy,” which requires not only tracking the creation of things, but also their usage, trying to keep them out of landfills. “The system proposed in this paper would not only allow the creation (including all materials, grades, processes etc) and lifecycle (use, maintenance etc) to be logged on the blockchain, but this would also make it easy to access this information when products are returned to be assessed and remanufactured into a new item.”

Please do read the whole report, and think how the economic benefits of applying blockchain-enabled IoT practices to your supply chain can also warm your heart.

 

More Blockchain Synergies With IoT: Supply Chain Optimization

The more I learn about blockchain’s possible uses — this time for supply chains — the more convinced I am that it is absolutely essential to full development of the IoT’s potential.

I recently raved about blockchain’s potential to perhaps solve the IoT’s growing security and privacy challenges. Since then, I’ve discovered that it can also further streamline and optimize the supply chain, another step toward the precision that I think is such a hallmark of the IoT.

As I’ve written before, the ability to instantly share real-time data about your assembly line’s status, inventories, etc. with your supply chain — something we could never do before — can lead to unprecedented integration of the supply chain and factory, much of it on a M2M basis without any human intervention. It seems to me that the blockchain can be the perfect mechanism to bring about this synchronization.

A brief reminder that, paradoxically, it’s because blockchain entries (blocks) are shared and distributed (vs. centralized) that the system is secure without a trusted intermediary such as a bank: no one participant can change an entry after it’s posted.

Complementing the IBM video I included in my last post on the subject, here’s one that I think succinctly summarizes blockchain’s benefits:

A recent LoadDelivered article detailed a number of benefits of building your supply chain around blockchain, paralleling the ones I mentioned in my prior post regarding its security benefits (with some great links for more details):

  • “Recording the quantity and transfer of assets – like pallets, trailers, containers, etc. – as they move between supply chain nodes (Talking Logistics)
  • Tracking purchase orders, change orders, receipts, shipment notifications, or other trade-related documents
  • Assigning or verifying certifications or certain properties of physical products; for example determining if a food product is organic or fair trade (Provenance)
  • Linking physical goods to serial numbers, bar codes, digital tags like RFID, etc.
  • Sharing information about manufacturing process, assembly, delivery, and maintenance of products with suppliers and vendors.”
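As a rough illustration of the first bullets above (the node names, serial numbers, and fields are mine, purely hypothetical), reconstructing an item's journey and checking that its chain of custody is unbroken is just a filter over the shared ledger:

```python
def record_transfer(ledger, serial, sender, receiver, doc):
    """Log one transfer of a serial-numbered item between supply chain nodes."""
    ledger.append({"serial": serial, "from": sender, "to": receiver, "doc": doc})

def journey(ledger, serial):
    """Reconstruct the chain of custody for one item."""
    return [(r["from"], r["to"], r["doc"]) for r in ledger if r["serial"] == serial]

def custody_unbroken(ledger, serial):
    """Each hop's receiver must be the next hop's sender."""
    hops = journey(ledger, serial)
    return all(hops[i][1] == hops[i + 1][0] for i in range(len(hops) - 1))

ledger = []
record_transfer(ledger, "SN-1001", "farm", "co-op", "organic-cert-77")
record_transfer(ledger, "SN-1001", "co-op", "distributor", "PO-553")
record_transfer(ledger, "SN-2002", "mine", "cutter", "kimberley-cert-9")
record_transfer(ledger, "SN-2002", "smuggler", "dealer", "no-cert")

print(custody_unbroken(ledger, "SN-1001"))  # True: continuous custody
print(custody_unbroken(ledger, "SN-2002"))  # False: gap in custody
```

On a real blockchain the records would be cryptographically signed and immutable, but the queries a regulator or customer runs look much like this.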

That kind of information, derived from real-time IoT sensor data, should be irresistible to companies compared to the relative inefficiency of today’s supply chain.

The article goes on to list a variety of benefits:

  • “Enhanced Transparency. Documenting a product’s journey across the supply chain reveals its true origin and touchpoints, which increases trust and helps eliminate the bias found in today’s opaque supply chains. Manufacturers can also reduce recalls by sharing logs with OEMs and regulators (Talking Logistics).
  • Greater Scalability. Virtually any number of participants, accessing from any number of touchpoints, is possible (Forbes).
  • Better Security. A shared, indelible ledger with codified rules could potentially eliminate the audits required by internal systems and processes (Spend Matters).
  • Increased Innovation. Opportunities abound to create new, specialized uses for the technology as a result of the decentralized architecture.”

Note that the advantages aren’t all hard numbers: blockchain also allows marketing innovations, similar to the way the IoT allows companies to begin marketing their products as services because of real-time data from the products in the field. Applied to the supply chain (food products, for example), manufacturers could gain a marketing advantage by offering objective, tamper-proof documentation of a product’s organic or non-GMO origins. Who would have thought that technology whose primary goal is increasing operating efficiency could have these other, creative benefits as well?

Applying blockchain to the supply chain is getting serious attention, including a pilot program in the Port of Rotterdam, Europe’s largest.  IBM, Intel, Cisco and Accenture are among the blue-chip members of Hyperledger, a new open source Linux Foundation collaboration to further develop blockchain. Again, it’s the open source, decentralized aspect of blockchain that makes it so effective.

Logistics expert Adrian Gonzalez is perhaps the most bullish on blockchain’s potential to revolutionize supply chains:

“the peer-to-peer, decentralized architecture of blockchain has the potential to trigger a new wave of innovation in how supply chain applications are developed, deployed, and used….(becoming) the new operating system for Supply Chain Operating Networks.”

It’s also another reminder of the paradoxical wisdom of one of my IoT “Essential Truths,” that we must learn to ask “who else could share this information” rather than hoarding it as in the past. It is the very fact that blockchain data is shared that means it can’t be tampered with by a single actor.

What particularly intrigues me about widespread use of blockchain at the heart of companies’ operations and fueled by real-time data from IoT sensors and other devices is that it would ensure that privacy and security, which I otherwise fear would always be an afterthought, would instead be inextricably linked with achieving efficiency gains. That would make companies eager to embrace the blockchain, assuring their attention to privacy and security as part of the deal. That would be a definite win-win.

Blockchain must definitely be on your radar in 2017.

 

Lo and behold, right after I posted this, news that WalMart, the logistics savants, are testing blockchain for supply chain management!

 

Libelium: flexibility a key strategy for IoT startups

I’ve been fixated recently on venerable manufacturing firms such as 169-year-old Siemens making the IoT switch.  Time to switch focus and look at one of my fav pure-play IoT firms, Libelium.  I think Libelium proves that smart IoT firms must, above all, remain nimble and flexible, through three interdependent strategies:

  • avoiding picking winners among communications protocols and other standards.
  • avoiding over-specialization.
  • partnering instead of going it alone.
Libelium CEO Alicia Asin

If you aren’t familiar with Libelium, it’s a Spanish company that recently turned 10 (my, how time flies!) in a category littered with failures that had interesting concepts but didn’t survive. Bright, young CEO Alicia Asin, one of my favorite IoT thought leaders (and do-ers!), was recently named best manager of the year in the Aragón region of Spain.  I sat down with her for a wide-ranging discussion when she recently visited the Hub of the Universe.

I’ve loved the company since its inception, particularly because it is active in so many sectors of the IoT, including logistics, industrial control, smart meters, home automation and two of my favorites, agriculture (I have a weak spot for anything that combines “IoT” AND “precision”!) and smart cities.  I asked Asin why the company hadn’t picked one of those verticals as its sole focus: “it was too risky to choose one market. That’s still the same: the IoT is still so fragmented in various verticals.”

The best illustration of the company’s strategy in action is its Waspmote sensor platform, which it calls the “most complete Internet of Things platform in the market with worldwide certifications.” It can monitor up to 120 sensors to cover hundreds of IoT applications in the wide range of markets Libelium serves with this diversified strategy, ranging from the environment to “smart” parking.  The new versions of their sensors include actuators, so they not only report data but also allow M2M control of devices such as irrigation valves, thermostats, illumination systems, motors and PLCs. Equally important, because of the potentially high cost of having to replace the sensors, the new ones use extremely little power, so they can last much longer in the field.
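To make the sensor-plus-actuator idea concrete, here is a hypothetical sketch of the M2M pattern just described (the threshold values and function names are mine, not Libelium's actual Waspmote API): a soil-moisture reading directly drives an irrigation valve, with no human in the loop:

```python
DRY_THRESHOLD = 30.0  # percent soil moisture; an assumed value for illustration

def decide_valve(moisture_pct, valve_open):
    """Open the valve when the soil is dry; close it once moisture recovers."""
    if moisture_pct < DRY_THRESHOLD:
        return True      # too dry: irrigate
    if moisture_pct > DRY_THRESHOLD + 10:
        return False     # comfortably wet: stop watering
    return valve_open    # in between: keep the current state

print(decide_valve(22.0, valve_open=False))  # True: starts irrigation
print(decide_valve(35.0, valve_open=True))   # True: keeps watering until 40%
print(decide_valve(45.0, valve_open=True))   # False: closes the valve
```

The hysteresis band (open below 30%, close only above 40%) is a standard control trick that keeps the valve from chattering on and off around a single threshold.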

Just as important as the company’s refusal to limit itself to a single vertical market is its commitment to open systems and multiple communications protocols, including LoRaWAN, SIGFOX, ZigBee and 4G — a total of 16 radio technologies. It also provides both an open source SDK and APIs.

Why?  As Asin told me:

 

“There is not going to be a standard. This (competing standards and technology) is the new normal.

“I talk to some cities that want to become involved in smart cities, and they say we want to start working on this but we want to use the protocol that will be the winner.

“No one knows what will be the winner.

“We use things that are resilient. We install all the agents — if you aren’t happy with one, you just open the interface and change it. You don’t have to uninstall anything. What if one of these companies increases their prices to heaven, or you are not happy with the coverage, or the company disappears? We allow you to have all your options open.

“The problem is that this (not picking a standard) is a new message, and people don’t like to listen.  This is how we interpret the future.”
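Asin's “you just open the interface and change it” is, in software terms, programming to a common interface. A hypothetical sketch (the protocol names are real; the API is my own invention):

```python
class Radio:
    """Common interface: anything that can send a payload."""
    def send(self, payload: bytes) -> str:
        raise NotImplementedError

class LoRaWANRadio(Radio):
    def send(self, payload):
        return f"LoRaWAN uplink: {len(payload)} bytes"

class SigfoxRadio(Radio):
    def send(self, payload):
        return f"Sigfox message: {len(payload)} bytes"

class SensorNode:
    """Application code depends only on the Radio interface, never a vendor."""
    def __init__(self, radio: Radio):
        self.radio = radio

    def report(self, reading: float) -> str:
        return self.radio.send(f"{reading:.1f}".encode())

node = SensorNode(LoRaWANRadio())
print(node.report(21.5))    # LoRaWAN uplink: 4 bytes
node.radio = SigfoxRadio()  # vendor raised prices? swap the radio...
print(node.report(21.5))    # Sigfox message: 4 bytes; app code unchanged
```

Because the application only ever talks to the interface, swapping SIGFOX for LoRaWAN (or any of the other 14 radios) requires no changes to the rest of the system, which is exactly the resilience Asin describes.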

Libelium makes 110 different plug and play sensors (or, as they call them, “Plug and Sense”) to capture a wide range of data from sources including gases, events, parking, energy use, agriculture, and water.  They claim the lowest power consumption in the industry, leading to longer life and lower maintenance and operating costs.

Finally, the company doesn’t try to do everything itself: Libelium has a large and growing partner network (or ecosystem, as it calls it — music to the ears of someone who believes in looking to nature for profitable business inspiration). Carrying the collaboration theme even farther, they’ve created an “IoT Marketplace,” where pre-assembled device combinations from Libelium and partners can be purchased to meet the specific needs of niches such as e-health,  vineyards, water quality, smart factories, and smart parking.  As the company says, “the lack of integrated solutions from hardware to application level is a barrier for fast adoption,” and the kits take away that barrier.

I can’t stress it enough: for IoT startups that aren’t totally focused on a single niche (a high-stakes strategy), Libelium offers a great model because of its flexibility, agnostic view of standards, diversification among a variety of niches, and eagerness to collaborate with other vendors.


BTW: Asin is particularly proud of the company’s newest offering, MySignals, which debuted in October and has already won several awards.  She told me that they hope the device will allow delivering Tier 1 medical care to billions of underserved people worldwide who live in rural areas with little access to hospitals.

It combines 15 different sensors measuring the most important body parameters that would ordinarily be measured in a hospital, including ECG, glucose, airflow, pulse, blood oxygen, and blood pressure. The data is encrypted and sent to the Libelium Cloud in real-time to be visualized on the user’s private account.

It fits in a small suitcase and costs less than 1/100th the amount of a traditional Emergency Observation Unit.

The kit was created to make it possible for m-health developers to create prototypes cheaply and quickly.

Siemens’s MindSphere: from automation to digitalization

Perhaps the most important component of a successful IoT transformation is building it on a robust platform, because that alone can let your company go beyond random IoT experiments to achieve an integrated IoT strategy that can add new components systematically and create synergistic benefits by combining the various aspects of the program.

A good starting point for discussion of such platforms is a description of the eight key platform components as detailed by IoT Analytics:

  1. “Connectivity & normalization: brings different protocols and different data formats into one ‘software’  interface ensuring accurate data streaming and interaction with all devices.
  2. Device management: ensures the connected ‘things’ are working properly, seamlessly running patches and updates for software and applications running on the device or edge gateways.
  3. Database: scalable storage of device data brings the requirements for hybrid cloud-based databases to a new level in terms of data volume, variety, velocity and veracity.
  4. Processing & action management: brings data to life with rule-based event-action-triggers enabling execution of ‘smart’ actions based on specific sensor data.
  5. Analytics: performs a range of complex analysis from basic data clustering and deep machine learning to predictive analytics extracting the most value out of the IoT data-stream.
  6. Visualization: enables humans to see patterns and observe trends from visualization dashboards where data is vividly portrayed through line-, stacked-, or pie charts, 2D- or even 3D-models.
  7. Additional tools: allow IoT developers prototype, test and market the IoT use case creating platform ecosystem apps for visualizing, managing and controlling connected devices.
  8. External interfaces: integrate with 3rd-party systems and the rest of the wider IT-ecosystem via built-in application programming interfaces (API), software development kits (SDK), and gateways.”
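Component 4, the rule-based event-action triggers, is the easiest of the eight to picture in code. A minimal sketch of the idea (my own, not any particular platform's API):

```python
rules = []

def when(predicate, action):
    """Register a rule: if predicate(event) is true, run action(event)."""
    rules.append((predicate, action))

def process(event):
    """Run every matching rule against an incoming sensor event."""
    fired = []
    for predicate, action in rules:
        if predicate(event):
            fired.append(action(event))
    return fired

# Rule: over-temperature on any device triggers an alert.
when(lambda e: e["sensor"] == "temp" and e["value"] > 90,
     lambda e: f"ALERT: {e['device']} at {e['value']} C")

print(process({"device": "pump-7", "sensor": "temp", "value": 95}))
# ['ALERT: pump-7 at 95 C']
print(process({"device": "pump-7", "sensor": "temp", "value": 70}))
# []
```

A real platform adds persistence, throttling, and a visual rule editor, but the core is just this: declarative conditions evaluated against the live data stream, firing “smart” actions automatically.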

Despite (or because of) the complexity, I think this is a decent description, because a robust IoT platform really must encompass so many functions. The eight points give a basis for deciding whether what a company hawks as an IoT platform really deserves that title or constitutes only part of the necessary whole. (Aside: it’s also a great illustration of my Essential Truth that, instead of hoarding data as in the past, we must begin to ask “who else can use this data?” either inside the company or, potentially, outside, then use technology such as an IoT platform to integrate all those data uses productively.)

During my recent Barcelona trip (disclaimer: Siemens paid my way and arranged special access to some of its key decision makers, but made no attempt to limit my editorial judgment) I interviewed the company’s Chief Strategy Officer, Dr. Horst J. Kayser, who made it clear (as I mentioned in my earlier post about Siemens) that one of the advantages the company has over pure-play software firms is that it can apply its software offerings internally first and tweak them there, because of its 169-year heritage as a manufacturer, and “sits on a vast program of automation.”

Siemens’s IoT platform, MindSphere, is a collaboration with SAP, using the latter’s vast HANA cloud.  It ties together all components of Siemens’s IoT offerings, including data analytics, connectivity capabilities, developers’ tools, applications and services. MindSphere focuses on monitoring manufacturing assets’ real-time status, to evaluate and use customers’ data, producing insights that can cut production costs, improve performance, and even enable a switch to predictive maintenance. Its MindConnect Nano collects data from the assets and transfers it to MindSphere.

The “digital twin” is integrated throughout the MindSphere platform. Kayser says that “there’s a digital twin of the entire process, from conception through the manufacturing and maintenance, and it feeds the data back into the model.” In fact, one dramatic example of the concept in action is the new Maserati Ghibli, created in 16 months instead of 30 — almost 50% less time than for prior models.  Using the Teamcenter PLM software, the team was able to virtually develop and extensively test the car before anything was created physically.

IMHO, MindSphere and components such as Teamcenter might really be the key to actualizing my dream of the circular company, in this case with the IoT-based real-time digital twin at the heart of the enterprise — as Kayser said, “everything is done through one consistent data set.” I hope to explore my concept, and the benefits I think it can produce, more with the Siemens strategists in the future!  I tried the idea out on several of them in Barcelona, and no one laughed, so we’ll see…

As with the company’s rail digitization services that I mentioned in my earlier post, there’s an in-house guinea pig for MindSphere as well: the company’s “Factory of the Future” in Amberg. The plant manufactures Simatic controllers, the key to the company’s automation products and services, to which digitalization is now being added as part of the company’s Industrie 4.0 IoT plan for manufacturing (paralleling GE’s “Industrial Internet”). As you may be aware, Siemens’s efforts in this area are a subset of a formal German government/industry initiative — I doubt seriously we’ll see this in the U.S. under Trump.

The results of digitalization at Amberg are astonishing by any measure, especially the ultimate accomplishment: a 99.9988 percent quality rate (no typo!!), which is even more incredible when you realize this is not mass production with long, uniform production runs: the plant manufactures more than 1,000 varieties of the controllers, with a total volume of 12 million Simatic products each year, or about one per second.  Here are some of the other benefits of what they call an emphasis on optimizing the entire value chain:

  • shorter delivery time: 24 hours from order.
  • time to market reduced by up to 50%.
  • cost savings of up to 25%

Of course there are several other robust IoT platforms, including GE’s Predix and PTC’s ThingWorx, but my analysis shows that MindSphere meets IoT Analytics’ criteria, and, combined with the company’s long background in manufacturing and automation, should make it a real player in the industrial internet. Bravo!

Stephenson blogs on Internet of Things strategy, breakthroughs and management: http://www.stephensonstrategies.com/