McKinsey IoT Report Nails It: Interoperability is Key!

I’ll be posting on various aspects of McKinsey’s new “The Internet of Things: Mapping the Value Beyond the Hype” report for quite some time.

First of all, it’s big: 148 pages in the online edition, making it the longest IoT analysis I’ve seen! Second, it’s exhaustive and insightful. Third, as with several other IoT landmarks, such as Google’s purchase of Nest and GE’s divestiture of its non-industrial internet division, the fact that a leading consulting firm would put such an emphasis on the IoT has tremendous symbolic importance.

McKinsey report — The IoT: Mapping the Value Beyond the Hype

My favorite finding:

“Interoperability is critical to maximizing the value of the Internet of Things. On average, 40 percent of the total value that can be unlocked requires different IoT systems to work together. Without these benefits, the maximum value of the applications we size would be only about $7 trillion per year in 2025, rather than $11.1 trillion.” (my emphasis)

This goes along with my most basic IoT Essential Truth, “share data.”  I’ve been preaching this mantra since my 2011 book, Data Dynamite (which, if I may toot my own horn, I believe remains the only book to focus on the sweeping benefits of a paradigm shift from hoarding data to sharing it).

I was excited to see that the specific example they zeroed in on was offshore oil rigs, which I focused on in my op-ed on “real-time regulations,” because sharing the data from a rig’s sensors could both boost operating efficiency and reduce the chance of catastrophic failure. The paper points out that there can be 30,000 sensors on a rig, but most of them function in isolation, monitoring a single machine or system:

“Interoperability would significantly improve performance by combining sensor data from different machines and systems to provide decision makers with an integrated view of performance across an entire factory or oil rig. Our research shows that more than half of the potential issues that can be identified by predictive analysis in such environments require data from multiple IoT systems. Oil and gas experts interviewed for this research estimate that interoperability could improve the effectiveness of equipment maintenance in their industry by 100 to 200 percent.”

Yet the researchers found that only about 1% of the rig data was being used, because it was rarely shared off the rig with others in the company and its ecosystem!

The section on interoperability goes on to discuss the benefits — and challenges — of linking sensor systems in examples such as urban traffic management, which could link not only data from stationary sensors and cameras, but also thousands of real-time feeds from individual cars and trucks and parking meters — and even non-traffic data that could have a huge impact on performance, such as weather forecasts.

While more work needs to be done on the technical side to increase the ease of interoperability, either through the growing number of interface standards or middleware, it seems to me that a shift in management mindset is as critical as sensor and analysis technology to take advantage of this huge increase in data:

“A critical challenge is to use the flood of big data generated by IoT devices for prediction and optimization. Where IoT data are being used, they are often used only for anomaly detection or real-time control, rather than for optimization or prediction, which we know from our study of big data is where much additional value can be derived. For example, in manufacturing, an increasing number of machines are ‘wired,’ but this instrumentation is used primarily to control the tools or to send alarms when it detects something out of tolerance. The data from these tools are often not analyzed (or even collected in a place where they could be analyzed), even though the data could be used to optimize processes and head off disruptions.”
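The distinction the quote draws — alarming on out-of-tolerance readings versus using the same stream predictively — can be sketched in a few lines of Python (a toy illustration; the temperature data, threshold, and trend model are all invented for the example):

```python
# Toy contrast: the same sensor stream used two ways.
# 1) Real-time control: alarm when a reading is already out of tolerance.
# 2) Prediction: fit a simple trend and estimate when the limit WILL be crossed.

def alarm(readings, limit):
    """Anomaly detection: flag indices already over the tolerance limit."""
    return [i for i, r in enumerate(readings) if r > limit]

def predicted_crossing(readings, limit):
    """Prediction: least-squares slope -> estimated time step at which the
    trend line crosses the limit (None if the trend is flat or falling)."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (limit - intercept) / slope

# A bearing temperature drifting upward: no alarm yet, but trouble is coming.
temps = [70.0, 70.5, 71.1, 71.4, 72.0, 72.6]
print(alarm(temps, limit=80.0))              # [] -- nothing out of tolerance yet
print(predicted_crossing(temps, 80.0))       # ~20: the step at which maintenance is due
```

The alarm-only view says everything is fine; the predictive view on the identical data schedules maintenance before the disruption — which is exactly the gap between "real-time control" and "optimization or prediction" McKinsey describes.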

I urge you to download the whole report. I’ll blog more about it in coming weeks.

Incredible example of rethinking “things” with the Internet of Things

Ladies and gentlemen, I give you the epitome of the IoT-enabled product: the trash can!

My reader statistics do not indicate this blog has a heavy readership among trash cans, but let me apologize in advance to them for what I’m about to write: it’s not personal, just factual.

I’m sorry, but you municipal trash cans are pathetic!

Dented. Chipping paint. Trash overflowing. Smelly. Pests (ever seen any of those prize city rats? Big!!!). Sometimes even knocked over. And, worst of all, you are so…. DUMB. You just sit there and don’t do anything.

BigBelly trash compactor and recycling center

But that was then, and this is now.

I have seen the future of trash cans, and, equally important, perhaps the best example I’ve seen of how smart designers and company strategists can — and must — totally rethink products’ design and use because of the Internet of Things!

At last week’s Re-Work Internet of Things Summit there were many exciting new IoT examples (I’ll blog about others in coming weeks), but perhaps the one that got the most people talking was the BigBelly trash compactor & recycling system, the high-tech successor to the lowly trash can.

The company’s motto is that they are “transforming waste management practices and contributing to the Smart Cities of tomorrow.” Indeed!

I was first attracted to the BigBelly systems because of my alternative energy and environmental passions: they featured PV-powered trash compactors, which can quintuple the amount a trash container can hold, eliminating overflowing containers and the need to send trucks to empty them as frequently. Because the containers are closed, there are no more ugly banana peels and McDonald’s wrappers assaulting your delicate eyes — or noses! Equally important, each is paired with a recycling container, something almost never seen on city streets, dramatically reducing the amount of recyclables that go into regular trash simply because no recycling containers are accessible downtown. These features alone would be a noteworthy advance compared to conventional trash cans.

But BigBelly wasn’t content to just improve the efficiency of trash and recyclable collection: they decided to make the containers smart.

The company worked with Digi to add wireless communications to the bins. This is a critical part of BigBelly’s broader significance: when the IoT first started to creep into corporate consciousness, of course designers thought about smart versions of high-value products such as cars, but lowly trash cans? That deserves real praise, because they fundamentally re-examined not only the product as it existed, but also realized that an IoT-based version that could also communicate real-time data would become much more versatile and much more valuable.

Here’s what has resulted so far (and I suspect that as the BigBellys are more widely deployed and both city administrators and others become aware of their increased functionality, other features will be added: I see them as “Smart City Hubs!”):

  • heatmap of trash generation in Lower Manhattan using real-time data from BigBellys and CLEAN dashboard

    Instead of traditional pickup routes and schedules that were probably based on sheer proximity (or, as BigBelly puts it a little more colorfully, “muscle memory and gut instincts”), the company now offers a real-time way to monitor actual waste generation through the “CLEAN Management Console,” which allows DPW personnel to monitor bins’ fullness and evaluate trends and historical data for perspective. Collections can now be dynamic and driven by current needs, not historical patterns.

  • For those cities that opt for it, the company offers a Managed Services option where it does the analysis and management of the devices — not unlike the way jet turbine manufacturers now offer their customers value-added data that allows them to optimize performance — and generates new revenue streams for the manufacturers.
  • You may remember that I blogged a while ago about the “Collective Blindness” analogy: that, until the IoT, we humans simply couldn’t visualize much about the inner workings of the material world, so we were forced to do kludgy work-arounds. That’s not, strictly speaking, the case here, since trash in a conventional can is obviously visible, but the actual volume of trash was certainly invisible to those at headquarters. Now they can see — and really manage — it.
  • They can dramatically increase recycling programs’ participation rate and efficiency. As BigBelly says, the system provides “intelligent infrastructure to support ongoing operations and free up staffing and resources to support new and expanded recycling programs. Monitoring each separate stream volumes, days to fullness, and other activities in CLEAN enables you to make changes where needed to create a more effective public recycling program. Leverage the stations’ valuable sidewalk real estate to add messaging of encouraging words to change your users’ recycling behaviors.”

    Philadelphia is perhaps the best example of how effective the system can be. The city bought 210 of the recycling containers in 2009. On average, each collected 225 pounds of recyclables monthly, resulting in 23.5 tons of material diverted from landfills. Philly gets $50 per ton from the recycling and avoids $63 per ton in landfill tipping fees, for a total benefit to the city of $113 per ton, or $2,599 per month.
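The shift from fixed routes to need-driven collection that the CLEAN console enables can be sketched roughly like this (the bin names, threshold, and function are invented for illustration; BigBelly’s actual system will of course differ):

```python
# Hypothetical sketch of need-driven collection, in the spirit of the CLEAN
# console: each connected bin reports a fill level, and only bins above a
# threshold make today's pickup list -- replacing fixed "muscle memory" routes.

FILL_THRESHOLD = 0.8  # collect when a compactor reports >= 80% full

def bins_needing_pickup(reports, threshold=FILL_THRESHOLD):
    """reports: {bin_id: fill_fraction}. Return bin ids due for collection,
    fullest first, so crews hit the most urgent stops before they overflow."""
    due = [(fill, bin_id) for bin_id, fill in reports.items() if fill >= threshold]
    return [bin_id for fill, bin_id in sorted(due, reverse=True)]

today = {"Main & 1st": 0.95, "Park Gate": 0.40, "Transit Plaza": 0.85, "Wharf": 0.10}
print(bins_needing_pickup(today))  # ['Main & 1st', 'Transit Plaza']
```

Two of the four bins never make the route — which is exactly how the truck rolls (and their fuel and labor costs) get cut.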

Here’s where it really gets neat, in my estimation.

Because the BigBellys are connected in real time, the devices can serve a number of real-time communication functions as well (enabled by an open API and an emphasis by BigBelly on finding collaborative uses). That includes making them hubs for a “mesh network” municipal wi-fi system (which, by the way, means that your local trash container/communications hub could actually save your life in a disaster or terror attack, when stationary networks may be disrupted, as I explained years ago in this YouTube video).

The list of benefits goes on (BigBelly lists all of them, right down to “Happy Cities,” on its web site). Trust me: if my premise is right that we can’t predict all of the benefits of the IoT at this point because we simply aren’t accustomed to thinking expansively about all the ways connected devices can be used, there will be more!

So here’s my take-away from the BigBelly:

If something as humble and ubiquitous as a municipal trash can can be transformed into a waste-reduction, recycling-collection, and municipal-communications hub, then to exploit the Internet of Things’ full potential we need to take a new, creative look at every material thing we interact with, no longer making assumptions about its limited role, and instead seeing it as part of an interconnected network whose utility grows with the number of things (and people!) it’s connected to!

Let me know your ideas on how to capitalize on this new world of possibilities!

Virtual Sensor Networks: a key #IoT tool?

I was once again honored to be a guest on Coffee Break With Game Changers Radio today with David Jonker and Ira Berk of SAP — it’s always a delight to have a dialogue on the Internet of Things with these two brainy guys (and hats off as well to moderator/host Bonnie Graham!).

Toward the end of the show, Ira brought up a concept that was new to me: virtual sensor networks.

I’ve got sensors on the brain right now, because I’m frankly worried that sensors that lack adequate baked-in security and privacy protections, and that can’t be upgraded as new opportunities and threats present themselves, may be a threat to the IoT because they typically remain in use for so many years. Ah, but that’s a topic for another post.

According to Wikipedia, Virtual sensor networks are an:

“… emerging form of collaborative wireless sensor networks. In contrast to early wireless sensor networks that were dedicated to a specific application (e.g., target tracking), VSNs enable multi-purpose, collaborative, and resource efficient WSNs. The key idea difference of VSNs is the collaboration and resource sharing….
“… A VSN can be formed by providing logical connectivity among collaborative sensors. Nodes can be grouped into different VSNs based on the phenomenon they track (e.g., rock slides vs. animal crossing) or the task they perform. VSNs are expected to provide the protocol support for formation, usage, adaptation, and maintenance of subset of sensors collaborating on a specific task(s). Even the nodes that do not sense the particular event/phenomenon could be part of a VSN as far as they are willing to allow sensing nodes to communicate through them. Thus, VSNs make use of intermediate nodes, networks, or other VSNs to efficiently deliver messages across members of a VSN.”

Makes sense to me: collaboration is a critical basic component of the human aspect of the IoT (one of my IoT “Essential Truths”), so why shouldn’t that extend to the mechanics as well? If you have a variety of sensors already deployed in a given area, why should you have to deploy a whole new set of single-purpose ones to monitor a different condition if data could be synthesized from the existing sensors to effectively yield the same needed information?

A 2008 article on the concept said virtual sensor networks are particularly relevant to three categories where data is* needed:

“Firstly, VSNs are useful in geographically overlapped applications, e.g., monitoring rockslides and animal crossing within a mountainous terrain. Different types of devices that detect these phenomena can relay each other for data transfer without having to deploy separate networks (Fig. 1). Secondly, VSNs are useful in logically separating multipurpose sensor networks, e.g., smart neighborhood systems with multifunctional sensor nodes. Thirdly, VSNs can be used to enhance efficiency of systems that track dynamic phenomena such as subsurface chemical plumes that migrate, split, or merge. Such networks may involve dynamically varying subsets of sensors.”

That article went on to propose a flexible, self-organizing “cluster-tree” approach to create the VSN, using tracking of a pollution plume as an example:

“…  a subset of nodes organizes themselves to form a VSN to track a specific plume. Whenever a node detects a relevant event for the first time it sends a message towards the root of the cluster tree indicating that it is aware of the phenomenon and wants to collaborate with similar nodes. The node may join an existing VSN or makes it possible for other nodes that wish to form a VSN, to find it. Use of a cluster tree or a similar structure guarantees that two or more nodes observing the same phenomenon will discover each other. Simulation based results show that our approach is more efficient and reliable than Rumor Routing and is able to combine all the nodes that collaborate on a specific task into a VSN.”
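The self-organizing behavior the paper describes can be caricatured in a few lines (a toy simulation, not the paper’s actual protocol: the node ids, the root class, and the grouping logic are all invented for illustration):

```python
# Toy sketch of VSN formation: nodes that detect the same phenomenon register
# with a shared cluster-tree root, which groups them into one virtual network.
# The shared root is what guarantees that two nodes observing the same
# phenomenon discover each other.

class ClusterRoot:
    """Stand-in for the root of the cluster tree: collects "I see plume X"
    messages and groups the senders into per-phenomenon VSNs."""
    def __init__(self):
        self.vsns = {}  # phenomenon -> set of member node ids

    def report(self, node_id, phenomenon):
        # A node detecting an event for the first time sends a message toward
        # the root; it joins the existing VSN for that phenomenon or founds one.
        self.vsns.setdefault(phenomenon, set()).add(node_id)

root = ClusterRoot()
root.report("n3", "plume-A")
root.report("n7", "plume-A")    # discovers n3 via the shared root
root.report("n9", "rockslide")  # unrelated phenomenon -> a separate VSN
print(sorted(root.vsns["plume-A"]))  # ['n3', 'n7']
print(len(root.vsns))                # 2
```

The real protocol is distributed rather than centralized, of course, but the essential move is the same: membership is defined by the phenomenon being tracked, not by which physical network a node happens to sit on.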

I suspect the virtual sensor network concept will become particularly widespread as part of “smart city” deployments: cash-strapped municipalities will want to get as much bang for the buck as possible from already-deployed sensors, without having to install new ones. Bet my friends in Spain at Libellium will be in the forefront of this movement!

Thanks, Ira!


*BTW: if any members of the Grammar Police are lurking out there (I’m a retired lt. colonel of the Mass. State Grammar Police myself), you may take umbrage at “data is.”  Strictly speaking, the proper usage in the past has been “data are,” but the alternative is becoming so widespread that it’s becoming acceptable usage. So sue me…

 

Apple & IBM partnership in Japan to serve seniors a major step toward “Smart Aging”

As Bob Seger and I prepare to turn 70 (alas, no typo) on Wednesday (as long as he’s still singing “Against the Wind” I know I’m still rockin’), my thoughts turn to my “Smart Aging” paradigm, which combines Quantified Self devices (which can turn our relationships with doctors into partnerships and encourage us to do more fitness activities) with smart home devices that make it easier for seniors to run their homes and avoid institutionalization.

That’s why I was delighted to read this week about Apple (obligatory disclaimer: I work part-time at The Apple Store, especially with “those of a certain age,” but am not privy to any of their strategy, and my opinions are solely my own) and IBM teaming with Japan Post (hmm: that’s one postal service that seems to think creatively. Suspect that if one B. Franklin still ran ours, as he did in colonial days, we’d be more creative as well…) to provide iPads to Japan’s seniors as part of Japan Post’s “integrated lifestyle support group” (the agency will actually go public later this year, and the health services will be a key part of its services).

Apple and IBM announced, as part of their “enterprise mobility” partnership that will also increase iPads’ adoption by businesses, that they will provide 5 million iPads with senior-friendly apps to Japanese seniors by 2020. IBM’s role will be to develop app analytics and cloud services and “apps that IBM built specifically for elderly people … for medication adherence … exercise and diet, and … that provide users with access to community activities and supporting services, including grocery shopping and job matching.”

The overall goal is to use the iPads and apps to connect seniors with healthcare services and their families.  I can imagine that FaceTime and the iPads’ accessibility options will play a critical role, and that current apps such as Lumosity that help us geezers stay mentally sharp will also be a model.

According to Mobile Health News, the partnership will offer some pretty robust services from the get-go:

“If seniors or their caregivers choose, they can take advantage of one of Japan Post Groups’ post office services, called Watch Over where, for a fee, the mail carriers will check in on elderly customers and then provide the elderly person’s family with an update. 

“In the second half of this year, customers can upgrade the service to include iPad monitoring as well. After Japan Post Group pilots the iPads and software with 1,000 seniors for six months, the company will expand the service in stages.”

Lest we forget, Japan is THE harbinger of what lies ahead for all nations as their populations age: 20% of the population was already over 65 in 2006, and 38% will be by 2055. As I’ve said before in speeches, the current status quo in aging is simply unsustainable: we must find ways for seniors to remain healthy and cut the governmental costs of caring for them as they grow as a percentage of the population. As Japan Post CEO Taizo Nishimuro (who looks as if he’s a candidate for the new services — you go, guy!) said, the issue is “most acute in Japan — we need real solutions.”

IBM CEO Ginni Rometty said her company will take on a 3-part mission:

“First, they’ll be working on ‘quality of life apps,’ both by building some themselves and by integrating others, all of which will be aimed at accessibility first. The key target will be iOS, since it’s a mobile-first strategy in keeping with our changed computing habits. Second, they’re working on developing additional accessibility features not yet available, and third they’re helping Japan Post with the service layer required to deliver this to the elderly.”

Sweet! — and it reminds me of the other recent IBM/Apple announcement, in that case with J & J, to build a robust support structure for Apple’s new open-source ResearchKit and its HealthKit platform to democratize medical research. The IoT ain’t nothin’ without collaboration, after all.

Cook, according to TechCrunch, put the initiative in a global context (not unlike his environmental initiatives, where, IMHO, he’s become THE leading corporate change agent regarding global warming):

“Tim Cook called the initiative ‘groundbreaking,’ saying that it is ‘not only important for Japan, but [also] has global implications. Together, the three of us and all the teams that work so diligently behind us will dramatically improve the lives of millions of people.’

“…. The Apple CEO talked about how the company aims to ‘help people that are marginalized in some way, and empower them to do the things everyone else can do.” He cited a UC Irvine study which details how remote monitoring and connection with loved ones via iPad help instill a sense of confidence and independence in seniors. He added that he believes what the companies are doing in Japan is also scalable around the world.”

It will be interesting to see exactly how the partnership addresses the challenge of creating those senior-friendly “quality of life” apps: as someone who’s on the front lines of explaining even Apple’s intuitive devices to older customers, I can tell you that many seniors are really frightened by these technologies, and it will take a combination of great apps and calm, patient hand-holding to put them at ease.

As I enter my 8th decade, I’m pumped!

Deloitte’s IoT “Information Value Loop”: critical attitudinal shift

Every so often it’s good to step back from the day-to-day minutiae of current Internet of Things projects and get some perspective on the long-term prospects and challenges.

That’s what Deloitte did last December, when it held an “Internet of Things Grand Challenge Workshop,” with a focus on the all-important “forging the path to revenue generation.”

The attendees included two of my idols: John Seely Brown and John Hagel, of Deloitte’s “Center for the Edge” (love the pun in that title!).

The results were recently released, and bear close examination, especially the concept of how to foster what they call the “Information Value Loop”:

Deloitte IoT Information Value Loop


“The underlying asset that the IoT creates and exploits is information, yet we lack a well-developed, practical guide to understand how information creates value and how companies can effectively capture value. The ‘Information Value Loop’ describes how information creates value, how to increase that value, and how understanding the relevant technology is central to positioning an organization to capture value. The Information Value Loop is one way to begin making sense of the changes we face. The Loop consists of three interconnected elements: stages, value drivers, and technologies. Where the stages and value drivers are general principles defining if and how information creates value under any circumstances, it is the specifics of today’s technology that connect the Loop to the challenges and opportunities created by the IoT.”

This fits nicely with one of my IoT “Essential Truths”: that we need to turn linear information flows into cyclical ones to fully capitalize on the IoT. No pussy-footin’ about this for these guys: “For information to create any value at all, it must pass through all the stages of the Loop. This is a binary outcome: should the flow of information be blocked completely at any stage, no value is created by that information.”
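That “binary outcome” claim (value is created only if information traverses every stage of the Loop) is simple enough to express as a tiny check. The stage names here follow the report’s Loop; the pass/fail flags are invented for illustration:

```python
# Deloitte's claim as code: information creates value only if it clears every
# stage of the Loop; a block at any single stage zeroes the value entirely.

STAGES = ["create", "communicate", "aggregate", "analyze", "act"]

def value_created(stage_ok):
    """stage_ok: {stage: bool}. Binary outcome: all stages pass, or nothing."""
    return all(stage_ok.get(stage, False) for stage in STAGES)

flow = {s: True for s in STAGES}
print(value_created(flow))   # True: information makes it all the way around
flow["aggregate"] = False    # e.g. the data never leaves its departmental silo
print(value_created(flow))   # False: one blocked stage, zero value
```

The point of the toy is the `all()`: there is no partial credit, which is why a single gatekeeper or silo anywhere in the chain is enough to destroy the value of the whole flow.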

IMHO, this is also going to be one of the biggest challenges of the IoT for management: in the days when it was sooo difficult to gather and disseminate information, it made sense for those in the C-suite to control it, and parcel out what they felt was relevant, to whom and when they felt it was relevant. More often than not, the flow was linear and hierarchical, with one information silo in the company handing on the results to the next after they’d processed it. That didn’t allow any of the critical advantages the IoT brings, of allowing everyone who needs it to share real-time data instantly.  But saying we need to change those information management practices is one thing: actually having senior management give up their gatekeeper functions is another, and shouldn’t be understated as a challenge.

So here are some of the other key points in the conference proceedings:

  • In line with the multi-step strategy I outlined in Managing the Internet of Things Revolution, they concluded that incremental improvements to existing processes and products are important, but will only take you so far, at which point radical innovation will be crucial: “At first blush, the early IoT emphasis on sustaining innovation seems reasonable. Performance and cost improvement are seldom absent from the priorities of stakeholders; they are relatively easy to measure and their impact is likely more immediate than any investment that is truly disruptive. Put simply, the business case for an IoT application that focuses on operational efficiencies is relatively easy to make. Many decision makers are hard-wired to prefer the path of less resistance and, for many, truly innovative IoT applications seem too far-flung and abstract to risk pursuing. Still, organizations cannot innovate from the cost side forever.”
  • Melding the public and private, “Cities have inherent societal challenges in place to serve as natural incubators of IoT solutions.” Yeah!
  • As in everything else, those contrarian Millennials (who aren’t so hung up on buying stuff and often prefer to just use it)  are likely to save us when it comes to the IoT:  “From an innovation perspective … some of the new technologies are first marketed at the consumers. Thus, many believe that near-term innovation in IoT applications will come out of the consumer sector – spurred by the emergence of the tech-savvy Millennial consumers as a driving economic force.”
  • As I’ve written before, while some customers will still prefer to buy products outright, the IoT will probably bring a shift from selling products to marketing services based on those products, creating new revenue streams and long-term relationships with customers: “As IoT makes successful forays into the world of consumer and industrial products, it may radically change the producer—buyer transactional model from one based on capital expenditure to one based on operating expenditure. Specifically, in a widely adopted IoT world, buyers may be more apt to purchase product service outcomes on some kind of “per unit” basis, rather than the product itself and in so doing, render the physical product as something more of an afterthought. The manufacturer would then gradually transform into a service provider, operating on a complete awareness of each product’s need for replenishment, repair, replacement, etc.”

    Or, a hybrid model may emerge: “What may ultimately happen in a relatively connected product world is that many may accept the notion of the smartly connected product, but in a limited way. Such people will want to own the smartly connected product outright, but will also accept the idea of sharing the usage data to the limited extent that the sellers use such data in relatively benign ways, such as providing advice on more efficient usage, etc. The outcome here will also rely upon a long term total cost of ownership (TCO) perspective. With any fundamental purchasing model changes (as is taking place in owned vs. cloud resources in the network / IT world), not all suppliers will be able to reap additional economic benefit under the service model. Buyers will eventually recognize the increase in TCO and revert back to the more economical business model if the economic rents are too high.”

  • It’s likely that those players in the IoT ecosystem who create value-added data interpretation will be the most valuable and profitable: “…are certain building blocks of the IoT network “more equal” than others?

    “Some have argued that the holy grail of the IoT value loop resides in the data and that those in the IoT ecosystem who aggregate and transform massive amounts of raw data into commercially useful intelligence capture the real value in the IoT environment. This notion holds that commercially useful data provide insights that drive action and ultimately represent the reason that the end user pursues a smart solution in the first place. Put another way, the end customer is more apt to pay for a more comprehensive treatment of raw data than for a better sensor. Indeed, some even believe that as time passes, the gap in relative value captured by those who curate and analyze the data and the rest of the IoT ecosystem will only widen and that, on a long-term basis, players within the “non-data” part of the IoT ecosystem will need to develop some data analytics capabilities simply to differentiate themselves as something more than commodity providers. Of course, some think that the emphasis on data is overblown and argue that where the real value in the IoT ecosystem is captured depends on application. Time will tell of course. But there can be little doubt that the collection and enhancement of data is highly coveted, and analytics and the ability to make use of the vast quantities of information that is captured will serve as critical elements to virtually any IoT solution.”

I urge you to download and closely analyze the entire report. It’s one of the most thoughtful and visionary pieces of IoT theory I’ve seen (no doubt because of its roundtable origins: in keeping with the above-mentioned need for cyclical information flow for the IoT [and, IMHO, creativity in general], the more insights you can bring together on a real-time basis, the richer the outcome). Bravo!

 

The Internet of Things’ Essential Truths

I’ve been writing about what I call the Internet of Things’ “Essential Truths” for three years now, and decided the time was long overdue to codify them and present them in a single post to make them easy to refer to.

As I’ve said, the IoT really will bring about a total paradigm shift because, for the first time, it will be possible for everyone who needs it to share real-time information instantly. That really does change everything, obliterating the “Collective Blindness” that has hampered both daily operations and long-term strategy in the past. As a result, we must rethink a wide range of management shibboleths (OK, OK, that was gratuitous, but I’ve always wanted to use the word, and it seemed relevant here, LOL):

  1. First, we must share data. Tesla leads the way with its patent sharing. In the past, proprietary knowledge led to wealth: your win was my loss. Now, we must automatically ask “who else can use this information?” and, even in the case of competitors, “can we mutually profit from sharing this information?” Closed systems and proprietary standards are the biggest obstacle to the IoT.
  2. Second, we must use the Internet of Things to empower workers. With the IoT, it is technically possible for everyone who could do their job better because of access to real-time information to share it instantly, so management must begin with a new premise: information should be shared with the entire workforce. Limiting access must be justified.
  3. Third, we must close the loop. We must redesign our data management processes to capitalize on new information, creating continuous feedback loops.
  4. Fourth, we must rethink products’ roles. Rolls-Royce jet engines feed back a constant stream of real-time data on their operations. Real-time field data lets companies have a sustained dialogue with products and their customers, increasingly allowing them to market products as services, with benefits including new revenue streams.
  5. Fifth, we must develop new skills to listen to products and understand their signals. IBM scientists and medical experts jointly analyzed data from sick preemies’ bassinets and realized they could diagnose infections a day before there was any visible sign. It’s not enough to have vast data streams: we need to understand them.
  6. Sixth, we must democratize innovation. The wildly-popular IFTTT web site allows anyone to create new “recipes” to exploit unforeseen aspects of IoT products – and doesn’t require any tech skills to use. By sharing IoT data, we empower everyone who has access to develop new ways to capitalize on that data, speeding the IoT’s development.
  7. Seventh, and perhaps most important, we must take privacy and security seriously. What responsible parent would put an IoT baby monitor in their baby’s room after the highly-publicized incident when a hacker exploited the manufacturer’s disregard for privacy and spewed a string of obscenities at the baby? Unless everyone in the field takes privacy and security seriously, the public may lose faith in the IoT.

There you have ’em: my best analysis of how the Internet of Things will require a revolution not just in technology, but also management strategy and practices. What do you think?

Apple ResearchKit will launch medical research paradigm shift to crowd-sourcing

Amidst the hoopla about the new MacBook and much-anticipated Apple Watch, Apple snuck something into Monday’s event that blew me away (obligatory disclaimer: I work part-time at The Apple Store, but the opinions expressed here are mine).

My Heart Counts app

Four years after I proselytized about the virtues of democratizing data in my book Data Dynamite: How Liberating Data Will Transform Our World (BTW: pardon the hubris, but I still think it’s the best thing out there about the attitudinal shift needed to capitalize on sharing data), I was thrilled to learn about the new ResearchKit.

Tag line? “Now everybody can do their part to advance medical research.”

The other new announcements might improve your quality of life. This one might save it!

As Senior VP of Operations Jeff Williams said in announcing the kit, the process of medical research “…hasn’t changed in decades.” That’s not really true: as I wrote in my book, the Quantified Self movement has been sharing data for several years, as have groups such as CureTogether and PatientsLikeMe. However, what is definitely true is that no one had harnessed the incredible power of the smartphone for this common goal until now, and that’s really incredible. It’s a great example of my IoT Essential Truth of asking “who else could use this data?”

A range of factors cast a pall over traditional medical research.

Researchers have had to cast a broad net just to get 50-100 volunteers for a clinical trial (and may have to pay them, to boot, casting doubt on the results’ validity when applied to the general population). The data has often been subjective (in the example Williams mentioned, Parkinson’s patients are classified by a doctor simply on the basis of walking a few feet). Also, communication about the project has been almost exclusively one-way, from the researcher to the patient, and limited at best.

What if, instead, you just had to turn on your phone and open a simple app to participate? As the website says, “Each one [smartphone] is equipped with powerful processors and advanced sensors that can track movement, take measurements, and record information — functions that are perfect for medical studies.” Suddenly research can be worldwide, and involve millions of diverse participants, increasing the data’s amount and validity. (There’s a precedent for crowdsourced research: a lot of us have been participating in scientific crowdsourcing for almost 20 years by installing the SETI@Home software that runs in the background on our computers, analyzing data from deep space to see if ET is trying to check in!)

Polymath/medical data guru John Halamka, MD wrote me that:

“Enabling patients to donate data for clinical research will accelerate the ‘learning healthcare system’ envisioned by the Institute of Medicine.   I look forward to testing out Research Kit myself!”

The new apps developed using ResearchKit harvest information from the Health app that Apple introduced as part of iOS8. According to Apple:

“When granted permission by the user, apps can access data from the Health app such as weight, blood pressure, glucose levels and asthma inhaler use, which are measured by third-party devices and apps…. ResearchKit can also request from a user, access to the accelerometer, microphone, gyroscope and GPS sensors in iPhone to gain insight into a patient’s gait, motor impairment, fitness, speech and memory.”

Apple announced that it has already collaborated with some of the world’s most prestigious medical institutions, including Mass General, Dana-Farber, Stanford Medicine, Cornell and many others, to develop apps using ResearchKit. The first five apps target asthma, breast cancer, cardiovascular disease, diabetes and Parkinson’s disease. My favorite, because it affects the largest number of people, is My Heart Counts. It uses the iPhone’s built-in motion sensors to track participants’ activity, collecting data during a 6-minute walk test from those who are able to walk that long. Participants who also have a wearable activity device connected to the Health app are encouraged to use it as well (aside: I still don’t know why my Jawbone UP data doesn’t flow to the Health app, even though I made the link). Participants will also enter data about their heart disease risk factors and their lab test readings to get feedback on their chances of developing heart disease and their “heart age.” Imagine the treasure trove of cardiac data it will yield!

A critical reason I think ResearchKit will have a significant impact is that Apple decided to make it open source, so that anyone can tinker with the code and improve it (aside: has Apple EVER made ANYTHING open source before? Doubt it! That alone is noteworthy). Also, it’s important to note, in light of the extreme sensitivity of personal health data, that Apple guarantees it will not have access to any of the personal data.

Because of my preoccupation with “Smart Aging,” I’m really interested in whether any researchers will specifically target seniors with ResearchKit apps. When the Apple Watch comes out April 24th, I’ll be watching carefully to see whether seniors buy it, since the watch is a familiar form factor for them (I haven’t worn one since I got my first cell phone, and most young people I know have never had one), and whether they might be willing to use it to participate in these projects. I must admit I’m not terribly optimistic, given both the cost and the large number of seniors I help at The Apple Store who are befuddled by even Apple’s user-friendly technology.

Now, if you’ll excuse me, I just downloaded the My Heart Counts app, and must find out my “heart age!”

Doh!  Just after I posted this, I saw a really important post on Ars Technica pointing out that this brave new world of medical research won’t go anywhere unless the FDA approves:

“As much as Silicon Valley likes to think of itself as a force for good, disrupting this and pivoting that, it sometimes forgets that there’s a wider world out there. And when it comes to using devices in the practice of medicine, that world contains three very important letters: FDA. That’s right, the US Food and Drug Administration, which Congress has empowered to regulate the marketing and research uses of medical devices.

“Oddly, not once in any of the announcement of ResearchKit did we see mention of premarket approval, 510k submission, or even investigational device exemptions. Which is odd, because several of the uses touted in the announcement aren’t going to be possible without getting the FDA to say yes.”

I remember reading that Apple had reached out to the FDA during development of the Apple Watch, so I’m sure none of this comes as a surprise to them, and any medical researcher worth his or her salt is also aware of that factor. However, the FDA is definitely going to have a role in this issue going forward, and that’s as it should be — as I’ve said before, with any aspect of the IoT, privacy and security is Job One.

IFTTT DO apps: neat extension of my fav #IoT crowdsourcing tool!

Have I told you lately how much I love IFTTT? Of course!  As I’ve said, I think they are a phenomenal example of my IoT “Essential Truth” question: who else can use this data?

Now, they’ve come up with 3 new apps, the “DO button,” “DO camera,” and “DO Note,” that make this great tool even more versatile!

With a DO “recipe,” you simply tap on the appropriate app, and the “recipe” runs. Presto! Change-o!

As a consultant who must bill for his time, I particularly like the one that lets you “Track Your Work Hours” in Google Drive, but you’re sure to find your own favorites in categories such as play, work, home, families, and essentials. Some are just fun, and some will increase your productivity or help you manage your household more easily (hmm: not sure where “post a note to your dog’s timeline” fits in; aside to my sons: feel free to “send notes to your dad via email”). If past experience is any indication, there should be many, many more helpful “DO” recipes as soon as users are familiar with how to create them.

As I’ve said before, it’s no reflection on the talented engineers at Hue, Nest, et al., but there’s simply no way they could possibly envision all the ways their devices could be used and/or combined with others. That’s why IFTTT, by adding the crowdsourcing component and democratizing data, is so important to speeding the IoT’s deployment.

IBM picks for IoT trends to watch this year emphasize privacy & security

Last month Bill Chamberlin, the principal analyst for Emerging Tech Trends and Horizon Watch Community Leader for IBM Market Development (hmmm, he must have an oversized biz card…), published a list of 20 IoT trends to watch this year that I think provides a pretty good checklist for evaluating what promises to be an important period in which the IoT becomes more mainstream.

It’s interesting to me, especially in light of my recent focus on the topic (I’ll blog on the recent FTC report on the issue in several days), that he put privacy and security at number one on the list, commenting that “Trust and authentication become critical across all elements of the IoT, including devices, the networks, the cloud and software apps.” Amen.

Most of the rest of the list was no surprise, with standards, hardware, software, and edge analytics rounding out the top five. Even though it hasn’t gotten a lot of attention, I agree edge analytics will be crucial as the volume of sensor data increases dramatically: why pass the vast majority of the data, most of it probably redundant, along to the cloud, rather than just the deviations from the norm that actually matter?
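To make the edge-analytics idea concrete, here’s a minimal Python sketch of that kind of local filtering; the function name, window size, and threshold are all my illustrative assumptions, not anything from IBM’s list:

```python
from collections import deque

def edge_filter(readings, window=5, threshold=2.0):
    """Forward only readings that deviate from the recent baseline.

    A reading is sent upstream when it differs from the mean of the
    last `window` normal readings by more than `threshold`; everything
    else is treated as redundant and kept local at the edge.
    """
    recent = deque(maxlen=window)
    forwarded = []
    for value in readings:
        if recent and abs(value - sum(recent) / len(recent)) > threshold:
            forwarded.append(value)        # anomaly: worth sending to the cloud
        else:
            recent.append(value)           # normal reading: just update the baseline
    return forwarded

# A steady temperature stream with one spike: only the spike leaves the device.
print(edge_filter([20.0, 20.1, 19.9, 20.0, 35.0, 20.1]))
```

Note the design choice of keeping anomalies out of the rolling baseline, so one spike doesn’t distort what “normal” looks like for the readings that follow.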

Two trends dealing with sensors did catch my eye:

“9.  Sensor fusion: Combining data from different sources can improve accuracy. Data from two sensors is better than data from one. Data from lots of sensors is even better.

10.  Sensor hubs: Developers will increasingly experiment with sensor hubs for IoT devices, which will be used to offload tasks from the application processor, cutting down on power consumption and improving battery life in the devices”

Both make a lot of sense.
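The sensor-fusion point, in particular, can be shown in a few lines. A classic textbook approach (not anything specific to IBM’s list) is inverse-variance weighting, where noisier sensors count for less and the fused estimate is always more precise than any single input; the `fuse` helper and the example readings below are my own:

```python
def fuse(estimates):
    """Combine independent sensor estimates by inverse-variance weighting.

    Each estimate is a (value, variance) pair; higher-variance (noisier)
    sensors get proportionally less weight, and the fused variance is
    always smaller than that of any individual sensor.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Two temperature sensors on the same pipe: the fused reading lands
# between them, pulled toward the more precise sensor.
value, variance = fuse([(101.0, 1.0), (99.0, 4.0)])
```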

One was particularly noteworthy in light of my last post, about the Gartner survey showing most companies were ill-prepared to plan and launch IoT strategies: “14.  Chief IoT Officer: Expect more senior level execs to be put in place to build the enterprise-wide IoT strategy.” Couldn’t agree more that this is vital!

Check out the whole list: I think you’ll find it helpful in tracking this year’s major IoT developments.

Management Challenge: Lifeguards in the IoT Data Lake

In their Harvard Business Review November cover story, How Smart, Connected Products Are Transforming Competition, PTC CEO Jim Heppelmann and Professor Michael Porter make a critical strategic point about the Internet of Things that’s obscured by just focusing on IoT technology: “…What makes smart, connected products fundamentally different is not the internet, but the changing nature of the ‘things.’”

In the past, “things” were largely inscrutable. We couldn’t peer inside massive assembly line machinery or inside cars once they left the factory, forcing companies to base much of both strategy and daily operations on inferences about these things and their behavior from limited data (data which was also often gathered only after the fact).

Now that lack of information is being removed. The Internet of Things creates two unprecedented opportunities regarding data about things:

  • data will be available instantly, as it is generated by the things
  • it can also be shared instantly by everyone who needs it.

This real-time knowledge of things presents both real opportunities and significant management challenges.

Each opportunity carries with it the challenge of crafting new policies on how to manage access to the vast new amounts of data and the forms in which it can be accessed.

For example: with the Internet of Things we will be able to bring about optimal manufacturing efficiency as well as unprecedented integration of supply chains and distribution networks. Why? Because we will now be able to “see” inside assembly line machinery, and the various parts of the assembly line will be able to automatically regulate each other without human intervention (M2M) to optimize each other’s efficiency, and/or workers will be able to fine-tune their operation based on this data.
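As a rough illustration of that M2M self-regulation, here is a toy control step in Python; the stations, the rate adjustments, and the 80/20 buffer thresholds are all invented for the example, not drawn from any real factory system:

```python
def regulate(upstream_rate, downstream_buffer, capacity=100):
    """One M2M regulation step: an upstream station throttles itself
    based on the downstream station's buffer level, no human in the loop."""
    fill = downstream_buffer / capacity
    if fill > 0.8:           # downstream backing up: slow down 10%
        return upstream_rate * 0.9
    if fill < 0.2:           # downstream starving: speed up 10%
        return upstream_rate * 1.1
    return upstream_rate     # in balance: hold steady

# Run every cycle: each station reads its neighbor's sensor data and adapts.
new_rate = regulate(upstream_rate=10.0, downstream_buffer=90)
```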

Equally important, because of the second new opportunity, the exact same assembly line data can also be shared in real time with supply chain and distribution network partners. Each of them can use the data to trigger their own processes to optimize their efficiency and integration with the factory and its production schedule.

But that possibility also creates a challenge for management.

When data was hard to get, limited in scope, and largely gathered historically rather than in the moment, what data was available flowed in a linear, top-down fashion. Senior management had first access, then they passed on to individual departments only what they decided was relevant. Departments had no chance to simultaneously examine the raw data and have round-table discussions of its significance and improve decision-making. Everything was sequential. Relevant real-time data that they could use to do their jobs better almost never reached workers on the factory floor.

That all potentially changes with the IoT – but will it, or will the old tight control of data remain?

Managers must learn to ask a new question, one that runs contrary to the old top-down control of information: who else can use this data?

To answer that question they will have to consider the concept of a “data lake” created by the IoT.

“In broad terms, data lakes are marketed as enterprise wide data management platforms for analyzing disparate sources of data in its native format,” Nick Heudecker, research director at Gartner, says. “The idea is simple: instead of placing data in a purpose-built data store, you move it into a data lake in its original format. This eliminates the upfront costs of data ingestion, like transformation. Once data is placed into the lake, it’s available for analysis by everyone in the organization.”

Essentially, data collected and stored in a data lake repository remains in the state in which it was gathered and is available to anyone, versus being structured, tagged with metadata, and restricted to limited access.

That is a critical distinction and can make the data far more valuable, because the volume and variety will allow more cross-fertilization and serendipitous discovery.
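In code, the contrast with a purpose-built store can be sketched roughly like this; it’s a toy Python model under my own assumptions (`DataLake`, its methods, and the sample records are all hypothetical), not how any particular data lake product works:

```python
import time

class DataLake:
    """Toy data lake: records are kept exactly as they arrive, in their
    native format, and any consumer can scan the whole lake later."""

    def __init__(self):
        self._records = []

    def ingest(self, source, payload):
        # No schema, no upfront transformation: just the raw payload plus
        # enough of an envelope to know where and when it came from.
        self._records.append({"source": source, "ts": time.time(), "payload": payload})

    def scan(self, predicate=lambda record: True):
        # Every department runs its own analysis over the same raw data.
        return [r for r in self._records if predicate(r)]

lake = DataLake()
lake.ingest("assembly-line-3", {"vibration": 0.4})
lake.ingest("vending-07", "stock_level,12")   # native format, even raw CSV text
vending_data = lake.scan(lambda r: r["source"].startswith("vending"))
```

The point of the sketch: because nothing was stripped out or pre-aggregated at ingest time, the marketing team, the distribution team, and anyone else can each ask their own questions of the same raw records later.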

At the same time, it’s also possible to “drown” in so much data, so C-level management must create new, deft policies – to serve as lifeguards, as it were – governing data lake access so that we avoid drowning in the sheer volume of data on the one hand, and capitalize on its full value on the other:

  • Senior management must resist the temptation to analyze the data first and then pass on only what they deem of value. They too will have a crack at the analysis, but the value of real-time data is getting it when it can still be acted on in the moment, rather than just in historical analyses (BTW, that’s not to say historical analysis won’t have value going forward: it will still provide valuable perspective).
  • There will need to be limits to data access, but they must be commonsense ones. For example, production line workers won’t need access to marketing data, just real-time data from the factory floor.
  • Perhaps most important, access shouldn’t be limited based on pre-conceptions of what might be relevant to a given function or department. For example, a prototype vending machine uses Near Field Communication to learn customers’ preferences over time, then offers them special deals based on those choices. However, by thinking inclusively about data from the machine, rather than just limiting access to the marketing department, the company shared the real-time information with its distribution network, so trucks were automatically rerouted to resupply machines that were running low due to factors such as summer heat.
  • Similarly, they will have to relax arbitrary boundaries between departments to encourage mutually-beneficial collaboration. When multiple departments not only share but also get to discuss the same data set, undoubtedly synergies will emerge among them (such as the vending machine ones) that no one department could have discovered on its own.
  • They will need to challenge their analytics software suppliers to create new software and dashboards specifically designed to make such a wide range of data easily digested and actionable.
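To illustrate the vending-machine example above, here is a minimal sketch of the rerouting decision the distribution network could make from the shared telemetry; the function name, machine IDs, and stock threshold are all invented for illustration:

```python
def machines_to_restock(telemetry, threshold=5):
    """Given real-time stock telemetry shared beyond the marketing
    department, return the machines a delivery truck should be rerouted
    to, emptiest first."""
    low = [(machine_id, stock) for machine_id, stock in telemetry.items()
           if stock < threshold]
    return [machine_id for machine_id, _ in sorted(low, key=lambda pair: pair[1])]

# Summer heat drains machines 12 and 47 faster than the forecast assumed,
# so today's route is recomputed from live data instead of a fixed schedule.
route = machines_to_restock({"12": 1, "31": 9, "47": 3, "08": 6})
```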

Make no mistake about it: simply creating vast data lakes won’t automatically cure companies’ varied problems. But C-level managers who realize that giving up control over data flow lets real-time sharing of real-time data create possibilities that were impossible to visualize in the past will make data lakes safe, navigable – and profitable.