IoT-based “Regulation 3.0” Might Have Avoided Merrimack Valley Tragedies

Pardon me: this is a very personal post.

For about an hour Thursday night we didn’t know whether my son’s home in Lawrence was one of those destroyed by the gas line explosions (he and his dear family were never at risk — they’re living in Bolivia for two years — but the house was right at Ground Zero). Fortunately, it is intact.

However, the scare took me back to an op-ed I wrote eight years ago in Federal Computer Week, after the BP catastrophe in the Gulf, when I was working in disaster communications. I proposed what was in fact an IoT-based way to avoid similar disasters in the future: what I called “Regulation 3.0.” It would be a win-win for critical infrastructure companies (85% of the critical infrastructure in the US is in private hands) and the public interest, achieved by installing IoT monitoring sensors and M2M control devices that would act automatically on the sensor data rather than requiring human intervention:

  • in daily operations, it would let the companies dramatically increase their efficiency by providing real-time data on the contents and condition of pipelines, wires, etc., so operations could be optimized.
  • in a disaster — as we found out in Lawrence and Andover, where Columbia Gas evidently blew it on response management — government agencies (and conceivably even the general public) would have real-time data to speed the response (that follows from one of my IoT Essential Truths: “share data, don’t hoard it”).

In the past we could never have that real-time data sharing, so we were totally dependent on the responsible companies for data — which even they probably didn’t have, because of the inability to monitor flow, etc.

Today, by contrast, we need to get beyond the old prescriptive regulations, which told companies what equipment to install (holding back progress when newer, more efficient controls were created), and switch to performance-based regulation: companies would instead be held to standards (i.e., in the not-too-distant future, when the IoT is commonplace, collecting and sharing real-time data on their facilities), leaving them free to adopt even better technology as it emerges.

Regulation 3.0 should become the norm, because it would be better all around:

  • helping the companies improve their daily operations.
  • cutting the cost of compliance (because data could be crunched and reported instantly, without requiring humans to compile and submit it).
  • reducing the chance of incidents ever happening. (When I wrote the op-ed I’d never heard of IoT-based “predictive maintenance,” which lets companies spot maintenance issues at the earliest point, so repairs can be made more quickly and cheaply than responding once they’re full-blown problems.)
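Predictive maintenance of this kind is easy to sketch. Here is a minimal, purely illustrative Python example — the readings, window size, and sigma threshold are my assumptions, not any utility’s actual system — that flags a sensor reading drifting outside its recent tolerance band, the earliest-point detection described above:

```python
from statistics import mean, stdev

def flag_early_anomalies(readings, window=10, sigma=3.0):
    """Flag readings that fall outside a rolling tolerance band.

    Hypothetical illustration: `readings` could be pipeline pressure
    samples; a flagged index is a candidate for early maintenance.
    """
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sd = mean(history), stdev(history)
        if sd and abs(readings[i] - mu) > sigma * sd:
            flags.append(i)
    return flags

# A stable series with one sudden excursion: only the spike is flagged.
samples = [100.0 + 0.1 * (i % 3) for i in range(30)]
samples[25] = 115.0
print(flag_early_anomalies(samples))  # only index 25 (the spike) is flagged
```

A real deployment would of course use more robust statistics, but the point stands: the same sensors that optimize daily operations also surface incipient failures.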

I had a chance to discuss the concept yesterday with Rep. Joe Kennedy, who showed real knowledge of the IoT and seemed open to the idea.

Eight years after I first broached the concept, PTC reports that the pipeline industry is now implementing IoT-based operations, with benefits including:

  • Situational awareness.
  • Situational intelligence.
  • Predictive analytics.

Clearly, this is in the economic interest of the companies that control the infrastructure, and in the public interest. The time has come for IoT-based “Regulation 3.0.”


McKinsey IoT Report Nails It: Interoperability is Key!

I’ll be posting on various aspects of McKinsey’s new “The Internet of Things: Mapping the Value Beyond the Hype” report for quite some time.

First of all, it’s big: 148 pages in the online edition, making it the longest IoT analysis I’ve seen! Second, it’s exhaustive and insightful. Third, as with several other IoT landmarks, such as Google’s purchase of Nest and GE’s divestiture of its non-industrial internet division, the fact that a leading consulting firm would put such an emphasis on the IoT has tremendous symbolic importance.

McKinsey report — The IoT: Mapping the Value Beyond the Hype

My favorite finding:

“Interoperability is critical to maximizing the value of the Internet of Things. On average, 40 percent of the total value that can be unlocked requires different IoT systems to work together. Without these benefits, the maximum value of the applications we size would be only about $7 trillion per year in 2025, rather than $11.1 trillion.” (my emphasis)
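A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Sanity check of the McKinsey figures (trillions of USD per year, 2025).
total_value = 11.1       # maximum value with full interoperability
without_interop = 7.0    # value if IoT systems cannot work together
interop_share = (total_value - without_interop) / total_value
print(f"Value dependent on interoperability: {interop_share:.0%}")
```

That works out to roughly 37%, consistent with the report’s “on average, 40 percent.”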

This goes along with my most basic IoT Essential Truth, “share data.”  I’ve been preaching this mantra since my 2011 book, Data Dynamite (which, if I may toot my own horn, I believe remains the only book to focus on the sweeping benefits of a paradigm shift from hoarding data to sharing it).

I was excited to see that the specific example they zeroed in on was offshore oil rigs, which I focused on in my op-ed on “real-time regulations,” because sharing the data from the rig’s sensors could both boost operating efficiency and reduce the chance of catastrophic failure. The paper points out that there can be 30,000 sensors on a rig, but most of them function in isolation, monitoring a single machine or system:

“Interoperability would significantly improve performance by combining sensor data from different machines and systems to provide decision makers with an integrated view of performance across an entire factory or oil rig. Our research shows that more than half of the potential issues that can be identified by predictive analysis in such environments require data from multiple IoT systems. Oil and gas experts interviewed for this research estimate that interoperability could improve the effectiveness of equipment maintenance in their industry by 100 to 200 percent.”

Yet the researchers found that only about 1% of the rig data was being used, because it rarely was shared off the rig with others in the company and its ecosystem!

The section on interoperability goes on to discuss the benefits — and challenges — of linking sensor systems in examples such as urban traffic regulation, which could link not only data from stationary sensors and cameras, but also thousands of real-time feeds from individual cars and trucks, parking meters — and even non-traffic data that could have a huge impact on performance, such as weather forecasts.

While more work needs to be done on the technical side to increase the ease of interoperability, whether through the growing number of interface standards or through middleware, it seems to me that a shift in management mindset is as critical as sensor and analysis technology for taking advantage of this huge increase in data:

“A critical challenge is to use the flood of big data generated by IoT devices for prediction and optimization. Where IoT data are being used, they are often used only for anomaly detection or real-time control, rather than for optimization or prediction, which we know from our study of big data is where much additional value can be derived. For example, in manufacturing, an increasing number of machines are ‘wired,’ but this instrumentation is used primarily to control the tools or to send alarms when it detects something out of tolerance. The data from these tools are often not analyzed (or even collected in a place where they could be analyzed), even though the data could be used to optimize processes and head off disruptions.”
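The gap McKinsey describes — instrumentation used for alarms but not for prediction — can be sketched in a few lines. This is a hypothetical illustration (the sensor values, limits, and simple linear-trend model are my assumptions), showing how the same data stream that feeds an out-of-tolerance alarm could also feed a prediction of when the limit will be crossed:

```python
def alarm(reading, low=20.0, high=80.0):
    """Today's typical use: fire only when a value leaves tolerance."""
    return reading < low or reading > high

def predict_crossing(history, high=80.0):
    """The underused step: fit a linear trend (least squares) and estimate
    how many steps remain until the high limit is crossed, so maintenance
    can be scheduled before any alarm ever fires."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    den = sum((x - x_mean) ** 2 for x in xs)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    slope = num / den
    if slope <= 0:
        return None  # not trending toward the limit
    return (high - history[-1]) / slope

temps = [60 + 0.5 * i for i in range(20)]  # slow upward drift: 60.0 … 69.5
print(alarm(temps[-1]))         # False: no alarm yet
print(predict_crossing(temps))  # but the crossing is already predictable
```

The data needed for the prediction is exactly the data the alarm already sees; what changes is what management chooses to do with it.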

I urge you to download the whole report. I’ll blog more about it in coming weeks.


FTC report provides good checklist to design in IoT security and privacy

FTC report on IoT


FTC Chair Edith Ramirez has been pretty clear that the FTC plans to look closely at the IoT and takes IoT security and privacy seriously — most famously in its action against IoT marketer TRENDnet over the non-existent security of its nanny cam.

Companies that want to avoid such actions — and avoid undermining fragile public trust in their products and the IoT as a whole — would do well to clip and refer to this checklist I’ve prepared based on the recent FTC report, Privacy and Security in a Connected World. The report was compiled from a workshop the FTC held in 2013 and highlights best practices shared there.

  1. Most important, “companies should build security into their devices at the outset, rather than as an afterthought.” I’ve referred before to the bright young things at the Wearables + Things conference who used their startup status as an excuse for deferring security and privacy until a later date. WRONG: both must be a priority from Day One.

  2. Conduct a privacy or security risk assessment during design phase.

  3. Minimize the data you collect and retain.  This is a tough one, because there’s always the chance that some retained data might be mashed up with other data in the future, yielding a dazzling insight that could help company and customer alike. BUT: the more data just floating out there in a “data lake,” the more chance it will be misused.

  4. Test your security measures before launching your products. … then test them again…

  5. “…train all employees about good security, and ensure that security issues are addressed at the appropriate level of responsibility within the organization.” This one is sooo important and so often overlooked: how many times have we found that someone far down the corporate ladder was at fault in a data breach because s/he wasn’t adequately trained and/or empowered?  Privacy and security are everyone’s job.

  6. “…retain service providers that are capable of maintaining reasonable security and provide reasonable oversight for these service providers.”

  7. “…when companies identify significant risks within their systems, they should implement a defense-in-depth approach, in which they consider implementing security measures at several levels.”

  8. “… consider implementing reasonable access control measures to limit the ability of an unauthorized person to access a consumer’s device, data, or even the consumer’s network.” Don’t forget: with the Target data breach, the bad guys got access to the corporate data through a local HVAC dealer. Everything’s linked — for better or worse!

  9. “…companies should continue to monitor products throughout the life cycle and, to the extent feasible, patch known vulnerabilities.”  Privacy and security are moving targets, and require constant vigilance.

  10. Avoid enabling unauthorized access and misuse of personal information.

  11. Don’t facilitate attacks on other systems. The very strength of the IoT in creating linkages and synergies between various data sources can also allow backdoor attacks if one source has poor security.

  12. Don’t create risks to personal safety. If you doubt that’s an issue, look at Ed Markey’s recent report on connected car safety.

  13. Avoid creating a situation where companies might use this data to make credit, insurance, and employment decisions.  That’s the downside of cool tools like Progressive’s “Snapshot,” which can save safe drivers money on premiums: the same data on your actual driving behavior might someday become compulsory, and might be used to deny you coverage or increase your premium.

  14. Realize that FTC Fair Information Practice Principles will be extended to the IoT. These “FIPPs,” including “notice, choice, access, accuracy, data minimization, security, and accountability,” have been around for a long time, so it’s understandable that the FTC will apply them to the IoT.  The most important ones?  Security, data minimization, notice, and choice.

Not all of these issues will apply to all companies, but it’s better to keep all of them in mind, because your situation may change. I hope you’ll share these guidelines with your entire workforce: they’re all part of the solution — or the problem.


Why the Internet of Things Will Bring Fundamental Change: “What Can You Do Now That You Couldn’t Do Before?”

The great Eric Bonabeau has chiseled it into my consciousness that the test of whether a new technology really brings about fundamental change is to always ask “What can you do now that you couldn’t do before?”

Tesla Roadster

That’s certainly the case for Tesla’s alternative last winter to a costly, time-consuming, and reputation-staining recall (dunno: I must have been hiding under a rock at the time not to have heard about it).

In reporting the company’s action, the subtitle of Wired‘s story asked: “best example yet of the Internet of Things?”

I’d have to agree it was.

Coming at the same time as the godawful Chevy recall that’s still playing out and still dragging down the company, Tesla’s prompt and decisive response solved another potentially dangerous situation:


“‘Not to worry,’ said Tesla, and completed the fix for its 29,222 vehicle owners via software update. What’s more, this wasn’t the first time Tesla has used such updates to enhance the performance of its cars. Last year it changed the suspension settings to give the car more clearance at high speeds, due to issues that had surfaced in certain collisions.”

Think of it: because Tesla has basically converted cars into computers on four wheels — building sensors and two-way communications into key parts — it has also fundamentally changed its relationship with customers. It can remain in constant contact with them, rather than losing touch between the time the customer drives off the lot and when the customer remembers (hopefully..) to schedule a service appointment. And many modifications that used to require costly, hard-to-install replacement parts are now done with a few lines of code!

Not only can Tesla streamline recalls, it can even enhance the customer experience after the car is bought. I remember reading somewhere that car companies may start offering customers a choice on engine performance: various software configurations to maximize performance or to maximize fuel savings — with continued tweaks to those settings in the future, just as computers get updated operating systems. That’s much like the transformation of many other IoT-enhanced products into services, where the customer may willingly pay more over the long term for not just a hunk of metal, but also a continuing data stream that will help optimize efficiency and reduce operating costs.

Wired went on to talk about how the engineering/management paradigm shift represented a real change:

  • “In nearly all instances, the main job of the IoT — the reason it ever came to be — is to facilitate removal of non-value add activity from the course of daily life, whether at work or in private. In the case of Tesla, this role is clear. Rather than having the tiresome task of an unplanned trip to the dealer put upon them, Tesla owners can go about their day while the car ‘fixes itself.’
  • Sustainable value – The real challenge for the ‘consumer-facing’ Internet of Things is that applications will always be fighting for a tightly squeezed share of disposable consumer income. The value proposition must provide tangible worth over time. For Tesla, the prospect of getting one’s vehicle fixed without ‘taking it to the shop’ is instantly meaningful for the would-be buyer – and the differentiator only becomes stronger over time as proud new Tesla owners laugh while their friends must continue heading to the dealer to iron out typical bug fixes for a new car. In other words, there is immediate monetary value and technology expands brand differentiation. As for Tesla dealers, they must be delighted to avoid having to make such needling repairs to irritated customers – they can merely enjoy the positive PR halo effect that a paradigm changing event like this creates for the brand – and therefore their businesses.
  • Setting new precedents – Two factors really helped push Tesla’s capability into the news cycle: involvement by NHTSA and the word ‘recall.’ At its issuance, CEO Elon Musk argued that the fix should not technically be a ‘recall’ because the necessary changes did not require customers find time to have the work performed. And, despite Musk’s feather-ruffling remarks over word choice, the stage appears to have been set for bifurcation in the future by the governing bodies. Former NHTSA administrator David Strickland admitted that Musk was ‘partially right’ and that the event could be ‘precedent-setting’ for regulators.”

That’s why I’m convinced that Internet of Things technologies such as sensors and tiny radios may be the easy part of the revolution: the hard part is going to be fundamental management changes that require new thinking and new questions.

What can you do now that you couldn’t do before?

BTW: Musk’s argument that its software upgrade shouldn’t be considered a traditional “recall” meshes nicely with my call for IoT-based “real-time regulation.”  As I wrote, it’s a win-win, because the same data that could be used for enforcement can also be used to enhance the product and its performance:

  • by installing the sensors and monitoring them all the time (typically, only the exceptions to the norm would be reported, to reduce data processing and required attention to the data) the company would be able to optimize production and distribution all the time (see my piece on “precision manufacturing”).
  • repair costs would be lower: “predictive maintenance” based on real-time information on equipment’s status is cheaper than emergency repairs.
  • the public interest would be protected, because many situations that have resulted in disasters in the past would instead be avoided, or at least minimized.
  • the cost of regulation would be reduced while its effectiveness would be increased: at present, we must rely on insufficient numbers of inspectors who make infrequent visits: catching a violation is largely a matter of luck. Instead, the inspectors could monitor the real-time data and intervene instantly — hopefully in time to avoid an incident.”

Failure to inspect oil rigs another argument for “real-time regulation”

The news that the Bureau of Land Management has failed to inspect thousands of fracking and other oil wells considered at high risk for contaminating water is Exhibit A for my argument that we need Internet of Things-based “real-time regulation” for a variety of risky regulated businesses.

According to a new GAO report obtained by AP:

“Investigators said weak control by the Interior Department’s Bureau of Land Management resulted from policies based on outdated science and from incomplete monitoring data….

“The audit also said the BLM did not coordinate effectively with state regulators in New Mexico, North Dakota, Oklahoma and Utah.”

Let’s face it: a regulatory scheme based on after-the-fact self-reporting by the companies themselves backed up by infrequent site visits by an inadequate number of inspectors will never adequately protect the public and the environment.  In this case, the GAO said that “…. the BLM had failed to conduct inspections on more than 2,100 of the 3,702 wells that it had specified as ‘high priority’ and drilled from 2009 through 2012. The agency considers a well ‘high priority’ based on a greater need to protect against possible water contamination and other environmental safety issues.”

By contrast, requiring that oil rigs and a range of other technology-based products, from jet engines to oil pipelines, have sensors attached (or, over time, built in) that would send real-time data to the companies should allow them to spot incipient problems at their earliest stages, in time to schedule early maintenance that would both reduce maintenance costs and reduce or even eliminate catastrophic failures. As I said before, this should be a win-win solution.

If problems still persisted after the companies had access to this real-time data, then more draconian steps could be required, such as also giving state and federal regulators real-time access to the same data — something that would be easy to do with IoT-based systems. There would have to be tight restrictions on access to the data that would protect proprietary corporate information, but companies that are chronic offenders would forfeit some of those protections to protect the public interest.



It’s Time for IoT-enabled “Real-Time” Regulation

Pardon me, but I still take the increasingly unfashionable view that we need strong, activist government to protect the weak and foster the public interest.

That’s why I’m really passionate about the concept (for what it’s worth, I believe I’m the first to propose this approach) that we need Internet of Things-enabled “real-time regulation”: regulation that wouldn’t rely on scaring companies into good behavior through the indirect means of threatening big fines for violations, but could actually minimize, or even avoid, incidents ever happening — while simultaneously improving companies’ operating efficiency and reducing costly repairs. I wrote about the concept in today’s O’Reilly SOLID blog — and I’m going to crusade to make the concept a reality!

I first wrote about “real-time” regulation before I was really involved in the IoT: right after the BP Gulf blow-out, when I suggested that:

“The .. approach would allow officials to monitor in real time every part of an oil rig’s safety system. Such surveillance could have revealed the faulty battery in the BP rig’s blowout preventer and other problems that contributed to the rig’s failure. A procedure could have been in place to allow regulators to automatically shut down the rig when it failed the pressure test rather than leaving that decision to BP.”

Since then I’ve modified my position on regulators necessarily having first-hand access to the real-time data. Any company with half a brain, as soon as it saw data suggesting a problem might be developing (as opposed to having already happened, which was too often the case in the past), would take the initiative to shut down the operation ASAP and make the repair, saving itself the higher cost of dealing with a catastrophic failure.

As far as I’m concerned, “real-time regulation” is a win-win:

  • by installing the sensors and monitoring them all the time (typically, only the exceptions to the norm would be reported, to reduce data processing and required attention to the data) the company would be able to optimize production and distribution all the time (see my piece on “precision manufacturing“).
  • repair costs would be lower: “predictive maintenance” based on real-time information on equipment’s status is cheaper than emergency repairs.
  • the public interest would be protected, because many situations that have resulted in disasters in the past would instead be avoided, or at least minimized.
  • the cost of regulation would be reduced while its effectiveness would be increased: at present, we must rely on insufficient numbers of inspectors who make infrequent visits: catching a violation is largely a matter of luck. Instead, the inspectors could monitor the real-time data and intervene instantly — hopefully in time to avoid an incident.
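The report-by-exception idea in the first bullet can be sketched simply. In this hypothetical Python example (the field names and tolerance limits are illustrative, not any operator’s actual system), a local filter forwards only out-of-band readings, so inspectors watch a trickle of exceptions rather than the full sensor stream:

```python
def exceptions_only(stream, low, high):
    """Yield only readings outside the [low, high] tolerance band.

    `stream` is an iterable of (timestamp, value) pairs; everything
    within tolerance is dropped locally, never transmitted.
    """
    for timestamp, value in stream:
        if not (low <= value <= high):
            yield timestamp, value

# Ten pressure samples with a single excursion at t == 7.
pressure_stream = [(t, 50.0 + (8.0 if t == 7 else 0.1 * (t % 2)))
                   for t in range(10)]
reported = list(exceptions_only(pressure_stream, low=45.0, high=55.0))
print(reported)  # only the t == 7 excursion is forwarded: [(7, 58.0)]
```

Ten readings in, one report out — which is the whole point: the data volume regulators must attend to shrinks to just the events that matter.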

Even though the IoT is not fully realized (Cisco says only 4% of “things” are linked at present), that’s not the case with the kind of high-stakes operations we’re most concerned with.  GE now builds about 60 sensors into every jet engine, realizing new revenues by providing the real-time data to customers, while being able to improve design and maintenance by knowing exactly what’s happening to the engines right now.  Union Pacific has cut dangerous and costly derailments due to bearing failures by 75% by placing sensors along the trackbed.

As I said in the SOLID post, it’s time that government begin exploring the “real-time regulation” alternative.  I’m contacting the tech-savvy Mass. delegation, esp. Senators Markey and Warren, and will report back on my progress toward making it a reality!
