Could EVERYTHING be “smart”? It may be happening sooner than we thought, and with implications that are hard to fathom today.
That’s the potential of new technology pioneered by Shyam Gollakota, an assistant professor at the University of Washington. For the first time, it would let battery-free, cordless devices harvest signals from Wi-Fi, radio, or TV broadcasts to communicate and power themselves.
For a long time, the most “out there” idea about IoT sensors has been Prof. Kris Pister’s “smart dust” concept, which aimed to pack a complete sensor/communication system into a package only one cubic millimeter in size. Pister argued that such devices would be so small and cheap that they could be installed — or perhaps even scattered — almost everywhere. The potential benefits are varied, and many would have been inconceivable in the past. According to Pister, possible applications could include:
- “Defense-related sensor networks
- battlefield surveillance, treaty monitoring, transportation monitoring, scud hunting, …
- Virtual keyboard
- Glue a dust mote on each of your fingernails. Accelerometers will sense the orientation and motion of each of your fingertips, and talk to the computer in your watch. QWERTY is the first step to proving the concept, but you can imagine much more useful and creative ways to interface to your computer if it knows where your fingers are: sculpt 3D shapes in virtual clay, play the piano, gesture in sign language and have the computer translate, …
- Combined with a MEMS augmented-reality heads-up display, your entire computer I/O would be invisible to the people around you. Couple that with wireless access and you need never be bored in a meeting again! Surf the web while the boss rambles on and on.
- Inventory Control
- The carton talks to the box, the box talks to the pallet, the pallet talks to the truck, and the truck talks to the warehouse, and the truck and the warehouse talk to the internet. Know where your products are and what shape they’re in any time, anywhere. Sort of like FedEx tracking on steroids for all products in your production stream from raw materials to delivered goods.
- Product quality monitoring
- temperature, humidity monitoring of meat, produce, dairy products
- Mom, don’t buy those Frosted Sugar Bombs, they sat in 80% humidity for two days, they won’t be crunchy!
- impact, vibration, temp monitoring of consumer electronics
- failure analysis and diagnostic information, e.g. monitoring vibration of bearings for frequency signatures indicating imminent failure (back up that hard drive now!)
- Smart office spaces
- The Center for the Built Environment has fabulous plans for the office of the future in which environmental conditions are tailored to the desires of every individual. Maybe soon we’ll all be wearing temperature, humidity, and environmental comfort sensors sewn into our clothes, continuously talking to our workspaces which will deliver conditions tailored to our needs. No more fighting with your office mates over the thermostat.
- Interfaces for the Disabled (courtesy of Bryndis Tobin)
- Bryndis sent me email with the following idea: put motes “on a quadriplegic’s face, to monitor blinking & facial twitches – and send them as commands to a wheelchair/computer/other device.” This could be generalized to a whole family of interfaces for the disabled. Thanks Bryndis!”
Now imagine removing a critical component from such a tiny, ubiquitous device: the battery. Without one, the device could be even smaller and cheaper, thanks to simpler radio hardware circuitry.
“The goal is having billions of disposable devices start communicating,” Gollakota said (my emphasis).
You may remember that I’ve written before about my metaphor of a pre-IoT era of “Collective Blindness”: our past universal inability to peer (literally or figuratively) inside things, which forced us to create all sorts of work-arounds to cope with that lack of real-time data. Imagine how precise our knowledge about just about everything will be if Gollakota’s technology becomes commonplace.
As Technology Review reported, the critical challenge is making it possible for a device lacking a traditional power source to communicate: “Transferring power wirelessly is not a new trick. But getting a device without a conventional power source to communicate is harder, because generating radio signals is very power-intensive and the airwaves harvested from radio, TV, and other telecommunication technologies hold little energy.”
The principle making the innovation possible is “backscattering”: instead of generating its own transmission, a device selectively reflects ambient radio waves, and those modulated reflections form a new signal carrying the device’s data.
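The idea can be illustrated with a toy simulation (this is my own sketch, not Gollakota’s actual system): a passive tag toggles its antenna between “reflect” and “absorb” states to encode bits onto an ambient carrier, and a receiver recovers the bits from changes in signal strength. All the numbers below (sample counts, amplitudes, noise level) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
samples_per_bit = 100
bits = [1, 0, 1, 1, 0, 0, 1, 0]            # data the tag wants to send

t = np.arange(len(bits) * samples_per_bit)
carrier = np.cos(2 * np.pi * 0.05 * t)      # ambient signal (e.g. a TV tower)

# Tag: reflect the carrier strongly for a '1' bit, weakly for a '0' bit.
# The tag generates no signal of its own -- it only alters the reflection.
reflect = np.repeat(bits, samples_per_bit)
received = carrier * (0.2 + 0.8 * reflect)              # modulated reflection
received += rng.normal(0, 0.05, received.size)          # channel noise

# Receiver: compare each bit period's average envelope against a threshold.
energy = np.abs(received).reshape(len(bits), samples_per_bit).mean(axis=1)
decoded = (energy > energy.mean()).astype(int).tolist()
print(decoded)
```

The key point the sketch captures is why this is so cheap in energy terms: the tag’s only “transmitter” is a switch that changes how it reflects a signal someone else already paid to generate.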
The early results are encouraging. Gollakota has made a contact lens that can connect with a smartphone. Think I’ll pass on that one, but other devices he and his team have created include brain implants and “a flexible skin patch that can sense temperature and respiration, a design that could be used to monitor hospital patients.” Marketers will love this one: a concert poster broadcasting a bit of the featured band’s music over FM radio!
Jeeva Wireless, Gollakota’s commercial spinoff, is commercializing a variant of the technology, “passive Wi-Fi.” Devices using it can send data up to 100 feet and connect through walls.
Tiny passive devices using backscatter could be manufactured for as little as a dollar. “In tomorrow’s smart home, security cameras, temperature sensors, and smoke alarms should never need to have their batteries changed.”
Gollakota sums up the potential impact: “We can get communication for free” (my emphasis).
That’s incredible. But in light of the continuing series of major DDoS attacks made possible by weak or non-existent IoT security measures, I must remind everyone that speed, power, and ubiquity aren’t everything: we also need IoT security. I hope the low cost and the ability to function without a dedicated energy source won’t obscure that need.
BTW: an MIT profile of Gollakota mentions another of his related inventions, which I think would mesh beautifully with my SmartAging vision to help seniors age in place in better health.
It’s called WiSee, which uses wireless signals such as Wi-Fi to “enable whole-home sensing and recognition of human gestures. Since wireless signals do not require line-of-sight and can traverse through walls, WiSee can enable whole-home gesture recognition using few wireless sources (e.g., a Wi-Fi router and a few mobile devices in the living room).”
I love the concept for seniors, because (like Echo, which I’m finally getting!!) it doesn’t require technical expertise, which many seniors lack and/or find intimidating, to launch and direct automated devices. In this case, activation comes through sensing and recognition of human gestures. According to Gollakota, “Gestures enable a whole new set of interaction techniques for always-available computing embedded in the environment.” As an example, he suggests that a hand-swiping motion in the air could let a user control the radio volume while showering, or change the song playing on the stereo in the living room while cooking in the kitchen.
He goes on to explain:
“…that the approaches offered today to enable gesture recognition – by either installing cameras throughout a home/office or outfitting the human body with sensing devices – are in most cases either too expensive or unfeasible. So he and his group members are skirting these issues by taking advantage of the slight changes in ambient wireless signals that are created by motion. Since wireless signals do not require line-of-sight and can traverse through walls, he and his group have achieved the first gesture recognition system that works in those situations. ‘We showed that this approach can extract accurate information about a rich set of gestures from multiple concurrent users.’”
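The core signal-processing idea behind this kind of gesture sensing can be sketched in a few lines (again, my own toy illustration, not the actual WiSee pipeline): a moving hand Doppler-shifts the portion of the wireless signal it reflects, and the sign of that frequency shift distinguishes motion toward the receiver from motion away. The sample rate, carrier frequency, and reflection strength below are hypothetical values chosen for clarity.

```python
import numpy as np

fs = 1000          # sample rate in Hz (hypothetical)
f_carrier = 100.0  # carrier after downconversion, in Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)

def received_signal(doppler_hz):
    """Strong static path plus a weaker reflection off a moving hand."""
    static = np.exp(2j * np.pi * f_carrier * t)
    moving = 0.3 * np.exp(2j * np.pi * (f_carrier + doppler_hz) * t)
    return static + moving

def classify(signal):
    """Locate the secondary spectral peak and report the Doppler sign."""
    spectrum = np.abs(np.fft.fft(signal))
    freqs = np.fft.fftfreq(signal.size, 1 / fs)
    # Suppress the dominant static path, then find the strongest remainder.
    spectrum[np.argmax(spectrum)] = 0
    shift = freqs[np.argmax(spectrum)] - f_carrier
    return "toward" if shift > 0 else "away"

print(classify(received_signal(+20)))  # hand moving toward the receiver
print(classify(received_signal(-20)))  # hand moving away
```

This is why no camera or body-worn sensor is needed: the gesture information rides on perturbations of radio signals that are already filling the room.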
Combine that with speaking to Alexa, and even the most frail seniors could probably control most of the functions in a smart home.
Incredible work, professor!