The Internet of Things presumes ubiquitous sensate environments. Without them, the cognitive engines of this everywhere-enabled world are deaf, dumb, and blind, and cannot respond relevantly to the real-world events they aim to augment. Advances over the past decade have been rapid, as sensors tend to ride Moore’s Law. Accordingly, sensors of all sorts are finding their way into nearly everything, suggesting an approaching phase transition once they are properly networked, much as our interaction with computers fundamentally changed when web browsers appeared. This shift will create a seamless electronic nervous system spanning the planet, and one of the main challenges now facing the computing community is how to merge these rapidly evolving, “omniscient” electronic sensoria with human perception. This article tours aspects of this coming revolution, guided by several recent and ongoing projects in the author’s research group at the MIT Media Lab that approach this theme from different perspectives. The examples range from smart buildings to sports, and exploit technical trends spanning wearable computing to wireless sensor networks.