It isn’t often that an Internet principle comes along so important that it affects almost everyone and everything. The Live Web is one of those principles.

The Static Web — the Internet as we know it today — has no thread of knowing, no context. Until now, there has not been enough infrastructure for a computer to do the work of presenting the Internet in a context of purpose. The Live Web supplies an infrastructure and architecture for automating context on the Internet; it brings the notion of context automation to life.

The term Live Web was coined by Doc Searls and his son Allen Searls to describe a Web where timeliness and context matter as much as relevance. It blossoms from three assumptions:

•    All things are connected to the Internet.
•    All things are recorded and tagged.
•    All things can be recalled and accessed in context.

The Live Web is made up of three core principles that make generating context possible:

•    First principle: Ubiquitous programmable data access (APIs).
•    Second principle: Ubiquitous event-based endpoints.
•    Third principle: Ubiquitous event-based evaluation and execution machines.

Note that the three principles match the three assumptions one for one. The sketch below illustrates how they might fit together.
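Here is a minimal sketch, in Python, of how those three principles might work in concert. Every name in it is hypothetical: a stand-in API supplies data (first principle), an endpoint accepts events from anything that wants to raise them (second), and registered rules evaluate and execute when events arrive (third).

```python
# A minimal sketch (all names hypothetical) of the three Live Web
# principles: an API supplies data, an endpoint accepts events, and an
# evaluation machine runs rules that combine the two into context.

def location_api(person):
    """First principle: programmable data access. A stand-in for any API."""
    return {"person": person, "place": "home"}

RULES = []  # rules registered with the evaluation machine

def rule(event_type):
    """Register a function to run when an event of this type arrives."""
    def register(fn):
        RULES.append((event_type, fn))
        return fn
    return register

def event_endpoint(event):
    """Second principle: an event-based endpoint. Anything can raise events here."""
    for event_type, fn in RULES:
        if event["type"] == event_type:
            fn(event)  # third principle: evaluate and execute in context

@rule("call:received")
def quiet_the_house(event):
    context = location_api(event["to"])  # pull data to establish context
    print(f"{event['to']} is at {context['place']}; muting nearby devices")

event_endpoint({"type": "call:received", "to": "pete"})
```

The point of the sketch is the division of labor: the thing raising the event knows nothing about the rules, and the rules pull in whatever data they need to establish context.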

The Static Web is a fabulous living thing, but it has no cognitive essence. Browsing just goes from place to place, location to location, gathering information at each stop, but it has no context: nothing that brings it together to form a cognitive connection between where you are going and what you are doing, knowing, being, or learning. The Static Web is way cool, but — metaphorically — blind, deaf, and dumb. The Live Web combines a related set of principles into a structured whole that gives the Internet a path to cognition. The Live Web is a foundation for generating, automating, managing, and measuring context.

Totally Connected and Loosely Coupled

Imagining the scenario where everything is connected to the Internet is not too difficult. In a recent article, CNNMoney describes a completely connected world in which not just every device but literally everything you own will be connected to the Internet:

At the Mobile World Congress in Barcelona, companies like IBM (IBM, Fortune 500), Qualcomm (QCOM), AT&T (T, Fortune 500) and Ericsson showed off their vision of a not-too-distant future in which every item in your life, from your refrigerator to your fridge magnets, will soon connect to the Internet or communicate with other Internet-connected gizmos.

Here’s how it would work: Electric devices like washing machines, thermostats and televisions will be manufactured with chips that allow them to wirelessly connect to the Internet. Non-electric items will either come with low-energy, Bluetooth-like transmitters that can send information to an Internet-connected hub or radio frequency identification (RFID) tags that can be sensed by Internet-connected near-field communication (NFC) readers.

This is how these articles always are: “everything will have a network connection,” and then they stop. News flash: giving something a network connection isn’t sufficient to make this network of things useful. I’ll admit that the idea of a “Facebook of Things” points to a strategy: IBM, or Qualcomm, or AT&T, or someone else would love to build a big site that all our things connect to.

This is not unlike a May 2001 Scientific American article on the Semantic Web, in which Tim Berners-Lee, James Hendler, and Ora Lassila present the following scenario:

“The entertainment system was belting out the Beatles’ ‘We Can Work It Out’ when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: …”

While this sounds futuristic and somehow achievable, there are some serious questions that need to be asked. How does the phone know which devices have volume controls? How does the phone know you want the volume turned down?

That leads to even more difficult questions. Why would you program your phone to turn down the volume on your stereo? Isn’t the stereo the more natural place for that logic?

While the vision is spot on, implementing it in the manner described just isn’t going to happen.

The problem with the idea of a big Facebook of Things kind of site is the tight coupling it implies: a person would have to take charge of all the devices. You would have to “friend” each one. And remember, these are devices, so not only do you have to connect and “friend” them, but you will be doing the work of managing them. In this scenario, I would have to tell my stereo about my phone. I’m going to have to make sure I buy a stereo system that understands the “mute sound” command my phone sends. I’m going to have to tell my phone to send “mute sound” commands to the stereo, “pause movie” commands to my DVR, and “turn up lights” commands to my home lighting system. The sketch below shows how brittle this gets.
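To make that concrete, here is a sketch of the tightly coupled approach. The device classes and command names are hypothetical, chosen to mirror the scenario; the coupling problem they illustrate is the point.

```python
# A sketch of the tightly coupled approach (all device names and commands
# hypothetical). The phone must know every device, and which command each
# one understands, and must be updated whenever anything changes.

class Stereo:
    def mute_sound(self):
        print("stereo: muting")

class DVR:
    def pause_movie(self):
        print("dvr: pausing")

class Lights:
    def turn_up(self):
        print("lights: brightening")

class Phone:
    def __init__(self, stereo, dvr, lights):
        # The phone is wired to each specific device...
        self.stereo, self.dvr, self.lights = stereo, dvr, lights

    def on_call_received(self):
        # ...and to each device's private command vocabulary. Buy a stereo
        # that expects "quiet" instead of "mute_sound" and this code breaks.
        self.stereo.mute_sound()
        self.dvr.pause_movie()
        self.lights.turn_up()

Phone(Stereo(), DVR(), Lights()).on_call_received()
```

Every new device, and every device that names its commands differently, forces a change to the phone. Multiply that by every pairing of devices in a home and the management burden becomes absurd.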

This just isn’t going to happen. Ever.

The reason these visions fall short and end up sounding like nightmares instead of Disneyland is that we have a tough time breaking out of the request-response pattern of distributed devices that we’re all too familiar and comfortable with. This is how the Static Web works.

This pattern leads us to the conclusion that the only way to make this work is to create directories of things, the commands they understand, and so on. But the only way these visions will come to pass is with a new model that supports more loosely coupled modes of interaction among the thousands of things that are to be connected.

Consider the preceding scenario from Sir Tim, modified slightly:

“The entertainment system was belting out the Beatles’ ‘We Can Work It Out’ when the phone rang. When Pete answered, his phone broadcast a message to all local devices indicating it had received a call. His stereo responded by turning down the volume. His DVR responded by pausing the program he was watching. His sister, Lucy, …”

In the second scenario, the phone doesn’t have to know anything about other local devices. The phone need only indicate that it has received a call. Each device can interpret that message however it sees fit or ignore it altogether. This significantly reduces the complexity of the overall system because individual devices are loosely coupled. The phone software is much simpler and the infrastructure to pass messages between devices is much less complex than an infrastructure that supports semantic discovery of capabilities and commands.
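Here is the same scenario as a loosely coupled sketch (again, the event name and device handlers are hypothetical). The phone’s entire job is to announce what happened; each device subscribes and decides for itself what, if anything, to do.

```python
# The same scenario, loosely coupled (a sketch; names hypothetical). The
# phone broadcasts one event; each device interprets it or ignores it.
# The phone knows nothing about stereos, DVRs, or toasters.

subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def broadcast(event):
    for handler in subscribers:
        handler(event)  # each device interprets the event as it sees fit

def stereo(event):
    if event == "call:received":
        print("stereo: turning volume down")

def dvr(event):
    if event == "call:received":
        print("dvr: pausing program")

def toaster(event):
    pass  # free to ignore events that don't concern it

for device in (stereo, dvr, toaster):
    subscribe(device)

# The phone's entire job: announce what happened.
broadcast("call:received")
```

Adding a new device means writing one handler and subscribing it; nothing else in the system changes.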

Events, the messages about things that have happened, are the key to this simple, loosely coupled scenario. If we can build an open, ubiquitous event-based protocol similar to the open, ubiquitous request protocol we have in HTTP, the vision of a network of things can come to pass in a way that doesn’t require constant tweaking of connections and doesn’t give any one silo (company, government, or organization) control of it.
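One plausible shape for such a protocol is to carry events over HTTP itself, just as the Static Web carries requests over it. The endpoint URL, parameter names, and encoding in this sketch are assumptions for illustration, not an established standard:

```python
# A sketch of one plausible wire format for events, piggybacking on HTTP.
# The endpoint, parameter names (_domain, _type), and encoding are
# hypothetical, chosen only to illustrate the idea.

from urllib.parse import urlencode
from urllib.request import Request

def make_event_request(endpoint, domain, event_type, **attrs):
    """Encode an event (domain, type, attributes) as an HTTP POST."""
    body = urlencode({"_domain": domain, "_type": event_type, **attrs})
    return Request(endpoint, data=body.encode(), method="POST",
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = make_event_request("http://example.com/events", "phone", "call_received",
                         caller="Lucy")
print(req.full_url, req.data)  # sending it is one urlopen(req) away
```

Because the event carries only a domain, a type, and attributes, any endpoint that speaks the protocol can receive it; what the receiver does with the event is, as in the scenario above, entirely its own business.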

We’ve done this before with the Web. It’s time to do it again with the network of things.

We don’t need a Facebook of Things. We need an Internet of Things. We need the Live Web.