To me, the web is in an exciting transitional period: I see it progressing toward shared data. Allow me to explain with a metaphor. The traditional web can be thought of as a graph of nodes connected by edges. Each node is a site (with all of its local pages and directories), and the edges are the connections to other sites. We’ve long understood how the nodes work, but I think we are only beginning to grasp how the edges work. The traditional way to create an edge is to link directly to another node, e.g., linking to another website. The transitional element of the web is that there’s now more than direct linking: the edges of the nodes are touching each other.
Facebook is the leader in understanding how the edges work. Almost every website with a user-centric focus has a Facebook/OAuth connection. Facebook has essentially become an open data store for these websites to tap into, and the act of tapping into a data store instead of linking directly is what I mean by the edges of these nodes touching. This is why I say that Facebook understands how the edges work. So data is slowly becoming decentralized while becoming more open at the same time. What I mean by decentralized is that the data does not all sit physically in one set of clusters. How the data is stored is outside the scope of this post [to be revisited]. The way I view these edges (services) is that websites should become more interdependent. You can see this with Facebook Connect: you have a site that wants user data, Facebook has that data, so you use a service (Facebook Connect) to tap into it. You can find Facebook Connect everywhere, from simple websites to extensive business web applications.
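To make the idea of "tapping into a data store" a little more concrete, here is a minimal sketch of the first step of an OAuth-style handshake, the kind of flow Facebook Connect is built on: your site sends the user to the data holder to grant access, then trades the resulting code for their data. The app ID and callback URL below are placeholders I have made up for illustration, not real credentials:

```python
from urllib.parse import urlencode

# Placeholder credentials for illustration only -- a real app would
# register with the provider and receive its own values.
CLIENT_ID = "YOUR_APP_ID"
REDIRECT_URI = "https://example.com/callback"

def build_authorization_url(client_id, redirect_uri, scope="email"):
    """Step 1 of the flow: send the user to the data holder (here,
    Facebook) so they can grant your site access to their data."""
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "response_type": "code",  # provider redirects back with a code
    })
    return "https://www.facebook.com/dialog/oauth?" + params

url = build_authorization_url(CLIENT_ID, REDIRECT_URI)
print(url)
```

After the user approves, the provider redirects back to your site with a code that gets exchanged for an access token; the token is what actually lets the site read from the data store instead of hosting the data itself.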
So, I think that to meet the future of the web, it is necessary to integrate services that allow data cooperation across different platforms. Fortunately, web architecture is grasping this change; look at things like OpenID and JSON. JSON is really simple to read, uses fewer characters (less memory), and is easy to parse. The future of this data-centric web is not reliant on software alone, though. Something few people have been paying attention to is the hardware that makes this technology possible. Computing power is still quite limited (which is why Facebook scaled out its servers and Google writes software hacks to solve problems), and the hardware behind it all is still not ready for the new web architecture. The hardware is certainly something that somebody should look into ;)
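As a quick illustration of why JSON is so easy to work with, here is Python's standard library turning a JSON payload into native objects and back. The payload itself is a made-up example of the kind of compact response a data service might return:

```python
import json

# A compact JSON payload of the kind a data service might return.
payload = '{"id": 42, "name": "Ada", "friends": ["Grace", "Alan"]}'

# One call turns the text into native dicts, lists, and numbers.
user = json.loads(payload)
print(user["name"])          # -> Ada
print(len(user["friends"]))  # -> 2

# Going the other way is just as easy.
print(json.dumps(user))
```

Compare that to the XML plumbing of the era (schemas, namespaces, DOM traversal) and it is easy to see why lightweight data-centric services gravitated to JSON.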
With all of this talk about Facebook, I’d like to throw in something about Google. Google is already far beyond the immediate future of the web. The area they are revolutionizing is web architecture, and they are also toying with neural networks and classifiers, something I will be posting about in the future.