Tag: social graph

Facebook, Google & Plaxo Join the DataPortability WorkGroup


This rocks: Duncan Riley just posted a scoop on TechCrunch announcing that Facebook, Google and Plaxo are joining the DataPortability Workgroup.

Duncan had been hinting at something big on Twitter, and what can I say, he was right: “I don’t joke when I say that the post I’ve written changes the entire game.”

DataPortability, and particularly being able to move your social network around with you, is said to be one of the hottest topics of 2008, and for good reason. Facebook and Google are in, so this is going to happen, fast.

More thoughts on this later, I’ll have to go dig up more stuff first.

Update: In the comments to Duncan’s TC article, Joey3fingers asks what this will mean exactly: “Are you just telling us that Google now has all of our contacts now?”

No, it doesn’t. Quite the contrary: what Data Portability (DP) means is that we (the users) get more control over our data. (See Robert Scoble’s explanation video here.) DP allows us to take our data with us and use it somewhere else.

Why is this so important?

Well, first of all, our data (more exactly: our social network information, aka the social graph, i.e. who we know) is what essentially makes Facebook & co work; it’s the very core of their business. But so far, we couldn’t take our data anywhere else: it was locked in, which is why those social networks are usually referred to as walled gardens. Facebook’s terms of service basically suck in our data and won’t allow us to take it elsewhere.

Second, so far we had to re-enter all of our social network data over and over again whenever we joined a new service. Every single time, we had to re-enter our contacts and friends, sort them into groups and whatnot. The term Social Network Fatigue was coined for a reason.
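To make that concrete, here is a minimal, purely illustrative sketch of what portability could look like in code: the contact list (the social graph) exported to an open, machine-readable file that any other service could import, instead of making us retype everything. The field names and the export_contacts() helper are hypothetical, not any real site’s API or a DataPortability specification.

```python
# Illustrative sketch only: a user-owned export of a social graph in a
# portable format. The schema below is made up for this example.
import json

contacts = [
    {"name": "Alice Example", "profile_url": "http://example.com/alice", "relationship": "friend"},
    {"name": "Bob Example", "profile_url": "http://example.org/bob", "relationship": "colleague"},
]

def export_contacts(contacts, path):
    """Write the social graph to a portable JSON file the user controls."""
    with open(path, "w") as f:
        json.dump({"contacts": contacts}, f, indent=2)

export_contacts(contacts, "my-social-graph.json")
# Any other service could read this file back in, instead of forcing the
# user to re-enter every contact and friend by hand.
```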

Both issues could be tackled now, thanks to the big players joining the DataPortability Workgroup. So stay tuned.

Update: ReadWriteWeb also has coverage.

Human vs Machine: What’s Better In Search?


The next few months should be interesting to watch: Monday, Wikia Search goes online. So there we have another powerful player in the next wave of search engine wars.

For the last few years, Google, with its (mostly) machine-based search algorithms, has been the dominant player in the search market, producing more or less the best results by exploiting the inherent value of hyperlinks: if website authors or bloggers link to another website, so the idea goes, they endorse that website, i.e. they consider it relevant in one way or another.
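To make the link-as-endorsement idea concrete, here is a toy sketch in the spirit of PageRank. It is not Google’s actual ranking system, and the tiny link graph is made up for the example; it just shows how “who links to whom” can be turned into a relevance score.

```python
# Toy link graph: each page lists the pages it links to (made up for the example).
links = {
    "blog-a": ["news-site", "blog-b"],
    "blog-b": ["news-site"],
    "news-site": ["blog-a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = rank[page] / len(outgoing)  # each link passes on an equal share of the page's score
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
# "news-site" ends up on top: two different sites link to it, and the
# algorithm treats those links as endorsements of its relevance.
```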

Now the humans are pushing their way back into search: In 2007, Jason Calacanis’ Mahalo introduced a completely human-based search, producing great results, but only covering a relatively small number of search terms. (For terms that aren’t listed, Mahalo forwards to a Google search.) Robert Scoble already suspects that Mahalo, Techmeme and Facebook (i.e. search based on your social graph) will kick Google’s butt.

Monday, Jimmy Wales’ Wikia Search will go into public beta. Wikia Search aims to make its search algorithms open and transparent, so that, unlike the black box that is Google, they won’t be as easily manipulated by SEO efforts.

What those projects have in common is, as Tim O’Reilly points out, that “both are trying to re-draw the boundary between human and machine.” How this hybrid works out will determine both the quality of our search results (and thereby the way we perceive a great many things around us) and our defense against spam.

By the way, even Google doesn’t rely on machines alone, but has to intervene manually for some search terms:

(…) there is a small percentage of Google pages that dramatically demonstrate human intervention by the search quality team. As it turns out, a search for “O’Reilly” produces one of those special pages. Driven by PageRank and other algorithms, my company, O’Reilly Media, used to occupy most of the top spots, with a few for Bill O’Reilly, the conservative pundit. It took human intervention to get O’Reilly Auto Parts, a Fortune-500 company, onto the first page of search results. There’s a special split-screen format for cases like this.

So why is human intervention necessary if there is such a powerful algorithm? Cory Doctorow writes:

The idea of a ranking algorithm is that it produces “good results” — returns the best, most relevant results based on the user’s search terms. We have a notion that the traditional search engine algorithm is “neutral” — that it lacks an editorial bias and simply works to fulfill some mathematical destiny, embodying some Platonic ideal of “relevance.” Compare this to an “inorganic” paid search result of the sort that Altavista used to sell. But ranking algorithms are editorial: they embody the biases, hopes, beliefs and hypotheses of the programmers who write and design them.

So where Google puts its money on math-fu, and Mahalo on editorial filters, Wikia Search focuses on transparency and a Wikipedia-inspired community model to open up that Google black box. Which hybrid will bring us the best results and decide which information we get to see? 2008 will be the year that tells us. Let the battle begin!