
Google, the FTC and Germany: Public vs Private Media


There’s a war going on, and it’s not pretty: the old conflict between publicly funded and private media, and the fight over who regulates the whole sphere. Of course all of it was triggered by the internet. How could the net just allow information to be spread so easily and at such a low cost!

But jokes aside, there’s some seriously disturbing stuff going on right now: two focal points in this conflict over who should make media, under what conditions, and how media should be consumed.

Focal point #1: Google vs FTC

The FTC published a paper as a basis for further discussion (“Staff Discussion Draft”, PDF) to evaluate the situation of news media today and to draft policy proposals. One of them: additional intellectual property rights to support news media against “free riding by news aggregators”. This by itself is one of the dumbest things I’ve read in a while. News aggregators (read: Google News), of course, channel traffic to the media sites. There’s no cannibalization going on; it’s the other way round. It’s good to see – and only fair to point out – that the draft also states that “expanded IP rights could restrict citizens’ access to this news, inhibit public discourse, and impinge upon free speech rights.” Yes, that it might well do.

Google replied on their Public Policy blog, and the response is well worth reading:

Comments to FTC 20 July 2010

If you prefer the summary, jump straight to Jeff Jarvis, who, like me, sides clearly with Google: this is not really a legal battle but one over business models, and protecting an old, broken system should not be in the FTC’s (or anyone’s) interest.

Focal point #2: German public broadcasters are “depublishing”

A similar problem is being discussed in Germany these days, if in a slightly different environment. In Germany we’ve had strong (and, particularly compared to the US, well-funded) public broadcasters. (Note: not newspapers.) These broadcasters have done a tremendous job in the past, and even though there has been a lot of criticism over budgets and spending and over certain areas of engagement, they enjoy fairly strong support on a societal level. They have a clear mandate to provide basic information in all areas (including entertainment), and at least kind-of-clear limits to their engagement (no dating sites etc.). These limits exist mainly to protect private companies from publicly funded competition.

And it’s this eternal conflict of interests (here: “the public” vs “private publishing corporations”) that’s at the core of the dilemma. Public broadcasters in Germany have always been very limited in what they could do online. But now, content has to be “depublished”* after some time. (Depending on the kind of content – evaluated by a three-step test of the more absurd kind – after a week, a year or some other time span.) The content won’t be deleted, just hidden.

(Links with some background in German: Tagesschau summary of the regulating Rundfunkstaatsvertrag, Tagesschau’s Jörg Sadrozinski’s take on Depublizieren.)

How much protection do private publishers need from the government?

Now this raises all kinds of interesting questions. (The biggest of which is of course: WTF? But let’s save that for another time.) Questions I cannot necessarily answer off the top of my head. Like: Should private broadcasters really be protected from public broadcasters? How much so? Are there certain fields where this protection should be stronger than in others? (Sports? Mobile services?) But also: How can content that was paid for with our (publicly collected and handled) fees be locked away, and how can even more of our money be spent on locking it away? What happens to all the references in Wikipedia that linked to said public content? How can a generation of tax and fee payers be expected to keep paying if the content won’t be available through the channels they use?

I cannot even remember the last time I watched TV at home, on a TV set, live, on the air. And I certainly won’t start now.

So we need to ask ourselves: How much protection do we want to give to publishers and broadcasters, and what price are we willing to pay?

There’s a war going on, and it’s not pretty.

  * The word makes me want to invoke Godwin’s Law. But I’ll hold back, I promise.

Next-generation content management for newspapers (is in the making)


Image: Howard Beatty by Flickr user Ann Althouse, CC licensed (by-nc)

Steve Yelvington helps newspapers get the web. Newspapers have a hard time adapting to the new ways of the web, what with all this user-generated content, changing consumer habits and dropping sales. It’s a huge cultural problem – traditional vs new vs social media – too. (And it’s not that newspapers, their editors or their management are stupid. Of course they aren’t. Still, they’re struggling.)

Working with Morris DigitalWorks, Steve is building a next-generation news site management system. Quite a claim to fame, but both his track record and the few details he has already shared back it up. So what’s different here?

We’re integrating a lot more social-networking functionality, which we think is an important tool for addressing the “low frequency” problem that most news sites face. We’re going to be aggressive aggregators, pulling in RSS feeds from every community resource we can find, and giving our users the ability to vote the results up/down. We’ll link heavily to all the sources, including “competitors.” Ranking/rating, commenting, and RSS feeds will be ubiquitous. Users of Twitter, Pownce and Friendfeed will be able to follow topics of interest. We’re also experimenting with collaborative filtering, something I’ve been interested in since I met the developers of GroupLens in the mid-1990s. It’s how Amazon offers you books and products that interest you: People whose behavior is the most like yours have looked at/bought/recommended this other thing.

That’s music to my ears. The whole thing is based on Drupal, which has always been strong on community features. Here, it seems, the whole platform will be aimed at creating mashups: drawing in RSS feeds, pushing them around and spitting them out. In the end, you should get a pretty lively site full of both professionally produced and user-generated content and commentary. And by providing both input and output channels for RSS feeds, the data isn’t restricted to the website itself – it lives on beyond it, out in the cloud.
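To make the collaborative-filtering idea concrete – recommend items that the users most similar to you have read and you haven’t – here is a minimal sketch in Python. The data and function names are purely illustrative, not from the actual system; real engines like GroupLens work on much larger rating matrices.

```python
# Minimal user-based collaborative filtering sketch.
# history maps each (hypothetical) user to the set of article IDs they read.
from math import sqrt

history = {
    "alice": {"a1", "a2", "a3"},
    "bob":   {"a1", "a2", "a4"},
    "carol": {"a5", "a6"},
}

def similarity(a, b):
    """Cosine similarity between two sets of read items."""
    overlap = len(a & b)
    if not overlap:
        return 0.0
    return overlap / sqrt(len(a) * len(b))

def recommend(user, k=1):
    """Rank items read by similar users but not yet by this user."""
    mine = history[user]
    scores = {}
    for other, theirs in history.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for item in theirs - mine:  # only items the user hasn't seen
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # bob is most similar, so his unseen "a4" tops the list
```

The same “people like you liked this” scoring scales up by swapping the sets for weighted ratings and precomputing the similarities.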

And the best thing: Usability-wise it’ll be aimed not at techies, but at editors. No major coding necessary:

Open tools and open platforms are great for developers, but what we really want to do is place this kind of power directly in the hands of content producers. They won’t have to know a programming language, or how databases work, or even HTML to create special presentations based on database queries. Need a new XML feed? Point and click.

That’s great news, and certainly a project to watch closely. Can’t wait to see the launch. October it is.
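For a sense of what “Need a new XML feed? Point and click.” could generate behind the scenes, here is a hedged sketch of turning query results into an RSS 2.0 document. The `build_rss` helper and the sample items are my own illustration, not code from the actual CMS.

```python
# Sketch: render a list of database-query results as a minimal RSS 2.0 feed.
import xml.etree.ElementTree as ET

def build_rss(title, link, items):
    """items is a list of dicts with 'title' and 'link' keys."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for entry in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = entry["title"]
        ET.SubElement(item, "link").text = entry["link"]
    return ET.tostring(rss, encoding="unicode")

feed = build_rss(
    "Local news",                     # hypothetical feed title
    "http://example.com/",            # hypothetical site URL
    [{"title": "Council meets", "link": "http://example.com/1"}],
)
print(feed)
```

The point of the quoted promise is that an editor would configure the query and get this XML without ever seeing code like the above.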

(via Strange Attractor)

Note: So far, the CMS code hasn’t been released under the GPL, but they’ve pledged to do so. All in good time.
