Data about me in my city

This article is a few months old (and in German), but it offers two points of view that I’ll just present side by side, as they pretty much sum up the state of play in smart cities these days.

For context, this is about a smart city partnership in which Huawei implement their technologies in Duisburg, a mid-sized German city with a population of about 0.5 million. The (apparently non-binding) partnership agreement includes Smart Government (administration), Smart Port Logistics, Smart Education (education & schools), Smart Infrastructure, 5G and broadband, Smart Home, and the urban internet of things.

Note: The quotes and paraphrases are roughly translated from the original German article.

Jan Weidenfeld from the Mercator Institute for China Studies:

“As a city administration, I’d be extremely cautious here.” China has a fundamentally different societal system, and a legal framework that means that every Chinese company, including Huawei, is required to open data streams to the Communist Party. (…)

Weidenfeld points out that 5 years ago, when deliberations about the project began, China was a different country than it is today. At both federal and state levels, the thinking about China has evolved. (…)

“Huawei Smart City is a large-scale societal surveillance system, out of which Duisburg buys the parts that are legally fitting – but this context mustn’t be left out when assessing the risks.”

Anja Kopka, media spokesperson for the city of Duisburg:

The city of Duisburg doesn’t see “conclusive evidence” regarding these security concerns. The data center meets all security requirements for Germany, and is certified as such. “Also, as a municipal administration we don’t have the capacity to reliably assess claims of this nature.” Should federal authorities whose competencies include assessing such issues provide clear action guidelines for dealing with Chinese partners in IT, then Duisburg will adapt accordingly.

The translation is a bit rough around the edges, but I think you’ll get the idea.

With infrastructure, when we see the damage it’s already too late

We have experts warning us, but the warnings are of such a structural nature that they’re kind of too big and obvious to prove. Predators will kill other animals to eat.

By the time abuse or any real issue can be proven, it’d be inherently too late to do anything about it. We have a small-ish city administration that knows perfectly well that they don’t have the capacity to do their due diligence, so they just take their partner’s word for it.

The third party here, of course, being a global enterprise with an operating base in a country that has a unique political and legal system that in many ways isn’t compatible with any notion of human rights, let alone data rights, that otherwise would be required in the European Union.

The asymmetries in size and speed are vast

And it’s along multiple axes — imbalance of size and speed, and incompatibility of culture — that I think we see the most interesting, and most potentially devastating conflicts:

  • A giant corporation runs circles around a small-to-mid-sized city. I think it’s fair to assume that it was only because of Chinese business etiquette that the CFO of one of Huawei’s business units was even flown out to Duisburg to sign the initial memorandum of understanding with Duisburg’s mayor Sören Link. The size and power differential is so ridiculous that it might just as well have been the Head of Sales EMEA or some other mid-level manager taking that meeting. After all, by Chinese standards, a city with a population of half a million wouldn’t even be considered a third-tier city. Talk about an uneven playing field.
  • The vast differences of (for lack of a better word, and broadly interpreted) culture in the sense of business realities and legal framework and strategic thinking between a large corporation with global ambitions and backed by a highly centralized authoritarian state on one side, and the day-to-day of a German town are overwhelming. So much so, that I don’t think that the mayor of Duisburg and his team are even aware of all the implicit assumptions and biases they bring to the table.

And it’s not an easy choice at some level: When someone comes in and offers much-needed resources that you have no other chance of getting, desperation might force you to make some sub-optimal decisions. But this comes at a price — the question is just how bad that price will be over the long term.

I’m not convinced that any smart city of this traditional approach is, or even might be, worth implementing; probably not. But of all the players, the one backed by a non-democratic regime with a track record of mass surveillance and human rights violations surely belongs at the bottom of the list.

It’s all about the process

That’s why whenever I speak about smart cities (which I do quite frequently these days), I focus on laying better foundations: We can’t start from scratch every time we consider a smart city project. We need a framework as a point of reference and as a guideline, and it has to keep us aligned with our values.

Some aspects to take into account here are transparency, accountability, privacy and security (my mnemonic device is TAPS); a foundation based on human and digital rights; and participatory processes from day one.

And just to be very clear: Transparency is not enough. Transparency without accountability is nothing.

Please note that this blog post is based on a previously published item in my newsletter Connection Problem, to which you can sign up here.

What type of smart city do we want to live in?

Warning: Trick question! The right questions should of course be: What type of city do we want to live in? What parts of our cities do we want to be smart, and in what ways?

That said, this is my talk for NEXT Conference 2019, in which I explore some basic principles for making sure that if we add so-called smart city technology to our public spaces, we’ll end up with desirable results.

What happens when one product in a family of connected products fails?

About a year ago, we changed our home audio setup to “smart” speakers: We wanted to be able to stream directly from Spotify, as well as from our phones. We also wanted to avoid introducing yet another microphone into our living room. (The kitchen is, hands down, the only place where a voice assistant makes real sense to me personally. Your mileage may vary.) Preferably, there should be a line-in as well; I’m old school that way.

During my research I learned that the overlap in this Venn diagram of speakers that (a) are connected (“smart”) for streaming, (b) have good sound, and (c) don’t have a microphone is… very thin indeed.

The Venn diagram of speakers that are connected (“smart”), good and have no microphone
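For the programmatically inclined, the search boils down to three predicates over a product list. Here’s a minimal sketch in Python; the speakers and their attributes are made up for illustration, not real product data:

```python
# A minimal sketch of that Venn diagram as a product filter.
# The speakers and their attributes are made-up examples.

speakers = [
    {"name": "Speaker A", "connected": True, "good_sound": True, "has_mic": True},
    {"name": "Speaker B", "connected": True, "good_sound": False, "has_mic": False},
    {"name": "Speaker C", "connected": False, "good_sound": True, "has_mic": False},
    {"name": "Speaker D", "connected": True, "good_sound": True, "has_mic": False},
]

# The overlap of all three circles: connected AND good sound AND no microphone.
matches = [s["name"] for s in speakers
           if s["connected"] and s["good_sound"] and not s["has_mic"]]

print(matches)  # -> ['Speaker D']: the overlap is very thin indeed.
```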

The Sonos range looked best to me; except for those pesky microphones. Our household is largely voice assistant free, minus the phones, where we just deactivated the assistants to whatever degree we could.

In the end, we settled on a set of Bang & Olufsen Beoplay speakers for the living room and kitchen: Solid brand, good reputation. High end. Should do just fine — and it better, given the price tag!

This is just for context. I don’t want to turn this into a product review. But let’s just say that we ran into some issues with one of the speakers. These issues appeared to be software related. And while from the outside they looked like they should be easy to fix, it turned out the mechanisms to deliver the fixes were somewhat broken themselves.

Long story short: I’m now trying to return the speakers. Which made me realize that completely different rules apply than I’m used to. In Germany, where we are based, consumer protection laws are reasonably strong, so if something doesn’t work you can usually return it without too much hassle.

But with a set of connected speakers, we have an edge case. Or more accurately, a whole stack of edge cases.

  • The product still fulfills its basic function, but a software issue makes it so diminished and awkward to get working that daily use is too much effort, meaning the speakers simply stay off. It kinda works, kinda doesn’t. It certainly doesn’t work as advertised.
  • If one of these gets returned and I wanted to switch to a different brand, then I’d be stuck with the other speakers in the set, which are now expensive paperweights: Connected products work in families, or so-called platform ecosystems. One without the other just doesn’t make sense. It’s like the chain that breaks as soon as the weakest link breaks. So can I return all of them because there is a software issue with one of them?

This is going to be an interesting process, I’m afraid. Can we return the whole family and switch on over to a different make of speakers? Or are we stuck with an expensive set of speakers that while not quite broken, is very much unusable in our context?

If so, then at least I know never to buy connected speakers again. Rather, I guess I’d recycle these and instead go back to a high-end analog speaker set with some external streaming connector – knowing full well that that connector will be useless in a few years’ time, but that the speakers and amp would be around and working flawlessly for 15-20 years, like my old ones did.

And that is the key insight here for our peers in the industry: If your product in this nascent field fails because of poor quality management, you leave scorched earth. Consumers aren’t going to trust your products any more, sure. But they are unlikely to trust anyone else’s, either.

The falling tide lowers all boats.

So let’s not just get the products right: In consumer IoT, we tend to think in families/ecosystems. That means we have to consider long-term software updates (and mechanisms to deliver them, and fall-backs to those mechanisms…) as well as return policies in case something goes wrong with one of the products. And while we’re at it, we need to equally upgrade consumer protection regulation to deal with these issues of ecosystems and software updates.

This is the only way to ensure consumer trust. So we can reap the benefits of innovation without suffering all the externalized costs as well as unintended consequences of a job sloppily done.
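To make the point about fall-back delivery mechanisms concrete, here’s a minimal sketch of what an update pipeline with fall-backs might look like. All the names in it (the channels, the error type) are hypothetical placeholders, not any vendor’s actual API:

```python
# A minimal sketch of firmware update delivery with fall-backs.
# Every name here is a hypothetical placeholder, not a real vendor API.

class DeliveryError(Exception):
    """Raised when a single delivery channel fails."""

def push_over_the_air(firmware: bytes) -> None:
    # Simulate the convenient channel being broken (our actual problem):
    raise DeliveryError("device unreachable over the air")

def stage_for_next_boot(firmware: bytes) -> None:
    print("update staged; device will install it on next boot")

def deliver_update(firmware: bytes) -> bool:
    # Try the most convenient channel first, then fall back to cruder ones.
    for channel in (push_over_the_air, stage_for_next_boot):
        try:
            channel(firmware)
            return True
        except DeliveryError:
            continue
    # All channels failed: this is where a clear return policy has to kick in.
    return False

if __name__ == "__main__":
    deliver_update(b"firmware-image-v2")
```

The point of the sketch is the shape, not the specifics: if the primary update channel can itself break, the product needs a second (and ideally a third, more manual) path, and a policy for what happens when all of them fail.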

(See also the ThingsCon Trustable Technology Mark, Better IoT, Tech Transformed.)

Update (Oct 2019): Turns out other companies are also starting to recognize that there’s demand for mic-free speakers: Sonos just launched a speaker that they market specifically for being mic-free, and it’s otherwise identical to one of their staples. (It’s called the One SL; I imagine the “SL” stands for “streamlined” or “stop listening” but I might be projecting.)

Developing better urban metrics for Smart Cities

When we embed connected technologies — sensors, networks, etc. — into the public space*, we create connected public space. In industry parlance, this is called a Smart City. (I prefer “connected city”, but let’s put the terminology discussion on the back burner for now.) And data networks change the way we live.

* Note: Increasingly, the term “public space” has itself come under attack. In many cities, formerly public space (as in publicly owned & governed) has been privatized, even if it’s still accessible by the public, more or less. Think of a shopping mall, or the plazas that are sometimes attached to a shopping mall: You can walk in, but a mall cop might enforce house rules that were written not by citizens but by the corporation that owns the land. I don’t just find this highly problematic, I also recommend flat-out rejecting that logic as a way forward. Urban space — anything outside closed buildings, really — should, for the most part, be owned by the public, and even where for historical reasons it can’t be owned, it should at least be governed by the public. This means the rules should be the same in a park, a shopping-mall-adjacent plaza, and the street; and they should be enforced by (publicly employed) police rather than (privately employed) mall cops. Otherwise there’s no meaningful recourse for mistreatment, there’s no ownership, and citizens are relegated from stakeholders to props/consumers.

Networks and data tend not to ease but to reinforce power dynamics, so we need to think hard about what type of Smart City we want to live in:

  • Do we want to allow people to get faster service for a fee (“Skip the line for $5”), or prefer everyone to enjoy the same level of service, independent of their income?
  • Do we want to increase the efficiency for 90% of the population through highly centralized services even if it means making the life of the other 10% much harder, or do we plan for a more resilient service delivery for all, even if it means the overall service delivery is a tad slower?
  • Do we want to cut short-term spending through privatization even if it means giving up control over infrastructure, or do we prioritize key infrastructure in our budgeting process so that the government can ensure quality control and service delivery in the long term, even if it costs more in the short term?

These are blunt examples, but I reckon you can tell where I’m going with this: I think democratic life requires public space and urban infrastructure to be available to all citizens and stakeholders, and to work well for all of them. Pay-for-play should only apply to added, non-essential services.

In order to shape policies in this space meaningfully, we need to think about what it is that we prioritize. Here, a brief warning is in order: the old management adage “you can’t manage what you don’t measure” is problematic, to say the least. All too often we see organizations act on the things they can measure, even if these things are not necessarily meaningful but just easy to measure. Don’t confuse the data you can capture with the things you need to know!

What do we want to prioritize, and maybe even measure?

That said, what are the things we want to prioritize? And might it even be possible to measure them?

Here I don’t have final answers, just some pointers that I hope might lead us in the right direction. These are angles to be explored whenever we consider a new smart city project, at any scale — even, and maybe especially, for pilot projects! Let’s consider them promising starting points:

Participation
Has there been meaningful participation in the early feedback, framing, planning, and governance processes? If feedback has been very limited and slow, what might the reasons be? Is it really lack of interest, or was the barrier to engagement just too high? Were the documents too long, too full of jargon, too hard to access? (See Bianca Wylie’s thread on Sidewalk Labs’ 1,500+ page development plan.) Were the implications, the pros and cons, not laid out in an accessible way? For example, in Switzerland there’s a system in place that makes sure that in a referendum both sides have to agree on the language that explains the pros and cons, so as to make sure both sides’ ideas are represented fairly and accessibly.

Sustainability
Would these changes significantly improve sustainability? The UN’s Sustainable Development Goals (SDG) framework might offer a robust starting point, even though we should probably aim higher given the political (and real!) climate.

Will it solve a real issue, improve life for citizens?
Is this initiative going to solve a real issue and improve lives meaningfully? This is often going to be tricky to answer, but if there’s no really good reason to believe it’s going to make a meaningful positive impact then it’s probably not a good idea to pursue. The old editors’ mantra might come in handy: If in doubt, cut it out. There are obvious edge cases here: Sometimes, a pilot project is necessary to explore something truly new; in those cases, there must be a plausible, credible, convincing hypothesis in place that can be tested.

Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
Unintended consequences are unavoidable in complex systems. But there are ways to mitigate risks, and to make sure that the fallback for a failed system is not worse than the original status. If a project would make things better while working perfectly but worse when failing, that deserves some extra thought. If it works better for some groups but not for others, that’s usually a red flag, too.

When these basic goals are met, and only then, should we move on to more traditional measurements, the type that dominates the discourse today, like:

  • Will this save taxpayers’ money, and lead to more cost-effective service delivery?
  • Will this lead to more efficient service delivery?
  • Will this make urban management easier or more efficient for the administration?
  • Will this pave the way for future innovation?

These success factors / analytical lenses are not grand, impressive ideas: They are the bare minimum we should secure before engaging in anything more ambitious. Think of them as the plumbing infrastructure of the city: Largely unnoticed while everything works, but if it ever has hiccups, it’s really bad.
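To make the “foundations first, traditional metrics second” logic explicit, here’s a minimal sketch. The attribute names are hypothetical placeholders; in reality each gate is a hard qualitative judgment, not a boolean:

```python
# A minimal sketch of gated evaluation: foundational goals act as
# hard gates before any traditional metric is even considered.
# Attribute names are hypothetical; real assessments aren't booleans.

FOUNDATIONAL_GATES = [
    "meaningful_participation",
    "improves_sustainability",
    "solves_real_issue",
    "safeguards_against_failure",
]

TRADITIONAL_METRICS = [
    "taxpayer_savings",
    "service_efficiency",
    "ease_of_administration",
    "future_innovation",
]

def evaluate(project: dict) -> list[str]:
    # Any failed gate stops the evaluation before cost/efficiency questions.
    missing = [gate for gate in FOUNDATIONAL_GATES if not project.get(gate)]
    if missing:
        return [f"stop: rework foundations first ({', '.join(missing)})"]
    return TRADITIONAL_METRICS  # only now do these questions apply

pilot = {
    "meaningful_participation": True,
    "improves_sustainability": True,
    "solves_real_issue": True,
    "safeguards_against_failure": False,
}
print(evaluate(pilot))  # -> ['stop: rework foundations first (safeguards_against_failure)']
```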

We should stick to basic procedural and impact-driven questions first. We should incorporate the huge body of research findings from urban planners, sociologists, and political scientists rather than reinvent the wheel. And we should never, ever be blinded by a shiny new technological solution to a complex social or societal issue.

Let’s learn to walk before we try to run.

Which type of Smart City do we want to live in?

Connectivity changes the nature of things. It quite literally changes what a thing is.

Add connectivity to, say, the proverbial internet fridge, and it stops being just an appliance that chills food. It becomes a device that senses; that captures, processes and shares information; that acts on this processed information. The thing-formerly-known-as-fridge becomes an extension of the network. It makes the boundaries of the apartment more permeable.

So connectivity changes the fridge. It adds features and capabilities. It adds vulnerabilities. At the same time, it also adds a whole new layer of politics to the fridge.

Power dynamics

Why do I keep rambling on about fridges? Because once we add connectivity — or rather: data-driven decision making of any kind — we need to consider power dynamics.

If you’ve seen me speak at any time throughout the last year, chances are you’ve encountered this slide that I use to illustrate this point:

The connected home and the smart city are two areas where the changing power dynamics of IoT (in the larger sense) and data-driven decision making manifest most clearly: The connected home, because it challenges the notions of privacy we have developed over the last 150 years in the global West. And the smart city, because there is no opting out of public space. Any sensor, any algorithm involved in governing public space impacts all citizens.

That’s what connects the fridge (or home) and the city: Both change fundamentally by adding a data layer. Both acquire a new kind of agenda.

3 potential cities of 2030

So as a thought experiment, let’s project three potential cities in the year 2030 — just over a decade from now. Which of these would you like to live in, which would you like to prevent?

In CITY A, a pedestrian crossing a red light is captured by facial recognition cameras and publicly shamed. Their CitizenRank is downgraded to IRRESPONSIBLE, their health insurance premium goes up, and they lose the permission to travel abroad.

In CITY B, wait times at the subway lines are enormous. Luckily, your Amazon Prime membership has expanded to cover priority access to this formerly public infrastructure, and now includes dedicated quick-access lines to the subway. With Amazon Prime, you are guaranteed Same Minute Access.

In CITY C, most government services are coordinated through a centralized government database that identifies all citizens by their fingerprints. This isn’t restricted to digital government services, but also covers credit card applications or buying a SIM card. However, the official fingerprint scanners often fail to scan manual laborers’ fingerprints correctly. The backup system (iris scans) doesn’t work too well on those with eye conditions like cataracts. Whenever these ID scans fail, the government service requests are denied.

Now, as you may have recognized, this is of course a trick question. (Apologies.) Two of these cities more or less exist today:

  • CITY A represents the Chinese smart city model based on surveillance and control, as piloted in Shenzhen or Beijing.
  • CITY C is based on India’s centralized government identification database, Aadhaar.
  • Only CITY B is truly, fully fictional (for now).

What model of Smart City to optimize for?

We need to decide which characteristics of a Smart City we’d like to optimize for. Do we want to optimize for efficiency, resource control, and data-driven management? Or do we want to optimize for participation & opportunity, digital citizens’ rights, equality, and sustainability?

There are no right or wrong answers (even though I’d clearly prefer a focus on the second set of characteristics), but it’s a decision we should make deliberately. One leads to favoring monolithic centralized control structures, black box algorithms and top-down governance. The other leads to decentralized and participatory structures, openness and transparency, and more bottom-up governance built in.

Whichever we build, these are the kinds of dependencies we should keep in mind. I’d rather have an intense, participatory deliberation process that involves all stakeholders than just quickly throwing a bunch of Smart City tech into the urban fabric.

After all, this isn’t just about technology choices: It’s the question of what kind of society we want to live in.

How to build a responsible Internet of Things

Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: The Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. And thus users of connected devices today have a lot to worry about (because smart speakers and their built-in personal digital assistants are particularly popular at the moment, we’ll use those as an example):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today, and they face them a lot.

Trust and expectations in IoT. Image: Peter Bihr/The Waving Cat

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, lengthy terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not in the companies themselves then certainly in the system. No user today feels empowered to negotiate a contractual relationship with a tech company as an equal—because they can’t.

Whenever some scandal blows up and creates damaging PR, the companies slowly backtrack, but in too many cases they were, legally speaking, within their rights: Nobody understood the contract, yet the abstract product language suggested a certain spirit of mutual goodwill between the product company and their users, a spirit that is not honored by the letter of that contract.

So, short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letter of the contract match that spirit, not the other way round. Earning back users’ trust will not just make the ecosystem healthier and more robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended or in interesting new ways? Do they understand the tradeoffs involved in using a product? These are all questions that basic, but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t an exhaustive list, and even the most extensive checklist wouldn’t do any good here: Respect isn’t a list of actions, it’s a mindset to apply to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely into the responsibility of the security team. Others are based on the legal arrangements around how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.

The issues at stake aren’t easy to solve. There are no silver bullets. There are grey areas that are fuzzy, complex and complicated.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing to safe storage and processing of data to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: The end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to update the software with security patches, or if the contract with the user doesn’t protect against data spills when the company is acquired or liquidated, then data that was safe for years can all of a sudden pose new risks.

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. But just as prevention in healthcare is less attractive than quick surgery to repair the damage, yet more effective and cheaper in the long run, so it is here. So hang in there, and users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But without the right business model, it’ll thrive not together with its customers but at their expense.

We see so much damage done because wrong business models—and hence, wrong incentives—drive and promote horrible decision making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached a point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast of a time.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/ 

Facebook, Twitter, Google are a new type of media platform, and new rules apply

When Congress questioned representatives of Facebook, Google and Twitter, it became official: We need to finally find an answer to a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.

The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)

Platform or media?

Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Are they held to standards (and regulation) of “neutral platform” or “content creator”? Are they dumb infrastructure or pillars of democracy?

These are big questions to ask, and I don’t envy the companies for their position in this one. As a neutral platform they get a large degree of freedom, but have to take responsibility for the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but can’t claim the extreme point of view of free speech they like to take. You can’t both be neutral and “bring humanity together” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:

the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly

I agree this would be easier. (I’m not so sure that the employee preference is the driving force, but that’s another debate and it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet legal requirements. Where are they willing to draw the line? Google famously pulled out of the Chinese market a few years ago, presumably because they didn’t want to meet the government’s censorship requirements. This was a principled move, and I would expect not an easy one given the size of that market. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting or on government surveillance?

For GAFAM, the position has traditionally been clear cut and quite straightforward, which we can still (kind of, sort of) see in the Congressional hearing:

“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)

Once more:

[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)

Now that’s interesting, because while they claim to be “neutral” free speech companies, Facebook and the others have of course been hugely filtering content by various means (from their Terms of Service to community guidelines), and shaping the attention flow (who sees what and when) forever.

This aspect isn’t discussed much, but it’s worth noting nonetheless: How Facebook and other tech firms deal with content has been based, to a relatively large degree, on United States legal and cultural standards. Which makes sense, given that they’re US companies, but doesn’t make a lot of sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country, take pictures of nudity (largely not OK in the US, largely OK in Germany) versus positively referencing the Third Reich (largely illegal in Germany, largely legal in the US).

Big tech platforms are a new type of media platform

Here’s the thing: These big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose and frame and shape content & traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.

(As a personal, rambling aside: Given that we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had at least sorted out our basic understanding of social media as a new hybrid by 2010. Sigh.)

We might be able to apply existing regulatory—and equally important: analytical—frameworks. Or maybe we can find a way to apply existing ones in new ways. But, and I say this expressly without judgement, these platforms operate at a scale and dynamism we haven’t seen before. They are of a new quality; they display characteristics, and combinations of characteristics, we don’t have much experience with. Yet on a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). It hasn’t worked yet, and will continue not to work, because it makes little sense.

That’s why it’s important to take a breath and figure out how to best understand implications, and shape the tech, the organizations, the frameworks within which they operate.

It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others, and in those cases we need to adjust the frameworks, the organizations, or both. To align the analytical and regulatory frameworks with realities, or vice versa.

This isn’t an us versus them situation like many parties are insinuating: It’s not politics versus tech as actors on both the governmental and the tech side sometimes seem to think. It’s not tech vs civil society as some activists claim. It’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do SV firms a world of good. This is a question of how we want to live our lives, govern our lives, as they are impacted by the flow of information.

It’s going to be tricky to figure this out as there are many nation states involved, and some supra-national actors, and large global commercial actors and many other, smaller but equally important players. It’s a messy mix of stakeholders and interests.

But one thing I can promise: The solution won’t be just technical, not just legal, nor cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know that the status quo isn’t working for too many people, and we can shape the future. So that soon, it’ll work for many more people—maybe for all.

Please note that this is cross-posted from Medium. Also, for full transparency, we work occasionally with Google.