Facebook, Twitter, Google are a new type of media platform, and new rules apply

When Congress questioned representatives of Facebook, Google and Twitter, it became official: We need to finally find an answer to a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.

The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)

Platform or media?

Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Are they held to standards (and regulation) of “neutral platform” or “content creator”? Are they dumb infrastructure or pillars of democracy?

These are big questions to ask, and I don’t envy the companies their position here. As a neutral platform they get a large degree of freedom, but have to take responsibility for the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but can’t claim the absolutist free-speech position they like to take. You can’t both be neutral and “bring humanity together,” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:

the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly

I agree this would be easier. (I’m not so sure that the employee preference is the driving force, but that’s another debate, and it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet legal requirements. Where are they willing to draw the line? Google famously pulled out of the Chinese market a few years ago, presumably because it didn’t want to meet the government’s censorship requirements. That was a principled move, and presumably not an easy one given the size of the market. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting or on government surveillance?

For GAFAM, the position has traditionally been quite clear-cut, which we can still (kind of, sort of) see in the Congressional hearing:

“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)

Once more:

[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)

Now that’s interesting, because while they claim to be “neutral” free-speech companies, Facebook and the others have of course always filtered content heavily by various means (from their Terms of Service to community guidelines) and shaped the attention flow (who sees what, and when).

This aspect isn’t discussed much, but it’s worth noting nonetheless: how Facebook and other tech firms deal with content has been based to a relatively large degree on United States legal and cultural standards. That makes sense given that they’re US companies, but doesn’t make a lot of sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country, take pictures of nudity (largely not OK in the US, largely OK in Germany) versus positively referencing the Third Reich (largely illegal in Germany, largely legal in the US).

Big tech platforms are a new type of media platform

Here’s the thing: These big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose, frame, and shape content and traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.

(As a personal, rambling aside: Given we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had at least sorted out our basic understanding of social media as a new hybrid by 2010. Sigh.)

We might be able to apply existing regulatory—and, equally important, analytical—frameworks as they are, or we might need to apply them in new ways. But, and I say this expressly without judgement, these are platforms that operate at a scale and dynamism we haven’t seen before. They are of a new quality; they display combinations of qualities and characteristics we don’t have much experience with. Yet on a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). That hasn’t worked, and will continue not to work, because it makes little sense.

That’s why it’s important to take a breath and figure out how best to understand the implications, and how to shape the tech, the organizations, and the frameworks within which they operate.

It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others. In those cases we need to adjust the frameworks, the organizations, or both, to align the analytical and regulatory frameworks with realities, or vice versa.

This isn’t an us-versus-them situation, as many parties insinuate: It’s not politics versus tech, as actors on both the governmental and the tech side sometimes seem to think. It’s not tech versus civil society, as some activists claim. It’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do SV firms a world of good. This is a question of how we want to live and govern our lives as they are impacted by the flow of information.

It’s going to be tricky to figure this out, as there are many nation states involved, as well as supra-national actors, large global commercial actors, and many other smaller but equally important players. It’s a messy mix of stakeholders and interests.

But one thing I can promise: The solution won’t be just technical, nor just legal, nor just cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know that the status quo isn’t working for too many people, and we can shape the future so that soon it’ll work for many more people—maybe for all.

Please note that this is cross-posted from Medium. Also, for full transparency, we work occasionally with Google.

New report: A Trustmark for IoT

Summary: For Mozilla, we explored the potentials and challenges of a trustmark for the Internet of Things (IoT). That research is now publicly available. You can find more background and all the relevant links at thewavingcat.com/iot-trustmark

If you follow our work both over at ThingsCon and here at The Waving Cat, you know that we see lots of potential for the Internet of Things (IoT) to create value and improve lives, but also some serious challenges. One of the core challenges is that it’s hard for consumers to figure out which IoT products and services are good—which ones are designed responsibly, which ones deserve their trust. After all, too often IoT devices are essentially black boxes that are hard to interrogate and that might change with the next over-the-air software update.

So, what to do? One concept I’ve grown increasingly fond of is consumer labeling as we know it from food, textiles, and other areas. But for IoT, that’s not simple: the networked, data-driven, and dynamic nature of IoT means that the complexity is high, and even seemingly simple questions can lead to surprisingly complex answers. Still, I think there’s huge potential there to make an impact.

I was very happy when Mozilla picked up on that idea and commissioned us to explore the potential of consumer labels. Mozilla just made that report publicly available:

Read the report: “A Trustmark for IoT” (PDF, 93 pages)

I’m excited to see where Mozilla might take the IoT trustmark and hope we can continue to explore this topic.

Increasingly, in order to have agency over their lives, users need to be able to make informed decisions about the IoT devices they invite into their lives. A trustmark for IoT can significantly empower users to do just that.

For more background, the executive summary, and all the relevant links, head on over to thewavingcat.com/iot-trustmark.

Also, I’d like to extend a big thank you to the experts whose insights contributed to this report through conversations online and offline, in public and in private:

Alasdair Allan (freelance consultant and author), Alexandra Deschamps-Sonsino (Designswarm, IoT London, #iotmark), Ame Elliott (Simply Secure), Boris Adryan (Zühlke Engineering), Claire Rowland (UX designer and author), David Ascher, David Li (Shenzhen Open Innovation Lab), Dries de Roeck (Studio Dott), Emma Lilliestam (security researcher), Geoffrey MacDougall (Consumer Reports), Gérald Santucci (European Commission), Holly Robbins (Just Things Foundation), Iskander Smit (info.nl, Just Things Foundation), Jan-Peter Kleinhans (Stiftung Neue Verantwortung), Jason Schultz (NYU), Jeff Katz (Geeny), Jon Rogers (Mozilla Open IoT Studio), Laura James (Doteveryone, Digital Life Collective), Malavika Jayaram (Berkman Klein Center, Digital Asia Hub), Marcel Schouwenaar (Just Things Foundation, The Incredible Machine), Matt Biddulph (Thington), Michelle Thorne (Mozilla Open IoT Studio), Max Krüger (ThingsCon), Ronaldo Lemos (ITS Rio), Rosie Burbidge (Fox Williams), Simon Höher (ThingsCon), Solana Larsen (Mozilla), Stefan Ferber (Bosch Software Innovation), Thomas Amberg (Yaler), Ugo Vallauri (The Restart Project), Usman Haque (Thingful, #iotmark). Also and especially I’d like to thank the larger ThingsCon and London #iotmark communities for sharing their insights.

German federal government adopts an action plan on automated driving

For a while now we’ve been debating the ethics of algorithms, especially in the context of autonomous vehicles: What should happen when something goes wrong? Whom (or what) does the robo-car protect? Who’s liable for damage if a crash occurs?

Germany, which has a strategy in place to become not just a world-leading manufacturer of autonomous vehicles but also a world-leading consumer market, just announced how it plans to deal with these questions.

Based on the findings of an ethics commission, Germany’s federal government just adopted an action plan on automated driving (quoted here in full):

The Ethics Commission’s report comprises 20 propositions. The key elements are:

  • Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
  • Damage to property must take precedence over personal injury. In hazardous situations, the protection of human life must always have top priority.
  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
  • It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
  • Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).

The Federal Ministry of Transport and Digital Infrastructure’s Ethics Commission comprised 14 academics and experts from the disciplines of ethics, law and technology. Among these were transport experts, legal experts, information scientists, engineers, philosophers, theologians, consumer protection representatives as well as representatives of associations and companies.

///

Reading this, I have to say I’m relieved and impressed: These guidelines seem entirely reasonable, common-sense, and practical. It’s especially good to see the non-discrimination clause and the principle of data sovereignty included. Well done!

This bodes well for other areas where we haven’t seen this level of consideration from the German government yet, like smart cities and the super-set of #iot. I hope we’ll see similar findings and action plans in those areas soon, too.

Netzpolitik13: Das Internet der Dinge: Rechte, Regulierung & Spannungsfelder

My slides from Das ist Netzpolitik (Berlin, 1 September 2017). Title: “Das Internet der Dinge: Rechte, Regulierung & Spannungsfelder”.

From hobbyist tinkering all the way to the smart city: the Internet of Things (#IoT) increasingly touches every area of our lives. But who decides what is allowed, what happens to our data, and whether it’s OK to look under the hood? IoT sits at the intersection of many fields of technology, governance, and regulation—and in doing so creates a whole range of tensions.

Due to technical issues with the video projection, my slides weren’t shown for the first few minutes. Apologies. On the plus side, the organizers had kindly put a waving cat on the podium for me.

It’s a rare talk in that I gave it in German, something I’m hardly used to these days.

In it, I argue that IoT poses a number of particular challenges we need to address (including the level of complexity and the blurred lines across disciplines and expertise; power dynamics; and transparency). I outline inherent tensions and propose a few approaches to tackling them, especially around increasing the transparency and legibility of IoT products.

I conclude with a call for Europe to actively take a global leadership role in the area of consumer and data protection, analogous to Silicon Valley’s (claimed/perceived) leadership in disruptive innovation and in funding/scaling digital products, and to Shenzhen’s leadership in hardware manufacturing.

Netzpolitik has an extensive write-up in German.

Update: Netzpolitik also recorded an interview with me: Regulierung und Datenschutz im Internet der Dinge.

ThingsCon Salon Berlin July videos are up!

On 14th July we had another ThingsCon Salon Berlin. You can learn more about upcoming ThingsCon events here.

Here are the presentations!

Gulraiz Khan

Gulraiz Khan (@gulraizkhan) is a transdisciplinary designer who works on civic engagement. As an urbanist, he is interested in developing grassroots engagement methods that can help communities thrive through political and environmental flux. He is currently working as a Lecturer in Communication & Design at Habib University in Karachi, Pakistan. Prior to this, he received an MFA in Transdisciplinary Design from Parsons The New School for Design in New York. He also serves as the Assistant Director of The Playground, the Centre for Transdisciplinarity, Design and Innovation at Habib University.

Peter Bihr

Peter Bihr (@peterbihr) gave a few remarks about the trip and was available for an informal chat about Shenzhen.

Screening of View Source: Shenzhen

We screened the brand-new video documentary about the ThingsCon trip to Shenzhen. Produced by The Incredible Machine with support from the Creative Industry Fund NL, it is the video counterpart to the written View Source: Shenzhen report we just published, and shows the journey The Incredible Machine went through when trying to build a smart bike lock in Shenzhen.

Hope to see you soon at a ThingsCon event near you!

Trust and expectations in IoT

One of the key challenges for Internet of Things (IoT) in the consumer space boils down to expectation management: For consumers it’s unreasonably hard to know what to expect from any given IoT product/service.

This is also why we’ve been investigating potentials and challenges of IoT labels and are currently running a qualitative online survey—please share your thoughts! The resulting report will be published later this year.

I think the quadrant of questions anyone should be able to answer, at least to a certain degree, looks somewhat like this (still in draft stage):


Image: “Trust and expectations in IoT” by The Waving Cat / Peter Bihr (available under CC BY)

Let’s go through the quadrants, counterclockwise, starting at the top left:

Does it do what I expect it to do?
This should be pretty straightforward for most products: Does the fitness tracker track my fitness? Does the connected fridge refrigerate? Etc.

Is the organization trustworthy?
This question is always a tough one, but it comes down to building, earning, and keeping the trust of your consumers and clients. This is traditionally the essence of brands.

Are the processes trustworthy?
The trickiest question, because internal processes are usually really hard, if not impossible, to interrogate from the outside. Companies could differentiate themselves in a positive way by being as transparent as possible.

Does it do anything I wouldn’t expect?
I believe this question is essential. Connected products often have features that may be unexpected to the layperson, sometimes because they are a technical requirement, sometimes because they are added later through a software update. Whatever the reason, an IoT device should never do anything its users don’t have a reason to expect it to. As a particularly toxic example: few users would reasonably expect a smart TV to be always listening and sharing data with a cloud service.

If these four bases are covered, I think that’s a good place to start.
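
Purely as an illustration, and not something from the report or the trustmark work itself: here’s a minimal sketch, assuming a hypothetical Python checklist, of how those four questions could be captured and checked for a given product. The TrustCheck name, its fields, and the smart-TV example values are all made up for illustration.

```python
# Illustrative sketch only: the four trust/expectation questions from the
# quadrant above, captured as a simple checklist. Names and example values
# are hypothetical and not part of the actual trustmark research.
from dataclasses import dataclass


@dataclass
class TrustCheck:
    does_what_expected: bool        # Does it do what I expect it to do?
    no_unexpected_behavior: bool    # Does it do anything I wouldn't expect?
    organization_trustworthy: bool  # Is the organization trustworthy?
    processes_trustworthy: bool     # Are the processes trustworthy?

    def covered(self) -> bool:
        """True if all four bases are covered -- a good place to start."""
        return all([
            self.does_what_expected,
            self.no_unexpected_behavior,
            self.organization_trustworthy,
            self.processes_trustworthy,
        ])


# Example: a hypothetical smart TV that listens and shares data unexpectedly.
smart_tv = TrustCheck(
    does_what_expected=True,
    no_unexpected_behavior=False,
    organization_trustworthy=True,
    processes_trustworthy=False,
)
print(smart_tv.covered())  # False -- it fails the expectation test
```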

Challenges for governance in the Internet of Things

Image by Paula Vermeulen via Unsplash

I’d like to share 3 short stories that demonstrate just a few of the challenges of governance for IoT.

1) In the fall of 2016, Facebook, Twitter, Netflix and other popular consumer websites were temporarily shut down in a so-called Distributed Denial of Service (DDoS) attack. This isn’t unusual in itself—it happens all the time at a smaller scale. What WAS unusual was the attack vector: for the first time, a large-scale DDoS attack was driven by IoT products, mainly cheap, unsecured, internet-connected CCTV cameras. Who suffers the consequences? Who’s responsible? Who’s liable?

2) As part of the European Digital Single Market, the EU just passed the General Data Protection Regulation, or GDPR for short. It is designed to enable individuals to better control their personal data. However, experts around the globe are scrambling to figure out how this applies to IoT: almost certainly, a lot of the data collection and personalization that’s part of consumer IoT products falls squarely under the GDPR. What will IoT-related services look like 5 years from now? Will they differ depending on where you are? On where your provider is based? On where you reside? Or will everything just stay the same?

3) In 2015, Mount Sinai Hospital in New York launched an interesting research project called Deep Patient. They applied artificial intelligence (AI) techniques—specifically, machine learning algorithms—to analyze their patient records for patterns. It turned out that these algorithms were extremely good at predicting certain medical conditions, much better than human doctors. But it wasn’t clear how the algorithms arrived at these predictions. Is it responsible to act on medical predictions if the doctors don’t know what they’re based on? Is it responsible not to? How do we deal with intelligence and data that we don’t understand? What if our fridges, cars, or smartphones knew better than we do what’s good for us?

These 3 short stories demonstrate how wide the range of questions is that we face in IoT. The width and depth of this range make questions of governance more than just a little tricky.
