
How to build a responsible Internet of Things


Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: the Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives, whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. And thus users of connected devices today have a lot to worry about (because smart speakers and their built-in personal digital assistants are particularly popular at the moment, we’ll use those as an example):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today, and they face them constantly.

Trust and expectations in IoT. Image: Peter Bihr/The Waving Cat

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, lengthy terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not in the companies themselves then certainly in the system. No user today feels empowered to negotiate a contractual relationship with a tech company as an equal—because they can’t.

Whenever some scandal blows up and creates damaging PR, the companies slowly backtrack. Yet in too many cases they were, legally speaking, within their rights: nobody understood the contract, while the abstract product language suggested a spirit of mutual goodwill between the product company and its users that the letter of the contract never honored.

So short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letters of the contract match that spirit, not the other way round. Earning back the users’ trust will not just make the ecosystem healthier and more robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended, or in interesting new ways? Do they understand the tradeoffs involved in using the product? These are all questions that basic but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: Offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t an exhaustive list, and even the most extensive checklist wouldn’t do any good in this case: Respect isn’t a list of actions, it’s a mindset to apply to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely within the responsibility of the security team. Others depend on the legal arrangements around how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.
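To make that division of responsibility concrete, here is a minimal sketch in Python. The mapping is purely illustrative, my own paraphrase of the list above rather than any established taxonomy:

```python
# Illustrative only: each threat to user data (from the list above)
# maps to a different layer of the organization. The point: no single
# team can "own" data protection end to end.
THREAT_RESPONSIBILITY = {
    "criminal hacking": "security team",
    "software bugs that leak data": "security team",
    "unwarranted government surveillance": "legal / terms of service",
    "commercial third parties": "legal / terms of service",
    "the monetization team": "business incentives",
    "certain business models": "business incentives",
}

def who_must_address(threat: str) -> str:
    """Return the responsible layer, flagging anything unassigned."""
    return THREAT_RESPONSIBILITY.get(threat, "unassigned (a risk in itself)")
```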

The issues at stake aren’t easy to solve. There are no silver bullets. There are plenty of grey areas, fuzzy and complex.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing, to safe storage and processing of data, to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: the end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to ship security patches, or if the contract with the user doesn’t protect against data spills when the company is acquired or liquidated, then data that was safe for years can suddenly pose new risks.
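As one concrete illustration of security by design across the lifecycle, consider the update path: for as long as a product is supported, the device should refuse any firmware image that isn’t signed by the vendor. The sketch below is a minimal, hypothetical example using Ed25519 signatures via the Python cryptography library; the key handling and flashing step are placeholders, not any real product’s mechanism.

```python
# Hedged sketch: verify a vendor signature before applying a firmware
# update. Key provisioning and flashing are hypothetical placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def apply_update(firmware: bytes, signature: bytes, vendor_pubkey) -> bool:
    """Install the firmware image only if the vendor signature verifies."""
    try:
        vendor_pubkey.verify(signature, firmware)
    except InvalidSignature:
        return False  # refuse unsigned or tampered images outright
    # flash(firmware)  # device-specific, out of scope for this sketch
    return True

# Demo: a freshly generated keypair stands in for the vendor's real
# signing infrastructure (which would live in a hardware security module).
signing_key = Ed25519PrivateKey.generate()
vendor_pubkey = signing_key.public_key()  # baked into the device at manufacture

image = b"hypothetical firmware blob"
good_signature = signing_key.sign(image)

assert apply_update(image, good_signature, vendor_pubkey)
assert not apply_update(image + b"tampered", good_signature, vendor_pubkey)
```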

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. Prevention in healthcare is less attractive than quick surgery to repair the damage, but it is also more effective and cheaper in the long run; the same holds for security. So hang in there, and the users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But without the right business model, it’ll thrive not together with its customers but at their expense.

We see so much damage done because wrong business models—and hence, wrong incentives—promote horrible decision making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached the point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/ 

Facebook, Twitter, Google are a new type of media platform, and new rules apply


When Congress questioned representatives of Facebook, Google and Twitter, it became official: We finally need to settle a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.

The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)

Platform or media?

Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Should they be held to the standards (and regulation) of a “neutral platform” or of a “content creator”? Are they dumb infrastructure or pillars of democracy?

These are big questions to ask, and I don’t envy the companies for their position in this one. As a neutral platform they get a large degree of freedom, but also have to answer for the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but can’t claim the absolutist free-speech position they like to take. You can’t both be neutral and “bring humanity together” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:

the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly

I agree this would be easier. (I’m not so sure that the employee preference is the driving force, but that’s another debate, and it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet legal requirements. Where are they willing to draw the line? Google famously pulled out of the Chinese market a few years ago, presumably because it didn’t want to meet the government’s censorship requirements. This was a principled move, and presumably not an easy one given the size of the market. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting or on government surveillance?

For GAFAM, the position has traditionally been clear-cut and quite straightforward, which we can still (kind of, sort of) see in the Congressional hearing:

“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)

Once more:

[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)

Now that’s interesting, because while they claim to be “neutral” free speech companies, Facebook and the others have of course always filtered content by various means (from their terms of service to community guidelines) and shaped the attention flow (who sees what, and when).

This aspect isn’t discussed much, but it is worth noting nonetheless: how Facebook and other tech firms deal with content has been based, to a relatively large degree, on United States legal and cultural standards. That makes sense given that they’re US companies, but not much sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country, take pictures of nudity (largely not OK in the US, largely OK in Germany) versus positively referencing the Third Reich (largely illegal in Germany, largely legal in the US).

Big tech platforms are a new type of media platform

Here’s the thing: These big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose and frame and shape content and traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.

(As a personal, rambling aside: Given we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had at least sorted out our basic understanding of social media as a new hybrid by 2010. Sigh.)

We might be able to apply existing regulatory—and, equally important, analytical—frameworks. Or maybe we can find ways to apply existing ones in new ways. But, and I say this expressly without judgement, these platforms operate at a scale and dynamism we haven’t seen before. They are of a new quality; they display combinations of qualities and characteristics we don’t have much experience with. Yet on a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). That hasn’t worked, and it will continue not to work, because it makes little sense.

That’s why it’s important to take a breath and figure out how best to understand the implications, and how to shape the tech, the organizations, and the frameworks within which they operate.

It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others, and in those cases we need to adjust the frameworks, the organizations, or both. To align the analytical and regulatory frameworks with realities, or vice versa.

This isn’t an us-versus-them situation, as many parties insinuate: It’s not politics versus tech, as actors on both the governmental and the tech side sometimes seem to think. It’s not tech versus civil society, as some activists claim. It’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do SV firms a world of good. This is a question of how we want to live and govern our lives as they are impacted by the flow of information.

It’s going to be tricky to figure this out as there are many nation states involved, and some supra-national actors, and large global commercial actors and many other, smaller but equally important players. It’s a messy mix of stakeholders and interests.

But one thing I can promise: The solution won’t be just technical, just legal, or just cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know that the status quo isn’t working for too many people, and we can shape the future so that soon, it’ll work for many more people, maybe for all.

Please note that this is cross-posted from Medium. Also, for full transparency, we work occasionally with Google.

New report: A Trustmark for IoT


Summary: For Mozilla, we explored the potentials and challenges of a trustmark for the Internet of Things (IoT). That research is now publicly available. You can find more background and all the relevant links at thewavingcat.com/iot-trustmark

If you follow our work both over at ThingsCon and here at The Waving Cat, you know that we see lots of potential for the Internet of Things (IoT) to create value and improve lives, but also some serious challenges. One of the core challenges is that it’s hard for consumers to figure out which IoT products and services are good—which ones are designed responsibly, which ones deserve their trust. After all, too often IoT devices are essentially black boxes that are hard to interrogate and that might change with the next over-the-air software update.

So, what to do? One concept I’ve grown increasingly fond of is consumer labeling as we know it from food, textiles, and other areas. But for IoT, that’s not simple. The networked, data-driven, and dynamic nature of IoT means that the complexity is high, and even seemingly simple questions can lead to surprisingly complex answers. Still, I think there’s real potential there to make a huge impact.
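To give a feel for what such a label could contain, here is a purely hypothetical, machine-readable sketch. Every field name is my own illustration; neither the report nor Mozilla prescribes this format:

```python
# Hypothetical IoT consumer label, in the spirit of food labels.
# All field names and values are assumptions for illustration only.
trustmark_label = {
    "product": "Example Smart Speaker",
    "data_collected": ["audio after wake word", "usage statistics"],
    "data_shared_with": ["manufacturer cloud service"],
    "processing": "cloud",  # as opposed to "on device"
    "security_updates_promised_until": "2022-12-31",
    "data_deletion_on_request": True,
    "privacy_policy_url": "https://example.com/privacy",
    "independently_audited": False,
}
```

The hard part, of course, is that unlike a food product, a connected device can change with the next over-the-air update, so any such label would need a re-verification process behind it.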

I was very happy when Mozilla picked up on that idea and commissioned us to explore the potential of consumer labels. Mozilla just made that report publicly available:

Read the report: “A Trustmark for IoT” (PDF, 93 pages)

I’m excited to see where Mozilla might take the IoT trustmark and hope we can continue to explore this topic.

Increasingly, in order to have agency over their lives, users need to be able to make informed decisions about the IoT devices they invite into their lives. A trustmark for IoT can significantly empower users to do just that.

For more background, the executive summary, and all the relevant links, head on over to thewavingcat.com/iot-trustmark.

Also, I’d like to extend a big thank you to the experts whose insights contributed to this report through conversations online and offline, in public and in private:

Alaisdair Allan (freelance consultant and author), Alexandra Deschamps-Sonsino (Designswarm, IoT London, #iotmark), Ame Elliott (Simply Secure), Boris Adryan (Zühlke Engineering), Claire Rowland (UX designer and author), David Ascher, David Li (Shenzhen Open Innovation Lab), Dries de Roeck (Studio Dott), Emma Lilliestam (Security researcher), Geoffrey MacDougall (Consumer Reports), Gérald Santucci (European Commission), Holly Robbins (Just Things Foundation), Iskander Smit (info.nl, Just Things Foundation), Jan-Peter Kleinhans (Stiftung Neue Verantwortung), Jason Schultz (NYU), Jeff Katz (Geeny), Jon Rogers (Mozilla Open IoT Studio), Laura James (Doteveryone, Digital Life Collective), Malavika Jayaram (Berkman Klein Center, Digital Asia Hub), Marcel Schouwenaar (Just Things Foundation, The Incredible Machine), Matt Biddulph (Thington), Michelle Thorne (Mozilla Open IoT Studio), Max Krüger (ThingsCon), Ronaldo Lemos (ITS Rio), Rosie Burbidge (Fox Williams), Simon Höher (ThingsCon), Solana Larsen (Mozilla), Stefan Ferber (Bosch Software Innovation), Thomas Amberg (Yaler), Ugo Vallauri (The Restart Project), Usman Haque (Thingful, #iotmark). Also and especially I’d like to thank the larger ThingsCon and London #iotmark communities for sharing their insights.

German federal government adopts an action plan on automated driving


For a while we’ve been debating the ethics of algorithms, especially in the context of autonomous vehicles: What should happen, when something goes wrong? Who/what does the robo car protect? Who’s liable for damage if a crash occurs?

Germany, which has a strategy in place to become not just a world-leading manufacturer of autonomous vehicles but also a world-leading consumer market, just announced how to deal with these questions.

Based on the findings of an ethics commission, Germany’s federal government just adopted an action plan on automated driving (here quoted in full):

The Ethics Commission’s report comprises 20 propositions. The key elements are:

  • Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
  • Damage to property must take precedence over personal injury. In hazardous situations, the protection of human life must always have top priority.
  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
  • It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
  • Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).
  • The Federal Ministry of Transport and Digital Infrastructure’s Ethics Commission comprised 14 academics and experts from the disciplines of ethics, law and technology. Among these were transport experts, legal experts, information scientists, engineers, philosophers, theologians, consumer protection representatives as well as representatives of associations and companies.

///

Reading this, I have to say I’m relieved and impressed: These guidelines seem entirely reasonable, common sense, and practical. It’s especially good to see the non-discrimination clause and the principle of data sovereignty included. Well done!

This bodes well for other areas where we haven’t seen this level of consideration from the German government yet, like smart cities and the super-set of #iot. I hope we’ll see similar findings and action plans in those areas soon, too.

Netzpolitik13: Das Internet der Dinge: Rechte, Regulierung & Spannungsfelder


My slides from Das ist Netzpolitik (Berlin, 1 September 2017). Title: “Das Internet der Dinge: Rechte, Regulierung & Spannungsfelder” (“The Internet of Things: rights, regulation & tensions”).

From hobbyist tinkering all the way to the smart city: the Internet of Things (#IoT) increasingly touches all areas of our lives. But who decides what is allowed, what happens to our data, and whether it’s OK to look under the hood? IoT sits at the intersection of many fields of technology, governance, and regulation, and in doing so creates a whole range of tensions.

Due to technical issues with the video projection, my slides weren’t shown for the first few minutes. Apologies. On the plus side, the organizers had kindly put a waving cat on the podium for me.

It’s a rare talk in that I gave it in German, something I rarely do these days.

In it, I argue that IoT poses a number of particular challenges that we need to address (including the level of complexity and blurred lines across disciplines and expertise; power dynamics; and transparency). I outline inherent tensions and propose a few approaches to tackling them, especially around increasing the transparency and legibility of IoT products.

I conclude with a call for Europe to actively take a global leadership role in the area of consumer and data protection, analogous to Silicon Valley’s (claimed/perceived) leadership in disruptive innovation and the funding/scaling of digital products, and to Shenzhen’s leadership in hardware manufacturing.

Netzpolitik has an extensive write-up in German.

Update: Netzpolitik also recorded an interview with me: Regulierung und Datenschutz im Internet der Dinge.

ThingsCon Salon Berlin July videos are up!


On 14th July we had another ThingsCon Salon Berlin. You can learn more about upcoming ThingsCon events here.

Here are the presentations!

Gulraiz Khan

Gulraiz Khan (@gulraizkhan) is a transdisciplinary designer who works on civic engagement. As an urbanist, he is interested in developing grassroots engagement methods that can help communities thrive through political and environmental flux. He is currently working as a Lecturer in Communication & Design at Habib University in Karachi, Pakistan. Prior to this, he received an MFA in Transdisciplinary Design from Parsons The New School for Design in New York. He also serves as the Assistant Director of The Playground, the Centre for Transdisciplinarity, Design and Innovation at Habib University.

Peter Bihr

Peter Bihr (@peterbihr) gave a few remarks about the trip and was available for an informal chat about Shenzhen.

Screening of View Source: Shenzhen

We screened the brand new video documentary about the ThingsCon trip to Shenzhen. Produced by The Incredible Machine with support from the Creative Industry Fund NL, it is the video counterpart to the written View Source: Shenzhen report we just published, and shows the journey The Incredible Machine went through while trying to build a smart bike lock in Shenzhen.

Hope to see you soon at a ThingsCon event near you!

Trust and expectations in IoT


One of the key challenges for Internet of Things (IoT) in the consumer space boils down to expectation management: For consumers it’s unreasonably hard to know what to expect from any given IoT product/service.

This is also why we’ve been investigating potentials and challenges of IoT labels and are currently running a qualitative online survey—please share your thoughts! The resulting report will be published later this year.

I think the quadrant of questions that anyone should be able to answer, at least to a certain degree, looks somewhat like this (still in draft stage):


Trust and expectations in IoT, by The Waving Cat/Peter Bihr (image available under CC-BY)

Let’s go through the quadrants, counterclockwise starting at the top left:

Does it do what I expect it to do?
This should be pretty straightforward for most products: Does the fitness tracker track my fitness? Does the connected fridge refrigerate? Etc.

Is the organization trustworthy?
This question is always a tough one, but it comes down to building, earning, and keeping the trust of your consumers and clients. This is traditionally the essence of brands.

Are the processes trustworthy?
The trickiest question, because internal processes are usually really hard, if not impossible, to interrogate. Companies could differentiate themselves in a positive way by being as transparent as possible.

Does it do anything I wouldn’t expect?
I believe this question is essential. Connected products often have features that may be unexpected to the layperson, sometimes because they are a technical requirement, sometimes because they are added later through a software update. Whatever the reason, an IoT device should never do anything its users have no reason to expect it to do. To name a particularly toxic example: it seems unreasonable to expect that a smart TV would be always listening and sharing data with a cloud service.

If these four bases are covered, I think that’s a good place to start.
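For illustration, the quadrant reduces to a checklist along these lines (a rough sketch only; in practice, each answer is a matter of degree and research, not a boolean):

```python
# The trust quadrant as a minimal checklist. Illustrative only: real
# answers are matters of degree, not booleans.
TRUST_QUESTIONS = (
    "Does it do what I expect it to do?",
    "Does it do anything I wouldn't expect?",
    "Is the organization trustworthy?",
    "Are the processes trustworthy?",
)

def covers_the_bases(answers: dict) -> bool:
    """answers maps each question to True if it was answered favorably
    (e.g. "no unexpected behavior" counts as favorable)."""
    return all(answers.get(q, False) for q in TRUST_QUESTIONS)
```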