
What’s long-term success? Outsized positive impact.


For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That’s both the boon and the bane of my work. On the one hand, it means I can contribute expertise surgically into larger contexts; on the other, it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations—they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon, and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and, to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan, specifically in Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.

Monthnotes for January 2018


January isn’t quite over, but since I’ll be traveling starting this weekend, I wanted to drop these #monthnotes now. A lot of time this month went into prepping an upcoming project which is likely to take up the majority of my time in 2018. More on that soon.

×

Capacity planning: This year my work capacity is slightly reduced since I want to make sure to give our new family member the face time he deserves. That said, this year’s capacity is largely accounted for, which is extra nice given it’s just January, and it’s for a thing I’m genuinely excited about. Still, I think it’s important to work on a few things in parallel because there’s always potential that unfolds from cross-pollination; so I’m up for a small number of not-huge projects in addition to what’s already going on, particularly in the first half of the year. Get in touch.

×

On Sunday, I’m off to San Francisco for a work week with the good folks at Mozilla, because reasons, and a number of meetings in the Bay Area. (Full disclosure: my partner works at Mozilla.) Last year I did some work with Mozilla and ThingsCon exploring the idea of a trustmark for IoT (our findings).

Image: commons (SDASM Archives)

Should you be in SF next week, ping me and we can see if we can manage a coffee.

×

IoT, trust & voice: More and more, I’m coming around to the idea that voice is the most important—or at least most imminent—manifestation of IoT when it comes to user data. Voice, and how it relates to trust, is what a lot of my work in 2018 will focus on.

×

User profiling in smart homes: Given my focus on voice & trust in IoT this year, I was very happy that Berlin tech & policy think tank Stiftung Neue Verantwortung invited me to a workshop on user profiling in smart homes. It was all under Chatham House rules, and I don’t want to dive into specifics at this point, but smart homes and voice assistants are worth a deep dive when it comes to trust—and trustworthiness—in IoT.

Connected homes and smart cities

Not least because (as I’ve been hammering home for a long time) the connected home and the smart city are the two areas that most clearly manifest the underlying tensions and issues around IoT at scale: connected homes, because the home was traditionally considered a private space (at least if you look at the last 100 years in the West), and embedded microphones in smart homes mean it isn’t anymore; and smart cities, because in public space there is no opt-out: whatever data is collected, processed, and acted on in public space impacts all citizens, whether they want it or not. These are fundamental changes with far-reaching consequences for policy, governance, and democracy.

×

Worth your time: A few pointers to articles and presentations I found worthwhile:

  • Kate Crawford’s talk on bias in AI training data is ace: The Trouble with Bias [YouTube].
  • TechCrunch has a solid top-level explainer of GDPR, Europe’s General Data Protection Regulation, which goes into effect in May this year. GDPR is being widely lauded in Europe (except by the usual suspects, like ad-land) and has, unsurprisingly, been criticized in Silicon Valley as disruptive regulation. (See what I did there?) So it came as a pleasant surprise to me that TechCrunch of all places finds GDPR to be a net positive. Worth 10 minutes of your time! [TechCrunch: WTF is GDPR?]
  • noyb.eu—My Privacy is none of your Business: Max Schrems, who became well-known in European privacy circles after winning privacy-related legal battles including one against Facebook and one that brought down the US/EU Safe Harbor Agreement, is launching a non-profit: They aim to enforce European privacy protection through collective enforcement, which is now an option because of GDPR. They’re fundraising for the org. The website looks very basic, but I’d say it’s a legit endeavor and certainly an interesting one.

×

Writing & thinking:

  • In How to build a responsible Internet of Things I lay out a few basic, top-level principles distilled from years of analyzing the IoT space—again with an eye on consumer trust.
  • On Business Models & Incentives: Some thoughts on how picking the wrong business model—and hence creating harmful incentives for an organization to potentially act against its own customers—is dangerous and can be avoided.
  • I’ve really been enjoying putting together my weekly newsletter. It’s a little more personal and interest-driven than this blog, but tackles similar issues around the interplay of tech & society. It’s called Connection Problem. You can sign up here.

I was also very happy that Kai Brach, founder of the excellent Offscreen magazine, kindly invited me to contribute to the next issue (out in April). The current one is also highly recommended!

×

Again, if you’d like to work with me in the upcoming months, please get in touch quickly so we can figure out how best to work together.

×

That’s it for January. See you in Feb!

How to build a responsible Internet of Things


Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: The Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. Users of connected devices today thus have a lot to worry about (since smart speakers and their built-in personal digital assistants are particularly popular at the moment, we’ll use those as an example):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or, phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today, and they face them constantly.

Trust and expectations in IoT. Image: Peter Bihr/The Waving Cat

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, lengthy terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not of the companies themselves, then certainly of the system. No user today feels empowered to negotiate a contractual relationship with a tech company on an equal footing—because they can’t.

Whenever a scandal blows up and creates damaging PR, the companies slowly backtrack, but in too many cases they were, legally speaking, within their rights: Nobody understood the contract, yet the abstract product language suggests a spirit of mutual goodwill between the product company and its users that is not honored by the letter of that contract.

So, short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letters of the contract match that spirit, not the other way round. Earning back users’ trust will not just make the ecosystem healthier and more robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended, or in interesting new ways? Do they understand the tradeoffs involved in using a product? These are all questions that basic but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t an exhaustive list, and even the most extensive checklist wouldn’t do any good here: Respect isn’t a list of actions, it’s a mindset applied to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely within the responsibility of the security team. Others are based on the legal arrangements around how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.

The issues at stake aren’t easy to solve. There are no silver bullets. There are grey areas that are fuzzy, complex and complicated.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing to safe storage and processing of data to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: The end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to update the software with security patches, or if the contract with the user doesn’t protect against data spills when a company is acquired or liquidated, then data that was safe for years can all of a sudden pose new risks.

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. Like prevention in healthcare, it is less attractive than quick surgery to repair the damage, but it is also more effective and cheaper in the long run. So hang in there, and users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But with the wrong business model, it will thrive not together with its customers but at their expense.

We see so much damage done because the wrong business models—and hence, the wrong incentives—drive and promote horrible decision-making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached a point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast of a time.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/ 

Interview with Netzpolitik.org: Regulierung und Datenschutz im Internet der Dinge


In September I spoke at Netzpolitik’s annual conference, Das ist Netzpolitik. While I was there, Netzpolitik.org also recorded an interview with me: “Regulierung und Datenschutz im Internet der Dinge” (Regulation and data protection in the Internet of Things).

A big thank you to Netzpolitik and Stefanie Talaska for the conversation!

Launching the ThingsCon Fellowship Program


Please note: This is cross-posted from the ThingsCon blog.

We’re happy to announce the ThingsCon Fellowship Program.

The ThingsCon Fellowship recognizes achievements and commitment that advance the ThingsCon mission of fostering the creation of a responsible and human-centric IoT generally, and support for the ThingsCon community specifically.

With the program, we aim to amplify the fellows’ work in this area and to promote knowledge transfer and networking between fellows and the larger ThingsCon network.

The first round of fellows for 2017/2018 consists of a small cohort of ThingsCon allies. These individuals have over the past years put tremendous effort into advancing and promoting the ThingsCon mission.

We are both humbled and proud to welcome these six outstanding individuals as the inaugural ThingsCon Fellows:

ThingsCon Fellows 2017-2018

  • Alexandra Deschamps-Sonsino
  • Ame Elliott
  • Dries de Roeck
  • Iohanna Nicenboim
  • Michelle Thorne
  • Ricardo Brito

Together with them we will develop and evolve the ThingsCon Fellowship program through a collaborative process of mutual exchange and shared learning.

Learn more about the program and the fellows on thingscon.com/fellowship.

Monthnotes for October 2017


October wasn’t just productive. This month our work got a ton of attention across different projects—which is great as it leads to great conversations and is indicative of larger impact. Speaking of impact, over on ThingsCon we realized a long-held dream by announcing the ThingsCon Fellowship program. This and more below. Enjoy!

If you’d like to explore working together, please get in touch.

Lots of conversations & media attention

It’s always great to see your own work get attention. After all, this is how impact starts.

There was some excellent Twitter action when, in one day, Mozilla’s main account tweeted our trustmark report and Medium featured my thoughts on Google’s push into AI-powered services, which had already been going somewhat viral. This started a number of fantastic conversations. Then VentureBeat asked to cross-post my recent article on Germany’s need to get ready for AI. A nice hat-trick indeed.

CNN screenshot

As if this wasn’t enough, in an op-ed on CNN.com on the future of IoT, Mozilla CEO Mark Surman and Michelle Thorne kindly gave not one but two shout-outs to ThingsCon, too!

stories connecting dots

Also, remember when back in July the smart & ever-lovely Markus Andrezak interviewed me for his podcast Stories Connecting Dots? The second part of our interview just went live, and I’m honored to be opening the podcast’s second season discussing Shenzhen’s IoT ecosystem.

ThingsCon Fellowship Program

I could not be happier to announce the ThingsCon Fellowship Program. It’s been a long-held dream of mine to start this, and I can hardly believe it’s finally happening.

The ThingsCon Fellowship recognizes achievements and commitment that advance the ThingsCon mission of fostering the creation of a responsible and human-centric IoT generally, and support for the ThingsCon community specifically. With the program, we aim to amplify the fellows’ work in this area and to promote knowledge transfer and networking between fellows and the larger ThingsCon network.

The first round of fellows for 2017/2018 consists of a small cohort of ThingsCon allies. These individuals have over the past years put tremendous effort into advancing and promoting the ThingsCon mission. We are both humbled and proud to welcome these six outstanding individuals as the inaugural ThingsCon Fellows:

ThingsCon Fellows 2017-2018

  • Alexandra Deschamps-Sonsino
  • Ame Elliott
  • Dries de Roeck
  • Iohanna Nicenboim
  • Michelle Thorne
  • Ricardo Brito

Thinking, writing, speaking

At the invitation of Prof. Sven Engesser at Technical University Dresden, I had the pleasure of presenting to the master’s students of applied media studies. Here are my slides (in German):

It’s great to see communication science and media studies tackle IoT and human-computer interfaces as a field of research. I was impressed with the level of thinking and the questions from the group. The discussion was lively and on point, and there were none of the obvious questions. Instead, the students probed the pretty complex issues surrounding IoT, AI, and algorithmic decision-making in the context of communications and communication science. It’s part of the master’s program, and of Prof. Engesser’s new role as professor there, to also set up a lab to study how smart home assistants and other voice-enabled connected devices impact the way we communicate at home—both with other people and with machines. It’ll be interesting to watch the lab’s progress and findings, and I hope we’ll find ways to collaborate on some of these questions.

What else?

I was more than a little pleased to learn that our recent work on a trustmark for IoT that we’ve been doing with Mozilla (see thewavingcat.com/iot-trustmark) is continuing to unfold its impact: I had heard whispers before, and now heard confirmation, that some core recommendations from our report found their way into a large country’s national IoT policy. It’s not yet published, but will be soon.

What’s next?

A project with our office neighbors, the lovely Syspons team, is kicking off. It’s about increasing the impact of health education in South Africa, which I’m quite excited about.

In the next few weeks we’ll also decide what the next steps are for our IoT Trustmark efforts.

On 9 November, I’ll be at Simply Secure’s conference Underexposed (program). My talk there is called The Internet of Sneaky Things. I’ll be exploring how IoT is at a crossroads: we can either let it become the Internet of Sneaky Things, or we can make it better, more human-centric, and more responsible.

Later this month I’ll also be speaking at Good School, a Hamburg-based executive leadership program, where I’ll be giving a glimpse or two of China and its digital landscape.

And last but not least, a personal note (which is rare on this blog): We’re expecting a baby within the next few weeks, which of course makes me very happy. My tweets and monthnotes might temporarily become a little more irregular (or not), and/or time-shifted to odd late-night postings (or not). Who knows? We’ll see! Next year I’ll likely take a few months off to stay home with the little one. But until then, everything here will continue as normal.

In the meantime, please get in touch if you’d like to discuss new projects.

New report: A Trustmark for IoT


Summary: For Mozilla, we explored the potentials and challenges of a trustmark for the Internet of Things (IoT). That research is now publicly available. You can find more background and all the relevant links at thewavingcat.com/iot-trustmark

If you follow our work, both over at ThingsCon and here at The Waving Cat, you know that we see lots of potential for the Internet of Things (IoT) to create value and improve lives, but also some serious challenges. One of the core challenges is that it’s hard for consumers to figure out which IoT products and services are good—which ones are designed responsibly, which ones deserve their trust. After all, too often IoT devices are essentially black boxes that are hard to interrogate and that might change with the next over-the-air software update.

So, what to do? One concept I’ve grown increasingly fond of is consumer labeling as we know it from food, textiles, and other areas. But for IoT, that’s not simple. The networked, data-driven, and dynamic nature of IoT means that the complexity is high, and even seemingly simple questions can lead to surprisingly complex answers. Still, I think there’s enormous potential there to make a huge impact.

I was very happy when Mozilla picked up on that idea and commissioned us to explore the potential of consumer labels. Mozilla just made that report publicly available:

Read the report: “A Trustmark for IoT” (PDF, 93 pages)

I’m excited to see where Mozilla might take the IoT trustmark and hope we can continue to explore this topic.

Increasingly, in order to have agency over their lives, users need to be able to make informed decisions about the IoT devices they invite into their lives. A trustmark for IoT can significantly empower users to do just that.

For more background, the executive summary, and all the relevant links, head on over to thewavingcat.com/iot-trustmark.

Also, I’d like to extend a big thank you! to the experts whose insights contributed to this report through conversations online and offline, in public and in private:

Alasdair Allan (freelance consultant and author), Alexandra Deschamps-Sonsino (Designswarm, IoT London, #iotmark), Ame Elliott (Simply Secure), Boris Adryan (Zühlke Engineering), Claire Rowland (UX designer and author), David Ascher, David Li (Shenzhen Open Innovation Lab), Dries de Roeck (Studio Dott), Emma Lilliestam (security researcher), Geoffrey MacDougall (Consumer Reports), Gérald Santucci (European Commission), Holly Robbins (Just Things Foundation), Iskander Smit (info.nl, Just Things Foundation), Jan-Peter Kleinhans (Stiftung Neue Verantwortung), Jason Schultz (NYU), Jeff Katz (Geeny), Jon Rogers (Mozilla Open IoT Studio), Laura James (Doteveryone, Digital Life Collective), Malavika Jayaram (Berkman Klein Center, Digital Asia Hub), Marcel Schouwenaar (Just Things Foundation, The Incredible Machine), Matt Biddulph (Thington), Michelle Thorne (Mozilla Open IoT Studio), Max Krüger (ThingsCon), Ronaldo Lemos (ITS Rio), Rosie Burbidge (Fox Williams), Simon Höher (ThingsCon), Solana Larsen (Mozilla), Stefan Ferber (Bosch Software Innovation), Thomas Amberg (Yaler), Ugo Vallauri (The Restart Project), Usman Haque (Thingful, #iotmark). Also, and especially, I’d like to thank the larger ThingsCon and London #iotmark communities for sharing their insights.