
What’s long-term success? Outsized positive impact.


For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That’s both the boon and the bane of my work. On one hand, it means I can contribute expertise surgically into larger contexts; on the other, it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations—they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan, concretely in Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.

Monthnotes for January 2018


January isn’t quite over, but since I’ll be traveling starting this weekend, I wanted to drop these #monthnotes now. A lot of time this month went into prepping an upcoming project which is likely to take up the majority of my time in 2018. More on that soon.

×

Capacity planning: This year my work capacity is slightly reduced since I want to make sure to give our new family member the face time he deserves. That said, this year’s capacity is largely accounted for, which is extra nice given it’s just January, and it’s for a thing I’m genuinely excited about. Still, I think it’s important to work on a few things in parallel because there’s always potential that unfolds from cross-pollination; so I’m up for a small number of not-huge projects in addition to what’s already going on, particularly in the first half of the year. Get in touch.

×

On Sunday, I’m off to San Francisco for a work week with the good folks at Mozilla (because reasons) and a number of meetings in the Bay Area. (Full disclosure: my partner works at Mozilla.) Last year I did some work with Mozilla and ThingsCon exploring the idea of a trustmark for IoT (our findings).

Image: commons (SDASM Archives)

Should you be in SF next week, ping me and we can see if we can manage a coffee.

×

IoT, trust & voice: More and more, I’m coming around to the idea that voice is the most important—or at least the most imminent—manifestation of IoT when it comes to user data. Voice, and how it relates to trust, is what a lot of my work will focus on in 2018.

×

User profiling in smart homes: Given my focus on voice & trust in IoT this year, I was very happy that Berlin tech & policy think tank Stiftung Neue Verantwortung invited me to a workshop on user profiling in smart homes. It was all under Chatham House rules, and I don’t want to dive into specifics at this point, but smart homes and voice assistants are worth a deep dive when it comes to trust—and trustworthiness—in IoT.

Connected homes and smart cities

Not least because (as I’ve been hammering home for a long time) the connected home and the smart city are two areas that most clearly manifest a lot of the underlying tensions and issues around IoT at scale: Connected homes, because the home was traditionally considered a private space (at least if you look at the last 100 years in the West), and embedded microphones mean that it no longer is. And smart cities, because in public space there is no opt-out: Whatever data is collected, processed, and acted on in public space impacts all citizens, whether they want it or not. These are fundamental changes with far-reaching consequences for policy, governance, and democracy.

×

Worth your time: A few pointers to articles and presentations I found worthwhile:

  • Kate Crawford’s talk on bias in AI training data is ace: The Trouble with Bias [YouTube].
  • TechCrunch has a top-level explainer of GDPR, Europe’s General Data Protection Regulation, which goes into effect in May this year. GDPR is being widely lauded in Europe (except by the usual suspects, like ad-land) and has, unsurprisingly, been criticized in Silicon Valley as disruptive regulation. (See what I did there?) So it came as a pleasant surprise to me that TechCrunch of all places finds GDPR to be a net positive. Worth 10 minutes of your time! [TechCrunch: WTF is GDPR?]
  • noyb.eu—My Privacy is none of your Business: Max Schrems, who became well known in European privacy circles after winning privacy-related legal battles, including one against Facebook and one that brought down the US/EU Safe Harbor Agreement, is launching a non-profit: it aims to enforce European privacy protection through collective enforcement, which GDPR now makes possible. They’re fundraising for the org. The website looks very basic, but I’d say it’s a legit endeavor and certainly an interesting one.

×

Writing & thinking:

  • In How to build a responsible Internet of Things I lay out a few basic, top-level principles distilled from years of analyzing the IoT space—again with an eye on consumer trust.
  • On Business Models & Incentives: Some thoughts on how picking the wrong business model—and hence creating harmful incentives for an organization to potentially act against its own customers—is dangerous and can be avoided.
  • I’ve really been enjoying putting together my weekly newsletter. It’s a little more personal and interest-driven than this blog, but tackles similar issues around the interplay of tech & society. It’s called Connection Problem. You can sign up here.

I was also very happy that Kai Brach, founder of the excellent Offscreen magazine, kindly invited me to contribute to the next issue (out in April). The current one is also highly recommended!

×

Again, if you’d like to work with me in the upcoming months, please get in touch quickly so we can figure out how best to work together.

×

That’s it for January. See you in Feb!

How to build a responsible Internet of Things


Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: The Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. And thus users of connected devices today have a lot to worry about (we’ll use smart speakers and their built-in personal digital assistants as an example, since they’re particularly popular at the moment):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or, phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today—and they face them constantly.

Trust and expectations in IoT (Image: Peter Bihr/The Waving Cat)

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, lengthy terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not in the companies themselves, then certainly in the system. No user today feels empowered to negotiate a contractual relationship with a tech company on an equal footing—because they can’t.

Whenever some scandal blows up and creates damaging PR, the companies slowly backtrack, but in too many cases they were, legally speaking, within their rights: Nobody understood the contract, yet the abstract product language suggests a certain spirit of mutual goodwill between the product company and its users, a spirit that is not honored by the letter of that contract.

So short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letters of the contract match that spirit, not the other way round. Earning back the users’ trust will not just make the ecosystem more healthy and robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended or in interesting new ways? Do they understand the tradeoffs involved in using a product? These are all questions that basic, but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: Offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t a conclusive list, and even the most extensive checklist wouldn’t do any good in this case: Respect isn’t a list of actions, it’s a mindset to apply to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely into the responsibility of the security team. Others are based on the legal arrangements around how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.

The issues at stake aren’t easy to solve. There are no silver bullets. There are grey areas that are fuzzy, complex and complicated.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing to safe storage and processing of data to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: The end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to update the software with security patches, or if the contract with the user doesn’t protect against data spills when the company is acquired or liquidated, then data that was safe for years can all of a sudden pose new risks.

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. It’s much like prevention in healthcare: less attractive than quick surgery to repair the damage, but more effective and cheaper in the long run. So hang in there, and the users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But with the wrong business model, it’ll thrive not together with its customers but at their expense.

We see so much damage done because wrong business models—and hence, wrong incentives—drive and promote horrible decision making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached a point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast of a time.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/ 

On Business Models & Incentives


We’ve been discussing ethics & responsibility in IoT specifically, and in business more generally, a lot lately. This seems more relevant than ever today, simply because we see so much damage done by wrong business models—and hence, wrong incentives—driving and promoting horrible decision-making.

One blatantly obvious example is Facebook and its focus on user engagement. I’d like to make clear that I pick Facebook simply because it is the best-known example of an industry-wide trend.

Advertisers have been sold on “engagement” as a metric ever since the web made it possible to measure user behavior (i.e. what used to be called “Web 2.0” and is now “social media”). Before that (the early web), it was various flavors of page impressions as a proxy for reach. Before that (print, TV), it was calculated or assumed reach based on sampling and the size of print runs.

It’s important to keep in mind that these metrics have changed over time, and can change and be changed any time. They aren’t a divine hand-down, nor a constant in the world. They are what we, as an industry and society, make them.

Now, for a few years advertisers have been sold on, and have been overall quite happy with, measuring their ads’ efficiency and effectiveness through engagement. Engagement means how many people didn’t just see their ads, but interacted (“engaged”) with them in one way or another: typically by clicking on them, sharing them on social media or via email, and the like. It’s a strong proxy for attention, which is what advertisers are really after: They want potential customers to notice their messages. It’s hard to argue with that; it’s their job to make sure people notice their ads.
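To make the difference between the two proxies concrete, here is a minimal sketch in Python. The events, field names, and the simple rate calculation are all made up for illustration; they don’t reflect any real ad platform or its API.

```python
# Illustrative sketch only: hypothetical events and field names.
# Impressions count who saw an ad (the older reach proxy);
# engagement counts who interacted with it (the newer proxy).

ad_events = [
    {"user": "a", "saw_ad": True, "clicked": True,  "shared": False},
    {"user": "b", "saw_ad": True, "clicked": False, "shared": False},
    {"user": "c", "saw_ad": True, "clicked": False, "shared": True},
    {"user": "d", "saw_ad": True, "clicked": False, "shared": False},
]

impressions = sum(1 for e in ad_events if e["saw_ad"])
engaged = sum(1 for e in ad_events if e["clicked"] or e["shared"])
engagement_rate = engaged / impressions if impressions else 0.0

print(f"{impressions} impressions, {engaged} engaged, rate {engagement_rate:.0%}")
```

The point isn’t the arithmetic, which is trivial; it’s that engagement only goes up if the platform can get people to act, which is exactly the incentive problem discussed below.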

That said, the focus on engagement was driven forcefully by the platforms that profit from selling online ads, as a means to differentiate themselves from print and TV media, as well as from the online offerings of traditionally print/TV-based media. “Look here, we can give you much more concrete numbers to measure how well your ads work,” they said. And they were, by and large, not wrong.

But.

The business model based on engagement turned out to be horrible. Damaging. Destructive.

This focus on engagement means that all the incentives of the business are to get people to pay more attention to advertisements, at the expense of everything else. Incentivizing engagement means that the more you can learn about a user, by any means, the better positioned you are to get them to pay attention to your ads.

This is how we ended up with a Web that spies on us, no matter where we go. How we ended up with websites that read us more than we read them. With clickbait, “super cookies”, and fake news. Every one of these techniques is a means to drive up engagement. But at what cost?

I truly believe you can’t discuss fake news, the erosion of democracy, online harassment, and populism without first discussing online surveillance (aka “ad-tech”, or “surveillance capitalism”).

Business models, and the behaviors they incentivize, matter. Facebook and many other online advertisement platforms picked horrible incentives, and we have all been paying the price for it. It’s killing the Web. It’s eroding our privacy, the exchange of ideas, and democracy. Because where our communications channels spy on us, and the worst and most troll-ish (“most engaging”) content floats to the top thanks to ill-advised and badly checked algorithmic decision-making, we can no longer have discussions in public, or even in the spaces and channels that appear to be private.

It doesn’t have to be that way. We can choose our own business models, and hence incentives.

For example, over at ThingsCon we were always wary of relying too much on sponsorship, because it adds another stakeholder (or client) you need to accommodate beyond participants and speakers. We mostly finance all ThingsCon events through ticket sales (even if “financing” is a big word; everything is mostly done by our own volunteer work). Our research is either done entirely in-house out of interest or occasionally as a kind of “researcher-for-hire” commission. We subsidize ThingsCon a lot through our other work. Does that mean we lose some quick cash? Absolutely. Do we regret it? Not in the very least. It allows a certain clarity of mission that wouldn’t otherwise be possible. But I admit it’s a trade-off.

(A note for the event organizers out there: Most of the sponsors we ended up taking on were more than happy to go with food sponsoring, a ticket package, or subsidizing tickets for underrepresented groups—all entirely compatible with participants’ needs.)

If we want to build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—we need to pick our business models and incentives wisely.

Monthnotes for December 2017


December was a slow month in terms of work: We had a baby and I took a few weeks off, using what little time was left to tie up some loose ends and to make sure the lights were still on. In January, I’ll be back full time, digging into some nice, big, juicy research questions.

My capacity planning for 2018 is in full swing. If you’d like to work with me in the upcoming months, please get in touch quickly.

Media

SPIEGEL called to talk about insecure IoT devices. We chatted about the state of the IoT ecosystem, externalized costs, and consumer trust. In the end, a short quote about cheaply made, and hence insecure, connected gadgets made the cut. The whole article is available online (in German) here: SPIEGEL: Internet der Dinge: Ist der Ruf erst ruiniert, vernetzt es sich ganz ungeniert.

Thinking & writing

Just two quick links to blog posts I wrote:

  • The key challenge for the industry in the next 5 years is consumer trust is something I originally wrote for our client newsletter, but figured it might be relevant to a larger audience, too. In it, I explore some of the key questions that I’ve recently been pondering and that have been coming up constantly in peer group conversations. Namely: 1) What’s the relationship between (digital) technology and ethics/sustainability? 2) The Internet of Things (IoT) has one key challenge in the coming years: consumer trust. 3) Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? 4) What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go by early adopters?
  • Focus areas over time explains how the focus of my work has shifted across a number of emerging technologies and related practices.

Newsletter

I’ve pulled my good old newsletter out of storage, blown off the dust, and been taking it for a spin once more. I’m experimenting with a weekly format of things I found worth discussing, some signals on my radar, and some pointers to stuff we’ve been working on. To follow along as I try and shape my thinking on some incoming signals, sign up here for Season 3.

What’s next?

Capacity planning for 2018 is in full swing, and it’s shaping up to be a busy year. There are a couple of big projects that (barring some major hiccups) will kick off within the next few weeks. I did a lot of self-directed research in 2017, which was tremendously useful, and I’ll continue that kind of work in 2018. I’ll also try to write a lot to help spread what I learn along the way. Between all of that, the year is filling up nicely. It looks like 2018 won’t be boring, that’s for sure.

If you’d like to work with me in the upcoming months, please get in touch quickly so we can figure out how best to work together.

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Here are some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace. Experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust, and products’ trustworthiness, is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go?

These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • What are new emerging technologies? Where are exciting new opportunities?
  • What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. To watch my thinking unfolding and maturing, this is for you. You can subscribe here.

Focus areas over time


The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It’s worth noting that these timelines aren’t absolutes, either: I’ve done work around the implications of social media more recently than the chart suggests, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, I hope this is helpful for understanding my work. As always, if you’d like to bounce around ideas, feel free to ping me.