Blog

Focus areas over time


The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It’s worth noting that these timelines aren’t absolutes, either: I’ve done work around the implications of social media later than that, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, hope this is helpful to understand my work. As always, if you’d like to bounce ideas feel free to ping me.

Monthnotes for November 2017


November was for wrapping up some client work and speaking, and for starting the planning for 2018. On a personal note, we also had a baby and so things have been a little slower than usual these last couple of weeks. So we’ll keep going as normal for now, just with a bigger smile on my face.

I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

Media

Netzpolitik: In September I spoke at Netzpolitik’s annual conference, Das ist Netzpolitik (see below). While I was there, Netzpolitik.org also recorded an interview with me: “Regulierung und Datenschutz im Internet der Dinge”.


Stories Connecting Dots: Back in July, the smart & ever-lovely Markus Andrezak interviewed me for his podcast Stories Connecting Dots. The second part of our interview just went live, and I’m honored to be opening the podcast’s second season discussing Shenzhen’s IoT ecosystem.

Thinking, writing, speaking

Underexposed: On 9 November, I spoke at SimplySecure‘s conference Underexposed (program). It was an excellent event, put together by the even-more-excellent Ame Elliott.

My talk was called The Internet of Sneaky Things. In it, I explored how IoT is at a crossroads, and we can either let it become the Internet of Sneaky Things or we can make it better, more human-centric, and more responsible.

Underexposed also surfaced a conversation that’s been picking up steam: the importance of great, simple-to-use documentation, tooling, and toolkits that help designers and developers make their products and processes more safe, secure, and ethical. And, analogously, the equivalents for strategists, entrepreneurs, and everybody else: all the experts in their fields might not be experts in these other, often thorny meta areas, so let’s build tools that make their lives easier. This led to this list of toolkits for designers around ethics & emerging tech.

Good School: I also spoke at Good School, a Hamburg-based executive leadership program, where I was happy to give a glimpse or two at China and its digital landscape as we experienced it during our recent research trips to Shenzhen and Shanghai. The feedback was fantastic—seems the topic struck a nerve.

Inflection point [blog post]: AI, IoT, Robotics: We’re at an inflection point for emerging technologies

Newsletter

For a long time, I’ve been writing (on and off) a newsletter with some work related and some more personal notes: Some project updates, some half-formed thoughts, some freshly explored ideas, some articles I found interesting.

Over the last few weeks, I’ve been trying to write that newsletter more regularly, and experimenting with a weekly format of things I found worth discussing. To follow along as I try and shape my thinking on some incoming signals, sign up here for Season 3.

ThingsCon

Lots of ThingsCon action around the world: Dublin is confirmed (details TBD), we’re having promising conversations with teams in two cities in the US, Shenzhen is on its way to becoming a regular event series, and another 1-2 cities in China might also happen soon.

In the meantime, because this month’s notes are a little delayed, the big annual ThingsCon Amsterdam has happened (check out the videos), and ThingsCon Nairobi premiered to a full house, too!

Miscellaneous

I learned a word from Cennydd: Provocatype, “a speculative prototype that isn’t ‘good’ product per se, but is intended to spark conversation”. It made instant sense. Definitely a keeper.

If you’d like to work with me in the upcoming months, please get in touch.

Toolkits for designers & developers around ethics, privacy & security


At SimplySecure’s excellent Underexposed conference we discussed the importance of making it easier for those involved in making connected products and services to build safe, secure, and privacy-conscious products. After all, they might be experts in their own fields, but not necessarily security experts, for example. So, toolkit time!

I asked participants in the room as well as publicly on Twitter which toolkits and resources are worth knowing. This is what this looked like in the room:

“Which toolkits should we all know? Ethics, privacy, security”


So what are the toolkit recommendations? Given the privacy-sensitive nature of the event, I’m linking to the source only where people sent the recommendations on public Twitter. Also, please note I’m including them without much background, and unchecked. So here goes:

This list can by no means claim to be complete, but hopefully it will still be useful to some of you.

Interview with Netzpolitik.org: Regulierung und Datenschutz im Internet der Dinge


In September I spoke at Netzpolitik’s annual conference, Das ist Netzpolitik. While I was there, Netzpolitik.org also recorded an interview with me: “Regulierung und Datenschutz im Internet der Dinge“.

A big thank you to Netzpolitik and Stefanie Talaska for the conversation!

Facebook, Twitter, Google are a new type of media platform, and new rules apply


When Congress questioned representatives of Facebook, Google and Twitter, it became official: We need to finally find an answer to a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.

The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)

Platform or media?

Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Are they held to standards (and regulation) of “neutral platform” or “content creator”? Are they dumb infrastructure or pillars of democracy?

These are big questions to ask, and I don’t envy the companies for their position in this one. As a neutral platform they get a large degree of freedom, but have to take responsibility for the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but can’t claim the extreme point of view of free speech they like to take. You can’t both be neutral and “bring humanity together” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:

the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly

I agree this would be easier. (I’m not so sure that the employee preference is the driving force, but that’s another debate, and it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet legal requirements. Where are they willing to draw the line? Google famously pulled out of the Chinese market a few years ago, presumably because they didn’t want to meet the government’s censorship requirements. This was a principled move, and presumably not an easy one given the size of the market. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting or on government surveillance?

For GAFAM, the position has traditionally been clear cut and quite straightforward, which we can still (kind of, sort of) see in the Congressional hearing:

“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)

Once more:

[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)

Now that’s interesting, because while they claim to be “neutral” free speech companies, Facebook and the others have of course been hugely filtering content by various means (from their Terms of Service to community guidelines), and shaping the attention flow (who sees what and when) forever.

This aspect isn’t discussed much, but it’s worth noting nonetheless: how Facebook and other tech firms deal with content has been based to a relatively large degree on United States legal and cultural standards. That makes sense given that they’re US companies, but not much sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country: pictures of nudity (largely not OK in the US, largely OK in Germany) versus positively referencing the Third Reich (largely illegal in Germany, largely legal in the US).

Big tech platforms are a new type of media platform

Here’s the thing: these big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose, frame, and shape content and traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.

(As a personal, rambling aside: given that we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had at least sorted out our basic understanding of social media as a new hybrid by 2010. Sigh.)

We might need new regulatory—and equally important: analytical—frameworks. Or maybe we can find ways to apply existing ones anew. But, and I say this expressly without judgement, these are platforms that operate at a scale and dynamism we haven’t seen before. They are of a new quality, displaying combinations of characteristics we don’t have much experience with. Yet on a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). That hasn’t worked, and will continue not to work, because it makes little sense.

That’s why it’s important to take a breath and figure out how to best understand implications, and shape the tech, the organizations, the frameworks within which they operate.

It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others, and in those cases we need to adjust the frameworks, the organizations, or both. To align the analytical and regulatory frameworks with realities, or vice versa.

This isn’t an us-versus-them situation, as many parties insinuate: it’s not politics versus tech, as actors on both the governmental and the tech side sometimes seem to think. It’s not tech versus civil society, as some activists claim. And it’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do Silicon Valley firms a world of good. This is a question of how we want to live and govern our lives as they are impacted by the flow of information.

It’s going to be tricky to figure this out as there are many nation states involved, and some supra-national actors, and large global commercial actors and many other, smaller but equally important players. It’s a messy mix of stakeholders and interests.

But one thing I can promise: the solution won’t be just technical, just legal, or just cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know the status quo isn’t working for too many people, and we can shape the future, so that soon it works for many more people—maybe for all.

Please note that this is cross-posted from Medium. Also, for full transparency, we work occasionally with Google.