A list of resources for ethical, responsible, and public interest tech development

For an upcoming day of teaching I started compiling a list of resources relevant for the ethical, responsible development of tech, especially public interest tech. This list is very much incomplete, a starting point.

(For disclosure’s sake, I should add that I’ve started lists like this before: I’ll try, but cannot promise, to maintain this one. Just assume it’s a snapshot, useful primarily in the now and as an archive for future reference.)

I also can take only very partial credit for it since I asked Twitter for input. I love Twitter for this kind of stuff: Ask, and help shall be provided. My Twitter is a highly curated feed of smart, helpful people. (I understand that for many people Twitter feels very, very different. My experience is privileged that way.) A big thank you in particular to Sebastian Deterding, Alexandra Deschamps-Sonsino, Dr. Laura James, Iskander Smit, and to others I won’t name because they replied via DM and this might have been a privacy-related decision. You know who you are – thank you!

Here are a bunch of excellent starting points to dig deeper, ranging from books to academic papers to events to projects to full blown reading lists. This list covers a lot of ground. You can’t really go wrong here, but choose wisely.

Projects & papers:

Organizations:

Books:

  • Everyware (Greenfield)
  • The Epic Struggle of the Internet of Things (Sterling)
  • Weapons of Math Destruction (O’Neil)
  • Smart Cities (Townsend)
  • Future Ethics (Bowles)

Libraries, reading lists & lists of lists:

Monthnotes for March 2019

This installment of monthnotes features the wrap-up of a fellowship, updates on a PhD program I’ll be supervising for, a ThingsCon event, and an anniversary. Enjoy.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

The Waving Cat turns 5

The Waving Cat just officially turned 5. Which still blows my mind. It’s been quite the ride, and 5 incredibly productive years.

In this time I’ve written 3 book-ish things and many reports, co-published multiple magazine-ish things and a proper academic paper. Co-chaired some amazing conferences like ThingsCon, Interaction16, UIKonf and more. Worked on strategy, policy and research across a pretty wide range of industries and clients from global tech to non-profit to governments. Was on a number of juries, and mentored a bunch of teams. Was a Mozilla Fellow. Launched a consumer trustmark. Helped kickstart a number of exciting projects including ThingsCon, Zephyr Berlin, Dearsouvenir and the Trustable Technology Mark. Spoke at about 40 events. Wrote, contributed or was quoted in about 60 articles.

So yeah, it’s been a good 5-year run. On to the next round of adventures.

(By the way, that anniversary is the company’s; the website & blog go way, way further back. All the way to like 2005.)

Wrapping up my Mozilla Fellowship

With the end of February, my Mozilla Fellowship officially wrapped up. (That is, the active part of the fellowship; Mozilla makes a point of the affiliation being for life.)

Technically this fellowship was about launching ThingsCon’s Trustable Technology Mark (which got so much great media coverage!) but it was so much more.

I’m glad and grateful for the opportunity to be warmly welcomed into this fantastic community and to meet and work with so many ambitious, smart, caring and overall awesome people.

Nothing could symbolize this better than the lovely ceremony the team put together for Julia Kloiber’s and my farewell. Unicorn gavels and flower crowns and laminated “for life” cards and bubbly were all involved. Thank you!

OpenDott is nearly ready

The collaboration with Mozilla isn’t ending anytime soon. OpenDott.org is a paid PhD program in responsible tech that is hosted by University of Dundee in collaboration with Mozilla and a host of smaller orgs including ThingsCon, and that I’ll be supervising a PhD for.

I’m not logistically involved at this stage, but my understanding is that the final paperwork is being worked out with the 5 future PhDs right now: the last yeses collected, the last forms signed, etc. Can’t wait for this to kick off for real, even though I’ll be only marginally involved. I mean, come on – a PhD in responsible tech? How awesome is that.

ThingsCon

The new ThingsCon website, thingscon.org, is by now more or less up and running and complete. Just in time for a (for ThingsCon somewhat unusual) event in May: A small and intimate unconference in Berlin about responsible paths in tech, economy, and beyond. Details and how to apply here.

Zephyrs: going fast

We’ve been making our ultimate travel pants under the Zephyr Berlin brand for about 2 years now. I’m not sure what happened but we must have landed on a relevant recommendations list or two as we’ve been getting a pretty sharp spike in orders these last few weeks. This is fantastic and a lot of fun. But the women’s cut is almost out now. We don’t know if/when we’ll produce the next batch, so if you’re looking to score one of those, don’t wait too long.

The Newsletter Experiment, continued

As I’ve mentioned in the last monthnotes, over in my personal(ish) newsletter Connection Problem I started an experiment with memberships. The gist of it is, I publish about 100K words a year, most of which are critical-but-constructive takes about tech industry and how we can maximize responsible tech rather than exploitation. You can support this independent writing by joining the membership.

It’s all happening under the principle of “unlocked commons”, meaning members support writing that will be available in the commons, for free, continuously. You can learn more in the newsletter archive or on this page. It’s an exciting experiment for me, and hopefully the output is something that’s useful and enjoyable for you, too.

AI, ethics, smart cities

I was invited to Aspen Institute’s annual conference on artificial intelligence, Humanity Defined: Politics and Ethics in the AI Age. It’s a good event, bringing (mostly US-based) AI experts to Germany and putting them onstage with (mostly German) policy experts to spark some debate. I’ve attended since it started last year and enjoyed it. This time, my highlight was some background on the European High Level Group’s AI Ethics Guidelines shared there by one of the group’s ethicists, Thomas Metzinger. He made a convincing case that this might be the best AI ethics document currently available, globally (it’s going to be published next week); and that it has glaring, painful shortcomings, especially as far as red lines are concerned – areas or types of AI applications that Europe would not engage in. These red lines are notably absent from the final document. Which seems… a shame?

I’m just mentioning this here because there are a few exciting projects coming up that will give me an opportunity to explore the intersection of smart cities, policy, AI/machine decision-making, and how insights from creating the Trustable Technology Mark can lead to better, more responsible smart cities, tech governance, and applied AI ethics. More on that soon.

What’s next?

This week I’ll be at the Internet Freedom Festival (IFF) in Valencia, Spain. Then later in the month I’ll be teaching for a day about trustable tech at Hochschule Darmstadt at the kind invitation of Prof. Andrea Krajewski. Otherwise it’s drafting outlines, writing some project proposals, and lots of meetings and writing.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

Have a great April!

Yours truly,
P.

Which type of Smart City do we want to live in?

Connectivity changes the nature of things. It quite literally changes what a thing is.

Add connectivity to, say, the proverbial internet fridge, and it stops being just an appliance that chills food. It becomes a device that senses; that captures, processes, and shares information; that acts on this processed information. The thing-formerly-known-as-fridge becomes an extension of the network. It makes the boundaries of the apartment more permeable.

So connectivity changes the fridge. It adds features and capabilities. It adds vulnerabilities. At the same time, it also adds a whole new layer of politics to the fridge.

Power dynamics

Why do I keep rambling on about fridges? Because once we add connectivity — or rather: data-driven decision making of any kind — we need to consider power dynamics.

If you’ve seen me speak at any time throughout the last year, chances are you’ve encountered this slide that I use to illustrate this point:

The connected home and the smart city are two areas where the changing power dynamics of IoT (in the larger sense) and data-driven decision making manifest most clearly: The connected home, because it challenges our notions of privacy (as they developed over the last 150 years in the global West). And the smart city, because there is no opting out of public space. Any sensor, any algorithm involved in governing public space impacts all citizens.

That’s what connects the fridge (or home) and the city: Both change fundamentally by adding a data layer. Both acquire a new kind of agenda.

3 potential cities of 2030

So as a thought experiment, let’s project three potential cities in the year 2030 — just over a decade from now. Which of these would you like to live in, which would you like to prevent?

In CITY A, a pedestrian crossing a red light is captured by facial recognition cameras and publicly shamed. Their CitizenRank is downgraded to IRRESPONSIBLE, their health insurance price goes up, they lose the permission to travel abroad.

In CITY B, wait times at the subway lines are enormous. Luckily, your Amazon Prime membership has expanded to cover priority access to this formerly public infrastructure, and now includes dedicated quick access lines to the subway. With Amazon Prime, you are guaranteed Same Minute Access.

In CITY C, most government services are coordinated through a centralized government database that identifies all citizens by their fingerprints. This isn’t restricted to digital government services, but also covers credit card applications or buying a SIM card. However, the official fingerprint scanners often fail to scan manual laborers’ fingerprints correctly. The backup system (iris scans) doesn’t work well on people with eye conditions like cataracts. Whenever these ID scans don’t work, the government service requests are denied.

Now, as you may have recognized, this is of course a trick question. (Apologies.) Two of these cities more or less exist today:

  • CITY A represents the Chinese smart city model based on surveillance and control, as piloted in Shenzhen or Beijing.
  • CITY C is based on India’s centralized government identification database, Aadhaar.
  • Only CITY B is truly, fully fictional (for now).

What model of Smart City to optimize for?

We need to decide what characteristics of a Smart City we’d like to optimize for. Do we want to optimize for efficiency, resource control, and data-driven management? Or do we want to optimize for participation & opportunity, digital citizens’ rights, equality, and sustainability?

There are no right or wrong answers (even though I’d clearly prefer a focus on the second set of characteristics), but it’s a decision we should make deliberately. One leads to favoring monolithic centralized control structures, black box algorithms and top-down governance. The other leads to decentralized and participatory structures, openness and transparency, and more bottom-up governance built in.

Whichever we build, these are the kinds of dependencies we should keep in mind. I’d rather have an intense, participatory deliberation process that involves all stakeholders than just quickly throwing a bunch of Smart City tech into the urban fabric.

After all, this isn’t just about technology choices: It’s the question of what kind of society we want to live in.

The 3 I’s: Incentives, Interests, Implications

When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as an economic motivator for companies: Maximize profit by lowering costs (or offsetting or externalizing them), or by charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in the context of consumers trying to get their products as cheaply as possible.

All this of course is based on what in economics is called rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. Those biases apply to individuals, of course, but organizations have their own inherent blind spots and biases, too.

So this focus on incentives, while near-ubiquitous, is myopic: While incentives certainly play a role in decision making, they are not the only factor at play. Companies don’t only work toward maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too), nor do consumers only optimize their behavior toward saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society and individual, we should look beyond incentives. Interests, however vaguely we might define those, can clearly impact decision making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, they might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help them in their long term prospects through either new, better products (linking back to economic incentives) or by building more resilient relationships with their stakeholders (and hence reducing potential friction with external stakeholders).

Other organizations might simply be mission driven and focus on impact rather than profit, or at least balance both differently. Becoming a B-Corp for example has positive economic side effects (higher chance of retaining talent, positive PR) but more than that it allows the org to align its own interests with those of key stakeholder groups, namely not just investors but also customers and staff.

Consumers, equally, don’t always prioritize price over other characteristics: Organic and Fairtrade food, or connected products with quality seals (like our own Trustable Technology Mark), might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one Machine Learning startup that had a party when GDPR came into effect since all of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Take Ramez Naam’s take on decarbonization — he makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous circle of lower price > more uptake > more innovation > lower price > etc. etc.

We all know what happened next (again from Ramez):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”

Now, solar energy is not just competitive. In some parts of the world it is the cheapest source of electricity, period.
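The arithmetic behind the quote is a simple experience curve (“Wright’s law”): each doubling of cumulative scale cuts cost by a fixed fraction. A quick back-of-the-envelope sketch, using an illustrative 25% learning rate from the quoted ranges and an arbitrary starting cost index of 100:

```python
# Sketch of the learning-curve ("Wright's law") arithmetic from the quote.
# The 25% learning rate and the cost index of 100 are illustrative
# assumptions picked from the quoted ranges, not measured values.

def cost_after_doublings(initial_cost, learning_rate, doublings):
    """Cost per unit after a number of doublings of cumulative scale."""
    return initial_cost * (1 - learning_rate) ** doublings

# Four doublings (16x the original scale) at a 25% learning rate
# already cut cost to roughly a third of where it started.
for n in range(5):
    print(n, round(cost_after_doublings(100, 0.25, n), 1))
```

The compounding is the point: subsidies that drive early scale don’t just lower costs for the subsidized units, they push the whole industry down the curve.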

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that are, like solar energy and the tech we need to harvest it, not technically infrastructure. It needs a bit of creative thinking, but it’s not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Monthnotes for January 2019

January was a month for admin, planning, and generally getting sorted. There was lots to take care of: admin, taxes, year planning. I also got my hands dirty digging into machine learning some more and ran some experiments with deepfake generation (the non-sleazy kind, obviously); so far with little success, but some learning nonetheless. And the WEF featured ThingsCon and the Trustable Technology Mark!

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 and Q3 2019.

Trustable Technology Mark

The Trustable Technology Mark launched to lots of media attention. But still it was a pleasant surprise when the WEF called about an interview as part of a new program about the role of Civil Society in the Fourth Industrial Revolution. ThingsCon and the Trustable Technology Mark featured in the report by WEF (deep link to the PDF) that was just released in Davos and that kicks off that program. Thanks for featuring us! This blog post has all the links in an overview.

Throughout the month there were also lots of chats about the Trustmark and how it might be relevant for other areas, this month including AI, too!

ThingsCon

As we continue to further integrate the existing teams and infrastructures between Germany, Netherlands and Belgium into a larger European operation, we had some fiddling to do with the ThingsCon website. Going forward, thingscon.org is the place to follow.

Tender

I put together a small team and a tender for a super interesting public administration bid that my company was specifically invited to participate in.

The Next Generation

I was happy to host a group of IT security and entrepreneurship students and give them a deep dive into trustable tech, tech ethics, and alternative business models (there’s not just the VC/hyper-growth model!).

PhD in Responsible Tech

OpenDott.org is a paid PhD program in responsible tech, hosted by University of Dundee in collaboration with Mozilla and a host of smaller orgs including ThingsCon. I’m involved in this, which is a true joy. This week we’re running a workshop to plan out the details and logistics of the program, and to help select the 5 PhDs from the pool of applications.

A Newsletter Experiment

Over in my personal(ish) newsletter Connection Problem I started an experiment with memberships. It’s all happening under the principle of “unlocked commons”, meaning members support writing that will be available in the commons, for free, continuously. You can learn more in the newsletter archive or on this page. The gist of it is: I publish about 100K words a year, most of which are critical-but-constructive takes about tech industry and how we can maximize responsible tech rather than exploitation. By joining the membership you can support this independent writing.

A huge thank you to those who signed up right away and for all the kind words of support. It’s been humbling in the best possible ways.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 and Q3 2019.

That’s it for January – have a great February!

Yours truly,
P.

WEF report features ThingsCon & the Trustable Technology Mark

I was super happy to be interviewed about ThingsCon and the Trustable Technology Mark for a report by the World Economic Forum (WEF) for their newly launched initiative Civil Society in the Fourth Industrial Revolution. You can download the full report here:

Civil Society in the Fourth Industrial Revolution: Preparation and Response (PDF)

The report was just published at the WEF in Davos and it touches on a lot of areas that I think are highly relevant:

Grasping the opportunities and managing the challenges of the Fourth Industrial Revolution require a thriving civil society deeply engaged with the development, use, and governance of emerging technologies. However, how have organizations in civil society been responding to the opportunities and challenges of digital and emerging technologies in society? What is the role of civil society in using these new powerful tools or responding to Fourth Industrial Revolution challenges to accountability, transparency, and fairness?
Following interviews, workshops, and consultations with civil society leaders from humanitarian, development, advocacy and labor organizations, the white paper addresses:
— How civil society has begun using digital and emerging technologies
— How civil society has demonstrated and advocated for responsible use of technology
— How civil society can participate and lead in a time of technological change
— How industry, philanthropy, the public sector and civil society can join together and invest in addressing new societal challenges in the Fourth Industrial Revolution.

Thanks for featuring our work so prominently in the report. You’ll find our bit as part of the section Cross-cutting considerations for civil society in an emerging Fourth Industrial Revolution.

Living in the New New Normal

Image: Unsplash (derveit)

Please note: This post veers a bit outside my usual topics for this blog, so you can read the post in full on Medium.

It’s the year 2019. What’s it like to live in the New New Normal, in a world where the once-disruptive Silicon Valley tech companies (GAFAM) have become the richest, most powerful companies in the world?

In a world in which Chinese tech giants (BAT), too, have reached a level of maturity, and scale, to equal those Silicon Valley companies and are starting to push outside of China and onto the world stage? In which these companies represent not change, innovation and improvement (of the world, or at least the online experience) but the status quo; where they are the entrenched powers defending their positions? In a world that has left the utopian ideas of the early open web (especially openness and decentralization) in the dust, and instead we see an internet that has been consolidated and centralized more than ever?

In other words, what’s it like to live between increasingly restrictive “ecosystems” of vendor lock-in, and the main choice is between the Silicon Valley model and the Chinese model?

Read the full post on Medium.