
What I learned from launching a consumer trustmark for IoT


Throughout 2018, we developed the Trustable Technology Mark, a consumer trustmark for IoT, that our non-profit ThingsCon administers. As the project lead on this Trustmark, I spent countless hours in discussions and meetings, at workshops and conferences, and doing research about other relevant consumer labels, trustmarks and certifications that might offer us some useful paths forward. I thought it might be interesting to share what I’ve learned along the way.

(Please note that this is also why this blog post appears on my personal website first: if there’s anything problematic here, it’s my fault and doesn’t reflect ThingsCon positions.)

1) The label is the least important thing

Launching a Trustmark is not about the label but about everything else. I’ve encountered probably dozens of cool label concepts, like “nutritional” labels for tech, “fair trade” style privacy labels, and many more. While there were many really neat approaches, the challenges lie elsewhere entirely. Concretely, the main challenges I see are the following:

  • What goes into the label, i.e. where and how do you source the data? (Sources)
  • Who analyzes the data and decides? (Governance)
  • Who benefits from the Trustmark? (Stakeholders and possible conflicts of interest)
  • How do you get traction? (Reach & relevance)

We’ve solved some of these challenges, but not all. Our data sourcing has been working well. We’re doing well on stakeholders and possible conflicts of interest: nobody gets paid, we don’t charge for applications or licenses, and it’s all open sourced. In other words, no conflicts of interest and very transparent stakeholders, though this raises sustainability challenges. We don’t yet have robust governance structures, we need a bigger pool of experts for reviews, and we haven’t yet built the reach and relevance we’ll eventually need if this is to be a long-term success.

2) Sometimes you need to re-invent the wheel

Going into the project, I naively thought there must be existing models we could just adapt. But it turns out new problem spaces don’t always work that way. The nature of the Internet of Things (IoT) and connected devices meant we faced a set of fairly new and unique challenges that nobody had solved yet. (For example: how do you deal with ongoing software updates that can change the nature of a device multiple times over, without introducing a verification mechanism, like reverse engineering, that would be too cost-intensive to be realistic?)

So we had to go back to the drawing board, and came out with a solution that I’d say is far from perfect, but better than anything else I’ve seen to date: our human experts review applications based on information provided by the manufacturer or maker of the product. That information comes from a fairly extensive and holistic questionnaire covering everything from feature-level details to general business practices to guarantees the company makes on the record by using our Trustmark.

Based on that, our Trustmark offers a carrot; we leave it to others to be the stick.
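To make the shape of that process a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the field names, the questionnaire structure, and the all-reviewers-must-approve rule are my assumptions for this sketch, not the actual Trustmark criteria or decision rules.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    product: str
    company: str
    answers: dict = field(default_factory=dict)     # self-reported questionnaire answers, keyed by question
    guarantees: list = field(default_factory=list)  # commitments the company makes on the record

def review(application: Application, reviewers) -> bool:
    """Human-in-the-loop review: each expert reads the self-reported
    answers and returns True/False; there is no automated verification step."""
    return all(reviewer(application) for reviewer in reviewers)

# Illustrative usage: verdict = review(app, [expert_a, expert_b, expert_c])
```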

That said, we did learn a lot from the good folks at the Open Source Hardware Association. (Thanks, OSHWA!)

3) Collaborate where possible

We tried to collaborate as closely as possible with a number of friendly organizations (shout-out to Better IoT & Consumers International!) but also had to concede that in a project as fast-moving and iterative as this one, it’s tough to coordinate as closely as we would have liked. That’s on us; by which I mean, it’s mostly on me personally, and I’m sorry I didn’t do a better job of aligning our efforts.

For example, while I did manage to have regular backchannel exchanges with collaborators, more formal partnerships are a whole different beast. I had less than a year to get this out the door, so anything that involved formalizing was tricky. I was all the happier that a number of partners in the Network of Centres and some other academic organizations decided to take the leap and set up lightweight partnerships with us. This gives us a global footprint, with partners in Brazil, the United States, the United Kingdom, Germany, Poland, Turkey, India and China. Thank you!

4) Take a stand

One of the most important takeaways for me, however, was this: You can’t please everyone, or solve every problem.

For every aspect we would include, we’d exclude a dozen others. Every method (assessment, enforcement, etc.) used means another not used. Certification or license? Carrot or stick? Third-party verification, or relying on provided data? Incorporate life-cycle analysis, or focus on privacy? Include cloud service providers for IoT, or autonomous vehicles, or drones? These are just a tiny, tiny fraction of the questions we needed to decide. In the end, I believe that having a chance at succeeding means cutting out many, if not most, aspects in order to keep as clear a focus as possible.

And it means taking a stand: Choose the problem space, and your approach to solving it, so that you can be proud of it and stand behind it.

For the Trustable Technology Mark, that meant: we prioritized a certain purity of mission over watering down our criteria, while choosing pragmatic processes and mechanisms over those we thought would be more robust but unrealistic. In the words of our slide deck, the Trustmark should be hard to earn, but easy to document. That way, we figured, we could find those gems of products that try truly novel approaches and are more respectful of consumer rights than the broad majority of the field.

Is this for everyone, or for everything? Certainly not. But that’s ok: We can stand behind it. And should we learn we’re wrong about something then we’ll know we tried our best, and can own those mistakes, too. We’ve planted a flag, a goal post that we hope will shift the conversation by setting a higher goal than most others.

It’s an ongoing project

The Trustable Technology Mark is a project under active development, and we’ll be happy to share our learnings as things develop. In the meantime, I hope this has been helpful.

If you’ve got anything to share, please send it to me personally (peter@thewavingcat.com) or to trustabletech@thingscon.org.

The Trustable Technology Mark was developed under the ThingsCon umbrella with support from the Mozilla Foundation.

Monthnotes for April 2019


April brought a lot of intense input-output style work: Lots to digest, lots of writing.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

Internet Freedom Festival

Earlier this month I got to participate in Valencia’s Internet Freedom Festival (IFF). I’d never been before, and it’s always great to join an event for the first time. Lots of interesting input there, and a great couple of sessions with both other foundation fellows as well as funders – a neat benefit of my Mozilla Fellowship.

Lectured at Hochschule Darmstadt

At the kind invitation of Prof. Andrea Krajewski, I got to lecture for a day at Hochschule Darmstadt. With her students we explored responsible tech, ambient connected spaces, and trust & tech. As part of the prep for this excellent day, I collected some resources for ethical and responsible tech development (blog post), which might turn out to be useful.

Focus areas for the next few months

I barely ever take part in tenders and mostly work based on client-side requests. However, every now and then interesting stuff happens, and interesting stuff is happening right now, so I found myself participating in several consortia for tenders and project proposals. It’s quite unusual for me, and also all-around great, as I’m excited by both the teams and the topic areas: smart cities, ethical tech, AI, privacy, trust. So they’re right up my alley. More soon.

What’s next?

Lots of reading and writing. A ThingsCon Salon in Berlin (6 May). High-level workshops in Brussels and Berlin. A ThingsCon Unconf (24 May). Lots more writing.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

Have a great month!

Yours truly,
P.

A list of resources for ethical, responsible, and public interest tech development


For an upcoming day of teaching I started compiling a list of resources relevant for the ethical, responsible development of tech, especially public interest tech. This list is very much incomplete, a starting point.

(For disclosure’s sake, I should add that I’ve started lists like this before: I’ll try, but cannot promise, to be maintaining this one. Just assume it’s a snapshot, useful primarily in the now and as an archive for future reference.)

I also can take only very partial credit for it since I asked Twitter for input. I love Twitter for this kind of stuff: Ask, and help shall be provided. My Twitter is a highly curated feed of smart, helpful people. (I understand that for many people Twitter feels very, very different. My experience is privileged that way.) A big thank you in particular to Sebastian Deterding, Alexandra Deschamps-Sonsino, Dr. Laura James, Iskander Smit, and to others I won’t name because they replied via DM and this might have been a privacy-related decision. You know who you are – thank you!

Here are a bunch of excellent starting points to dig deeper, ranging from books to academic papers to events to projects to full blown reading lists. This list covers a lot of ground. You can’t really go wrong here, but choose wisely.

Projects & papers:

Organizations:

Books:

  • Everyware (Greenfield)
  • The Epic Struggle of the Internet of Things (Sterling)
  • Weapons of Math Destruction (O’Neil)
  • Smart Cities (Townsend)
  • Future Ethics (Bowles)

Libraries, reading lists & lists of lists:

Monthnotes for March 2019


This installment of monthnotes features the wrap-up of a fellowship, updates on a PhD program I’ll be supervising for, a ThingsCon event, and an anniversary. Enjoy.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

The Waving Cat turns 5

The Waving Cat just officially turned 5. Which is still mind-blowing to me. It’s been quite the ride, and 5 incredibly productive years.

In this time I’ve written 3 book-ish things and many reports, co-published multiple magazine-ish things and a proper academic paper. Co-chaired some amazing conferences like ThingsCon, Interaction16, UIKonf and more. Worked on strategy, policy and research across a pretty wide range of industries and clients from global tech to non-profit to governments. Was on a number of juries, and mentored a bunch of teams. Was a Mozilla Fellow. Launched a consumer trustmark. Helped kickstart a number of exciting projects including ThingsCon, Zephyr Berlin, Dearsouvenir and the Trustable Technology Mark. Spoke at about 40 events. Wrote, contributed or was quoted in about 60 articles.

So yeah, it’s been a good 5-year run. On to the next round of adventures.

(By the way, that anniversary is the company’s; the website & blog go way, way further back. All the way to like 2005.)

Wrapping up my Mozilla Fellowship

With the end of February, my Mozilla Fellowship officially wrapped up. (That is, the active part of the fellowship; Mozilla makes a point of the affiliation being for life.)

Technically this fellowship was about launching ThingsCon’s Trustable Technology Mark (which got so much great media coverage!) but it was so much more.

I’m glad and grateful for the opportunity to be warmly welcomed into this fantastic community and to meet and work with so many ambitious, smart, caring and overall awesome people.

Nothing could symbolize this better than the lovely ceremony the team put together for Julia Kloiber’s and my farewell. Unicorn gavels and flower crowns and laminated “for life” cards and bubbly were all involved. Thank you!

OpenDott is nearly ready

The collaboration with Mozilla isn’t ending anytime soon. OpenDott.org is a paid PhD program in responsible tech that is hosted by the University of Dundee in collaboration with Mozilla and a host of smaller orgs, including ThingsCon, and that I’ll be supervising a PhD for.

I’m not logistically involved at this stage, but my understanding is that the final paperwork is being worked out with the 5 future PhDs right now: the last yeses collected, the last forms signed, etc. Can’t wait for this to kick off for real, even though I’ll only be marginally involved. I mean, come on: a PhD in responsible tech? How awesome is that.

ThingsCon

The new ThingsCon website, thingscon.org, is by now more or less up and running and complete. Just in time for a (for ThingsCon somewhat unusual) event in May: A small and intimate unconference in Berlin about responsible paths in tech, economy, and beyond. Details and how to apply here.

Zephyrs: going fast

We’ve been making our ultimate travel pants under the Zephyr Berlin brand for about 2 years now. I’m not sure what happened but we must have landed on a relevant recommendations list or two as we’ve been getting a pretty sharp spike in orders these last few weeks. This is fantastic and a lot of fun. But the women’s cut is almost out now. We don’t know if/when we’ll produce the next batch, so if you’re looking to score one of those, don’t wait too long.

The Newsletter Experiment, continued

As I’ve mentioned in the last monthnotes, over in my personal(ish) newsletter Connection Problem I started an experiment with memberships. The gist of it is: I publish about 100K words a year, most of which are critical-but-constructive takes on the tech industry and how we can maximize responsible tech rather than exploitation. You can support this independent writing by joining the membership.

It’s all happening under the principle of “unlocked commons”, meaning members support writing that will be available in the commons, for free, continuously. You can learn more in the newsletter archive or on this page. It’s an exciting experiment for me, and hopefully the output is something that’s useful and enjoyable for you, too.

AI, ethics, smart cities

I was invited to the Aspen Institute’s annual conference on artificial intelligence, Humanity Defined: Politics and Ethics in the AI Age. It’s a good event, bringing (mostly US-based) AI experts to Germany and putting them onstage with (mostly German) policy experts to spark some debate. I’ve been going since it started last year and have enjoyed it. This time, my highlight was some background on the European High-Level Expert Group’s AI Ethics Guidelines, shared by one of the group’s ethicists, Thomas Metzinger. He made a convincing case that this might be the best AI ethics document out there right now, globally (it’s going to be published next week), and that it nevertheless has glaring, painful shortcomings, especially as far as red lines are concerned: areas or types of AI applications that Europe would not engage in. These red lines are notably absent from the final document. Which seems… a shame? More on that soon.

I’m just mentioning this here because there are a few exciting projects coming up that will give me an opportunity to explore the intersection of smart cities, policy, AI and machine decision-making, and how insights from creating the Trustable Technology Mark can lead to better, more responsible smart cities, tech governance, and applied AI ethics. More on that soon.

What’s next?

This week I’ll be at the Internet Freedom Festival (IFF) in Valencia, Spain. Then later in the month I’ll be teaching for a day about trustable tech at Hochschule Darmstadt at the kind invitation of Prof. Andrea Krajewski. Otherwise it’s drafting outlines, writing some project proposals, and lots of meetings and writing.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

Have a great April!

Yours truly,
P.

Which type of Smart City do we want to live in?


Connectivity changes the nature of things. It quite literally changes what a thing is.

Add connectivity to, say, the proverbial internet fridge, and it stops being just an appliance that chills food. It becomes a device that senses; that captures, processes, and shares information; that acts on this processed information. The thing-formerly-known-as-fridge becomes an extension of the network. It makes the boundaries of the apartment more permeable.

So connectivity changes the fridge. It adds features and capabilities. It adds vulnerabilities. At the same time, it also adds a whole new layer of politics to the fridge.

Power dynamics

Why do I keep rambling on about fridges? Because once we add connectivity — or rather: data-driven decision making of any kind — we need to consider power dynamics.

If you’ve seen me speak at any time throughout the last year, chances are you’ve encountered the slide that I use to illustrate this point.

The connected home and the smart city are two areas where the changing power dynamics of IoT (in the larger sense) and data-driven decision making manifest most clearly: the connected home, because it challenges our notions of privacy (as they have evolved over the last 150 years in the global West); and the smart city, because there is no opting out of public space. Any sensor, any algorithm involved in governing public space impacts all citizens.

That’s what connects the fridge (or home) and the city: Both change fundamentally by adding a data layer. Both acquire a new kind of agenda.

3 potential cities of 2030

So as a thought experiment, let’s project three potential cities in the year 2030 — just over a decade from now. Which of these would you like to live in, which would you like to prevent?

In CITY A, a pedestrian crossing a red light is captured by facial recognition cameras and publicly shamed. Their CitizenRank is downgraded to IRRESPONSIBLE, their health insurance price goes up, they lose the permission to travel abroad.

In CITY B, wait times at the subway lines are enormous. Luckily, your Amazon Prime membership has expanded to cover priority access to this formerly public infrastructure, and now includes dedicated quick-access lines to the subway. With Amazon Prime, you are guaranteed Same Minute Access.

In CITY C, most government services are coordinated through a centralized government database that identifies all citizens by their fingerprints. This isn’t restricted to digital government services, but also covers credit card applications or buying a SIM card. However, the official fingerprint scanners often fail to read manual laborers’ fingerprints correctly. The backup system (iris scans) doesn’t work too well on those with eye conditions like cataracts. Whenever these ID scans don’t work, the government service requests are denied.
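The exclusionary mechanics of CITY C boil down to a short decision cascade. Here is a deliberately simplified sketch of it in Python; the function names and checks are hypothetical stand-ins for illustration, not any real system’s API:

```python
def scan_fingerprint(citizen: dict) -> bool:
    # Tends to fail for worn fingerprints, common among manual laborers.
    return citizen.get("fingerprint_readable", False)

def scan_iris(citizen: dict) -> bool:
    # Tends to fail for eye conditions such as cataracts.
    return citizen.get("iris_readable", False)

def request_service(citizen: dict) -> str:
    if scan_fingerprint(citizen) or scan_iris(citizen):
        return "granted"
    # No human fallback: a failed scan means a denied request.
    return "denied"

print(request_service({"fingerprint_readable": False, "iris_readable": False}))  # -> denied
```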

Now, as you may have recognized, this is of course a trick question. (Apologies.) Two of these cities more or less exist today:

  • CITY A represents the Chinese smart city model based on surveillance and control, as piloted in Shenzhen or Beijing.
  • CITY C is based on India’s centralized government identification database, Aadhaar.
  • Only CITY B is truly, fully fictional (for now).

What model of Smart City to optimize for?

We need to decide what characteristics of a Smart City we’d like to optimize for. Do we want to optimize for efficiency, resource control, and data-driven management? Or do we want to optimize for participation & opportunity, digital citizens’ rights, equality and sustainability?

There are no right or wrong answers (even though I’d clearly prefer a focus on the second set of characteristics), but it’s a decision we should make deliberately. One leads to favoring monolithic centralized control structures, black box algorithms and top-down governance. The other leads to decentralized and participatory structures, openness and transparency, and more bottom-up governance built in.

Whichever we build, these are the kinds of dependencies we should keep in mind. I’d rather have an intense, participatory deliberation process that involves all stakeholders than just quickly throwing a bunch of Smart City tech into the urban fabric.

After all, this isn’t just about technology choices: it’s the question of what kind of society we want to live in.

The 3 I’s: Incentives, Interests, Implications


When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as an economic motivator for companies: maximize profit by lowering costs, offsetting or externalizing them, or charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in the context of consumers trying to get their products as cheaply as possible.

All this is of course based on what economics calls rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. That applies to individuals, of course, but organizations inherently have their own blind spots and biases, too.
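For concreteness, here is what that rational-agent assumption boils down to: pick the option with the highest expected payoff, given known probabilities. A minimal sketch in Python, with made-up numbers chosen purely for illustration:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Made-up example: a company weighing security spending purely "rationally".
actions = {
    "cut_security_budget": [(0.9, 100), (0.1, -2000)],  # small saving, rare but large breach cost
    "invest_in_security":  [(1.0, 40)],                 # certain, modest benefit
}

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # -> invest_in_security (expected values: -110 vs. 40)
```

Real decisions, of course, rarely look this tidy; that’s exactly the point of the paragraph above.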

So this focus on incentives, while near-ubiquitous, is myopic: while incentives certainly play a role in decision making, they are not the only factor at play. Companies don’t only work towards maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too), and consumers don’t only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society and individual, we should look beyond incentives. Interests, however vaguely we might define those, can clearly impact decision making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, they might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help them in their long term prospects through either new, better products (linking back to economic incentives) or by building more resilient relationships with their stakeholders (and hence reducing potential friction with external stakeholders).

Other organizations might simply be mission driven and focus on impact rather than profit, or at least balance both differently. Becoming a B-Corp for example has positive economic side effects (higher chance of retaining talent, positive PR) but more than that it allows the org to align its own interests with those of key stakeholder groups, namely not just investors but also customers and staff.

Consumers, equally, are by no means certain to prioritize price over other characteristics: Organic and Fairtrade food or connected products with quality seals (like our own Trustable Technology Mark) might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one Machine Learning startup that had a party when GDPR came into effect, since all of a sudden their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Consider Ramez Naam’s take on decarbonization: he makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous circle of lower price > more uptake > more innovation > lower price > and so on.

We all know what happened next (still from Ramez):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”
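The arithmetic behind that quote is a simple experience-curve calculation: each doubling of cumulative scale multiplies the cost by one minus the learning rate. A quick sketch, assuming a flat 25% drop per doubling (the low end of the range Naam cites for solar):

```python
def cost_after_doublings(initial_cost: float, doublings: int, drop_per_doubling: float = 0.25) -> float:
    # Experience curve: cost falls by a fixed fraction with every doubling of cumulative scale.
    return initial_cost * (1 - drop_per_doubling) ** doublings

for n in range(11):
    print(n, round(cost_after_doublings(100.0, n), 1))
# Ten doublings (roughly a thousandfold increase in cumulative scale)
# bring the cost down to about 5.6% of where it started.
```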

Now, solar energy is not just competitive. In some parts of the world it is the cheapest, period.

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that are, like solar energy and the tech we need to harvest it, not technically infrastructure. It needs a bit of creative thinking, but it’s not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Monthnotes for January 2019


January was a month for admin, planning, and generally getting sorted. There was lots of admin to take care of: taxes, year planning, and the like. I also tried to get my hands dirty by digging into machine learning some more and ran some experiments with deepfake generation (the non-sleazy kind, obviously); so far with little success, but some learning nonetheless. And the WEF featured ThingsCon and the Trustable Technology Mark!

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 and Q3 2019.

Trustable Technology Mark

The Trustable Technology Mark launched to lots of media attention. But it was still a pleasant surprise when the WEF called about an interview as part of a new program on the role of civil society in the Fourth Industrial Revolution. ThingsCon and the Trustable Technology Mark are featured in the WEF report (deep link to the PDF) that was just released in Davos and that kicks off that program. Thanks for featuring us! This blog post has all the links in an overview.

Throughout the month there were also lots of chats about the Trustmark and how it might be relevant for other areas, this month including AI, too!

ThingsCon

As we continue to further integrate the existing teams and infrastructures in Germany, the Netherlands, and Belgium into a larger European operation, we had some fiddling to do with the ThingsCon website. Going forward, thingscon.org is the place to follow.

Tender

I put together a small team and a tender for a super interesting public administration bid that my company was specifically invited to participate in.

The next generation

Was happy to host a group of IT security and entrepreneurship students and give them a deep dive into trustable tech, tech ethics, and alternative business models (there’s not just the VC/hyper-growth model!).

PhD in Responsible Tech

OpenDott.org is a paid PhD program in responsible tech that is hosted by the University of Dundee in collaboration with Mozilla and a host of smaller orgs, including ThingsCon, so I’m involved in this, which is a true joy. This week we’re running a workshop to plan out the details and logistics of the program, and to help select the 5 PhDs from the pool of applications.

A Newsletter Experiment

Over in my personal(ish) newsletter Connection Problem I started an experiment with memberships. It’s all happening under the principle of “unlocked commons”, meaning members support writing that will be available in the commons, for free, continuously. You can learn more in the newsletter archive or on this page. The gist of it is: I publish about 100K words a year, most of which are critical-but-constructive takes on the tech industry and how we can maximize responsible tech rather than exploitation. By joining the membership you can support this independent writing.

A huge thank you to those who signed up right away and for all the kind words of support. It’s been humbling in the best possible ways.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 and Q3 2019.

That’s it for January – have a great February!

Yours truly,
P.