
What I learned from launching a consumer trustmark for IoT


Throughout 2018, we developed the Trustable Technology Mark, a consumer trustmark for IoT that our non-profit ThingsCon administers. As the project lead on this Trustmark, I spent countless hours in discussions and meetings, at workshops and conferences, and researching other relevant consumer labels, trustmarks and certifications that might offer us some useful paths forward. I thought it might be interesting to share what I’ve learned along the way.

(Please note that this is also why this blog post appears first on my personal website: if there’s anything problematic here, it’s my fault and doesn’t reflect ThingsCon positions.)

1) The label is the least important thing

Launching a Trustmark is not about the label but about everything else. I’ve encountered probably dozens of cool label concepts, like “nutritional” labels for tech, “fair trade” style privacy labels, and many more. While there were many really neat approaches, the challenges lie elsewhere entirely. Concretely, the main challenges I see are the following:

  • What goes into the label, i.e. where and how do you source the data? (Sources)
  • Who analyzes the data and decides? (Governance)
  • Who benefits from the Trustmark? (Stakeholders and possible conflicts of interest)
  • How do you gain traction? (Reach & relevance)

We’ve solved some of these challenges, but not all. Our data sourcing has been working well. We’re doing well on stakeholders and possible conflicts of interest: nobody gets paid, we don’t charge for applications or licenses, and everything is open source. In other words, no conflicts of interest and very transparent stakeholders, though this raises sustainability challenges. We don’t yet have robust governance structures, need a bigger pool of experts for reviews, and haven’t yet built the reach and relevance we’ll need if this is to be a long-term success.

2) Sometimes you need to re-invent the wheel

Going into the project, I naively thought there must be existing models we could just adapt. But it turns out new problem spaces don’t always work that way. The nature of the Internet of Things (IoT) and connected devices meant we faced a set of fairly new and unique challenges that nobody had solved. (For example: how do you deal with ongoing software updates that could change the nature of a device multiple times, without resorting to a verification mechanism, like reverse engineering, that would be too cost-intensive to be realistic?)

So we had to go back to the drawing board, and came up with a solution that I would say is far from perfect but better than anything else I’ve seen to date: Our human experts review applications based on information provided by the manufacturer/maker of the product, and that information comes from a fairly extensive and holistic questionnaire covering everything from feature-level details to general business practices to guarantees the company makes on the record by using our Trustmark.

Based on that, our Trustmark offers a carrot; we leave it to others to be the stick.

That said, we did learn a lot from the good folks at the Open Source Hardware Association. (Thanks, OSHWA!)

3) Collaborate where possible

We tried to collaborate as closely as possible with a number of friendly organizations (shout-out to Better IoT & Consumers International!) but also had to concede that in a project as fast-moving and iterative as this one, it’s tough to coordinate as closely as we would have liked. That’s on us, by which I mean it’s mostly on me personally, and I’m sorry I didn’t do a better job of aligning our efforts.

For example, while I did manage to have regular backchannel exchanges with collaborators, more formal partnerships are a whole different beast. I had less than a year to get this out the door, so anything involving formalization was tricky. I was all the happier that a number of partners in the Network of Centres and some other academic organizations decided to take the leap and set up lightweight partnerships with us. This gives us a global footprint with partners in Brazil, the United States, the United Kingdom, Germany, Poland, Turkey, India and China. Thank you!

4) Take a stand

One of the most important takeaways for me, however, was this: You can’t please everyone, or solve every problem.

For every aspect we would include, we’d exclude a dozen others. Every method (assessment, enforcement, etc.) used means another not used. Certification or license? Carrot or stick? Third-party verification, or rely on provided data? Incorporate life cycle analysis, or focus on privacy? Include cloud service providers for IoT, or autonomous vehicles, or drones? These are just a tiny, tiny fraction of the questions we needed to decide. In the end, I believe that having a chance at succeeding means cutting out many if not most aspects in order to keep as clear a focus as possible.

And it means taking a stand: Choose the problem space, and your approach to solving it, so you can be proud of it and stand behind it.

For the Trustable Technology Mark, that meant: We prioritized a certain purity of mission over watering down our criteria, while choosing pragmatic processes and mechanisms over those we thought would be more robust but unrealistic. In the words of our slide deck, the Trustmark should be hard to earn, but easy to document. That way, we figured, we could find those gems of products that try truly novel approaches and are more respectful of consumers’ rights than the broad majority of the field.

Is this for everyone, or for everything? Certainly not. But that’s ok: We can stand behind it. And should we learn we’re wrong about something, we’ll know we tried our best, and can own those mistakes, too. We’ve planted a flag, a goalpost that we hope will shift the conversation by setting a higher goal than most others.

It’s an ongoing project

The Trustable Technology Mark is a project under active development, and we’ll be happy to share what we learn as things develop. In the meantime, I hope this has been helpful.

If you’ve got anything to share, please send it to me personally (peter@thewavingcat.com) or to trustabletech@thingscon.org.

The Trustable Technology Mark was developed under the ThingsCon umbrella with support from the Mozilla Foundation.

WEF report features ThingsCon & the Trustable Technology Mark


I was super happy to be interviewed about ThingsCon and the Trustable Technology Mark for a report by the World Economic Forum (WEF) for their newly launched initiative Civil Society in the Fourth Industrial Revolution. You can download the full report here:

Civil Society in the Fourth Industrial Revolution: Preparation and Response (PDF)

The report was just published at the WEF in Davos and it touches on a lot of areas that I think are highly relevant:

Grasping the opportunities and managing the challenges of the Fourth Industrial Revolution require a thriving civil society deeply engaged with the development, use, and governance of emerging technologies. However, how have organizations in civil society been responding to the opportunities and challenges of digital and emerging technologies in society? What is the role of civil society in using these new powerful tools or responding to Fourth Industrial Revolution challenges to accountability, transparency, and fairness?
Following interviews, workshops, and consultations with civil society leaders from humanitarian, development, advocacy and labor organizations, the white paper addresses:
— How civil society has begun using digital and emerging technologies
— How civil society has demonstrated and advocated for responsible use of technology
— How civil society can participate and lead in a time of technological change
— How industry, philanthropy, the public sector and civil society can join together and invest in addressing new societal challenges in the Fourth Industrial Revolution.

Thanks for featuring our work so prominently in the report. You’ll find our bit as part of the section Cross-cutting considerations for civil society in an emerging Fourth Industrial Revolution.

Trust Indicators for Emerging Technologies


For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.
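To make the shape of this concrete, the five indicators can be modeled as a simple checklist structure. This is a hypothetical sketch in Python, not the Trustmark’s actual data model; only the indicator names come from the list above, everything else is invented for illustration:

```python
from dataclasses import dataclass, field

# The five trust indicators named in the post; the surrounding
# structure is purely illustrative.
TRUST_INDICATORS = [
    "Privacy & Data Practices",
    "Transparency",
    "Security",
    "Stability",
    "Openness",
]

@dataclass
class Assessment:
    """A manufacturer's self-reported answers: one list of yes/no
    answers per trust indicator (hypothetical shape)."""
    answers: dict = field(
        default_factory=lambda: {ind: [] for ind in TRUST_INDICATORS}
    )

    def coverage(self) -> dict:
        """Fraction of 'yes' answers per indicator (0.0 if unanswered)."""
        return {
            indicator: (sum(ans) / len(ans) if ans else 0.0)
            for indicator, ans in self.answers.items()
        }
```

A structure like this makes it easy to see, per dimension, how much of the questionnaire a product covers, which mirrors how the real assessment groups its questions.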

Over the last few months, it has become clear that there’s demand for similar trust indicators in areas beyond consumer products, such as smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology (infrastructure!) is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there’s real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture, with vastly different threat models and potential damage. Infrastructure that not everybody can rely on, equally and all the time, would not just be annoying; its failure might be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs might be different ones.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

Monthnotes for October 2018


This month: Mozfest, a Digital Rights Cities Coalition, Trustable Technology Mark updates, ThingsCon Rotterdam.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Mozfest

Mozfest came and went, and was lovely as always. It was the 9th Mozfest, 8 or so of which I’ve participated in — all the way back to the proto (or prototyping?) Mozfest event called Drumbeat in Barcelona in, what, 2010? But no time for nostalgia; it was bustling as always. Two things were different for me this time: one, I participated as a Mozilla Fellow, which meant a different quality of engagement; and two, M and I brought the little one, so we had a toddler in tow. Which I’m delighted to say worked a charm!

A Digital Rights Cities Coalition

At Mozfest, the smart and ever lovely Meghan McDermott (see her Mozilla Fellows profile here) hosted a small invite-only workshop to formalize a Digital Rights Cities Coalition: a coalition of cities and civil society to protect, foster, and promote digital rights in cities. I was both delighted and honored to be part of this space, and we’ll continue working together on related issues. The hope is that my work with ThingsCon and the Trustable Technology Mark can inform and contribute value to that conversation.

Trustable Technology Mark

The Trustable Technology Mark is hurtling towards its official launch at a good clip. After last month’s workshop weekend at Casa Jasmina, I just hosted a Trustmark session at Mozfest. It was a good opportunity to have new folks look at the concept with fresh eyes. I’m happy to report that I walked away with some new contacts and leads, some solid feedback, and an overall sense that there are now solid answers to the obvious points of criticism that present themselves at first glance, as to why we chose this way and not that.

Courtesy Dietrich, a photo of me just before kicking off the session wearing a neighboring privacy booth’s stick-on mustache.

Also, more policy and academic partners are signing on, which is a great sign, and more leads are coming in from companies that want to apply for the Trustmark.

Next steps for the coming weeks: Finalize and freeze the assessment form, launch a website, line up more academic and commercial partners, reach out to other initiatives in the space, finalize trademarks (all ongoing), reach out to press, plan launch (starting to prep these two).

The current assessment form asks a total of 48 questions across 5 dimensions, with a total of 29 required YESes. Here’s the most up-to-date presentation:
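As a back-of-the-envelope illustration of that rule (48 yes/no questions, of which 29 must all be answered YES), the pass/fail logic might look like the sketch below. The question IDs and the helper function are invented for illustration; in reality the applications are reviewed by human experts, not scored by a script:

```python
def passes_assessment(answers: dict, required: set) -> bool:
    """Hypothetical pass/fail rule: every required question id must be
    answered True; optional questions don't affect the outcome.
    answers maps question id -> bool; required is the set of ids
    that must be YES for the Trustmark to be granted."""
    return all(answers.get(q, False) for q in required)

# Invented question ids for illustration:
answers = {f"q{i}": True for i in range(1, 49)}   # all 48 answered YES
required = {f"q{i}" for i in range(1, 30)}        # the 29 required ones
print(passes_assessment(answers, required))       # True

answers["q3"] = False                             # one required NO
print(passes_assessment(answers, required))       # False
```

The point of the structure is that the required set acts as a hard floor, while the remaining optional questions document practices without gating the mark.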


ThingsCon Rotterdam

Our annual ThingsCon conference is coming up: Join us in Rotterdam Dec 6-7!

Early bird is just about to end (?), and we’re about to finalize the program. It’s going to be an absolute blast. I’ll arrive happily (if probably somewhat bleary eyed after a 4am start that day) in Rotterdam to talk Trustable Technology and ethical tech, we’ll have a Trustmark launch party of some sort, we’ll launch a new website (before or right there and then), and we’ve been lining up a group of speakers so amazing I’m humbled even just listing it:

Alexandra Deschamps-Sonsino, Cennydd Bowles, Eric Bezzem, Laura James, Lorenzo Romanoli, Nathalie Kane, Peter Bihr, Afzal Mangal, Albrecht Kurze, Andrea Krajewski, Anthony Liekens, Chris Adams, Danielle Roberts, Dries De Roeck, Elisa Giaccardi, Ellis Bartholomeus, Gaspard Bos, Gerd Kortuem, Holly Robbins, Isabel Ordonez, Kars Alfrink, Klaas Kuitenbrouwer, Janjoost Jullens, Ko Nakatsu, Leonardo Amico, Maaike Harbers, Maria Luce Lupetti, Martijn de Waal, Martina Huynh, Max Krüger, Nazli Cila, Pieter Diepenmaat, Ron Evans, Sami Niemelä, Simon Höher, Sjef van Gaalen.

That’s only the beginning!

Here’s part of the official blurb, and more soon on thingscon.com and thingscon.nl/conference-2018

Now, 5 years into ThingsCon, the need for responsible technology has entered the mainstream debate. We need ethical technology, but how? With the lines between IoT, AI, machine learning and algorithmic decision-making increasingly blurring it’s time to offer better approaches to the challenges of the 21st century: Don’t complain, suggest what’s better! In this spirit, going forward we will focus on exploring how connected devices can be made better, more responsible and more respectful of fundamental human rights. At ThingsCon, we gather the finest practitioners; thinkers & tinkerers, thought leaders & researchers, designers & developers to discuss and show how we can make IoT work for everyone rather than a few, and build trustable and responsible connected technology.

Media, etc.

In the UK magazine NET I wrote an op-ed about Restoring Trust in Emerging Tech. It’s in the November 2018 issue, out now – alas, I believe, print only.

Reminder: Our annual ThingsCon report The State of Responsible IoT is out.

What’s next?

Trips to Brussels, Rotterdam, NYC to discuss a European digital agenda, launch a Trustmark, co-host ThingsCon, translate Trustmark principles for the smart city context, prep a US-based ThingsCon conference.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Yours truly, P.

New ThingsCon Report: The State of Responsible IoT 2018



A quick cross-post from the ThingsCon blog about a report we’ve been working on and that we just pushed online: The State of Responsible IoT 2018

A lot has happened since we published the first ThingsCon State of Responsible IoT report in 2017: Responsibility and ethics in tech have begun to enter mainstream conversations, and these conversations are having an effect. The media, tech companies, and policy makers all are rethinking the effect of technology on society.

The lines between the Internet of Things (IoT), algorithmic decision-making, Artificial Intelligence/Machine Learning (AI/ML), and data-driven services are all ever more blurry. We can’t discuss one without considering the others. That’s not a bad thing, it just adds complexity. The 21st century is no time for black-and-white thinking: It’s messy, complex, quickly evolving, and a time where simple answers won’t do.

It is all the more important to consider the implications, to make sure that all the new data-driven systems we’ll see deployed across our physical and digital environments work well—not just for the users but for all who are impacted.

Things have evolved and matured in big strides since our last State of Responsible IoT. This year’s report reflects that evolution, as well as the enormous breadth and depth of the debate. We couldn’t be happier with the result.

Some background as well as all the relevant links are available at thingscon.com/responsible-iot-report/ or using the short URL bit.ly/riot-report. The publication is available on Medium and as a PDF export.

This text is meant for sharing. The report is published by ThingsCon e.V. and licensed under Creative Commons (attribution/non-commercial/share-alike: CC BY-NC-SA). Images are provided by the author and used with permission. All rights lie with the individual authors. Please reference the author(s) when referencing any part of this report.

“The world doesn’t know where it wants to go”


Image: Compass by Valentin Antonucci (Unsplash)

One of the joys of working at the intersection of emerging tech and its impact is that I get to discuss things that are by definition cutting edge with people from entirely different backgrounds, like recently with my dad. He’s 77 years old and has a background in business, not tech.

We chatted about IoT, and voice-enabled connected devices, and the tradeoffs they bring between convenience and privacy. How significant chunks of the internet of things are optimized for costs at the expense of privacy and security. How IoT is, by and large, a network of black boxes.

When I tried to explain why I think we need a trustmark for IoT (which I’m building with ThingsCon and as a Mozilla fellow)—especially regarding voice-enabled IoT—he listened intently, thought about it for a moment, and then said:

“We’re at a point in time where the world doesn’t know where it wants to go.”

And somehow that exactly sums it up, ever so much more eloquently than I could have phrased it.

Only I’m thinking: Even though I can’t tell where the world should be going, I think I know where to plant our first step—and that is, towards a more transparent and trustworthy IoT. I hope the trustmark can be our compass.