Monthnotes for June & July 2018

Lots of travel and a brief time off means a combined summer-ish edition of month notes for June & July. A lot has happened over the last 8 or so weeks, so let’s dive right in. In no particular order…

Trustable Technology mark

The ThingsCon trustmark for IoT has a name, finally! Meet the Trustable Technology mark, or #trustabletech for short. The URL (trustabletech.com) still forwards to the trustmark page on ThingsCon.com, but will have its own place soon. The most current version of the explainer presentation is up on Slideshare:

What’s more, I’m not alone in this endeavor—far from it! More and more folks from the ThingsCon network have been giving their input, which is priceless. Also, Pete Thomas (University of Dundee) has been taking the design lead and been a great sparring partner on strategy questions, and Jason Schultz (NYU Law) has been thinking about legal and policy implications. A big thank you to Pete & Jason! I’m super excited this is moving along at such a clip.

Going forward, the next steps are to finalize, and then test more extensively, the assessment checklist that’s open for comments in this gDoc. Jason and I also just presented the trustmark at the most recent ThingsCon Salon Berlin (video below), and I’ll be presenting it again at ThingsCon Salon Cologne on August 3rd. (Thingscon.com/events has all up-to-date details.)

Media, etc.

Brand Eins interviewed me about IoT and how it challenges our notion of ownership and trust. Details in my blog post here.

My somewhat eclectic newsletter Connection Problem has completed Season 3 with just over 30 installments. I’m taking a writing break of a few weeks, and then I’ll kick off Season 4 soon. Sign up now if you want to follow along!

ThingsCon

With ThingsCon, we co-signed not one but two declarations and open letters: The Toronto Declaration about AI and human rights (initiated by AccessNow) and the Open Letter to G20 Leaders.

Travel & Events I Attended

I got to join a whole bunch of things these last few weeks.

I thoroughly enjoyed both a workshop on IoT security and market surveillance by Stiftung Neue Verantwortung, where we discussed all things certification, incentives and assessment frameworks; and the always fascinating Museum of the Future workshop in Berlin. I’d been to one in Amsterdam before, and even though I’m spoiled by well-curated events, the group that Noah & team convene in this context is humbling and fascinating; my only regret is that this time I couldn’t be there full time.

In between the two I got to go to New York City for meetings and a quick swing-by at Data & Society, as well as Toronto for the Mozilla Foundation’s all-hands where I was kindly invited to participate as a fellow. Speaking of committed & warm & driven groups!

After that, some family time in the Pacific Northwest, and a short vacation, which included a little road trip through the Cascades. What a stunning & wonderful region!

What’s next?

On one hand I’m gearing up the planning for fall. If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat.

On the other I’m pretty much heads-down to get the trustmark to the next level. This includes the nitty gritty work of both improving the trustmark assessment tool, and of lining up launch partners. It also means planning a little road show to expose this idea to more eyes and ears, including ThingsCon Salon Cologne, Mozfest, ThingsCon Amsterdam, and a few other events in between. We’re also in the middle of copy-editing the upcoming 2018 issue of the ThingsCon report “The State of Responsible IoT” (#RIoT). More on that soon.

So back to the text mines!

Have a great August.

Yours truly, P.

A Trustmark for the Internet of Things: First thoughts

I’ve been researching the potential of consumer trust labels for IoT for quite some time. I believe trustworthy connected products should be easier for consumers to find, and that the companies (or other organizations) that make connected things should have a way to differentiate their products and services through their commitment to privacy, security, and overall better products.

One milestone in this research was a report I authored last fall, A Trustmark for IoT, based on research within the larger ThingsCon community and in collaboration with Mozilla Foundation. (Full disclosure: My partner works for Mozilla.)

Ever since, I’ve been exploring how to turn this research into action. So far this has taken two strands:

  1. I’ve been active (if less than I wanted, due to personal commitments) in the #iotmark initiative co-founded by long-time friend and frequent collaborator Alexandra Deschamps-Sonsino. The #iotmark follows a certification model around privacy, security, and related topics.
  2. I’ve also been collecting thoughts and drafting a concept for a separate trustmark that follows a commitment model.

At this point I’d like to share some very early, very much draft stage thoughts about the latter.

A note: This trustmark is most likely to happen and be developed under the ThingsCon umbrella. I’m sharing it here first, today, not to take credit but because it’s so rough around the edges that I don’t want the ThingsCon community to pay for any flaws in the thinking, of which I’m sure there are still plenty. This is a work in progress, and shared openly (and maybe too early) because I believe in sharing thought processes early even if it might make me stupid. It’s ok if I look stupid; it’s not ok if I make anyone else in the ThingsCon community look stupid. That said, if we decide to push ahead and develop this trustmark, we’ll be moving it over to ThingsCon or into some independent arrangement—like most things in this blog post, this remains yet to be seen.

Meet Project Trusted Connected Products (working title!)

In the trustmark research report, I laid out strengths and weaknesses of various approaches to consumer labeling, from regulation-based (certification required to be allowed to sell in a certain jurisdiction) to voluntary-but-third-party-audited certification to voluntary self-audited labels to completely self-authorized labels (“Let’s put a fancy sticker on it!”). It’s a spectrum, and there’s no single golden path: What’s best depends on context and goals. Certifications tend to require more effort (time, money, overhead) and in turn tend to be more robust and have more teeth; self-labeling approaches tend to be more lightweight and easier to implement, and in turn tend to have fewer teeth.

The mental model I’ve been working with is this: Certifications (like the #iotmark) can be incredibly powerful at weeding out the crap and establishing a new baseline. That’s very powerful and very important, especially in a field as swamped by crappy, insecure, not-privacy-respecting products as IoT. But I’m not an expert in certifications, and others are, so I’d rather find ways of collaborating than focus on this approach myself.

What I want to go for instead is the other end of the spectrum: A trustmark that aims not at raising the baseline, but at raising the bar at the top end. Like so:

Image: Peter Bihr (Flickr)

I’d like to keep this fairly lightweight and easy for companies to apply, but find a model where there are still consequences if they fail to follow through.

The mechanism I’m currently favoring leans on transparency and a control function of the public. Trust but verify.

Companies (or, as always, other orgs) would commit to implementing certain practices, etc., (more on what below) and would publicly document what they do to make sure this works. (This is an approach proposed during the kickoff meeting for the #iotmark initiative in London, before the idea of pursuing certification crystalized.) Imagine it like this:

  • A company wants to launch a product and decides to apply for the trustmark. This requires them to follow certain design principles and implement certain safeguards.
  • The company fills out a form where they document how they make sure these conditions for the trustmark are met for their product. (In a perfect world, this would be open source code and the like; in reality that wouldn’t ever work because of intellectual property, so it would be a more abstract description of work processes and measures taken.)
  • This documentation is publicly available in a database online so as to be searchable by the public: consumers, consumer advocates and media.

If it all checks out, the company gets to use the label for this specific product (for a time; maybe 1-2 years). If it turns out they cheated or changed course: Let the public shaming begin.
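The mechanism sketched above could be modeled very roughly like this. This is a hypothetical sketch only: the class, field names, and the 365-day validity default are my illustrative assumptions, not part of any actual trustmark design.

```python
# A minimal, hypothetical sketch of the commitment model described above:
# a public application record documenting how each trustmark condition is
# met, granted per product for a limited period (say, 1-2 years).
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Dict, Optional

@dataclass
class TrustmarkApplication:
    company: str
    product: str
    # condition -> publicly searchable description of how it is met
    documentation: Dict[str, str] = field(default_factory=dict)
    granted_on: Optional[date] = None
    valid_for_days: int = 365  # illustrative default

    def grant(self, today: date) -> None:
        """Grant the mark once the public documentation checks out."""
        self.granted_on = today

    def is_valid(self, today: date) -> bool:
        """The mark applies to this specific product, for a time only."""
        return (
            self.granted_on is not None
            and today <= self.granted_on + timedelta(days=self.valid_for_days)
        )

app = TrustmarkApplication(
    company="Example Co",
    product="Example Speaker",
    documentation={"privacy": "Public description of data-handling processes."},
)
app.grant(date(2018, 8, 1))
print(app.is_valid(date(2019, 7, 31)))  # True
print(app.is_valid(date(2020, 1, 1)))   # False
```

The point of the sketch is the shape of the record: the documentation is public by design, and validity expires, so the control function of the public has something concrete to check against.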

This isn’t a foolproof, super robust system. But I believe this easy-to-implement-but-transparent mix can be quite powerful.

What’s in a trustmark?

What are the categories or dimensions that the trustmark speaks to? I’m still drafting these and this will take some honing, but I’m thinking of five dimensions (again, this is a draft):

  • Privacy & Data Practices
  • Transparency
  • Security
  • Openness
  • Sustainability

Why these five? Connected products (IoT) are tricky in that they tend not to be stand-alone products like the toaster oven of yore.

Instead, they are part of (more-or-less) complex systems that include the device hardware (what we used to call the product) with its sensors and actuators, the software layer on the device, and the server infrastructure on the backend. But even if these were “secure” or “privacy-conscious” (whatever this might mean specifically) it wouldn’t be enough: The organization (or often organizations, plural) that make, design, sell, and run these connected products and services also need to be up to the same standards.

So we have to consider other aspects like privacy policies, design principles, business models, service guarantees, and more. Otherwise the ever-so-securely captured data might be sold or shared with third parties, it might be sold along with the company’s other assets in case of an acquisition or bankruptcy, or the product might simply cease working in case the company goes belly-up or changes their business model.

This is where things can get murky, so we need to define pretty clear standards of what and how to document compliance, and come up with checklists, etc.

In some of these areas, the ThingsCon community has leading experts, and we should be able to find good indicators ourselves; in others, we might want to find other indicators of compliance, like through existing third party certifications, etc.; in others yet, we might need to get a little creative.

For example, an indicator that counts towards the PRIVACY & DATA PRACTICES dimension could be a strong (if possibly redundant) aspect like “is it GDPR compliant”, “is it built following the Privacy by Design principle”, or “are there physical off-switches or blockers for cameras”. If all three checkboxes are ticked, this would be 3 points on the PRIVACY & DATA PRACTICES score. (Note that “Privacy by Design” is already a precondition of GDPR compliance; so in this case, one thing would add two points. I wouldn’t consider this too big an issue: After all, we want to raise the bar.)
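To make the scoring idea concrete, here’s a hypothetical sketch: each dimension has a checklist of yes/no indicators, and the dimension score is simply the number of boxes ticked. The indicator names and the two-dimension checklist below are purely illustrative, not the actual draft checklist.

```python
# Illustrative checklist: dimension -> list of yes/no indicators.
CHECKLIST = {
    "Privacy & Data Practices": [
        "GDPR compliant",
        "Built following Privacy by Design",
        "Physical off-switches/blockers for cameras",
    ],
    "Security": [
        "Encrypted communication",
        "Guaranteed security updates",
    ],
}

def dimension_scores(answers):
    """answers maps indicator name -> True/False; unanswered counts as False."""
    return {
        dimension: sum(answers.get(indicator, False) for indicator in indicators)
        for dimension, indicators in CHECKLIST.items()
    }

answers = {
    "GDPR compliant": True,
    "Built following Privacy by Design": True,
    "Physical off-switches/blockers for cameras": True,
    "Encrypted communication": True,
}
print(dimension_scores(answers))
# {'Privacy & Data Practices': 3, 'Security': 1}
```

Note that this simple additive scoring reproduces the redundancy mentioned above: if one underlying fact (Privacy by Design) ticks two boxes, it simply adds two points, which is fine if the goal is raising the bar rather than perfect orthogonality.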

What’s next?

There are tons of details, and some very foundational things yet to consider and work out. There are white spots on the metaphorical map to be explored. The trustmark needs a name, too.

I’ll be looking to get this into some kind of shape, start gathering feedback, and also will be looking for partners to help make this a reality.

So I’m very much looking forward to hear what you think—I just ask to tread gently at this point rather than stomping all over it just yet. There’ll be plenty of time for that later.

What’s long-term success? Outsized positive impact.

For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That’s both boon & bane of my work. On one hand it means I can contribute expertise surgically into larger contexts, on the other it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations—they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan. (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.

Monthnotes for January 2018

January isn’t quite over, but since I’ll be traveling starting this weekend, I wanted to drop these #monthnotes now. A lot of time this month went into prepping an upcoming project which is likely to take up the majority of my time in 2018. More on that soon.

×

Capacity planning: This year my work capacity is slightly reduced since I want to make sure to give our new family member the face time he deserves. That said, this year’s capacity is largely accounted for, which is extra nice given it’s just January, and it’s for a thing I’m genuinely excited about. Still, I think it’s important to work on a few things in parallel because there’s always potential that unfolds from cross-pollination; so I’m up for a small number of not-huge projects in addition to what’s already going on, particularly in the first half of the year. Get in touch.

×

On Sunday, I’m off to San Francisco for a work week with the good folks at Mozilla (because reasons) and a number of meetings in the Bay Area. (Full disclosure: my partner works at Mozilla.) Last year I did some work with Mozilla and ThingsCon exploring the idea of a trustmark for IoT (our findings).

Image: commons (SDASM Archives)

Should you be in SF next week, ping me and we can see if we can manage a coffee.

×

IoT, trust & voice: More and more, I’m coming around to the idea that voice is the most important—or at least most imminent—manifestation of IoT regarding user data. Voice, and how it relates to trust, is what a lot of my work in 2018 will focus on.

×

User profiling in smart homes: Given my focus on voice & trust in IoT this year, I was very happy that Berlin tech & policy think tank Stiftung Neue Verantwortung invited me to a workshop on user profiling in smart homes. It was all Chatham House rules and I don’t want to dive into specifics at this point, but smart homes and voice assistants are worth a deep dive when it comes to trust—and trustworthiness—in IoT.

Connected homes and smart cities

Not least because (as I’ve been hammering home for a long time) the connected home and the smart city are two areas that most clearly manifest a lot of the underlying tensions and issues around IoT at scale: Connected homes, because traditionally the home was considered a private space (that is, if you look at the last 100 years in the West), and embedded microphones in smart homes mean it’s not anymore. And smart cities, because in public space there is no opt-out: Whatever data is collected, processed, and acted on in public space impacts all citizens, whether they want it or not. These are fundamental changes with far-reaching consequences for policy, governance, and democracy.

×

Worth your time: A few pointers to articles and presentations I found worthwhile:

  • Kate Crawford’s talk on bias in AI training data is ace: The Trouble with Bias [Youtube].
  • TechCrunch has a bit of a top-level explainer of GDPR, Europe’s General Data Protection Regulation that goes into effect in May this year. It’s being widely lauded in Europe (except by the usual suspects, like ad-land), and been unsurprisingly criticized in Silicon Valley as disruptive regulation. (See what I did there?) So it came as a pleasant surprise to me that TechCrunch of all places finds GDPR to be a net positive. Worth 10 minutes of your time! [TechCrunch: WTF is GDPR?]
  • noyb.eu—My Privacy is none of your Business: Max Schrems, who became well-known in European privacy circles after winning privacy-related legal battles including one against Facebook and one that brought down the US/EU Safe Harbor Agreement, is launching a non-profit: They aim to enforce European privacy protection through collective enforcement, which is now an option because of GDPR. They’re fundraising for the org. The website looks very basic, but I’d say it’s a legit endeavor and certainly an interesting one.

×

Writing & thinking:

  • In How to build a responsible Internet of Things I lay out a few basic, top-level principles distilled from years of analyzing the IoT space—again with an eye on consumer trust.
  • On Business Models & Incentives: Some thoughts on how picking the wrong business model—and hence creating harmful incentives for an organization to potentially act against its own customers—is dangerous and can be avoided.
  • I’ve been really enjoying putting together my weekly newsletter. It’s a little more personal and interest-driven than this blog, but tackles similar issues of the interplay between tech & society. It’s called Connection Problem. You can sign up here.

I was also very happy that Kai Brach, founder of the excellent Offscreen magazine kindly invited me to contribute to the next issue (out in April). The current one is also highly recommended!

×

Again, if you’d like to work with me in the upcoming months, please get in touch quickly so we can figure out how best to work together.

×

That’s it for January. See you in Feb!

How to build a responsible Internet of Things

Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: The Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. And thus users of connected devices today have a lot to worry about (because smart speakers and their built-in personal digital assistants are particularly popular at the moment, we’ll use those as an example):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today, and they face them constantly.

Trust and expectations in IoT. Image: Peter Bihr/The Waving Cat

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, long terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not in the companies themselves then certainly in the system. No user today feels empowered to negotiate a contractual relationship with a tech company on an equal footing—because they can’t.

Whenever some scandal blows up and creates damaging PR, the companies slowly backtrack, but in too many cases they were, legally speaking, within their rights: Nobody understood the contract, yet the abstract product language suggests a certain spirit of mutual goodwill between the company and its users that is not honored by the letter of that contract.

So short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letters of the contract match that spirit, not the other way round. Earning back the users’ trust will not just make the ecosystem more healthy and robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended or in interesting new ways? Do they understand the tradeoffs involved in using a product? These are all questions that basic but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: Offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t a conclusive list, and even the most extensive checklist wouldn’t do any good in this case: Respect isn’t a list of actions, it’s a mindset to apply to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely into the responsibility of the security team. Others are based on the legal arrangements that govern how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.

The issues at stake aren’t easy to solve. There are no silver bullets, and there are grey areas that are fuzzy and complicated.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing to safe storage and processing of data to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: The end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to update the software with security patches, or if the contract with the user doesn’t have protections against data spills when the company is acquired or liquidated, then data that was safe for years can suddenly pose new risks.

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. Prevention in healthcare is less attractive than quick surgery to repair the damage, but it is also more effective and cheaper in the long run; the same holds here. So hang in there, and the users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But with the wrong business model, it’ll thrive not together with its customers but at their expense.

We see so much damage done because wrong business models—and hence, wrong incentives—drive and promote horrible decision making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached a point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast of a time.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/ 

Interview with Netzpolitik.org: Regulierung und Datenschutz im Internet der Dinge

In September I spoke at Netzpolitik’s annual conference, Das ist Netzpolitik. While I was there, Netzpolitik.org also recorded an interview with me: “Regulierung und Datenschutz im Internet der Dinge“.

A big thank you to Netzpolitik and Stefanie Talaska for the conversation!

Launching the ThingsCon Fellowship Program

Please note: This is cross-posted from the ThingsCon blog.

We’re happy to announce the ThingsCon Fellowship Program.

The ThingsCon Fellowship recognizes achievements and commitment that advance the ThingsCon mission of fostering the creation of a responsible and human-centric IoT generally, and support for the ThingsCon community specifically.

With the program, we aim to amplify the fellows’ work in this area and to promote knowledge transfer and networking between fellows and the larger ThingsCon network.

The first round of fellows for 2017/2018 consists of a small cohort of ThingsCon allies. These individuals have over the past years put tremendous effort into advancing and promoting the ThingsCon mission.

We are both humbled and proud to welcome these six outstanding individuals as the inaugural ThingsCon Fellows:

ThingsCon Fellows 2017-2018

  • Alexandra Deschamps-Sonsino
  • Ame Elliott
  • Dries de Roeck
  • Iohanna Nicenboim
  • Michelle Thorne
  • Ricardo Brito

Together with them we will develop and evolve the ThingsCon Fellowship program through a collaborative process of mutual exchange and shared learning.

Learn more about the program and the fellows on thingscon.com/fellowship.