
How to plan & govern a smart city?


Taking the publication of Sidewalk Labs’ Master Innovation and Development Plan for the smart city development at Toronto’s waterfront (“Toronto Tomorrow”) as an occasion to think out loud about smart cities in general, and smart city governance in particular, I took to Twitter the other day.

If you don’t want to read the whole thing there, here’s the gist: I did a close reading of a tiny (!) section of the giant data dump that is the 4-volume, 1,500+ page Sidewalk Labs plan. The section I picked was the one that John Lorinc highlighted in this excellent article — a couple of tables on page 222 of the last of these 4 volumes, in the section “Supplemental Tables”. This is the section that gets no love from the developers; it’s also the section that deals very explicitly with governance of this proposed development. So it’s pretty interesting. This, by the way, is also roughly the area of research for my Edgeryders fellowship.

On a personal note: It’s fascinating to me how prescient our speakers at Cognitive Cities Conference were back in 2011 – eight years is a long time in this space, and it feels like we invited exactly the right folks back then!

Smart cities & governance: A thorny relationship

In this close reading I focused on exactly that: What does governance mean in a so-called smart city context? What is it that’s being governed and how, and maybe most importantly, by whom?

Rather than re-hash the thread here, just a quick example to illustrate the kinds of issues at stake. Where this plan speaks of publicly accessible spaces and decision-making that takes community input into account, I argue that we need public spaces and full citizen rights. Defaults matter, and in cities we need the default to be public space, with citizens wielding the final decision-making power over their environment. Not even the most benign or innovative company or other non-public entity is an adequate replacement for a democratically elected administration/government, and all but the worst governments — cumbersome as a government might be in some cases — are better than the alternatives.

My arguments didn’t go unnoticed, either. The Canadian newspaper The Star picked up my thread on the thorny issue of governance and put it in the context of other experts critical of privatizing urban space; the few of those experts I know personally make me think I’m in good company there.

What’s a smart city, anyway?

As a quick but worthwhile diversion I highly recommend the paper Smart cities as corporate storytelling (Ola Söderström, Till Paasche, Francisco Klauser, published in City, vol. 18 (2014), issue 3). In it, the authors trace not just the origin of the term smart city but also the deliberate framing of the term, a framing that mostly serves the vendors of technologies and services in this space, in efficient and highly predictable ways. They base their analysis on IBM’s Smarter City campaign (highlights mine):

“This story is to a large extent propelled by attempts to create an ‘obligatory passage point’ (…) in the transformation of cities into ‘smart’ ones. In other words it is conceived to channel urban development strategies through the technological solutions of IT companies.”

These stories are important and powerful:

“Stories are important because they provide actors involved in planning with an understanding of what the problem they have to solve is (…). More specifically, they play a central role in planning because they ‘can be powerful agents or aids in the service of change, as shapers of a new imagination of alternatives’ (…). Stories are the very stuff of planning, which, fundamentally, is persuasive and constitutive storytelling about the future.” (…)

The underlying logic is that of a purely data-driven, almost mechanical model of urban management that is overly simplistic: it is neither political, nor does it require subject-matter expertise. This logic is inherently faulty. Essentially, it disposes of the messiness that humans bring with them, all their pesky, complex socio-cultural issues.

In this approach, cities are no longer made of different – and to a large extent incommensurable – socio-technical worlds (education, business, safety and the like) but as data within systemic processes. (…) As a result, the analysis of these ‘urban themes’ no longer seem to require thematic experts familiar with the specifics of a ‘field’ but only data-mining, data interconnectedness and software-based analysis.

So: Governance poor, underlying logic poor. What could possibly go wrong.

A better way to approach smart city planning

In order to think better, more productively about how to approach smart cities, we need to step back and look at the bigger picture.

If you follow my tweets or my newsletter, you’ll have encountered the Vision for a Shared Digital Europe before. It’s a proposed alternative for anything digital in the EU that would, if adopted, replace the EU’s Digital Single Market (DSM). When the EU thinks about the internet, it does so through the lens of the DSM — the lens of markets first and foremost. The Vision for a Shared Digital Europe (SDE), however, proposes to replace this markets-first logic with 4 alternative pillars:

  • Cultivate the Commons
  • Decentralize infrastructure
  • Enable self-determination
  • Empower public institutions

Image: Vision for a Shared Digital Europe (shared-digital.eu)

I think these 4 pillars should hold up pretty well in the smart city planning context. Please note just how different this vision is from what Sidewalk Labs (and the many other smart city vendors) propose:

  • Instead of publicly available spaces we would see true commons, including but not limited to space.
  • Instead of centralized data collection, we might see decentralization, meaning a broader, deeper ecosystem of offerings and more resilience (as opposed to just more efficiency).
  • Instead of being solicited for “community input”, citizens would actively shape and decide over their future.
  • And finally, instead of working around (or with, to a degree) public administrations, a smart city after this school of thought would double down on public institutions and give them a strong mandate, sufficient funding, and in-house capacity to match the industry’s.

It would make for a better, more democratic and more resilient city.

So I just want to put this out there. And if you’d like to explore this further together, please don’t hesitate to ping me.

Monthnotes for March 2019


This installment of monthnotes features the wrap-up of a fellowship, updates on a PhD program I’ll be a supervisor for, a ThingsCon event, and an anniversary. Enjoy.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

The Waving Cat turns 5

The Waving Cat just officially turned 5. Which still blows my mind. It’s been quite the ride, and 5 incredibly productive years.

In this time I’ve written 3 book-ish things and many reports, co-published multiple magazine-ish things and a proper academic paper. Co-chaired some amazing conferences like ThingsCon, Interaction16, UIKonf and more. Worked on strategy, policy and research across a pretty wide range of industries and clients, from global tech to non-profits to governments. Was on a number of juries, and mentored a bunch of teams. Was a Mozilla Fellow. Launched a consumer trustmark. Helped kickstart a number of exciting projects including ThingsCon, Zephyr Berlin, Dearsouvenir and the Trustable Technology Mark. Spoke at about 40 events. Wrote, contributed to, or was quoted in about 60 articles.

So yeah, it’s been a good 5-year run. On to the next round of adventures.

(By the way, that anniversary is the company’s; the website & blog go way, way further back. All the way to like 2005.)

Wrapping up my Mozilla Fellowship

With the end of February, my Mozilla Fellowship officially wrapped up. (That is, the active part of the fellowship; Mozilla makes a point of the affiliation being for life.)

Technically this fellowship was about launching ThingsCon’s Trustable Technology Mark (which got so much great media coverage!) but it was so much more.

I’m glad and grateful for the opportunity to be warmly welcomed into this fantastic community and to meet and work with so many ambitious, smart, caring and overall awesome people.

Nothing could symbolize this better than the lovely ceremony the team put together for Julia Kloiber’s and my farewell. Unicorn gavels and flower crowns and laminated “for life” cards and bubbly were all involved. Thank you!

OpenDott is nearly ready

The collaboration with Mozilla isn’t ending anytime soon. OpenDott.org is a paid PhD program in responsible tech, hosted by the University of Dundee in collaboration with Mozilla and a host of smaller orgs including ThingsCon, and I’ll be supervising a PhD within it.

I’m not logistically involved at this stage, but my understanding is that the final paperwork is being worked out with the 5 future PhDs right now: the last yeses collected, the last forms signed, etc. Can’t wait for this to kick off for real, even though I’ll only be marginally involved. I mean, come on – a PhD in responsible tech? How awesome is that.

ThingsCon

The new ThingsCon website, thingscon.org, is now more or less up and running and complete. Just in time for a (for ThingsCon somewhat unusual) event in May: a small and intimate unconference in Berlin about responsible paths in tech, economy, and beyond. Details and how to apply here.

Zephyrs: going fast

We’ve been making our ultimate travel pants under the Zephyr Berlin brand for about 2 years now. I’m not sure what happened, but we must have landed on a relevant recommendations list or two, as we’ve been getting a pretty sharp spike in orders these last few weeks. This is fantastic and a lot of fun. But the women’s cut is almost sold out now. We don’t know if/when we’ll produce the next batch, so if you’re looking to score one of those, don’t wait too long.

The Newsletter Experiment, continued

As I mentioned in the last monthnotes, over in my personal(ish) newsletter Connection Problem I started an experiment with memberships. The gist of it: I publish about 100K words a year, most of which are critical-but-constructive takes on the tech industry and how we can maximize responsible tech rather than exploitation. You can support this independent writing by joining the membership.

It’s all happening under the principle of “unlocked commons”, meaning members support writing that will be available in the commons, for free, continuously. You can learn more in the newsletter archive or on this page. It’s an exciting experiment for me, and hopefully the output is something that’s useful and enjoyable for you, too.

AI, ethics, smart cities

I was invited to the Aspen Institute’s annual conference on artificial intelligence, Humanity Defined: Politics and Ethics in the AI Age. It’s a good event, bringing (mostly US-based) AI experts to Germany and putting them onstage with (mostly German) policy experts to spark some debate. I’ve been attending since it started last year and have enjoyed it. This time, my highlight was some background on the EU High-Level Expert Group on AI’s Ethics Guidelines, shared by one of the group’s ethicists, Thomas Metzinger. He made a convincing case that this might currently be the best AI ethics document globally (it’s going to be published next week); and that it has glaring, painful shortcomings, especially as far as red lines are concerned – areas or types of AI applications that Europe would not engage in. These red lines are notably absent in the final document. Which seems… a shame? More on that soon.

I’m mentioning this here because there are a few exciting projects coming up that will give me an opportunity to explore the intersection of smart cities, policy, AI/machine decision-making, and how insights from creating the Trustable Technology Mark can lead to better, more responsible smart cities, tech governance, and applied AI ethics. More on that soon.

What’s next?

This week I’ll be at the Internet Freedom Festival (IFF) in Valencia, Spain. Then later in the month I’ll be teaching for a day about trustable tech at Hochschule Darmstadt at the kind invitation of Prof. Andrea Krajewski. Otherwise it’s drafting outlines, writing some project proposals, and lots of meetings and writing.

If you’d like to work with me in the upcoming months, I have limited availability but am always happy to have a chat. I’m currently doing the planning for Q3 and Q4 2019.

Have a great April!

Yours truly,
P.

The 3 I’s: Incentives, Interests, Implications


When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as an economic motivator for companies: maximize profit by lowering, offsetting, or externalizing costs, or by charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in terms of consumers trying to get their products as cheaply as possible.

All this, of course, is based on what economics calls rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. That’s for individuals, of course; but organizations have their own blind spots and biases, too.

So this focus on incentives, while near-ubiquitous, is myopic: incentives certainly play a role in decision-making, but they are not the only factor at play. Companies don’t only work towards maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too), and consumers don’t only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society and the individual, we should look beyond incentives. Interests, however vaguely we might define them, can clearly impact decision-making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, they might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help their long-term prospects through either new, better products (linking back to economic incentives) or more resilient relationships with their stakeholders (and hence less friction with external stakeholders).

Other organizations might simply be mission driven and focus on impact rather than profit, or at least balance both differently. Becoming a B-Corp, for example, has positive economic side effects (a higher chance of retaining talent, positive PR), but more than that it allows the org to align its own interests with those of key stakeholder groups, namely not just investors but also customers and staff.

Consumers, equally, by no means always prioritize price over other characteristics: organic and Fairtrade food or connected products with quality seals (like our own Trustable Technology Mark) might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one machine learning startup that had a party when GDPR came into effect: all of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Take Ramez Naam’s take on decarbonization — he makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous cycle: lower prices > more uptake > more innovation > lower prices, and so on.

We all know what happened next (still from Ramez):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”

Now, solar energy is not just competitive. In some parts of the world it is the cheapest, period.
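As a back-of-the-envelope illustration of how those learning rates compound (a toy calculation with made-up numbers, not Naam’s model): a 25% cost drop per doubling of cumulative scale means six doublings cut the cost by roughly 80%.

```python
from math import log2

def cost_after_scaling(initial_cost, initial_scale, new_scale, learning_rate=0.25):
    """Wright's-law-style estimate: cost falls by `learning_rate` for every
    doubling of cumulative scale. All numbers here are illustrative placeholders."""
    doublings = log2(new_scale / initial_scale)
    return initial_cost * (1 - learning_rate) ** doublings

# Made-up example: 100 cost units at 10 GW cumulative capacity,
# asking for the implied cost at 640 GW (six doublings):
print(cost_after_scaling(100, 10, 640))  # ~17.8, i.e. roughly an 80% drop
```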

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that are, like solar energy and the tech we need to harvest it, not technically infrastructure. It needs a bit of creative thinking, but it’s not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Monthnotes for November 2018


This month: Trustable Technology Mark, ThingsCon Rotterdam, a progressive European digital agenda.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Trustable Technology Mark

ThingsCon’s trustmark for IoT, the Trustable Technology Mark, now has a website. We’ll be soft-launching it with a small invite-only group of launch partners next week at ThingsCon Rotterdam. Over on trustabletech.org I wrote up some pre-launch notes on where we stand. Can’t wait!

ThingsCon Rotterdam

ThingsCon is turning 5! This thought still blows my mind. We’ll be celebrating at ThingsCon Rotterdam (also with a new website) where we’ll also be launching the Trustmark (as mentioned above). This week is for tying up all the loose ends so that we can then open applications to the public.

A Progressive European Digital Agenda

Last month I mentioned that I was humbled (and delighted!) to be part of a Digital Rights Cities Coalition at the invitation of fellow Mozilla Fellow Meghan McDermott (see her Mozilla Fellows profile here). This is one of several threads where I’m trying to extend the thinking and principles behind the Trustable Technology Mark beyond the consumer space, notably into policy—with a focus on smart city policy.

Besides the Digital Rights Cities Coalition and some upcoming work in NYC around similar issues, I was kindly invited by the Foundation for European Progressive Studies (FEPS) to help outline the scope of a progressive European digital agenda. I was more than a little happy to see that this conversation will continue moving forward, and I hope I can contribute some value to it. Personally, I see smart cities as a focal point of many threads of emerging tech, policy, and the way we define democratic participation in the urban space.

What’s next?

Trips to Rotterdam (ThingsCon & Trustmark), NYC (smart cities), Oslo (smart cities & digital agenda).

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Yours truly, P.

Trust Indicators for Emerging Technologies


For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.

Over the last few months, it has become clear that there’s demand for similar trust indicators in areas other than consumer products: smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology—infrastructure!—is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there’s real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—would not just be annoying, it might be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs might be different ones.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience (see the sketch after this list).
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

Monthnotes for October 2018


This month: Mozfest, a Digital Rights Cities Coalition, Trustable Technology Mark updates, ThingsCon Rotterdam.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Mozfest

Mozfest came and went, and was lovely as always. It was the 9th Mozfest, 8 or so of which I’ve participated in — all the way back to the proto (or prototyping?) Mozfest event called Drumbeat in Barcelona in, what, 2010? But no time for nostalgia: it was bustling as always. Two things were different for me this time: one, I participated as a Mozilla Fellow, which meant a different quality of engagement; and two, M and I brought the little one, so we had a toddler in tow. Which I’m delighted to say worked a charm!

A Digital Rights Cities Coalition

At Mozfest, the smart and ever lovely Meghan McDermott (see her Mozilla Fellows profile here) hosted a small invite-only workshop to formalize a Digital Rights Cities Coalition — a coalition of cities and civil society to protect, foster, and promote digital rights in cities. I was both delighted and honored to be part of this space, and we’ll continue working together on related issues. The hope is that my work with ThingsCon and the Trustable Technology Mark can inform and contribute value to that conversation.

Trustable Technology Mark

The Trustable Technology Mark is hurtling towards its official launch at a good clip. After last month’s workshop weekend at Casa Jasmina, I just hosted a Trustmark session at Mozfest. It was a good opportunity to have new folks look at the concept with fresh eyes. I’m happy to report that I walked away with some new contacts and leads, some solid feedback, and an overall sense that, at least for the obvious points of criticism that present themselves at first glance, there are now solid answers as to why this way and not that.

Photo courtesy of Dietrich: me just before kicking off the session, wearing a neighboring privacy booth’s stick-on mustache.

Also, more policy and academic partners are signing on, which is a great sign, and more leads are coming in from companies that want to apply for the Trustmark.

Next steps for the coming weeks: Finalize and freeze the assessment form, launch a website, line up more academic and commercial partners, reach out to other initiatives in the space, finalize trademarks (all ongoing), reach out to press, plan launch (starting to prep these two).

The current assessment form asks a total of 48 questions across 5 dimensions, with a total of 29 required YESes. Here’s the most up-to-date presentation:


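To make those mechanics concrete, here is a minimal, purely illustrative sketch of how such a tally could be represented; the dimension names are the Trustmark’s, but the question fields and the pass rule are placeholders of my own, not the actual assessment logic or review process.

```python
from dataclasses import dataclass

# The five Trustable Technology Mark dimensions; everything else below is a
# hypothetical illustration, not the real form or review process.
DIMENSIONS = (
    "Privacy & Data Practices",
    "Transparency",
    "Security",
    "Stability",
    "Openness",
)

@dataclass
class Question:
    dimension: str   # one of DIMENSIONS
    text: str        # question wording (placeholder)
    required: bool   # does this question need a YES to pass?
    answer: bool     # the applicant's self-assessed answer

def passes(questions: list[Question]) -> bool:
    """Toy pass rule: every required question must be answered YES."""
    return all(q.answer for q in questions if q.required)

# Toy usage with two placeholder questions:
form = [
    Question("Security", "Is data encrypted in transit?", required=True, answer=True),
    Question("Openness", "Is the firmware open source?", required=False, answer=False),
]
print(passes(form))  # True: the only required question was answered YES
```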
ThingsCon Rotterdam

Our annual ThingsCon conference is coming up: Join us in Rotterdam Dec 6-7!

Early bird tickets are just about to end (?), and we’re about to finalize the program. It’s going to be an absolute blast. I’ll arrive happily (if probably somewhat bleary-eyed after a 4am start that day) in Rotterdam to talk Trustable Technology and ethical tech, we’ll have a Trustmark launch party of some sort, we’ll launch a new website (before or right there and then), and we’ve been lining up a group of speakers so amazing I’m humbled even just listing them:

Alexandra Deschamps-Sonsino, Cennydd Bowles, Eric Bezzem, Laura James, Lorenzo Romanoli, Nathalie Kane, Peter Bihr, Afzal Mangal, Albrecht Kurze, Andrea Krajewski, Anthony Liekens, Chris Adams, Danielle Roberts, Dries De Roeck, Elisa Giaccardi, Ellis Bartholomeus, Gaspard Bos, Gerd Kortuem, Holly Robbins, Isabel Ordonez, Kars Alfrink, Klaas Kuitenbrouwer, Janjoost Jullens, Ko Nakatsu, Leonardo Amico, Maaike Harbers, Maria Luce Lupetti, Martijn de Waal, Martina Huynh, Max Krüger, Nazli Cila, Pieter Diepenmaat, Ron Evans, Sami Niemelä, Simon Höher, Sjef van Gaalen.

That’s only the beginning!

Here’s part of the official blurb, and more soon on thingscon.com and thingscon.nl/conference-2018

Now, 5 years into ThingsCon, the need for responsible technology has entered the mainstream debate. We need ethical technology, but how? With the lines between IoT, AI, machine learning and algorithmic decision-making increasingly blurring it’s time to offer better approaches to the challenges of the 21st century: Don’t complain, suggest what’s better! In this spirit, going forward we will focus on exploring how connected devices can be made better, more responsible and more respectful of fundamental human rights. At ThingsCon, we gather the finest practitioners; thinkers & tinkerers, thought leaders & researchers, designers & developers to discuss and show how we can make IoT work for everyone rather than a few, and build trustable and responsible connected technology.

Media, etc.

In the UK magazine NET I wrote an op-ed about Restoring Trust in Emerging Tech. It’s in the November 2018 issue, out now – alas, I believe, print only.

Reminder: Our annual ThingsCon report The State of Responsible IoT is out.

What’s next?

Trips to Brussels, Rotterdam, NYC to discuss a European digital agenda, launch a Trustmark, co-host ThingsCon, translate Trustmark principles for the smart city context, prep a US-based ThingsCon conference.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Yours truly, P.

What’s long-term success? Outsized positive impact.


For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That’s both boon and bane of my work. On one hand, it means I can contribute expertise surgically into larger contexts; on the other, it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations—they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan, concretely in Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.