The IG BCE’s magazine Kompakt interviewed me about IoT, AI and why simple solutions are so often inappropriate for complex issues. The interview is in German, available as an e-paper. (5 November 2019)
This article is a few months old (and in German), but it contains two points of view that I’ll just offer side by side, as they pretty much sum up the state of play in smart cities these days.
For context, this is about a smart city partnership in which Huawei implements its technologies in Duisburg, a mid-sized German city with a population of about 0.5 million. The (apparently non-binding) partnership agreement includes Smart Government (administration), Smart Port Logistics, Smart Education (education & schools), Smart Infrastructure, 5G and broadband, Smart Home, and the urban internet of things.
Note: The quotes and paraphrases are roughly translated from the original German article.
Jan Weidenfeld from the Mercator Institute for China Studies:
“As a city administration, I’d be extremely cautious here.” China has a fundamentally different societal system, and a legal framework that means that every Chinese company, including Huawei, is required to open data streams to the communist party. (…)
Weidenfeld points out that 5 years ago, when deliberations about the project began, China was a different country than it is today. At both federal and state levels, the thinking about China has evolved. (…)
“Huawei Smart City is a large-scale societal surveillance system, out of which Duisburg buys the parts that are legally fitting – but this context mustn’t be left out when assessing the risks.”
Anja Kopka, media spokesperson for the city of Duisburg:
The city of Duisburg doesn’t see “conclusive evidence” regarding these security concerns. The data center meets all security requirements for Germany, and is certified as such. “Also, as a municipal administration we don’t have the capacity to reliably assess claims of this nature.” Should federal authorities whose competencies include assessing such issues provide clear action guidelines for dealing with Chinese partners in IT, then Duisburg will adapt accordingly.
The translation is a bit rough around the edges, but I think you’ll get the idea.
With infrastructure, when we see the damage it’s already too late
We have experts warning us, but the warnings are of such a structural nature that they’re kind of too big and obvious to prove. Predators will kill other animals to eat.
By the time abuse or any real issue can be proven, it’d be inherently too late to do anything about it. We have a small-ish city administration that knows perfectly well that they don’t have the capacity to do their due diligence, so they just take their partner’s word for it.
The third party here, of course, being a global enterprise with an operating base in a country that has a unique political and legal system that in many ways isn’t compatible with any notion of human rights, let alone data rights, that otherwise would be required in the European Union.
The asymmetries in size and speed are vast
And it’s along multiple axes — imbalance of size and speed, and incompatibility of culture — that I think we see the most interesting, and potentially most devastating, conflicts.
And it’s not an easy choice at some level: Someone comes in and offers much-needed resources that you have no other chance to get; desperation might force you to make some subpar decisions. But this comes at a price — the question is just how high that price will be over the long term.
I’m not convinced that any smart city built on this traditional approach is worth implementing — or even might be; probably not. But of all the players, the one backed by a non-democratic regime with a track record of mass surveillance and human rights violations is surely at the bottom of the list.
It’s all about the process
That’s why whenever I speak about smart cities (which I do quite frequently these days), I focus on laying better foundations: We can’t start from scratch every time we consider a smart city project. We need a framework as a point of reference and as a guideline, one that keeps us aligned with our values.
Some aspects to take into account here are transparency, accountability, privacy and security (my mnemonic device is TAPS); a foundation based on human and digital rights; and participatory processes from day one.
And just to be very clear: Transparency is not enough. Transparency without accountability is nothing.
Please note that this blog post is based on a previously published item in my newsletter Connection Problem, to which you can sign up here.
I’m more than happy to give this a shout-out:
ThingsCon, or rather our annual ThingsCon conference, is coming up (Rotterdam, 12-13 December) and I think it’ll be a real blast.
Just a few of the speakers and workshop hosts of a true killer line-up: Marleen Stikker, Tracy Rolling, Alexandra Deschamps-Sonsino, Klasien van de Zandschulp, Mirena Papadimitriou, Cayla Key, Namrata Primlani, Tijmen Schep, Geke van Dijk, Klaas Kuitenbrouwer, Rob van Kranenburg, Lorenzo Romagnoli, Davide Gomba, Jeroen Barendse, Cristina Zaga, Heather Wiltse will all be there, and many more.
If you’re considering joining, earlier ticket sales help us a lot with our planning.
So if you’re planning to participate, the right moment to buy a ticket is TODAY.
About a year ago, we changed our home audio setup to “smart” speakers: We wanted to be able to stream directly from Spotify, as well as from our phones. We also wanted to avoid introducing yet another microphone into our living room. (The kitchen is, hands down, the only place where a voice assistant makes real sense to me personally. Your mileage may vary.) Preferably, there should be a line-in as well; I’m old school that way.
During my research I learned that the overlap in the Venn diagram of speakers that (a) are connected (“smart”) for streaming, (b) have good sound and (c) don’t have a microphone is… very small indeed.
The Sonos range looked best to me; except for those pesky microphones. Our household is largely voice assistant free, minus the phones, where we just deactivated the assistants to whatever degree we could.
In the end, we settled for a set of Bang & Olufsen Beoplay speakers for living room and kitchen: Solid brand, good reputation. High end. Should do just fine — and it had better, given the price tag!
This is just for context. I don’t want to turn this into a product review. But let’s just say that we ran into some issues with one of the speakers. These issues appeared to be software related. And while from the outside they looked like they should be easy to fix, it turned out the mechanisms to deliver the fixes were somewhat broken themselves.
Long story short: I’m now trying to return the speakers. Which made me realize that completely different rules apply than I’m used to. In Germany, where we are based, consumer protection laws are reasonably strong, so if something doesn’t work you can usually return it without too much hassle.
But with a set of connected speakers, we have an edge case. Or more accurately, a whole stack of edge cases.
This is going to be an interesting process, I’m afraid. Can we return the whole family and switch on over to a different make of speakers? Or are we stuck with an expensive set of speakers that, while not quite broken, is very much unusable in our context?
If so, then at least I know never to buy connected speakers again. Instead, I guess I’d recycle these and go back to a high-end analog speaker set with some external streaming connector – knowing full well that the connector will be useless in a few years’ time, but that the speakers and amp would be around and working flawlessly for 15-20 years, like my old ones did.
And that is the key insight here for our peers in the industry: If your product in this nascent field fails because of poor quality management, you leave scorched earth. Consumers aren’t going to trust your products anymore, sure. But they are unlikely to trust anyone else’s, either.
The falling tide lowers all boats.
So let’s not just get the products right: In consumer IoT, all too often we think in families/ecosystems. So we have to consider long-term software updates (and mechanisms to deliver them, and fall-backs for those mechanisms…) as well as return policies in case something goes wrong with one of the products. And while we’re at it, we need to upgrade consumer protection regulation to deal with these issues of ecosystems and software updates.
This is the only way to ensure consumer trust — and to reap the benefits of innovation without suffering all the externalized costs and unintended consequences of a job sloppily done.
Update (Oct 2019): Turns out other companies are also starting to recognize that there’s demand for mic-free speakers: Sonos just launched a speaker that they market specifically as mic-free, and that’s otherwise identical to one of their staples. (It’s called the One SL; I imagine the “SL” stands for “streamlined” or “stop listening”, but I might be projecting.)
Throughout 2018, we developed the Trustable Technology Mark, a consumer trustmark for IoT that our non-profit ThingsCon administers. As the project lead on this Trustmark, I spent countless hours in discussions and meetings, at workshops and conferences, and doing research about other relevant consumer labels, trustmarks and certifications that might offer us some useful paths forward. I thought it might be interesting to share what I’ve learned along the way.
(Please note that this is also the reason this blog post appears first on my website; it’s because if there’s anything problematic here, it’s my fault and doesn’t reflect ThingsCon positions.)
Launching a Trustmark is not about the label but about everything else. I’ve encountered probably dozens of cool label concepts, like “nutritional” labels for tech, “fair trade” style privacy labels, and many more. While there were many really neat approaches, the challenges lie elsewhere entirely. Concretely, the main challenges I see are the following:
We’ve solved some of these challenges, but not all. Our data sourcing has been working well. We’re doing well with our stakeholders and possible conflicts of interest (nobody gets paid, we don’t charge for applications/licenses, and it’s all open sourced: In other words, no conflicts of interest and very transparent stakeholders, but this raises sustainability challenges). We don’t yet have robust governance structures, need a bigger pool of experts for reviews, and haven’t built the reach and relevance yet that we’ll need eventually if this is to be a long term success.
Going into the project, I naively thought there must be existing models we could just adapt. But it turns out new problem spaces don’t always work that way. The nature of the Internet of Things (IoT) and connected devices meant we faced a set of fairly new and unique challenges that nobody had solved. (For example, how do you deal with ongoing software updates that could change the nature of a device multiple times, without resorting to a verification mechanism like reverse engineering that would be too cost-intensive to be realistic?)
So we had to go back to the drawing board, and came out with a solution that I would say is far from perfect, but better than anything else I’ve seen to date: Our human experts review applications based on information provided by the manufacturer/maker of the product. That information, in turn, comes from a fairly extensive and holistic questionnaire that covers everything from feature level to general business practices, as well as guarantees that the company makes on the record by using our Trustmark.
Based on that, our Trustmark offers a carrot; we leave it to others to be the stick.
That said, we did learn a lot from the good folks at the Open Source Hardware Association. (Thanks, OSHWA!)
We tried to collaborate as closely as possible with a number of friendly organizations (shout-out to Better IoT & Consumers International!) but also had to concede that in a project as fast-moving and iterative as this one, it’s tough to coordinate as closely as we would have liked. That’s on us — by which I mean, it’s mostly on me personally, and I’m sorry I didn’t do a better job of aligning this.
For example, while I did manage to have regular backchannel exchanges with collaborators, more formal partnerships are a whole different beast. I had less than a year to get this out the door, so anything involving formalizing was tricky. I was all the happier that a bunch of the partners in the Network of Centres and some other academic organizations decided to take the leap and set up lightweight partnerships with us. This allows a global footprint with partners in Brazil, United States, United Kingdom, Germany, Poland, Turkey, India and China. Thank you!
One of the most important takeaways for me, however, was this: You can’t please everyone, or solve every problem.
For every aspect we would include, we’d exclude a dozen others. Every method (assessment, enforcement, etc.) used means another not used. Certification or license? Carrot or stick? Third-party verification, or relying on provided data? Incorporate life cycle analysis, or focus on privacy? Include cloud service providers for IoT, or autonomous vehicles, or drones? These are just a tiny, tiny fraction of the questions we needed to decide. In the end, I believe that having a chance at success means cutting out many, if not most, aspects in order to have as clear a focus as possible.
And it means making a stand: Choose the problem space, and your approach to solving it, so you can be proud of it and stand behind it.
For the Trustable Technology Mark, that meant: We prioritized a certain purity of mission over watering down our criteria, while choosing pragmatic processes and mechanisms over those we thought would be more robust but unrealistic. In the words of our slide deck, the Trustmark should be hard to earn, but easy to document. That way, we figured, we could find those gems of products that try out truly novel approaches that are more respectful of consumers’ rights than the broad majority of the field.
Is this for everyone, or for everything? Certainly not. But that’s ok: We can stand behind it. And should we learn we’re wrong about something then we’ll know we tried our best, and can own those mistakes, too. We’ve planted a flag, a goal post that we hope will shift the conversation by setting a higher goal than most others.
The Trustable Technology Mark is a project under active development, and we’ll be happy to share our learnings as things develop. In the meantime, I hope this has been helpful.
If you’ve got anything to share, please send it to me personally (email@example.com) or to firstname.lastname@example.org.
The Trustable Technology Mark was developed under the ThingsCon umbrella with support from the Mozilla Foundation.
I was super happy to be interviewed about ThingsCon and the Trustable Technology Mark for a report by the World Economic Forum (WEF) for their newly launched initiative Civil Society in the Fourth Industrial Revolution. You can download the full report here:
The report was just published at the WEF in Davos and it touches on a lot of areas that I think are highly relevant:
Grasping the opportunities and managing the challenges of the Fourth Industrial Revolution require a thriving civil society deeply engaged with the development, use, and governance of emerging technologies. However, how have organizations in civil society been responding to the opportunities and challenges of digital and emerging technologies in society? What is the role of civil society in using these new powerful tools or responding to Fourth Industrial Revolution challenges to accountability, transparency, and fairness?
Following interviews, workshops, and consultations with civil society leaders from humanitarian, development, advocacy and labor organizations, the white paper addresses:
— How civil society has begun using digital and emerging technologies
— How civil society has demonstrated and advocated for responsible use of technology
— How civil society can participate and lead in a time of technological change
— How industry, philanthropy, the public sector and civil society can join together and invest in addressing new societal challenges in the Fourth Industrial Revolution.
Thanks for featuring our work so prominently in the report. You’ll find our bit as part of the section Cross-cutting considerations for civil society in an emerging Fourth Industrial Revolution.
For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:
Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.
Over the last few months, it has become clear that there’s demand for similar trust indicators in areas other than consumer products: smart cities, artificial intelligence, and other areas of emerging technology.
I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.
The main difference between the consumer context and publicly deployed technology—infrastructure!—is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there can be real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)
For publicly deployed infrastructure, we are looking at a very different picture, with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—wouldn’t just be annoying; its failure could be critical.
After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:
There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.