
Just enough City


In this piece, I’m advocating for a Smart City model based on restraint, and focused first and foremost on citizen needs and rights.

A little while ago, the ever-brilliant and eloquent Rachel Coldicutt wrote a piece on the role of public service internet, and why it should be a model of restraint. It’s titled Just enough Internet, and it resonated deeply with me. It was her article that inspired not just this piece’s title but also its theme: Thank you, Rachel!

Rachel argues that public service internet (broadcasters, government services) shouldn’t compete with commercial players on commercial metrics, but rather use approaches better suited to their mandate: not engagement and more data, but providing the important basics while collecting as little as possible. (This summary doesn’t do Rachel’s text justice — she makes more, and more nuanced, points there — so please read her piece; it’s time well spent.)

I’ll argue that Smart Cities, too, should use an approach better suited to their mandate — an approach based on (data) restraint, and on citizens’ needs & rights.

This restraint and reframing is important because it prevents mission creep; it also alleviates the carbon footprint of all those services.

Enter the Smart City

Wherever we look on the globe, we see so-called Smart City projects popping up. Some are incremental, adding just a few sensors. Others are blank slates, building whole connected cities or neighborhoods from scratch. What they have in common is that they are mostly built around a logic of data-driven management and optimization. You can’t manage what you can’t measure, management consultant Peter Drucker famously said, and so Smart Cities tend to measure… everything. Or so they try.

Of course, sensors only measure so many things, like physical movement (of people, or goods, or vehicles) through space, or the consumption and creation of energy. But thriving urban life is made up of many more things, and many of those cannot be measured as easily: Try measuring opportunity or intention or quality of life, and most Smart City management dashboards will throw an error: File not found.

The narrative of the Smart City is fundamentally that of optimizing a machine to run as efficiently as possible. It’s neoliberal market thinking in its purest form. (Greenfield and Townsend and Morozov and many other Smart City critics have made those points much more eloquently before.) But that doesn’t reflect urban life. The human side of it is missing, a glaring hole right in the center of that particular vision.

Instead of putting citizens in that spot in the center, the “traditional” Smart City model aims to build better (meaning: more efficient, lower-cost) services for citizens by collecting, collating, and analyzing data. It’s the logic of global supply chains and predictive maintenance and telecommunications networks and data analytics applied to the public space. (It’s no coincidence that the large tech vendors in that space come from one of those backgrounds.)

The city, however, is no machine to be run at maximum efficiency. It’s a messy agora, with competing and often conflicting interests, and it needs slack in the system: slack and friction both increase resilience in the face of larger challenges, as do empowered citizens and municipal administrations. The last thing any city needs is to be fully algorithmically managed at maximum efficiency just to come to a grinding halt when — not if! — the first technical glitch happens, or a company ceases operations.

Most importantly, I’m convinced that depending on context, collecting data in public space can be a fundamental risk to a free society — and that it’s made even worse if the data collection regime is outside of the public’s control.

The option of anonymity plays a crucial role for the freedom of assembly, of organizing, of expressing thoughts and political speech. If sensitive data is collected in public space (even if it’s not necessarily personally identifiable information!) then the trust in the collecting entity needs to be absolute. But as we know from political science, the good king is just another straw man, and given the circumstances even the best government can turn bad quickly. History has taught us the crucial importance of checks & balances, and of data avoidance.

We need a Smart City model of restraint

Discussing publicly owned media, Rachel argues:

It’s time to renegotiate the tacit agreement between the people, the market and the state to take account of the ways that data and technology have nudged behaviours and norms and changed the underlying infrastructure of everyday life.

This holds true for the (Smart) City, too: The tacit agreement between the people, the market and the state is that, roughly stated, the government provides essential services to its citizens, often with the help of the market, and with the citizens’ interest at the core. However, when we see technology companies lobby governments to green-light data-collecting pilot projects with little accountability in public space, that tacit agreement is violated. Not the citizens’ interests but those multinationals’ business models move into the center of these considerations.

There is no opt-out in public space. So when obtaining meaningful consent to the collection of data about citizens is hard or impossible, that data must not be collected, period. Surveillance in public space is more often detrimental to free societies than not. You know this! We all know this!

Less data collected, and more options of anonymity in public space, make for a more resilient public sphere. And what data is collected should be made available to the public at little or no cost, and to commercial interests only within a framework of ethical use (and probably for a fee).

What are better metrics for living in a (Smart) City?

In order to get to better Smart Cities, we need to think about better, more complete metrics than efficiency & cost savings, and we need to determine those (and all other big decisions about public space) through a strong commitment to participation: from external experts to citizen panels to digital participation platforms, there are many tools at our disposal to make better, more democratically legitimized decisions.

In that sense I cannot offer a final set of metrics to use. However, I can offer some potential starting points for a debate. I believe that every Smart City project should be evaluated against the following aspects:

  • Would this substantially improve sustainability as laid out in the UN’s Sustainable Development Goals (SDG) framework?
  • Is meaningful participation built into the process at every step from framing to early feedback to planning to governance? Are the implications clear, and laid out in an accessible, non-jargony way?
  • Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
  • Will it solve a real issue and improve the life of citizens? If in doubt, cut it out.
  • Will participation, accountability, resilience, trust and security (P.A.R.T.S.) all improve through this project?

Obviously those can only be starting points.

The point I’m making is this: In the Smart City, less is more.

City administrations should optimize for thriving urban life and democracy; for citizens and digital rights — which also happen to be human rights; for resilience and opportunity rather than efficiency. That way we can create a canvas to be painted by citizens, administration and — yes! — the market, too.

We can only manage what we can measure? Not necessarily. Neither the population nor the urban organism needs to be managed; they just need a robust framework to thrive within. We don’t always need real-time data for every decision — we can also make good decisions based on values and trust in democratic processes, and by giving a voice to all impacted communities. We have a vast body of knowledge from decades of research around urban planning and sociology, and many other areas: often enough we know the best decisions, and it’s only politics that keeps us from enacting them.

We can change that, and build the best public space we know to build. Our cities will be better off for it.

About the author

Just for completeness’ sake so you can see where I’m coming from, I’m basing this on years of working at least occasionally on Smart City projects. My thinking is informed by work around emerging tech and its impact on society, and a strong focus on responsible technology that puts people first. Among other things I’ve co-founded ThingsCon, a non-profit community that promotes responsible tech, and led the development of the Trustable Technology Mark. I was a Mozilla Fellow in 2018-19 and am an Edgeryders Fellow in 2019-20. You can find my bio here.

The 3 I’s: Incentives, Interests, Implications


When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as an economic motivator for companies: maximize profit by lowering, offsetting, or externalizing costs, or by charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, it’s conventionally thought of in the context of consumers trying to get their products as cheaply as possible.

All this of course is based on what in economics is called rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. Those biases apply to individuals, of course, but organizations inherently have their own blind spots and biases, too.

So this focus on incentives, while near-ubiquitous, is myopic: while incentives certainly play a role in decision-making, they are not the only factor at play. Neither do companies only work towards maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too), nor do consumers only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; instead, we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society and the individual, we should look beyond incentives. Interests, however vaguely we might define those, can clearly impact decision-making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, it might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help its long-term prospects, either through new, better products (linking back to economic incentives) or by building more resilient relationships with its stakeholders (and hence reducing potential friction).

Other organizations might simply be mission-driven and focus on impact rather than profit, or at least balance both differently. Becoming a B-Corp, for example, has positive economic side effects (higher chance of retaining talent, positive PR), but more than that it allows the org to align its own interests with those of key stakeholder groups — not just investors but also customers and staff.

Consumers, equally, do not always prioritize price over other characteristics: organic and Fairtrade food, or connected products with quality seals (like our own Trustable Technology Mark), might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one Machine Learning startup that had a party when GDPR came into effect since all of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Take Ramez Naam’s take on decarbonization — he makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous circle of lower price > more uptake > more innovation > lower price > etc. etc.

We all know what happened next (still from Ramez):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”

Now, solar energy is not just competitive. In some parts of the world it is the cheapest, period.
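As an aside on the arithmetic: the numbers Ramez cites describe a classic experience curve, where cost falls by a fixed percentage with every doubling of cumulative scale. A minimal sketch of how that compounds, with illustrative figures of my own (the $100/MWh starting point and the five doublings are assumptions for the example, not numbers from his piece):

```python
def learning_curve_cost(initial_cost, learning_rate, doublings):
    """Cost after a number of doublings of cumulative production,
    assuming cost falls by `learning_rate` with each doubling."""
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: solar at $100/MWh, dropping 25% per doubling of scale.
# After five doublings (32x the installed base), cost is roughly $23.7/MWh.
print(round(learning_curve_cost(100, 0.25, 5), 1))  # prints 23.7
```

The compounding is the whole point: each individual drop looks modest, but a handful of doublings cuts the price to a quarter or less, which is how the virtuous circle of lower price and more uptake gets going.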

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that are, like solar energy and the tech we need to harvest it, not technically infrastructure. It needs a bit of creative thinking, but it’s not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Would you live in a robot?


“Would you live in a robot?” One of the lead questions at Vitra’s Hello, Robot exhibition.

“Would you live in a robot?” is one of the questions posed at #hellorobot, an excellent current exhibition at Vitra Design Museum in Weil am Rhein, Germany. The overall theme of the exhibition is to explore design at the intersection of human & machine – here meaning robots, algorithms, AI and the like.

The entrance to Vitra Design Museum during the Hello, Robot exhibition, February 2017.

It’s rare that I travel just to attend an exhibition. In this case it was entirely worth it as #hellorobot addresses some themes that are relevant, salient, and urgent: How do we (want to) live in an age of increased automation? What does and should our relationship with machines look like? What do we think about ascribing personality and agency to machines, algorithms and artificial intelligence in their many forms?

These are all questions we need to think about, and quickly: They are merely early indicators of the kind of challenges and opportunities we face over the next decades, on all levels: as an individual, as businesses, as a society.

One of Douglas Coupland’s Micro Manifestos at Vitra.

The above-mentioned questions are, in other words, merely a lead-up to larger ones around things like agency (our own and the algorithms’) and governance, and around the role of humans in the economy. A concrete example: if robots take care of the tasks we now pay people to perform (factory work, cleaning up the city, doing research, generating reports…), and if then (under the current model) only 20% of people would be in jobs, what does that mean? How do we earn a living and establish our role and status as productive and valued members of society?

This example of robots doing most of the work doesn’t strike me as an abstract, academic one. It seems blatantly obvious that we need to rethink which roles we want humans to play in society. All vectors point at an economy which won’t require — nor be able — to employ 95% of the working-age population full-time. Yet at the same time, per-capita value creation rises and rises, so on a societal level — big picture! — we’re better off. So either we figure out how to handle high double-digit unemployment rates, or we reframe how we think about fewer tasks requiring humans, how to unlock the potential of all the newly freed-up time in the lives of millions upon millions of people, and what we want the role of people to be going forward.

(Ryan Avent’s book The Wealth of Humans seems like a good place to read up on possible scenarios — thanks to Max & Simon’s recommendation in their newsletter The Adventure Equation. I haven’t read it yet, but it’s at the top of my to-read pile.)

///

“Robots are tools for dramatic effect.” Bruce Sterling quote at Vitra.

hellorobot provides a great snapshot of the artistic and commercial landscape around robots and AI. From artistic explorations like good old manifest, an industrial robot arm perpetually churning out algorithmically generated manifestos that’s been at ZKM since ca. 2008, or Dan Chen’s much more recent CremateBot, which allows you to cremate the skin and hairs you shed as you go through your life, to the extremely commercial (think Industry 4.0 manufacturing bots): everything’s here. The exhibit isn’t huge, but it’s sweeping.

Dan Chen’s CremateBot at Vitra.

I was especially delighted to see many of our friends and ThingsCon alumni in the mix as well. Bruce Sterling was an adviser. Superflux’s Uninvited Guests were on display. Automato (Simone Rebaudengo‘s new outfit) had four or five pieces on display, including a long-time favorite, Teacher of Algorithms.

Automato’s Teacher of Algorithms at Vitra.

I found it especially encouraging to see a wide range of medical and therapeutic robots included as well. An exoskeleton was present, as was a therapeutic doll for dementia patients. It was great to see this recent toy for autistic kids:

A doll for dementia therapy from 2001 at Vitra.

Leka, a smart toy for autistic kids, at Vitra.

///

One section explored more day-to-day, in the future possibly banal, scenarios. What might the relationship between robots and babies be; how could parenting change through these technologies? Will the visual language of industrial manufacturing sneak into the crib, or will robots be as cutesy and cozy as other kids’ toys and paraphernalia?

My First Robot at Vitra.

Will the visual language of industrial manufacturing enter the baby crib?

///

What happens when your smart home stops or fails? Lovely photo project at Vitra.

“Would you live in a robot?” The question was likely meant to provoke. Even though some of the older and more traditional German and Swiss visitors around me seemed genuinely challenged in their world view by the exhibition, I’d go out on a limb: in 2017, I’m not sure the question is even a bit provocative, even though we might want to rethink how we consider our built environment. We might not all live in a robot/smart home. However, I kind of arrived at the exhibition in robots (I had flown in, then taken a cab), and I constantly carry a black box full of bots (my smartphone). Maybe we need updated questions already, like “How autonomous a robot would you live in?”, “What do you consider a robot?”, or “Would you consider yourself a cyborg if you had an implanted pacemaker/hand/memory bank?”

“What makes a good robot, one you’d like to live with?”

Or maybe this leads us off on a wild goose chase. Maybe we just need to ask “What makes a good robot, one you’d like to live with?” Robot, of course, is in this case used almost interchangeably with algorithm.

///

hellorobot is great, and highly recommended. However, the money quote for me, the key takeaway if you will, is one that I don’t think the curators even considered — nor should they have — in their effort to engage in a conversation around automation and living with robots.

It’s a quote from a not-so-recent Douglas Coupland project, of all things:

One of Douglas Coupland’s Micro Manifestos

“The unanticipated side effects of technology dictate the future” — Douglas Coupland

I think this quote pretty much holds the key to unlocking what the 21st century will be about. What are the unintended consequences of a technology once it’s deployed and starts interacting with other tech and systems and society at large? How can we design systems and technologies to allow for max potential upside and minimal potential downside?

This is also the challenge at the heart of ThingsCon’s mission statement, to foster the creation of a human-centric & responsible IoT.

Go see the exhibit if you’re in the vicinity. You won’t regret it.

ps. For more photos, see my Flickr album. Also, a heads-up based on personal experience: The exhibition opens at 10am, as does the café. There’s no warm place to hang out before nor a cup of coffee to be had, and the museum is in the middle of nowhere. Plan your arrival wisely.

Social Market Capitalism 2.0: How should robots and humans co-exist?


After reading a great piece on the role and relationships between humans and algorithms, I went on a little (constructive) rant on Twitter (starting here). Here’s what I said again, as a blog post, for reference reasons and easier readability:

In the debate around how we will tackle the redistribution of work due to more robotic labor, I honestly cannot understand: how is the most obvious solution not the most discussed? That more total productivity, by far fewer people, requires major rethinking. Full-time employment is gone. Never coming back. That’s a problem for 19c/20c thinking, but doesn’t have to be going forward.

We produce more, i.e. create and capture more value; it’s just even less equally distributed under the traditional market model. So what? This is a societal decision; we can change that model. It’s been changing since day 1. We just might need some awkward convos.

Universal basic income seems an obvious, comparatively small step, but an unavoidable one. How have we not done this already? But we need to rethink the human’s role in society, too. I think we define our roles too much through our work, salary, status. This is bound to fail going forward. We need alternative models of contributing to society beyond “bread winner”. Again, baby steps: first, incentivize currently underpaid roles, like carers, social work, etc. Then expand from there.

This assumes a world view where most people actually enjoy working one way or another, of course — which I believe. It just partially uncouples salary & status & identity from job title, and couples them more closely with things we choose to do. More choice, more leeway in prioritizing work or free time, in balancing freedom and financials. Anyone who likes to earn more could still work more; this isn’t a post-capitalist approach. It’s social market capitalism 2.0.

Is it really that complicated?

I’m becoming an e-citizen of Estonia


I had been vaguely aware of Estonia’s initiative e-Estonia, through which people from around the world can sign up for a sort of e-citizenship in this most technologically advanced country of not just the Baltics, but maybe the world. But at the time, you had to pick up the actual ID in Estonia, which seemed slightly over the top (for now).

Fast forward to today, when I stumbled over Ben Hammersley‘s WIRED article about e-Estonia and learned that the application process now works completely online and a trip to our local Estonian embassy (a mere 20min or so by bike or subway away) now does the trick.

That’s exciting!

e-Estonia is not, of course, an actual citizenship, even though for many intents and purposes it does provide a surprisingly large number of services that traditionally were tied to residency in a nation state.
