The 3 I’s: Incentives, Interests, Implications

When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as an economic motivator for companies: maximize profit by lowering costs (or offsetting or externalizing them), or by charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in the context of consumers trying to get their products as cheaply as possible.

All this is of course based on what in economics is called rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. That holds for individuals, of course, but organizations inherently have their own blind spots and biases, too.

So this focus on incentives, while near-ubiquitous, is myopic: While incentives certainly play a role in decision-making, they are not the only factor at play. Companies don’t work solely towards maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too). Nor do consumers only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society, and the individual, we should look beyond incentives. Interests, however vaguely we might define them, can clearly impact decision-making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, it might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help its long-term prospects through either new, better products (linking back to economic incentives) or more resilient relationships with its stakeholders (and hence less potential friction with external stakeholders).

Other organizations might simply be mission-driven and focus on impact rather than profit, or at least balance the two differently. Becoming a B-Corp, for example, has positive economic side effects (a higher chance of retaining talent, positive PR), but more than that, it allows the organization to align its own interests with those of key stakeholder groups: not just investors but also customers and staff.

Consumers, equally, are by no means bound to prioritize price over other characteristics: Organic and Fairtrade food, or connected products with quality seals (like our own Trustable Technology Mark), might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one machine learning startup that had a party when GDPR came into effect: All of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Take Ramez Naam’s take on decarbonization: He makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous circle of lower price > more uptake > more innovation > lower price, and so on.

We all know what happened next (still from Ramez):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”

Now, solar energy is not just competitive. In some parts of the world it is the cheapest, period.
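The learning-curve arithmetic in the quote can be sketched in a few lines of code. This is a minimal illustration of the underlying model (often called Wright’s law), not Naam’s actual analysis: the 30% learning rate comes from the quoted range for solar, while the starting cost and number of doublings are made-up round numbers for illustration.

```python
def cost_after_doublings(initial_cost: float, learning_rate: float, doublings: int) -> float:
    """Cost per unit after a number of doublings of cumulative scale.

    Each doubling of deployed scale cuts the cost by `learning_rate`
    (e.g. 0.30 for the ~30% per-doubling drop quoted for solar).
    """
    return initial_cost * (1 - learning_rate) ** doublings

# Illustrative: start at an arbitrary 100 cost units, 30% learning rate.
for d in range(5):
    print(f"after {d} doublings: {cost_after_doublings(100.0, 0.30, d):.1f}")
```

Five doublings at 30% per doubling already cut the cost to roughly a sixth of the starting point, which is why subsidizing early scale can move the price for everyone.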

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that, like solar energy and the tech we need to harvest it, aren’t technically infrastructure. It takes a bit of creative thinking, but it’s not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Would you live in a robot?

“Would you live in a robot?” One of the lead questions at Vitra’s Hello, Robot exhibition.

“Would you live in a robot?” is one of the questions posed at #hellorobot, an excellent current exhibition at Vitra Design Museum in Weil am Rhein, Germany. The overall theme of the exhibition is to explore design at the intersection of human & machine – here meaning robots, algorithms, AI and the like.

The entrance to Vitra Design Museum during the Hello, Robot exhibition, February 2017.

It’s rare that I travel just to attend an exhibition. In this case it was entirely worth it as #hellorobot addresses some themes that are relevant, salient, and urgent: How do we (want to) live in an age of increased automation? What does and should our relationship with machines look like? What do we think about ascribing personality and agency to machines, algorithms and artificial intelligence in their many forms?

These are all questions we need to think about, and quickly: They are merely early indicators of the kind of challenges and opportunities we face over the next decades, on all levels: as an individual, as businesses, as a society.

One of Douglas Coupland’s Micro Manifestos at Vitra.

The above-mentioned questions are, in other words, merely a lead-up to larger ones around things like agency (our own and the algorithms’) and governance, and around the role of humans in the economy. A concrete example: If robots take care of the tasks we now pay people to perform (factory work, cleaning up the city, doing research, generating reports…), and if then (under the current model) only 20% of people were in jobs, what would that mean? How would we earn a living and establish our role and status as productive and valued members of society?

This example of robots doing most of the work doesn’t strike me as an abstract, academic one. It seems blatantly obvious that we need to rethink which roles we want humans to play in society. All vectors point towards an economy that won’t need (nor be able) to employ 95% of the working-age population full-time. Yet at the same time, per-capita value creation rises and rises, so on a societal level (big picture!) we’re better off. So either we figure out how to handle high double-digit unemployment rates, or we reframe how we think about fewer tasks requiring humans, how to unlock the potential of all the newly freed-up time in the lives of millions upon millions of people, and what we want the role of people to be going forward.

(Ryan Avent’s book The Wealth of Humans seems like a good place to read up on possible scenarios. Thanks to Max & Simon for the recommendation in their newsletter The Adventure Equation. I haven’t read it yet, but it’s at the top of my to-read pile.)

///

“Robots are tools for dramatic effect.” Bruce Sterling quote at Vitra.

hellorobot provides a great snapshot of the artistic and commercial landscape around robots and AI. From artistic explorations like good old manifest (an industrial robot arm perpetually churning out algorithmically generated manifestos that’s been at ZKM since ca. 2008) or Dan Chen’s much more recent CremateBot (which allows you to start cremating the skin and hair you shed as you go through life), to the extremely commercial (think Industry 4.0 manufacturing bots), everything’s here. The exhibit isn’t huge, but it’s sweeping.

Dan Chen’s CremateBot at Vitra.

I was especially delighted to see many of our friends and ThingsCon alumni in the mix as well. Bruce Sterling was an adviser. Superflux’s Uninvited Guests were on display. Automato (Simone Rebaudengo‘s new outfit) had four or five pieces on display, including a long-time favorite, Teacher of Algorithms.

Automato’s Teacher of Algorithms at Vitra.

I found it especially encouraging to see a wide range of medical and therapeutic robots included as well. An exoskeleton was present, as was a therapeutic doll for dementia patients. It was also great to see this recent toy for autistic kids:

A doll for dementia therapy from 2001 at Vitra.

Leka, a smart toy for autistic kids, at Vitra.

///

One section explored more day-to-day, and in the future possibly banal, scenarios. What might the relationship between robots and babies be? How could parenting change through these technologies? Will the visual language of industrial manufacturing sneak into the crib, or will robots be as cutesy and cozy as other kids’ toys and paraphernalia?

My First Robot at Vitra.

Will the visual language of industrial manufacturing enter the baby crib?

///

What happens when your smart home stops or fails? Lovely photo project at Vitra.

“Would you live in a robot?” The question was likely meant to provoke. Even though some of the older, more traditional German and Swiss visitors around me seemed genuinely challenged in their world view by the exhibition, I’d go out on a limb: In 2017, I’m not sure the question is even a bit provocative, even though we might want to rethink how we consider our built environment. We might not all live in a robot/smart home. However, I kind of arrived at the exhibition in robots (I had flown in, then taken a cab), and I constantly carry a black box full of bots (my smartphone). Maybe we need updated questions already, like “How autonomous a robot would you live in?”, “What do you consider a robot?”, or “Would you consider yourself a cyborg if you had an implanted pacemaker/hand/memory bank?”

“What makes a good robot, one you’d like to live with?”

Or maybe this leads us off on a wild goose chase. Maybe we just need to ask, “What makes a good robot, one you’d like to live with?” Robot, of course, is in this case used almost interchangeably with algorithm.

///

hellorobot is great, and highly recommended. However, the money quote for me, the key takeaway if you will, is one that I don’t think the curators even considered—nor should they have—in their effort to engage in a conversation around automation and living with robots.

It’s a quote from a not-so-recent Douglas Coupland project, of all things:

One of Douglas Coupland’s Micro Manifestos

“The unanticipated side effects of technology dictate the future” — Douglas Coupland

I think this quote pretty much holds the key to unlocking what the 21st century will be about. What are the unintended consequences of a technology once it’s deployed and starts interacting with other tech, other systems, and society at large? How can we design systems and technologies to allow for maximum potential upside and minimal potential downside?

This is also the challenge at the heart of ThingsCon’s mission statement, to foster the creation of a human-centric & responsible IoT.

Go see the exhibit if you’re in the vicinity. You won’t regret it.

ps. For more photos, see my Flickr album. Also, a heads-up based on personal experience: The exhibition opens at 10am, as does the café. There’s no warm place to hang out beforehand, nor a cup of coffee to be had, and the museum is in the middle of nowhere. Plan your arrival wisely.

Social Market Capitalism 2.0: How should robots and humans co-exist?

After reading a great piece on the role and relationships between humans and algorithms, I went on a little (constructive) rant on Twitter (starting here). Here’s what I said again, as a blog post, for reference reasons and easier readability:

In the debate around how we will tackle the redistribution of work due to more robotic labor, I honestly cannot understand: How is it that the most obvious solution isn’t the most discussed? More total productivity, produced by vastly fewer people, requires major rethinking. Full-time employment is gone. Never coming back. That’s a problem with 19th/20th-century thinking, but it doesn’t have to be going forward.

We produce more, i.e., create and capture more value; it’s just even less equally distributed under the traditional market model. So what? This is a societal decision; we can change that model. It’s been changing since day one. We just might need some awkward conversations.

Universal basic income seems an obvious, comparatively small step, but an unavoidable one. How have we not done this already? But we need to rethink the human’s role in society, too. I think we define our roles too much through our work, salary, and status. This is bound to fail going forward. We need alternative models of contributing to society beyond “breadwinner”. Again, baby steps: First, incentivize currently underpaid roles, like carers and social workers. Then expand from there.

This assumes a world view where most people actually enjoy working one way or another, of course. Which I believe. It just partially uncouples salary, status, and identity from job title, and couples them more closely with things we choose to do. More choice, more leeway in prioritizing work or free time, in balancing freedom and financials. Anyone who’d like to earn more could still work more; this isn’t a post-capitalist approach. It’s social market capitalism 2.0.

Is it really that complicated?

I’m becoming an e-citizen of Estonia

I had been vaguely aware of Estonia’s e-Estonia initiative, through which people from around the world can sign up for a sort of e-citizenship in this most technologically advanced country of not just the Baltics, but maybe the world. But at the time, you had to pick up the actual ID in Estonia, which seemed slightly over the top (for now).

Fast forward to today, when I stumbled over Ben Hammersley‘s WIRED article about e-Estonia and learned that the application process now works completely online and a trip to our local Estonian embassy (a mere 20min or so by bike or subway away) now does the trick.

That’s exciting!

e-Estonia is not, of course, an actual citizenship, even though for many intents and purposes it provides a surprisingly large number of services that were traditionally tied to residency in a nation-state.
