When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.
Incentives
Incentives are, of course, mostly thought of as an economic motivator for companies: maximize profit by lowering costs (or by offsetting or externalizing them), or by charging more (more per unit, more per customer, or simply charging more customers). Incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in the context of consumers trying to get their products as cheaply as possible.
All this is based on what economics calls rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn’t complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision-making. Those biases apply to individuals, of course, but organizations inherently have their own blind spots and biases, too.
So this focus on incentives, while near-ubiquitous, is myopic: incentives certainly play a role in decision-making, but they are not the only factor at play. Companies do not work solely towards maximizing profits (I know my own doesn’t, and I daresay many take other interests into account, too), and consumers do not only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn’t over-index on getting the incentives right; we should take other aspects into account, too.
Interests
When designing frameworks that aim for a better interplay of technology, society, and individuals, we should look beyond incentives. Interests, however vaguely we might define them, clearly impact decision-making. For example, if a company (large or small, it doesn’t matter) wants to innovate in a certain area, it might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could improve its long-term prospects, either through new, better products (linking back to economic incentives) or by building more resilient relationships with its stakeholders (and hence reducing potential external friction).
Other organizations might simply be mission-driven and focus on impact rather than profit, or at least balance the two differently. Becoming a B-Corp, for example, has positive economic side effects (a higher chance of retaining talent, positive PR), but more than that, it allows the organization to align its own interests with those of key stakeholder groups: not just investors, but also customers and staff.
Consumers, equally, do not by any means always prioritize price over other characteristics: organic and Fairtrade food, or connected products with quality seals (like our own Trustable Technology Mark), might cost more but offer benefits that others don’t. Interests, rational or not, influence behavior.
And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one machine learning startup that had a party when GDPR came into effect: all of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.
Implications
Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?
It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.
Take Ramez Naam’s take on decarbonization: he makes a strong point that early solar energy subsidies (first in Germany, then in China and the US) helped drive the development of this new technology, which in turn drove the price down and so started a virtuous circle: lower price > more uptake > more innovation > lower price, and so on.
We all know what happened next (still from Ramez):
“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”
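To see how quickly those learning rates compound, here is a minimal sketch of the underlying arithmetic (often called Wright’s law), using illustrative numbers; the 25% learning rate matches the low end of the solar figure quoted above, while the starting cost and number of doublings are hypothetical:

```python
def cost_after_doublings(initial_cost: float, learning_rate: float, doublings: int) -> float:
    """Wright's law: each doubling of cumulative scale cuts unit cost by learning_rate."""
    return initial_cost * (1 - learning_rate) ** doublings

# Illustrative: a 25% learning rate over 5 doublings (a 32x larger installed base)
start = 100.0  # arbitrary cost units per MWh
final = cost_after_doublings(start, 0.25, 5)
print(f"{final:.1f}")  # prints 23.7 -- less than a quarter of the starting cost
```

Five doublings sounds like a lot, but for a technology starting from a tiny base (as solar did), the early doublings come fast, which is exactly why early subsidies had such outsized leverage.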
Today, solar energy is not just competitive: in some parts of the world it is the cheapest source of electricity, period.
This type of investment in what is essentially infrastructure (or at least infrastructure-like!) pays dividends not just to the directly subsidized but to the whole larger ecosystem. That means a significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.
We need more infrastructure thinking, even for areas that, like solar energy and the tech we need to harvest it, are not technically infrastructure. It takes a bit of creative thinking, but it’s not rocket science.
We just need to consider and align the 3 I’s: incentives, interests, and implications.