Tag: governance

Developing better urban metrics for Smart Cities

When we embed connected technologies — sensors, networks, etc. — into the public space*, we create connected public space. In industry parlance, this is called a Smart City. (I prefer “connected city”, but let’s put the terminology discussion on the back burner for now.) And data networks change the way we live.

* Note: Increasingly, the term “public space” has itself come under attack. In many cities, formerly public space (as in publicly owned & governed) has been privatized, even if it’s still accessible by the public, more or less. Think of a shopping mall, or the plazas that are sometimes attached to a shopping mall: You can walk in, but a mall cop might enforce house rules that were written not by citizens but by the corporation that owns the land. I find this highly problematic, and I recommend flat-out rejecting that logic as a way forward. Urban space — anything outside closed buildings, really — should, for the most part, be owned by the public, and even where for historical reasons it can’t be owned, it should at least be governed by the public. This means the rules should be the same in a park, a shopping mall-adjacent plaza, and the street; they should be enforced by (publicly employed) police rather than (privately employed) mall cops. Otherwise there’s no meaningful recourse for mistreatment, there’s no ownership, and citizens are relegated from stakeholders to props and consumers.

Networks and data tend not to ease but to reinforce power dynamics, so we need to think hard about what type of Smart City we want to live in:

  • Do we want to allow people to get faster service for a fee (“Skip the line for $5”), or do we prefer everyone to enjoy the same level of service, independent of their income?
  • Do we want to increase efficiency for 90% of the population through highly centralized services even if it means making life much harder for the other 10%, or do we plan for more resilient service delivery for all, even if it means the overall service delivery is a tad slower?
  • Do we want to cut short-term spending through privatization even if it means giving up control over infrastructure, or do we prioritize key infrastructure in our budgeting process so that the government can ensure quality control and service delivery in the long term, even if it costs more in the short term?

These are blunt examples, but I reckon you can tell where I’m going with this: I think democratic life requires public space and urban infrastructure to be available to all citizens and stakeholders, and to work well for all of them. Pay-to-play should apply only to added, non-essential services.

In order to shape policies in this space meaningfully, we need to think about what it is we prioritize. Here, a brief warning is in order: the old management adage “you can’t manage what you don’t measure” is problematic, to say the least. All too often we see organizations act on the things they can measure, even if those things are not necessarily meaningful but merely easy to measure. Don’t confuse the data you can capture with the things you need to know!

What do we want to prioritize, and maybe even measure?

That said, what are the things we want to prioritize? And might it even be possible to measure them?

Here I don’t have final answers, just some pointers that I hope might lead us in the right direction. These are angles to be explored whenever we consider a new smart city project, at any scale — even, and maybe especially, for pilot projects! Let’s consider them promising starting points:

Participation
Has there been meaningful participation in the early feedback, framing, planning, and governance processes? If feedback has been very limited and slow, what might the reasons be? Is it really lack of interest, or was the barrier to engagement just too high? Were the documents too long, too full of jargon, too hard to access? (See Bianca Wylie’s thread on Sidewalk Labs’ 1,500+ page development plan.) Were the implications, the pros and cons, not laid out in an accessible way? For example, Switzerland has a system in place whereby in a referendum both sides have to agree on the language that explains the pros and cons, so that both sides’ ideas are represented fairly and accessibly.

Sustainability
Would these changes significantly improve sustainability? The UN’s Sustainable Development Goals (SDG) framework might offer a robust starting point, even though we should probably aim higher given the political (and real!) climate.

Will it solve a real issue, improve life for citizens?
Is this initiative going to solve a real issue and improve lives meaningfully? This is often going to be tricky to answer, but if there’s no really good reason to believe it’s going to make a meaningful positive impact then it’s probably not a good idea to pursue. The old editors’ mantra might come in handy: If in doubt, cut it out. There are obvious edge cases here: Sometimes, a pilot project is necessary to explore something truly new; in those cases, there must be a plausible, credible, convincing hypothesis in place that can be tested.

Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
Unintended consequences are unavoidable in complex systems. But there are ways to mitigate risks, and to make sure that the fallback for a failed system is not worse than the original status quo. If a project would make things better while working perfectly but worse while failing, that deserves some extra thought. If it works better for some groups but not for others, that’s usually a red flag, too.

When these basic goals are met, and only then, should we move on to more traditional measurements, the type that dominates the discourse today, like:

  • Will this save taxpayers’ money, and lead to more cost-effective service delivery?
  • Will this lead to more efficient service delivery?
  • Will this make urban management easier or more efficient for the administration?
  • Will this pave the way for future innovation?

These success factors / analytical lenses are not grand, impressive ideas: They are the bare minimum we should secure before engaging in anything more ambitious. Think of them as the plumbing infrastructure of the city: Largely unnoticed while everything works, but if it ever has hiccups, it’s really bad.

We should stick to basic procedural and impact-driven questions first. We should incorporate the huge body of research findings from urban planners, sociologists, and political scientists rather than reinvent the wheel. And we should never, ever be blinded by a shiny new technological solution to a complex social or societal issue.

Let’s learn to walk before we try to run.

How to plan & govern a smart city?

Taking the publication of Sidewalk Labs’ Master Innovation and Development Plan for the smart city development at Toronto’s waterfront (“Toronto Tomorrow”) as an occasion to think out loud about smart cities in general, and smart city governance in particular, I took to Twitter the other day.

If you don’t want to read the whole thing there, here’s the gist: I did a close reading of a tiny (!) section of the giant data dump that is the four-volume, 1,500+ page Sidewalk Labs plan. The section I picked was the one that John Lorinc highlighted in this excellent article — a couple of tables on page 222 of the last of these four volumes, in the section “Supplemental Tables”. This is the section that gets no love from the developers; it’s also the section that deals very explicitly with governance of this proposed development. So it’s pretty interesting. This, by the way, is also roughly the area of research of my Edgeryders fellowship.

On a personal note: It’s fascinating to me how prescient our speakers at Cognitive Cities Conference were back in 2011 – eight years is a long time in this space, and it feels like we invited exactly the right folks back then!

Smart cities & governance: A thorny relationship

In this close reading I focused on exactly that: What does governance mean in a so-called smart city context? What is it that’s being governed, and how — and maybe most importantly, by whom?

Rather than re-hash the thread here, just a quick example to illustrate the kind of issues at stake. Where this plan speaks of publicly accessible spaces and of decision-making that takes community input into account, I argue that we need public spaces and full citizens’ rights. Defaults matter, and in cities we need the default to be public space, with citizens wielding the final decision-making power over their environment. Not even the most benign or innovative company or other non-public entity is an adequate replacement for a democratically elected administration/government, and all but the worst governments — cumbersome as a government might be in some cases — are better than the alternatives.

My arguments didn’t go unnoticed, either. Canadian newspaper The Star picked up my thread on the thorny issue of governance and put it in the context of other experts critical of privatizing urban space; the few of those experts I know make me think I’m in good company there.

What’s a smart city, anyway?

As a quick, but worthwhile diversion I highly recommend the paper Smart cities as corporate storytelling (Ola Söderström, Till Paasche, Francisco Klauser, published in City vol. 18 (2014) issue 3). In it, the authors trace not just the origin of the term smart cities but also the deliberate framing of the term that serves mostly the vendors of technologies and services in this space, in efficient and highly predictable ways. They base their analysis on IBM’s Smarter City campaign (highlights mine):

“This story is to a large extent propelled by attempts to create an ‘obligatory passage point’ (…) in the transformation of cities into ‘smart’ ones. In other words it is conceived to channel urban development strategies through the technological solutions of IT companies.”

These stories are important and powerful:

Stories are important because they provide actors involved in planning with an understanding of what the problem they have to solve is (…). More specifically, they play a central role in planning because they “can be powerful agents or aids in the service of change, as shapers of a new imagination of alternatives.” (….) stories are the very stuff of planning, which, fundamentally, is persuasive and constitutive storytelling about the future.” (…)

The underlying logic is that of a purely data-driven, almost mechanical model of urban management — one that is overly simplistic: it is neither political, nor does it require subject-matter expertise. This logic is inherently faulty. Essentially, it disposes of the messiness of humans and all their pesky, complex socio-cultural issues.

In this approach, cities are no longer made of different – and to a large extent incommensurable – socio-technical worlds (education, business, safety and the like) but as data within systemic processes. (…) As a result, the analysis of these ‘urban themes’ no longer seem to require thematic experts familiar with the specifics of a ‘field’ but only data-mining, data interconnectedness and software-based analysis.

So: Governance poor, underlying logic poor. What could possibly go wrong.

A better way to approach smart city planning

In order to think better, more productively about how to approach smart cities, we need to step back and look at the bigger picture.

If you follow my tweets or my newsletter, you’ll have encountered the Vision for a Shared Digital Europe before. It’s a proposed alternative for anything digital in the EU that would, if adopted, replace the EU’s Digital Single Market (DSM). Where the EU thinks about the internet, it does so through the lens of the DSM — the lens of markets first and foremost. The Vision for a Shared Digital Europe (SDE), however, proposes to replace this markets-first logic with four alternative pillars:

  • Cultivate the Commons
  • Decentralize infrastructure
  • Enable self-determination
  • Empower public institutions

Image: Vision for a Shared Digital Europe (shared-digital.eu)

I think these four pillars should hold up pretty well in the smart city planning context. Please note just how different this vision is from what Sidewalk Labs (and the many other smart city vendors) propose:

  • Instead of publicly available spaces we would see true commons, including but not limited to space.
  • Instead of centralized data collection, we might see decentralization, meaning a broader, deeper ecosystem of offerings and more resilience (as opposed to just more efficiency).
  • Instead of being solicited for “community input”, citizens would actively shape and decide over their future.
  • And finally, instead of working around (or with, to a degree) public administrations, a smart city of this school of thought would double down on public institutions and give them a strong mandate, sufficient funding, and in-house capacity to match the industry’s.

It would make for a better, more democratic and more resilient city.

So I just want to put this out there. And if you’d like to explore this further together, please don’t hesitate to ping me.

Getting our policies ready for AI futures

In late 2016, the White House published a report, “Artificial Intelligence, Automation, and the Economy” (PDF). It’s a solid work of research and forecasting, and proposes equally solid policy recommendations. Here’s part of the framing, from the report’s intro:

AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.

This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.

Revisiting this report made me wonder whether similar policy research exists in Germany and at the European level. A quick search online brought up bits and pieces (Merkel arguing for bundling know-how for AI and acknowledging that AI spending is low in Europe, demands for transparency in algorithms). However, there doesn’t seem to be an overarching guiding policy. (I asked federal government spokesperson Steffen Seibert on Twitter, but so far he hasn’t responded. Which is fair—why would he?)

Germany has a mixed track record of tech policy

For the record: In other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates them well! Key points include: autonomous driving is worth promoting because it causes fewer accidents; damage to property must take precedence over personal injury (aka life has priority); and in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!

On the other hand, with the Internet of Things (IoT) Germany squandered opportunities, in that IoT is framed almost exclusively as industrial IoT under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but at the expense of a more inclusive, diverse portfolio of opportunities.

So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.

Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!

AI & the future of work

The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, the opportunity for wealth creation on one side and algorithmic discrimination on the other.)

How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.

In a recent workshop with stipendiaries of the Heinrich Böll Foundation on the future of work, we explored how digital, AI, IoT, and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field have been finding: That there are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.

The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.

Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: Future jobs will either be placed above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course calls to mind images of roboticized warehouses, like this one:

Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.

I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.

Education is key

So where are we in Europe again? In Germany, we are still trying to define what IoT and AI mean. In China, it’s been happening for years.

This picture shows a smart lamp in Shenzhen that we found in a maker space:

What does the lamp do? It tracks whether users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts its color temperature depending on the light in the room. As smart lamps go, these features are okay: Not horrible, not interesting. If it came out of Samsung or LG or Amazon I wouldn’t be surprised.

So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: Ten- and eleven-year-olds designed, programmed, and built this. Because the curriculum for local students includes the skills that enable them to do this. In Europe, this is unheard of.
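To be clear about just how learnable this is: the lamp’s entire control logic fits in a few lines. Here is a minimal sketch in Python; the sensor and actuator functions are hypothetical stand-ins, simulated below, for whatever board and parts the students actually used.

```python
import random
import time

def presence_detected() -> bool:
    # Hypothetical PIR motion sensor; simulated with a coin flip here
    return random.random() < 0.7

def ambient_light() -> float:
    # Hypothetical light sensor: 0.0 = dark room, 1.0 = bright room
    return random.random()

def set_lamp(on: bool, color_temp_k: int = 0) -> None:
    # Hypothetical actuator; we just print the command it would send
    print(f"lamp {'on' if on else 'off'}" + (f", {color_temp_k} K" if on else ""))

for _ in range(10):  # a real lamp would loop forever
    if presence_detected():
        # Warmer light (2700 K) when the room is dark, cooler (6500 K) when bright
        set_lamp(True, int(2700 + ambient_light() * (6500 - 2700)))
    else:
        set_lamp(False)  # nobody around: switch off
    time.sleep(1)
```

The point isn’t the code; it’s that a curriculum can put this well within reach of a ten-year-old.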

I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.

It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and they are smartly kicking this off with a multi-stakeholder process.

What to do?

In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.

The White House report can provide some inspiration, especially around education policy.

We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get youths ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta skills to make smart decisions (think humanities, history, deep learning).

We need to reform immigration to allow for the best talent to come to Europe more easily (and allow for voting rights, too, because nobody feels at home where they pay taxes with no representation).

Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.

This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.

But I know one thing: We need to kick off this process today.

///

Please note: This is cross-posted from Medium.

Challenges for governance in the Internet of Things

Image by Paula Vermeulen via Unsplash

I’d like to share 3 short stories that demonstrate just a few of the challenges of governance for IoT.

1) In the fall of 2016, Facebook, Twitter, Netflix, and other popular consumer websites were temporarily shut down in a so-called Distributed Denial of Service (DDoS) attack. This isn’t unusual in itself—it happens all the time at smaller scale. What WAS unusual was the attack vector: For the first time, a large-scale DDoS attack was driven by IoT products, mainly cheap, unsecured, internet-connected CCTV cameras. Who suffers the consequences? Who’s responsible? Who’s liable?

2) As part of the European Digital Single Market, the EU just passed the General Data Protection Regulation, or GDPR for short. It is designed to enable individuals to better control their personal data. However, experts around the globe are scrambling to figure out how it applies to the IoT: Almost certainly, a lot of the data collection and personalization that’s part of consumer IoT products falls squarely under the GDPR. What will IoT-related services look like 5 years from now? Are services going to differ depending on where you are? Based on where your provider is? Based on where your residency is? Or will everything just stay the same?

3) In 2015, Mount Sinai Hospital in New York launched an interesting research project called Deep Patient. They applied artificial intelligence (AI) techniques—specifically, machine learning algorithms—to analyze their patient records for patterns. It turned out that these algorithms were extremely good at predicting certain medical conditions; much better than human doctors. But it wasn’t clear how they arrived at these predictions. Is it responsible to act on medical predictions if the doctors don’t know what they’re based on? Is it responsible not to? How do we deal with intelligence and data that we don’t understand? What if our fridges, cars, or smartphones knew better than we do what’s good for us?

These 3 short stories demonstrate how wide the range of questions is that we face in IoT. The breadth and depth of this range make questions of governance more than just a little tricky.

AI: Process v Output

TL;DR: Machine learning and artificial intelligence (AI) are beginning to govern ever-greater parts of our lives. If we want to trust their analyses and recommendations, it’s crucial that we understand how they reach their conclusions, how they work, which biases are at play. Alas, that’s pretty tricky. This article explores why.

As machine learning and AI gain importance and manifest in many ways large and small wherever we look, we face some hard questions: Do we understand how algorithms make decisions? Do we trust them? How do we want to deploy them? Do we trust the output, or focus on process?

Please note that this post explores some of these questions, connecting dots from a wide range of recent articles. Some are quoted heavily (like Will Knight’s, Jeff Bezos’s, Dan Hon’s) and linked multiple times over for easier source verification rather than going with endnotes. The post is quite exploratory in that I’m essentially thinking out loud, and asking more questions than I have answers to: tread gently.

///

In his very good and very interesting 2017 shareholder letter, Jeff Bezos makes a point about not over-valuing process: “The process is not the thing. It’s always worth asking, do we own the process or does the process own us?” This, of course, he writes in the context of management: His point is about optimizing for innovation. About not blindly trusting process over human judgement. About not mistaking existing processes for unbreakable rules that are worth following at any price and to be followed unquestioned.

Bezos also briefly touches on machine learning and AI. He notes that Amazon is both an avid user of machine learning as well as building extensive infrastructure for machine learning—and Amazon being Amazon, making it available to third parties as a cloud-based service. The core point is this (emphasis mine): “Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.”

Algorithms as a black box: Hard to tell what’s going on inside (Image: ThinkGeek)

That’s right: With machine learning, we can get desirable results without necessarily knowing how to describe the rules that get us there. It’s pure output. No—or hardly any—process in the sense that we can interrogate or clearly understand it. Maybe not even instruct it, exactly.

Let’s keep this at the back of our minds now, we’ll come back to it later. Exhibit A.

///

In s4e12 of his excellent newsletter Things That Have Caught My Attention, Dan Hon writes, reflecting on Jeff Bezos’ shareholder letter (I replaced Dan’s endnotes with direct links):

“Machine learning techniques – most recently and commonly, neural networks[1] – are getting pretty unreasonably good at achieving outcomes opaquely. In that: we really wouldn’t know where to start in terms of prescribing and describing the precise rules that would allow you to distinguish a cat from a dog. But it turns out that neural networks are unreasonably effective (…) at doing these kinds of things. (…) We’re at the stage where we can throw a bunch of images to a network and also throw a bunch of images of cars at a network and then magic happens and we suddenly get a thing that can recognize cars.”

Dan goes on to speculate:

“If my intuition’s right, this means that the promise of machine learning is something like this: for any process you can think of where there are a bunch of rules and humans make decisions, substitute a machine learning API. (…) machine learning doesn’t necessarily threaten jobs like “write a contract between two parties that accomplishes x, y and z” but instead threatens jobs where management people make decisions.”

In conclusion:

“Neural networks work the other way around: we tell them the outcome and then they say, “forget about the process!”. There doesn’t need to be one. The process is inside the network, encoded in the weights of connections between neurons. It’s a unit that can be cloned, repeated and so on that just does the job of “should this insurance claim be approved”. If we don’t have to worry about process anymore, then that lets us concentrate on the outcome. Does this mean that the promise of machine learning is that, with sufficient data, all we have to do is tell it what outcome we want?”

The AI, our benevolent dictator

Now if we answered Dan’s question with YES, then this is where things get tricky, isn’t it? It opens the door to a potentially pretty slippery slope.

In political science, a classic question is what the best form of government looks like. While a discussion about what “best” means—freedom? wealth? health? agency? for all or for most? what are the tradeoffs?—is fully legitimate and should be revisited every so often, it boils down to this long-standing conflict:

Can a benevolent dictator, unfettered by external restraints, provide a better life for their subjects?

versus

Does the protection of rights, freedom and agency offered by democracy outweigh the often slow and messy decision-making processes it requires?

Spoiler alert: Generally speaking, democracy won this debate a long time ago.

(Of course there are regions where societies have held on to the benevolent dictatorship model; and the recent rise of the populist right demonstrates that populations around the globe can be attracted to this line of argument.)

The reason democracy—a form of government defined by process!—has surpassed dictatorships both benevolent and malicious is that, overall, humans seem to strive for agency and the freedom to express it, rather than to be governed by an unfettered, unrestricted ruler of any sort.

Every country that chooses democracy over a dictator sacrifices efficiency for process: A process that can be interrogated, understood, adapted. Because, simply stated, a process understood is a process preferred. Being able to understand something gives us power to shape it, to make it work for us: This is true both on the individual and the societal level.

Messy transparency and agency trump black-box efficiency.

Let’s keep that in mind, too. Exhibit B.

Who makes the AI?

Andrew Ng, who was heavily involved in Baidu’s (and before that, Google’s) AI efforts, emphasizes the potential impact of AI to transform society: “Just as electricity transformed many industries roughly 100 years ago, AI will also now change nearly every major industry—healthcare, transportation, entertainment, manufacturing—enriching the lives of countless people.”

He continues:

“I want all of us to have self-driving cars; conversational computers that we can talk to naturally; and healthcare robots that understand what ails us. The industrial revolution freed humanity from much repetitive physical drudgery; I now want AI to free humanity from repetitive mental drudgery, such as driving in traffic. This work cannot be done by any single company—it will be done by the global AI community of researchers and engineers.”

While I share Ng’s assessment of AI’s potential impact, I have got to be honest: His raw enthusiasm for AI sounds a little scary to me. Free humanity from mental drudgery? Not to wax overly nostalgic, but mental drudgery—even boredom!—has proven really quite important for humankind’s evolution and has played a major role in its achievements. Plus, the idea that engineers are the driving force seems risky at the least: It’s a pure form of stereotypical Silicon Valley think, almost a cliché. I’m willing to give him the benefit of the doubt and assume that by “researchers” he also meant to include anthropologists, philosophers, political scientists, and all the other valuable perspectives of the social sciences, humanities, and related fields.

Don’t leave something as important as AI to a bunch of tech bros (Image: Giphy)

Something as transformative as this should not, in the 21st century, be driven by a tiny group of people with very homogenous backgrounds. Diversity is key, in professional backgrounds and ways of thinking as much as in gender, ethnic, regional and cultural backgrounds. Otherwise, algorithms are bound to encode and help enforce unhealthy policies.

Engineering-driven, tech-deterministic, non-diverse, expansionist thinking delivers suboptimal results. File under exhibit C.

Automated decision-making

Bezos writes about the importance of making decisions fast, which often requires making them with incomplete information: “most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.”

This, again, he writes in the context of management—presumably by and through humans. How will algorithmic decision-making fit into this picture? Will we want our algorithms to start deciding—or issuing recommendations—based on 100 percent of information? 90? 70? Maybe there’s an algorithm that figures out through machine learning how much information is just enough to be good enough?

Who is responsible for algorithmically-made decisions? Who bears the responsibility for enforcing them?

If algorithmic load optimization (read: overbooking) tells airline staff to remove a passenger from a plane, and it ends in a dehumanizing debacle, whose fault is that?

Teacher of Algorithm by Simone Rebaudengo and Daniel Prost

More Dan Hon! Dan takes this to its logical conclusion (s4e11): “We’ve outsourced deciding things, and computers – through their ability to diligently enact policy, rules and procedures (surprise! algorithms!) give us a get out of jail free card that we’re all too happy to employ.” It is, by extension, a jacked-up version of “it’s policy, it’s always been our policy, nothing I can do about it.” Which is, of course, the oldest and laziest cop-out there ever was.

He continues: “Algorithms make decisions and we implement them in software. The easy way out is to design them in such a way as to remove the human from the loop. A perfect system. But, there is no such thing. The universe is complicated, and Things Happen. While software can deal with that (…) we can take a step back and say: that is not the outcome we want. It is not the outcome that conscious beings that experience suffering deserve. We can do better.”

I wholeheartedly agree.

To get back to the airline example: In this case I’d argue the algorithm was not at fault. What was at fault is that corporate policy said this procedure has priority, and this was backed up by an organizational culture that made it seem acceptable (or even required) for staff to have police drag a paying passenger off a plane with a bloody lip.

Algorithms blindly followed, backed up by corporate policies and an unhealthy organizational culture: Exhibit D.

///

In the realm of computer vision, there have been a lot of advances through (and for) machine learning lately. Generative adversarial networks (GANs), in which one network tries to fool another, seem particularly promising. I won’t pretend to understand the math behind GANs, but Quora has us covered:

“Imagine an aspiring painter who wants to do art forgery (G), and someone who wants to earn his living by judging paintings (D). You start by showing D some examples of work by Picasso. Then G produces paintings in an attempt to fool D every time, making him believe they are Picasso originals. Sometimes it succeeds; however as D starts learning more about Picasso style (looking at more examples), G has a harder time fooling D, so he has to do better. As this process continues, not only D gets really good in telling apart what is Picasso and what is not, but also G gets really good at forging Picasso paintings. This is the idea behind GANs.”

So we’ve got two algorithmic networks sparring with one another. Both of them learn a lot, fast.
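To make that sparring concrete, here is a toy GAN in PyTorch. This is a minimal sketch with an arbitrary architecture and a toy task of my own choosing, not code from any of the sources quoted here: the generator G (the forger) learns to mimic a simple Gaussian distribution, while the discriminator D (the judge) learns to tell real samples from forgeries.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # the forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # the judge

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # the "Picasso originals": samples from N(4, 1.5)
    fake = G(torch.randn(64, 8))            # the forgeries, generated from random noise

    # Train the judge: real samples should score 1, forgeries 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the forger: try to make the judge score forgeries as real
    opt_g.zero_grad()
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(f"mean of forged samples: {G(torch.randn(1000, 8)).mean().item():.2f} (target: 4.0)")
```

Note the structure: neither network is ever told the rules of what a “real” sample looks like; each only gets feedback from its opponent. That’s the output-over-process point in miniature.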

Impressive, if maybe not lifesaving, results include so-called style transfer. You’ve probably seen it online: This is when you upload a photo and it’s rendered in the style of a famous painter:

Collection Style Transfer refers to transferring images into artistic styles. Here: Monet, Van Gogh, Ukiyo-e, and Cezanne. (Image source: Jun-Yan Zhu)

Maybe more intuitively impressive, this type of machine learning can also be applied to changing parts of images, or even videos:

Sometimes, failure modes are not just interesting but also look hilarious (Image source: Jun-Yan Zhu)

This is the kind of algorithmic voodoo that powers things like Snapchat “world lenses” and Facebook’s recent VR announcements (“Act 2”).

Wait, how did we get here? Oh yes, output v process!

Machine learning requires new skills (for creators and users alike)

What about skill sets required to work with machine learning, and to make machines learn in interesting, promising ways?

Google has been remaking itself as a machine learning first company. As Christine Robson, who works on Google’s internal machine learning efforts puts it: “It feels like a living, breathing thing. It’s a different kind of engineering.”

Technology Review features a stunning article absolutely worth reading in full: In The Dark Secret at the Heart of AI, author Will Knight interviews MIT professor Tommi Jaakkola, who says:

“Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. ‘It is a problem that is already relevant, and it’s going to be much more relevant in the future. (…) Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.’”

And machine learning doesn’t just require different engineering. It requires a different kind of design, too. From Machine Learning for Designers (ebook, free O’Reilly account required): “These technologies will give rise to new design challenges and require new ways of thinking about the design of user interfaces and interactions.”

Machine learning means that algorithms learn from—and increasingly will adapt to—their own performance, user behaviors, and external factors. Processes (however opaque) will change, as will outputs. Quite likely, the interface and experience will also adapt over time. There is no end state, only constant evolution.

Technologist & researcher Greg Borenstein argues that “while AI systems have made rapid progress, they are nowhere near being able to autonomously solve any substantive human problem. What they have become is powerful tools that could lead to radically better technology if, and only if, we successfully harness them for human use.”

Borenstein concludes: “What’s needed for AI’s wide adoption is an understanding of how to build interfaces that put the power of these systems in the hands of their human users.”

Future-oriented designers seem to be at least open to this idea. As Fabien Girardin of the Near Future Laboratory argues: “That type of design of system behavior represents a future in the evolution of human-centered design.”

Computers beating the best human chess and Go players have given us centaur chess, in which humans and computers play side by side in a team: While computers beat humans at chess, these hybrid teams of humans and computers playing in tandem beat computers hands-down.

In centaur chess, software provides analysis and recommendations, a human expert makes the final call. (I’d be interested in seeing the reverse being tested, too: What if human experts gave recommendations for the algorithms to consider?)

How does this work? Why is it doing this?

Now, all of this isn’t particularly well understood today. Or more concretely, the algorithms hatched that way aren’t understood, and hence their decisions and recommendations can’t be interrogated easily.

Will Knight shares the story of a self-driving experimental vehicle that was “unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.”

What makes this really interesting is that it’s not entirely clear how the algorithms learned:

“The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did (…) It isn’t completely clear how the car makes its decisions.”

Knight stresses just how novel this is: “We’ve never before built machines that operate in ways their creators don’t understand.”

We know that it’s possible to attack machine learning with adversarial examples: inputs intentionally designed to cause the model to make a mistake, or to train the algorithm incorrectly. Even without a malicious attack, algorithms also simply don’t always get the full—or right—picture: “Google researchers noted that when its [Deep Dream] algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.”

This—and this type of failure mode—seems relevant. We need to understand how algorithms work in order to adapt, improve, and eventually trust them.
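For intuition, here is a miniature version of such an attack: a sketch of the fast gradient sign method, one well-known recipe for crafting adversarial examples. The tiny model and the random “image” are stand-ins of my own; against a real trained classifier, a perturbation this small is typically invisible to a human yet flips the prediction.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784, requires_grad=True)  # stand-in "image" (e.g., a flattened 28x28 picture)
y = torch.tensor([3])                       # its true label

# Compute the gradient of the loss with respect to the *input*, not the weights
loss = loss_fn(model(x), y)
loss.backward()

# Nudge every pixel a tiny step in whichever direction *increases* the loss
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The unsettling part: the model’s confident output gives no hint that anything is wrong, which is exactly the process-versus-output problem again.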

Consider, for example, two areas where algorithmic decision-making could directly decide over life or death: the military and medicine. Speaking of military use cases, David Gunning of DARPA’s Explainable Artificial Intelligence program explains: “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made.” Life or death might literally depend on it. What’s more, if a human operator doesn’t fully trust the AI output, then that output is rendered useless.

We need to understand how algorithms work (Image: Giphy)

Should we have a legal right to interrogate AI decision making? Again, Knight in Technology Review: “Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”

It seems likely that this currently could not even be enforced: the creators of these algorithmic decision-making systems might not even be able to find out what exactly is going on.

There have been numerous attempts at exploring this, usually through visualizations. This works, to a degree, for machine learning and even other areas. However, machine learning is often used to crunch multi-dimensional data sets, and we simply have no great way (yet) of visualizing those in a way that makes them easy to analyze.
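For a sense of the standard workaround and its limits, here is a minimal sketch that projects high-dimensional data down to two plottable dimensions via PCA; the random data is a stand-in for, say, a network’s internal activations.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))   # 500 points living in 128 dimensions

# PCA via SVD: find the directions of greatest variance
X_centered = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ vt[:2].T      # coordinates along the top two principal axes

print(X_2d.shape)  # (500, 2): now plottable, but 126 dimensions were thrown away
```

Everything beyond those two axes is discarded, and that discarded structure may be precisely what we would need to see.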

This is worrisome to say the least.

But let me play devil’s advocate for a moment: What if the outcomes really are that good, so much better than human-powered analysis or decision-making? Might not using them be simply irresponsible? Knight gives the example of a program at Mount Sinai Hospital in New York called Deep Patient that was “just way better” at predicting certain diseases from patient records.

If this prediction algorithm has a solid track record of successful analysis, but neither developers nor doctors understand how it reaches its conclusions, is it responsible to prescribe medication based on its recommendation? Would it be responsible not to?

Philosopher Daniel Dennett, who studies consciousness, takes it a step further: An explanation by an algorithm might not be good enough. Humans aren’t great at explaining themselves, so if an AI “can’t do better than us at explaining what it’s doing, then don’t trust it.”

It follows that an AI would need to provide a much better explanation than a human in order for it to be trustworthy. Exhibit E.

Now where does that leave us?

Let’s assume that the impact of machine learning, algorithmic decision-making, and AI will keep increasing. A lot. Then we need to understand how algorithms work in order to adapt, improve, and eventually trust them.

Machine learning allows us to get desirable results without necessarily knowing how (exhibit A). It’s essential for a society to be able to understand and shape its governance, and to have agency in doing so. So in AI, just as in governance: Transparent messiness is more desirable than opaque efficiency. Black boxes simply won’t do. We cannot have black boxes govern our lives (exhibit B). Something as transformative as this should not, in the 21st century, be driven by a tiny group of people with very homogenous backgrounds. Diversity is key, in professional backgrounds and ways of thinking as much as in gender, ethnic, regional, and cultural backgrounds; engineering-driven, tech-deterministic, non-diverse, expansionist thinking delivers suboptimal results (exhibit C). Without that diversity, algorithms are bound to encode and help enforce unhealthy policies. Blindly followed, backed up by corporate policies and an unhealthy organizational culture, they are bound to deliver horrible results (exhibit D). Hence we need to be able to interrogate algorithmic decision-making. And if in doubt, an AI should provide a much better explanation than a human would in order for it to be trustworthy (exhibit E).

Machine learning and AI hold great potential to improve our lives. Let’s embrace it, but deliberately and cautiously. And let’s not hand over the keys to software that’s too complex for us to interrogate, understand, and hence shape to serve us. We must apply the same kinds of checks and balances to tech-based governance as to human or legal forms of governance—accountability & democratic oversight and all.

I’m becoming an e-citizen of Estonia

I had been vaguely aware of Estonia’s e-Estonia initiative, through which people from around the world can sign up for a sort of e-citizenship in this most technologically advanced country of not just the Baltics, but maybe the world. But at the time, you had to pick up the actual ID in Estonia, which seemed slightly over the top (for now).

Fast forward to today, when I stumbled over Ben Hammersley’s WIRED article about e-Estonia and learned that the application process now works completely online, and a trip to our local Estonian embassy (a mere 20 minutes or so by bike or subway away) now does the trick.

That’s exciting!

e-Estonia is not, of course, an actual citizenship, even though for many intents and purposes it does provide a surprisingly large number of services that traditionally were tied to residency in a nation state.

Thoughts on the smart city

Over the last few months, I once more had the chance to work on smart city-related topics. (I say once more because it’s been a while since I did a deep dive into the field with Cognitive Cities Conference back in 2011. Ever since, I’ve been following the field closely, but haven’t actively contributed much.)

It’s been exciting to me that these engagements came through different vectors: in one case the work related to prior work in and around politics & e-governance and had a policy angle; in another, the approach came from an #iot angle and focused on connectivity in a wider sense. There might be more, and with stronger overlap, as the circles in this particular Venn diagram increasingly move closer together.

I hope (and think) that large chunks of these recent projects will be made accessible publicly at some point. For now, it’ll have to stay a bit on the vague end I’m afraid. Once things get published, you’ll find out through the usual channels.

Long story short: I’ve been thinking about smart cities a fair bit. And two major questions have been popping up over and over again.
