For an upcoming day of teaching I started compiling a list of resources relevant for the ethical, responsible development of tech, especially public interest tech. This list is very much incomplete, a starting point.
(For disclosure’s sake, I should add that I’ve started lists like this before: I’ll try, but can’t promise, to maintain this one. Just assume it’s a snapshot, useful primarily in the now and as an archive for future reference.)
I also can take only very partial credit for it since I asked Twitter for input. I love Twitter for this kind of stuff: Ask, and help shall be provided. My Twitter is a highly curated feed of smart, helpful people. (I understand that for many people Twitter feels very, very different. My experience is privileged that way.) A big thank you in particular to Sebastian Deterding, Alexandra Deschamps-Sonsino, Dr. Laura James, Iskander Smit, and to others I won’t name because they replied via DM and this might have been a privacy-related decision. You know who you are – thank you!
Here are a bunch of excellent starting points to dig deeper, ranging from books to academic papers to events to projects to full blown reading lists. This list covers a lot of ground. You can’t really go wrong here, but choose wisely.
One of the joys of my working at the intersection of emerging tech and its impact is that I get to discuss things that are by definition cutting edge with people from entirely different backgrounds—like recently with my dad. He’s 77 years old and has a background in business, not tech.
We chatted about IoT, and voice-enabled connected devices, and the tradeoffs they bring between convenience and privacy. How significant chunks of the internet of things are optimized for costs at the expense of privacy and security. How IoT is, by and large, a network of black boxes.
When I tried to explain why I think we need a trustmark for IoT (which I’m building with ThingsCon and as a Mozilla fellow)—especially regarding voice-enabled IoT—he listened intently, thought about it for a moment, and then said:
“We’re at a point in time where the world doesn’t know where it wants to go.”
And somehow that exactly sums it up, ever so much more eloquently than I could have phrased it.
Only I’m thinking: Even though I can’t tell where the world should be going, I think I know where to plant our first step—and that is, towards a more transparent and trustworthy IoT. I hope the trustmark can be our compass.
When Congress questioned representatives of Facebook, Google and Twitter, it became official: We need to finally find an answer to a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.
The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)
Platform or media?
Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Are they held to standards (and regulation) of “neutral platform” or “content creator”? Are they dumb infrastructure or pillars of democracy?
These are big questions, and I don’t envy the companies their position here. As a neutral platform they get a large degree of freedom, but they have to answer for the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but they can’t claim the absolutist free-speech position they like to take. You can’t both be neutral and “bring humanity together” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:
the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly
I agree this would be easier. (I’m not so sure that employee preference is the driving force, but that’s another debate, and it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet local legal requirements. Where are they willing to draw the line? Google famously withdrew from the Chinese market a few years ago, presumably because it didn’t want to meet the government’s censorship requirements. That was a principled move, and presumably not an easy one given the size of the market. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting, or on government surveillance?
For GAFAM, the position has traditionally been clear cut and quite straightforward, which we can still (kind of, sort of) see in the Congressional hearing:
“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)
[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)
Now that’s interesting, because while they claim to be “neutral” free speech companies, Facebook and the others have of course been hugely filtering content by various means (from their Terms of Service to community guidelines), and shaping the attention flow (who sees what and when) forever.
This aspect isn’t discussed much, but it’s worth noting nonetheless: how Facebook and other tech firms deal with content has been based, to a relatively large degree, on United States legal and cultural standards. That makes sense given that they’re US companies, but not much sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country: pictures of nudity are largely not OK in the US but largely OK in Germany, while positively referencing the Third Reich is largely illegal in Germany but largely legal in the US.
Big tech platforms are a new type of media platform
Here’s the thing: These big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose, frame, and shape content and traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.
(As a personal, rambling aside: Given we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had sorted out at least our basic understanding of social media as a new hybrid by 2010. Sigh.)
We might be able to apply existing regulatory—and equally important: analytical—frameworks, or perhaps apply them in new ways. But, and I say this expressly without judgement, these are platforms that operate at a scale and with a dynamism we haven’t seen before. They display qualities, and combinations of characteristics, we don’t have much experience with. Yet at a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). That hasn’t worked, and it will continue not to work, because it makes little sense.
That’s why it’s important to take a breath and figure out how to best understand implications, and shape the tech, the organizations, the frameworks within which they operate.
It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others, and in those cases we need to adjust the frameworks, the organizations, or both. To align the analytical and regulatory frameworks with realities, or vice versa.
This isn’t an us versus them situation like many parties are insinuating: It’s not politics versus tech as actors on both the governmental and the tech side sometimes seem to think. It’s not tech vs civil society as some activists claim. It’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do SV firms a world of good. This is a question of how we want to live our lives, govern our lives, as they are impacted by the flow of information.
It’s going to be tricky to figure this out as there are many nation states involved, and some supra-national actors, and large global commercial actors and many other, smaller but equally important players. It’s a messy mix of stakeholders and interests.
But one thing I can promise: The solution won’t be just technical, just legal, or just cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know the status quo isn’t working for far too many people, and we can shape the future, so that soon it works for many more people, maybe for all.
AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.
AI will change the future of work, and the economy.
For the record: In other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates them well! Key points include that autonomous driving is worth promoting because it causes fewer accidents; that avoiding personal injury takes precedence over avoiding damage to property (in other words, life has priority); and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!
On the other hand, for the Internet of Things (IoT) Germany has squandered opportunities: IoT is framed almost exclusively as industrial IoT, under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but it comes at the expense of a more inclusive, diverse portfolio of opportunities.
So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.
Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (roughly, “learning systems platform”). Thanks to Christian Katzenbach for the pointer!
AI & the future of work
The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, opportunity for the creation of wealth on one side and algorithmic discrimination on the other.)
How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.
In a recent workshop with stipendiaries of the Heinrich-Böll-Foundation on the future of work we explored how digital, AI, IoT and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field also have been finding: That there are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.
Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity.
The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.
Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: Future jobs will either be placed above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course conjures up images of roboticized warehouses, like this one:
Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.
Future jobs will either be placed above or below the API: “You either tell robots what to do, or are told by robots what to do.”
I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.
Education is key
So where are we in Europe again? In Germany, we’re still trying to define what IoT and AI mean. In China, it’s been happening for years.
This picture shows a smart lamp in Shenzhen that we found in a maker space:
What does the lamp do? It tracks whether users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts its color temperature depending on the ambient light in the room. As smart lamps go, these features are okay: not horrible, not interesting. If it came out of Samsung or LG or Amazon, I wouldn’t be surprised.
So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: ten- and eleven-year-olds designed, programmed, and built this, because the local curriculum includes the skills that enable them to do it. In Europe, this is unheard of.
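To make the lamp’s behavior concrete, here is a minimal sketch of the two features described above: presence detection and ambient-light-dependent color temperature. All names, thresholds, and Kelvin values are illustrative assumptions on my part, not the students’ actual firmware.

```python
def lamp_state(person_nearby: bool, ambient_lux: float) -> dict:
    """Return the lamp's target state for the given sensor readings.

    Thresholds are hypothetical; a real lamp would read a motion/presence
    sensor and a light sensor and drive the LEDs accordingly.
    """
    if not person_nearby:
        # Nobody around: switch the lamp off entirely.
        return {"on": False, "color_temp_k": None}
    # Dim rooms get warmer light (lower Kelvin), bright rooms cooler light.
    if ambient_lux < 50:
        color_temp = 2700   # warm white for a dark room
    elif ambient_lux < 300:
        color_temp = 4000   # neutral white
    else:
        color_temp = 6500   # cool daylight white
    return {"on": True, "color_temp_k": color_temp}
```

The point is less the code than that this sense-decide-act loop is simple enough for a well-designed school curriculum to teach.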
I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.
Our governments aren’t ready for AI.
It’s about time we started getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to give AI legal status, and is smartly kicking this off with a multi-stakeholder process.
What to do?
In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.
The White House report can provide some inspiration, especially around education policy.
We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills to make smart decisions (think humanities, history, deep learning).
We need to reform immigration to allow the best talent to come to Europe more easily (and to allow for voting rights, too, because nobody feels at home where they pay taxes without representation).
Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.
Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.
This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.
But I know one thing: We need to kick off this process today.
I’d like to share 3 short stories that demonstrate just a few of the challenges of governance for IoT.
1) In the fall of 2016, Facebook, Twitter, Netflix and other popular consumer websites were temporarily knocked offline in a so-called Distributed Denial of Service (DDoS) attack. That isn’t unusual in itself—it happens all the time at smaller scale. What WAS unusual was the attack vector: for the first time, a large-scale DDoS attack was driven by IoT products, mainly cheap, unsecured, internet-connected CCTV cameras. Who suffers the consequences? Who’s responsible? Who’s liable?
2) As part of the European Digital Single Market, the EU just passed the General Data Protection Regulation, or GDPR for short. It’s designed to enable individuals to better control their personal data. However, experts around the globe are scrambling to figure out how it applies to the IoT: almost certainly, a lot of the data collection and personalization that’s part of consumer IoT products falls squarely under the GDPR. What will IoT-related services look like 5 years from now? Will you get different services depending on where you are? On where your provider is based? On where you are resident? Or will everything just stay the same?
3) In 2015, Mount Sinai Hospital in New York launched an interesting research project called Deep Patient. They applied artificial intelligence (AI) techniques—specifically, machine learning algorithms—to analyze their patient records for patterns. It turned out that these algorithms were extremely good at predicting certain medical conditions, much better than human doctors. But it wasn’t clear how they arrived at these predictions. Is it responsible to act on medical predictions if the doctors don’t know what they’re based on? Is it responsible not to? How do we deal with intelligence and data that we don’t understand? What if our fridges, cars, or smartphones knew better than we do what’s good for us?
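The pattern behind the Deep Patient story, accurate predictions with no human-readable rationale attached, can be illustrated with a deliberately tiny toy. This is not Deep Patient’s method or data (they used deep learning on real records); the records, features, and labels below are entirely synthetic stand-ins:

```python
import math

# Synthetic toy "patient records": two numeric features per patient
# plus a 0/1 label for a hypothetical condition.
RECORDS = [
    ((0.9, 0.8), 1),
    ((0.8, 0.9), 1),
    ((0.7, 0.7), 1),
    ((0.1, 0.2), 0),
    ((0.2, 0.1), 0),
    ((0.3, 0.3), 0),
]

def predict(features, k=3):
    """Predict by majority vote among the k most similar records.

    The answer may well be accurate, but it comes with no explanation:
    "patients like you tended to develop this" is not a clinical rationale.
    """
    nearest = sorted(RECORDS, key=lambda r: math.dist(features, r[0]))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0
```

Even in this trivially small setting, the doctor receiving the output gets a prediction, not a reason, which is exactly the governance question the story raises.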
These 3 short stories demonstrate how wide the range of questions is that we face in IoT. The width and depth of that range make questions of governance more than just a little tricky.
We know there’s a lot of great work happening around the world that promotes human-centric IoT and responsible, empowering technology. But when we were looking for a good overview we couldn’t find one!
So we decided to try creating that overview ourselves, in the simplest format we could think of: a monthly newsletter consisting of a list of countries and the 3 most interesting projects from each country, curated by a trusted local expert. (All credit for the idea goes to Monique van Dusseldorp. Thanks, Monique!)
From time to time, we might partner up with other organizations or networks who work along the same lines and co-produce the newsletter—for example GIG, where Max is heavily involved.
Meet “Best of responsible tech around the world”, our new monthly newsletter! (The name is still a bit of a mouthful, so that might still change.)
What exactly is the content going to be, you ask? In terms of format, it’s going to be pretty straightforward: a list of up to three links per country with a one-sentence explanation for context. In terms of projects featured, we’ll take a you-know-it-when-you-see-it approach and trust our local experts to pick the most interesting ones. It’s going to be human curation at its best!
We’ll do this with a new newsletter list, published under the ThingsCon label, with full credit given to the local experts, and published under Creative Commons (BY-NC-SA) so everyone can use and share the content non-commercially.
We aim to send this out once a month. It’s an experiment for us: if there’s enough demand, we’ll keep it up, otherwise we’ll retire the list, no harm done!
There’s been a lot (a lot!) of talk about Europe’s, and particularly Germany’s, take on digitization and tech innovation. Sometimes it uses the Industry 4.0 terminology (connected factories and the like), sometimes it’s framed via European vs. US startup success stories (“Where’s a German Google?”).
While a debate about tech innovation, adoption rates, and access to the benefits of new technology is necessary – especially when it comes to providing a supportive political framework – I can’t help but notice a few narratives floating around that are quite widespread and seem dubious at best.