Collaborative Ethical Emerging Tech Talking Points

I just re-discovered something I’d sent out to a few collaborators a while back: A half-formed idea, shared to see if it made sense to peers in our space.

The short version is this: Civil society voices are often under-represented in discussions around policy and emerging tech. A single go-to address for civil society positions/talking points, plus an overview of who's working on what, could lower the barrier to getting that perspective into the debate. So I proposed a small informal alliance, a ragtag band of orgs that share their voices online.

That was well over a year ago and it never came to pass, but I figured sharing the idea might still, for all its obvious shortcomings, be useful or interesting going forward. If anyone wants to build this, please go for it.

What follows is more or less unedited from what I sent out, just with personal names removed for privacy. Please also note that the organizations referenced in the examples are just that: examples. None of them is associated with this; all bad ideas here are mine and mine only.

With that out of the way: Let me know what you think!

Here goes:

Dear trusted collaborators, friends, partners,

I’m writing about an idea that came about in a conversation with [redacted]; it came up almost randomly, but struck a chord with me, and hence I’d like to propose a project. Don’t worry, it should be extremely lightweight, yet potentially have significant impact.

What’s the issue?

Policy makers very often (depressingly often, really) work on legislation and regulation with very little input from external experts. From conversations with folks at just about any level of government, it appears that the one group that always gets to give input is big industry, for the simple reason that their lobbyists are right there, easy and quick to access and draw on. This means that the external expertise informing legislation is extremely one-sided. Often, civil society perspectives aren't addressed at all. This is especially true for new and fast-changing fields like emerging technology (IoT, AI, etc.).

How to address the issue?

Lacking the resources for full-time lobbyists to advocate on behalf of civil society organizations in this space, I want to propose creating a shared resource of talking points: very lightweight in format, this resource would represent our organizations' positions on key issues, easily and readily accessible, plus contact details to request more input.

Strength in numbers, and effectiveness through ease of use!

If we can make this relevant enough, it would by necessity become the go-to resource that policy makers couldn't get around. We'd make sure that civil society is heard on issues related to emerging technologies.

What will it take to do this?

Not that much, I think, if we're smart about it and don't focus on institutionalizing. Concretely, I'd suggest a very simple website (yay WordPress). Essentially a dozen or so pages of issues sliced by:

  • verticals: mobility, governance, economy etc.
  • technologies: AI, IoT, etc.
  • impact areas: inclusion & diversity, sustainability, etc.

I tend towards offering each of these lenses and simply linking back to the same content blocks, which should be very top-level, along the lines of:

What is this? Why is this relevant or problematic, and for whom? What approaches do we think are essential? Which organizations can help us understand this better?

And this shouldn’t take more than a couple paragraphs. Then the name(s) of the org(s) that wrote that part, and we’re done.
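To make the structure concrete, here's a minimal sketch of the data model in Python. All names and the filtering helper are my own invention for illustration; the actual site could be plain WordPress pages and tags doing the same thing. Each issue is one content block, tagged so it can surface under any of the three lenses:

```python
# Hypothetical sketch: each issue is a single content block with tags,
# so the same block can appear under any lens (vertical, technology,
# impact area). Names and structure are illustrative only.

ISSUES = [
    {
        "title": "AI-powered facial recognition",
        "tags": {"AI", "IoT", "inclusion & diversity", "justice", "privacy"},
        "summary": "What is this? Why is it problematic, and for whom? ...",
        "orgs": ["AI Now Institute", "Ada Lovelace Institute", "ThingsCon"],
    },
]

def issues_for_lens(tag):
    """Return every issue filed under the given tag (lens)."""
    return [issue for issue in ISSUES if tag in issue["tags"]]

# Any lens page just filters the shared pool of content blocks:
for issue in issues_for_lens("privacy"):
    print(issue["title"], "->", ", ".join(issue["orgs"]))
```

The point of the design: the content blocks are written once, and the lens pages (verticals, technologies, impact areas) are just different filtered views of the same pool.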

An example: AI-powered facial recognition

Issue: AI-powered facial recognition
Filed under: #AI, #IoT, #inclusion&diversity, #justice, #privacy

What is this?
Through massive advances in artificial intelligence (and machine learning/neural networks specifically), facial recognition software has become very powerful over the last few years. Software is now able to identify individuals in crowds, which could be used for security-related or commercial purposes (like retail analytics).

Why is this relevant or problematic, and for whom?
Mass surveillance in public space is a massive challenge to citizen privacy, and to freedom of movement and assembly, all of which are essential to a functional democracy – which is why we see AI-powered facial recognition in public space as a key issue in building and maintaining a resilient democracy. Furthermore, many commercially available facial recognition solutions work better for some demographic groups than others; for example, facial recognition software often works less reliably for darker-skinned people. If automated facial recognition systems are tied into other automated systems (like predictive policing software), erroneous behavior like false positives can reinforce systemic discrimination.

What approaches do we think are essential?
As a general rule, we advise against automated AI-powered facial recognition systems, especially in public space. The potential risks and harms of such a massively invasive system tend to outweigh the potential benefits. If a facial recognition system is for some reason unavoidable, there needs to be extreme care, testing, and verification of reliability all the way back to discrimination-free, consented training data sets; extreme transparency about the use of such systems; and reliable channels for redress (human-in-the-loop).

Which organizations can help us understand this better?
AI Now Institute, Ada Lovelace Institute, ThingsCon.


Essentially something like this. Not rocket science, really, just a starting point.