The video above isn’t particularly new (some four months old, I think, at the time I’m posting it), but I find it both well made and really relevant. The things Neil Harbisson has been working on – both on the interface side and on the societal/political angle the Cyborg Foundation tackles – are the tip of an iceberg we’re just starting to explore: human-machine interfaces.
What we’ve seen so far are baby steps. When Neil says in the video that Prince Charles “sounds good,” it hints at the kind of rethinking and re-evaluating we’ll have to do, as a society, over the next ten, twenty, fifty years. As technology progresses – less invasive brainwave-based interfaces, more powerful artificial eyes and limbs – cultural norms will have to adapt and shift considerably.
There’s a lot of brilliant thinking going on in this field and it’s super interesting to follow. For some primers, I recommend following Amber Case and Nathan Jurgenson, to name just two.
Earlier today I visited barrierefrei kommunizieren, a not-for-profit organization based in Berlin and Bonn that advises on barrier-free computer interfaces. Most of its work is in educational contexts, but individuals are also welcome to come in and test things out.
My friend Diana works there and kindly gave me a tour. It’s a super interesting and important topic. Even though I’ve had the occasional touch point with it – in a former stint at politik digital, as well as working on government websites at a digital agency, where barrier freedom is high on the agenda – I’d never had a chance to test out the interfaces that help users navigate content on their end. And don’t be fooled into thinking otherwise: we’re talking about a full-on digital gap.
So Diana guided me through the parcours they set up to test out different tools:
What I found particularly interesting is the interplay between software and hardware. On one hand, you have super-specialized hardware: keyboards that magnetically guide a pen for those whose muscles won’t allow typing on a traditional keyboard; a device to control the cursor with your lips, with blowing and sucking for right and left clicks; or cameras that let you guide the cursor with head or eye movements. On the other hand, there’s software that lets you use more off-the-shelf hardware, through voice control or by turning an iPad into a flexible tool. Sadly, the Kinect control didn’t quite work yet, but it’s easy to imagine great use cases for it too. In the end, it all depends on individual needs, and from there you’ve got to figure out what works in that context. This is where barrierefrei kommunizieren comes in as an organization.
What stood out for me is that no single software paradigm seems to have emerged as a clear front runner. Some tools are open source; some are quite expensive (small market, low sales numbers, etc.). Some are for Windows (widely used), others for Linux (easy to hack), a few for emerging platforms like tablets of various kinds.
Also, there seems to be a significant number of people out there with special needs who make up a sizable potential market, and who are largely under-served – particularly, as I learned in our conversation, by the game industry. Surely there must be game designers out there with good ideas for games that require neither fine mouse control (PC games) nor full-body action (Wii), and that would be fun for everyone to play? And surely it wouldn’t be all that tough to label games according to their compatibility with at least the most commonly used barrier-free devices?
After all, our most-used interface paradigms — keyboard and mouse/touchpad — are all pretty much historic legacies, and more than ready for an overhaul.