
Toolkits for designers & developers around ethics, privacy & security


At SimplySecure’s excellent Underexposed conference we discussed the importance of making it easier for everyone involved in building connected products and services to make them safe, secure, and privacy-conscious. After all, they might be experts in their own fields, but not necessarily security experts. So, toolkit time!

I asked participants in the room as well as publicly on Twitter which toolkits and resources are worth knowing. This is what this looked like in the room:

“Which toolkits should we all know? Ethics, privacy, security”

Here’s the tweet that went with it:

So what are the toolkit recommendations? Given the privacy-sensitive nature of the event, I’m linking to the source only where people sent the recommendation via public Twitter. Also, please note I’m including them without much background, and unchecked. So here goes:

This list can by no means claim to be complete, but hopefully it will still be useful to some of you.

Would you live in a robot?


Would you live in a robot?
“Would you live in a robot?” One of the lead questions at Vitra’s Hello, Robot exhibition.

“Would you live in a robot?” is one of the questions posed at #hellorobot, an excellent current exhibition at Vitra Design Museum in Weil am Rhein, Germany. The overall theme of the exhibition is to explore design at the intersection of human & machine – here meaning robots, algorithms, AI and the like.

Have you ever met a robot?
The entrance to Vitra Design Museum during the Hello, Robot exhibition, February 2017.

It’s rare that I travel just to attend an exhibition. In this case it was entirely worth it as #hellorobot addresses some themes that are relevant, salient, and urgent: How do we (want to) live in an age of increased automation? What does and should our relationship with machines look like? What do we think about ascribing personality and agency to machines, algorithms and artificial intelligence in their many forms?

These are all questions we need to think about, and quickly: They are merely early indicators of the kind of challenges and opportunities we face over the next decades, on all levels: as an individual, as businesses, as a society.

Coupland
One of Douglas Coupland’s Micro Manifestos at Vitra.

The above-mentioned questions are, in other words, merely a lead-up to larger ones around things like agency (our own and the algorithms’) and governance, around the role of humans in the economy. A concrete example, if robots take care of the tasks we now pay people to perform (factory work, cleaning up the city, doing research, generating reports…) and if then (under the current model) only 20% of people would be in jobs, what does that mean, how do we earn a living and establish our role and status as a productive and valued member of society?

This example of robots doing most of the work doesn’t strike me as an abstract, academic one. It seems to be blatantly obvious that we need to rethink which roles we want humans to play in society.

This example of robots doing most of the work doesn’t strike me as an abstract, academic one. It seems to be blatantly obvious that we need to rethink which roles we want humans to play in society. All vectors point toward an economy that won’t require, nor be able, to employ 95% of the working-age population full-time. Yet at the same time per-capita value creation keeps rising, so on a societal level (big picture!) we’re better off. So either we figure out how to handle high double-digit unemployment rates, or we reframe how we think about fewer tasks requiring humans, how to unlock the potential of all the newly freed-up time in the lives of millions upon millions of people, and what we want the role of people to be going forward.

(Ryan Avent’s book The Wealth of Humans seems like a good place to read up on possible scenarios. Thanks to Max & Simon for the recommendation in their newsletter The Adventure Equation. I haven’t read it yet, but it’s at the top of my to-read pile.)

///

Have you met a robot?
“Robots are tools for dramatic effect.” Bruce Sterling quote at Vitra.

hellorobot provides a great snapshot of the artistic and commercial landscape around robots and AI. From artistic explorations like good old manifest, an industrial robot arm perpetually churning out algorithmically generated manifestos that’s been at ZKM since ca. 2008, or Dan Chen’s much more recent CremateBot, which allows you to start cremating the skin and hair you shed as you go through your life, to the extremely commercial (think Industry 4.0 manufacturing bots), everything’s here. The exhibit isn’t huge, but it’s sweeping.

Dan Chen's Crematebot
Dan Chen’s CremateBot at Vitra.

I was especially delighted to see many of our friends and ThingsCon alumni in the mix as well. Bruce Sterling was an adviser. Superflux’s Uninvited Guests were on display. Automato (Simone Rebaudengo‘s new outfit) had four or five pieces on display, including a long-time favorite, Teacher of Algorithms.

Teacher of Algorithms
Automato’s Teacher of Algorithms at Vitra.

I found it especially encouraging to see a wide range of medical and therapeutic robots included as well. An exoskeleton was present, as was a therapeutic doll for dementia patients. It was also great to see this recent toy for autistic kids:

Therapy doll
A doll for dementia therapy from 2001 at Vitra.

Leka smart toy
A toy for autistic kids at Vitra.

///

One section explored more day-to-day, in the future possibly banal scenarios. What might the relationship between robots and babies be, and how could parenting change through these technologies? Will the visual language of industrial manufacturing sneak into the crib, or will robots be as cutesy and cozy as other kids’ toys and paraphernalia?

My First Robot
My First Robot at Vitra.

My First Robot
Will the visual language of industrial manufacturing enter the baby crib?

///

When the smart home stops
What happens when your smart home stops or fails? Lovely photo project at Vitra.

“Would you live in a robot?” The question was likely meant to provoke. And even though some of the older, more traditional German and Swiss visitors around me seemed genuinely challenged by the exhibition to reconsider their world view, I’d go out on a limb: In 2017 I’m not sure the question is even a bit provocative, although we might want to rethink how we consider our built environment. We might not all live in a robot/smart home. However, I kind of arrived at the exhibition in robots (I had flown in, then taken a cab), and I constantly carry a black box full of bots (my smart phone). Maybe we need updated questions already, like “How autonomous a robot would you live in?”, “What do you consider a robot?”, or “Would you consider yourself a cyborg if you had an implanted pacemaker/hand/memory bank?”

“What makes a good robot, one you’d like to live with?”

Or maybe this leads us off on a wild goose chase. Maybe we just need to ask “What makes a good robot, one you’d like to live with?” Robot, of course, is in this case used almost interchangeably with algorithm.

///

hellorobot is great, and highly recommended. However, the money quote for me, the key takeaway if you will, is one that I don’t think the curators even considered—nor should they have—in their effort to engage in a conversation around automation and living with robots.

It’s a quote from a not-so-recent Douglas Coupland project, of all things:

Coupland
One of Douglas Coupland’s Micro Manifestos

“The unanticipated side effects of technology dictate the future” — Douglas Coupland

I think this quote pretty much holds the key to unlocking what the 21st century will be about. What are the unintended consequences of a technology once it’s deployed and starts interacting with other tech and systems and society at large? How can we design systems and technologies to allow for max potential upside and minimal potential downside?

This is also the challenge at the heart of ThingsCon’s mission statement, to foster the creation of a human-centric & responsible IoT.

Go see the exhibit if you’re in the vicinity. You won’t regret it.

ps. For more photos, see my Flickr album. Also, a heads-up based on personal experience: The exhibition opens at 10am, as does the café. There’s no warm place to hang out before nor a cup of coffee to be had, and the museum is in the middle of nowhere. Plan your arrival wisely.

Living with Alexa


I’d like to make a case for being careful with spreading second- or third-hand stories, and for gathering first-hand experience of interesting products and services instead. I believe it’s the best way to feel our way into a future shaped by emerging technologies, and to make informed decisions about them. So in the name of science, I lived with Amazon Echo/Alexa for a week. Here’s my experience.

///

We talk a lot about smart homes, about connected domestic devices, about conversational interfaces and artificial intelligence. A surprising amount of what’s talked about and what’s reported on is word of mouth: I heard somewhere that Amazon Echo ordered a thousand doll houses and boxes of cookies after someone mentioned it on TV! The makers of the doll houses couldn’t believe their luck, and consumers are screwed!

(For the record: In reality, it was likely “a handful” of dollhouse orders; it’s not trivially simple to order—let alone unknowingly—via the device; and Amazon has a full refund policy for physical products ordered this way.)

Word-of-mouth information is bad for all kinds of reasons

This word-of-mouth information is bad for all kinds of reasons. (One could cynically argue that it perfectly fits our times of so-called “post-factual” news and politics.) I believe there’s plenty of reason to be critical of connected services, and I’m even more convinced that consumers of (or really anyone exposed to) connected services should be able to make informed decisions about their use.

For that reason, I think we should expect both journalists and everyone in the tech scene (the expert peer group!) to be careful about what information and narratives we spread: Instead of rumors, we should focus on facts and first-hand experience.

I make a point of frequently testing emerging technologies even when I’m not convinced they’ll be a good fit for my life

This is why I make a point of frequently testing emerging technologies even when I’m not convinced they’ll be a good fit for my life, especially those that are misunderstood or heavily discussed with little informational basis. This way I’ve kickstarted smart watches, worn fitness trackers, and spit in tubes to have my DNA analyzed. None of it killed me; a lot of it was bland and boring; every time I learned a lot, even if it was only that these technologies offered a lot less risk & reward than the hype suggested.

So we lived for a week with an Amazon Echo and its voice-controlled assistant Alexa.

First, for clarification: Amazon Echo is the physical full-size device; Dot is a smaller version; Alexa is the software backend, which is also available as a platform on which to build apps (in Amazon speak, skills) through an API.
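To make “skills” a bit more concrete, here’s a minimal sketch of what a skill’s backend can look like, assuming the Alexa Skills Kit’s raw JSON request/response shape; the intent name and reply text are made up for illustration:

```python
# Minimal sketch of an Alexa skill backend, e.g. as an AWS Lambda handler.
# The intent name and reply text are hypothetical; a real skill would also
# handle sessions, slots, and error cases.

def lambda_handler(event, context):
    request = event.get("request", {})

    if request.get("type") == "IntentRequest":
        intent_name = request.get("intent", {}).get("name")
        if intent_name == "KitchenTimerIntent":  # hypothetical intent
            speech = "Setting a kitchen timer for ten minutes."
        else:
            speech = "Sorry, I don't know how to do that yet."
    else:
        # e.g. a LaunchRequest, sent when the skill is opened without an intent
        speech = "Welcome. What would you like to do?"

    # Alexa expects a JSON response containing the text to be spoken.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Note that the invocation name that routes an utterance to a skill in the first place is configured in the skill’s interaction model rather than in code like this, which is part of why you have to remember to say “ask TrackR to…” (more on that below).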

Second, I’d like to acknowledge that this isn’t exactly pioneering work: the Echo has been available in the US since mid-2015; in Germany, however, it didn’t come out until fall of last year (Wikipedia). I’d also had the chance to learn a bit about its design process and decision making earlier at conferences (like Interaction15), so I had a fairly good idea what to expect.

Now, what’s it like to live with a device that aims to be a smart home hub, that is often said to listen in on you permanently (partially true, but likely not in the creepy way often suggested), and that might follow you around on the web? More than once in conversations about Alexa, people mentioned that other people had experienced online ads after mentioning a product in front of Alexa. This was always related in a friend-of-a-friend context: Nobody could point to a source or documentation, it was all hearsay. Case in point.

So from experience I can say that yes, Alexa might respond to things on TV, but it’s very rare. In an interview I recently gave for RBB Kulturradio (DE) on smart homes and their implications, the host half-joked on the air that ordering Alexa to play their channel during his show might boost their listenership stats; alas he failed to get the syntax right. (I tried to replicate it later by playing his recording to Alexa. Nothing happened.)

Much more annoyingly, it often responds to mentions of similar-sounding names, like Alex. But what might be the most frustrating is that fairly frequently it simply wouldn’t respond when I addressed it, because I wouldn’t stick to the exact tonality of the voice training I had done during setup. And if it did, it often would misunderstand—this may be partially because I mumbled or got caught up mid-sentence while trying to get the syntax right, or because I wasn’t familiar with what orders were OK to give and what was out of scope. I imagine this is part of a learning curve; a week in I could play most music without a hitch (except M.I.A., see below).

It got really, really bad once we switched Alexa to German. Playing music got really tricky. The music streaming service default I had set up before in the English-language interface (in this case Spotify) had to be set up once more. English band names would have to be pronounced in English (they’re names after all), but often would be misinterpreted. Trying to play M.I.A., Alexa would always, 100 percent of the time, play German band Mia. (If you compare the two, you’ll agree this isn’t a mixup you’re likely to enjoy.) It’s perfectly understandable this is a tough nut to crack, but hey, it really shouldn’t be the users’ problem.

How seamlessly the voice and screen control go hand-in-hand is really a thing of beauty: If it works, this is a glimpse into a near future that I’d kinda like.

That said, in English playing music was quite pleasant. The interface is OK enough to make it work. If there’s a mix-up, it’s easy to correct or change course through the Spotify app on your phone. How seamlessly the voice and screen control go hand-in-hand is really a thing of beauty: If it works, this is a glimpse into a near future that I’d kinda like.

But beyond playing music, we couldn’t find any real use case for Alexa. Our house doesn’t have many smart home appliances, and none of the ones we do have can be controlled through Alexa—as far as we know, that is. Alexa apps (“skills”) are legion, but not easily discovered.

Setting a timer is also easy, so in the kitchen these two things alone—playing music and setting timers hands-free—might make for an appealing use case. Almost anything else I found a little disappointing: “How long to get to Hot Spot Restaurant?” failed to produce a result because there are no routing or mapping services available by default. (Or if there are, I couldn’t find out how to get to them.) Online searches for anything are likely to return sub-par results as they’re powered not by Google but by Bing, and I still find the difference enormous.

If you’re after dad jokes, you’re in luck.

Alexa is chock-full of easter eggs, like “Alexa, tell me a joke.” So if you’re after dad jokes, you’re in luck.

Otherwise, I noted that most people who hadn’t spent any time with an Echo were a little cautious (“Is it safe to speak in front of it?”) or curious to test the interface (“Alexa, what’s the weather?”, “Alexa, how are you?”, “Alexa, buy a doll house and some cookies, haha!”). This kind of breaks the fourth wall, but of course only highlights how much of a learned behavior it is to interact with a voice-controlled digital assistant. A voice controlled digital assistant is very emphatically not an intuitive interface because we don’t usually talk to our appliances.

A voice controlled digital assistant is very emphatically not an intuitive interface because we don’t usually talk to our appliances.

This is a point that Alexander Aciman makes very clear in a rough take-down of Alexa on Quartz. There he argues that the current manifestation of Alexa isn’t the future of AI, it’s a glorified radio clock, and I tend to agree. Partly it’s that some essential default apps are missing, including better search engine integration (where Google obviously has a huge advantage, but competition between what Bruce Sterling calls the Stacks means Amazon won’t use Google’s search): “Her response to 95% of basic search queries is ‘I can’t find the answer to the question I heard.'” But even once a skill is activated, as Alexander describes spot-on, “You can’t say ‘Alexa, find my phone,’ but instead must say ‘Alexa, ask TrackR to find my phone.’ And God forbid you should accidentally forget the name TrackR, you’ll need your phone to look it up.”

This makes for a rougher-than-necessary user experience. The Alexa companion app tries to make up for this by constantly surfacing new skills and tutorials. That’s necessary for sure, but it’s also a total kludge.

In short, I found myself using Alexa only to play music—an activity we were set up for perfectly before Alexa. Despite the maybe rough criticism above, there’s something interesting there. It’s important to look at this as an early technology. Things will likely improve and start working just a little better. Interesting use cases might emerge over time.

Alexa is a little too much like simply having a physical token of Amazon, the company, in your living room, like having a print-out of a corporate powerpoint framed on your wall.

As things are today, Alexa doesn’t feel particularly smart, or threatening. Instead Alexa is a little too much like simply having a physical token of Amazon, the company, in your living room, like having a print-out of a corporate powerpoint framed on your wall. What it’s not is a solution to any problem, or a great convener of convenience. Instead it feels very explicitly like it’s the stacks, manifested.

Interaction 16 is a wrap


Interaction 16 kickoff. (Photo by MJ Broadbent)

Headed back from co-chairing Interaction 16 in Helsinki, my head is still spinning with all the conversations, talks and experiences of the last week.

The UX community is great: Welcoming, inclusive, interested in a huge range of topics. It was like being adopted by a whole new family.

Ever since Sami Niemelä invited me to co-chair this gathering, one thing I was particularly interested in was seeing how the more emerging-tech-focused topics we included in the program would resonate with this design-focused community.


Connected Products: Legibility & Failure Modes


Note: The following is not a review of The Dash, but a look at some deeper interface and interaction questions around connected products.

A few days ago, I received a package from a courier service. Opening it, this is what I found:

Bragi - The Dash

It’s The Dash, a smart, connected, wireless, waterproof, vital sign tracking in-ear headphone from Munich-based startup Bragi. I backed The Dash on Kickstarter in February 2014 (as backer number 5,362).


Understanding the Connected Home: Shared connected objects


This blog post is an excerpt from Understanding the Connected Home, an ongoing exploration of the implications of connectivity for our living spaces. (Show all posts on this blog.) The whole collection is available as a (free) ebook: Understanding the Connected Home: Thoughts on living in tomorrow’s connected home

As anyone who’s lived in a shared household can attest, there will be objects that you share with others.

Be it the TV remote, a book, the dining room table, or even the dishes, the connected home will no doubt be filled with objects that will be used by multiple people, sometimes simultaneously and sometimes even without the owner’s permission.

On the whole, you find wealth much more in use than in ownership. — Aristotle

Rival vs. non-rival goods

What will these shared, connected objects be like? What characteristics will define them?


Understanding the Connected Home: Managing Conflict


Understanding the Connected Home is an ongoing series that explores the questions, challenges and opportunities around increasingly connected homes. (Show all posts on this blog.) Update: As of Sept 2015, we turned it into a larger research project and book at theconnectedhome.org.

When we introduce connectedness into infrastructure like buildings – into our homes – we stitch a technological network into, or better: onto, our lives. And with it we introduce smart agents of sorts: Software that has more or less its own goals and agendas.

For example, a Nest thermostat’s primary goal might be to achieve and maintain a certain temperature in the living room; a secondary goal might be to save energy.

Of course the Nest’s owner has given that goal to the thermostat. And while it will undergo some interpretation at the hands of the algorithm (say you express a desire for the temperature to be 19° Celsius, and the algorithm knows to translate this into “you want 19° Celsius in your living room when you are at home, but while you’re gone the temperature can vary to lower energy consumption”), the goals come more or less from the user.
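As a rough illustration of that kind of interpretation, the translation might boil down to something like the sketch below. The names, the presence check, and the energy-saving setback value are all hypothetical, not how Nest actually works:

```python
# Sketch of how a thermostat agent might interpret a stated preference.
# All names and numbers are hypothetical placeholders.

PREFERRED_TEMP_C = 19.0  # "I want it to be 19 degrees Celsius"
AWAY_SETBACK_C = 4.0     # how far the agent lets the temperature drop when nobody is home

def target_temperature(preferred_c: float, someone_home: bool) -> float:
    """Translate the stated preference into a concrete setpoint."""
    if someone_home:
        # Primary goal: reach and maintain the preferred temperature.
        return preferred_c
    # Secondary goal: save energy while the home is empty.
    return preferred_c - AWAY_SETBACK_C

print(target_temperature(PREFERRED_TEMP_C, someone_home=True))   # 19.0
print(target_temperature(PREFERRED_TEMP_C, someone_home=False))  # 15.0
```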

“User” as in singular human individual. It’s important to stress this because these kinds of interaction models tend to break down, or at least be challenged, along three axes as soon as we move beyond single-user scenarios (a toy sketch of the first axis follows the list):

  • user-to-user conflicts
  • user-to-agent conflicts
  • agent-to-agent conflicts
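To make that first axis concrete, here’s a toy sketch of a user-to-user conflict: two hypothetical household members give the same thermostat agent conflicting preferences, and the naive averaging rule used here stands in for the kind of policy question these conflicts raise.

```python
# Toy sketch of a user-to-user conflict around a single thermostat agent.
# The residents and the averaging rule are hypothetical; a real agent would
# need an explicit policy (priorities, schedules, negotiation) instead.

preferences_c = {
    "resident_a": 19.0,  # degrees Celsius
    "resident_b": 23.0,
}

def resolve_setpoint(prefs: dict[str, float]) -> float:
    """Naive resolution: split the difference between everyone at home."""
    return sum(prefs.values()) / len(prefs)

print(resolve_setpoint(preferences_c))  # 21.0 -- a compromise that may satisfy nobody
```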
