Understanding the Connected Home: Managing Conflict

Understanding the Connected Home is an ongoing series that explores the questions, challenges, and opportunities around increasingly connected homes. Update: As of September 2015, we have turned it into a larger research project and book at theconnectedhome.org.

When we introduce connectedness into infrastructure like buildings – into our homes – we stitch a technological network into, or better: onto, our lives. And with it we introduce smart agents of sorts: Software that has more or less its own goals and agendas.

For example, a Nest thermostat’s primary goal might be to achieve and maintain a certain temperature in the living room; a secondary goal might be to save energy.

Of course the Nest’s owner has given that goal to the thermostat. And while it undergoes some interpretation at the hands of the algorithm (say you express a preference for 19° Celsius, and the algorithm knows to translate this into “you want 19° Celsius in your living room while you are at home, but while you’re gone the temperature can vary to lower energy consumption”), the goals come more or less from the user.
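
To make that interpretation step concrete, here is a minimal sketch in Python. It is purely illustrative: the names, numbers, and setback rule are assumptions, not Nest’s actual algorithm, which isn’t public.

```python
# Hypothetical sketch: how a thermostat might expand a single stated
# preference into a richer policy. Names and numbers are illustrative.

AWAY_SETBACK = 4.0  # assumed: let the temperature drift by 4 °C while nobody is home

def target_temperature(preferred_c: float, someone_home: bool) -> float:
    """Primary goal: hold the preferred temperature while someone is home.
    Secondary goal: save energy by relaxing the target while the home is empty."""
    if someone_home:
        return preferred_c
    return preferred_c - AWAY_SETBACK

# The user only ever said "19 °C"; the rest is the agent's interpretation.
print(target_temperature(19.0, someone_home=True))   # 19.0
print(target_temperature(19.0, someone_home=False))  # 15.0
```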

“User” as in a single human individual. It’s important to stress this, as these kinds of interaction models tend to break down, or at least be challenged, along three axes once we move beyond single-user scenarios:

  • user-to-user conflicts
  • user-to-agent conflicts
  • agent-to-agent conflicts

1.) User-to-user conflicts: In a multi-person household, the technology could easily be faced with conflicting goals. Think of a couple with different temperature preferences. These are things that technology won’t be able to solve elegantly: They are social challenges that require social solutions, not technological ones.

2.) User-to-agent conflicts: If the multi-user challenge weren’t hard enough, think of the couple a friend mentioned recently, in which only one partner uses the thermostat’s app and the other doesn’t want to engage with it, which in the eyes of the thermostat makes that person disappear completely: If the app user isn’t at home, the home appears to the thermostat to be empty, and hence to need no temperature control. This is clearly a technological problem, and one that should never have appeared in the first place. Technology, especially in the home, needs to be sensible about its demands on the residents: No appliance should ever force a resident to engage with it in order to function properly.
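
A toy sketch of that failure mode, under the assumption that the thermostat infers occupancy only from phones registered in its app (all names here are hypothetical, not any vendor’s real logic):

```python
# Hypothetical sketch: occupancy inferred only from residents who use the app.
# The partner without the app simply doesn't exist for the system.

registered_phones = {"partner_a_phone"}       # only partner A installed the app
phones_currently_home = {"partner_b_phone"}   # partner B is home, but has no app

def home_is_occupied() -> bool:
    # The agent can only "see" registered phones, so partner B is invisible.
    return bool(registered_phones & phones_currently_home)

print(home_is_occupied())  # False: heating shuts off even though someone is home
```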

3.) Agent-to-agent conflicts: When we introduce smart appliances to our home, we deal with software agents. Individually, in isolation, these might do a good, or good enough, or even stellar job. But combined into a complex and often unpredictable network, it can all get a little messy. Here we need a systemic view. This is seriously challenging terrain for designers.

Good friend and excellent systems thinker and designer Louisa Heinrich articulated this perfectly with her example of different appliances fighting over the blinds being open or closed based on their respective goals: The coffee machine wants them closed so the milk keeps longer, the dishwasher wants them open to grab some solar power, the Nest wants them closed to keep the temperature low, the plants want them open…
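
As a rough sketch of the problem (invented agents and a deliberately naive last-request-wins rule; real products behave differently), several agents issue conflicting commands to the same blinds, and with no moderator the outcome depends on nothing more than who spoke last:

```python
# Hypothetical sketch: several agents share one actuator (the blinds)
# and each pursues only its own goal. Without a moderator, the last
# request simply wins.

requests = [
    ("coffee_machine", "closed"),  # keep the milk cool
    ("dishwasher", "open"),        # harvest solar power
    ("thermostat", "closed"),      # keep the room from heating up
    ("plant_monitor", "open"),     # the plants need light
]

blinds_state = "open"
for agent, wanted in requests:
    blinds_state = wanted  # naive policy: last write wins
    print(f"{agent} sets blinds to {wanted}")

print(f"Final state: {blinds_state}")  # whoever asked last gets their way
```

A slightly less naive version might weight requests by priority, but that only restates the question below: who sets the priorities, and who moderates?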

Who moderates these conflicts? What happens when you – the master – are not at home? Here’s Louisa Heinrich at NEXT14 (esp. starting around the 7min mark):

The underlying question here is, of course, whether automation is really that desirable – it strikes me as low-hanging fruit and only a small part of the potential upside, yet one fraught with downsides (see the Framing The Debate post from this very series for more).

Groundbreaking interface researcher Douglas Engelbart framed it well (thanks to Scott Jenson for the pointer!) in the motto:

“Augmentation, not automation.”

Or once more in Louisa’s words: “There is a narrow but very deep gap between assistance and replacement”.

We still need a bit of a framework to interrogate and explore these conflicts between the smart agents in our environment. Enter Scott Smith at ThingsCon earlier this year and his (most excellent) research project Thingclash, a framework for considering cross-impacts and implications of colliding technologies, systems, cultures and values around the IoT:

As Scott points out starting around the 27-minute mark (but I strongly recommend watching the whole talk), it’s crucial to think about the legacy of UX decisions, business models, etc., in order to explore the different cross-impacts and their implications.

“Can we find a way to surface and make legible the tensions and the frictions and the conflicts that arise when new connected, data-collecting objects are introduced into our world?”

For now, just assume that putting several smart agents/appliances to work side by side might yield unintended consequences. These might not be devastating – most likely they’re simply annoying, like lots of notifications on your phone or shutters that open and close seemingly at random.

As connectivity becomes more deeply ingrained in our physical environments – homes, vehicles, bodies, cities – we had better figure out how to make all these agents work together smoothly: In our interests, by our standards, and to augment rather than simply automate.
