The New York Times R&D department, NYT Labs, just showcased the Reveal. It is a prototype of something we’ve seen before as design studies: an interactive bathroom mirror. But here we have a working prototype, built more or less from off-the-shelf components. A screen, speech recognition, a Microsoft Kinect. It’s all there, combined into a new device that could blend right into our daily lives without too much fuss.
Thanks to a special semi-reflective glass surface, users of the mirror see both a normal reflection of the real world and overlaid, high-contrast graphics. We’ve dubbed this “augmented reflection”. Conceptually, the idea is that the mirror can reveal the halos of data around real-world objects, including ourselves.
If you spot something interesting while brushing your teeth, tap your phone against the mirror and you can read the articles later on the subway. It is, at least in theory, a smooth, embedded experience. Not an interruption, but an enhancement of your daily routine.
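To make the tap-to-read interaction concrete, here is a minimal sketch of what such a handoff could look like in Python. Everything here is hypothetical: the class, method names, and payload format are illustrative assumptions, not the Reveal’s actual (unpublished) protocol.

```python
# Hypothetical sketch of a mirror-to-phone "tap to read later" handoff.
# All names and the payload format are illustrative assumptions;
# the Reveal's actual protocol has not been published.
import json


class MirrorSession:
    """Tracks the articles the mirror has surfaced during one use."""

    def __init__(self):
        self.surfaced = []  # articles shown on the mirror this session

    def show_article(self, url, headline):
        """Record an article as it appears on the mirror's display."""
        self.surfaced.append({"url": url, "headline": headline})

    def handoff_payload(self):
        """On a phone tap, bundle the surfaced articles (e.g. for an
        NFC-style transfer) so they can be read later on the subway."""
        return json.dumps({"reading_list": self.surfaced})


session = MirrorSession()
session.show_article("https://nytimes.com/example", "Morning briefing")
payload = session.handoff_payload()
print(payload)
```

The point of the sketch is the shape of the interaction, not the transport: the mirror accumulates state passively while you go about your routine, and a single tap serializes that state for the device you carry with you.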
It’s also a glanceable display of sorts, a screen that doesn’t demand our focus but offers easy-to-access information at a glance, without drawing much attention.
As such, it’s quite amazing. In fact, I’m convinced that we’ll see a whole new market segment emerge around this type of thing: highly networked devices that add an information layer to the things in our lives that have, so far, been quite passive and inanimate. Things that won’t stand out much, not major investments or eye catchers, but day-to-day objects. Like bathroom mirrors.