
Always Be Experimenting with Your Daily Routines


Having been self-employed for most of my adult life, and often part of a peer group that likes to experiment with self-organization (cough did someone just say life hacks?), I’ve had the privilege of being very much in charge of my daily routines.

So I made a point early on in my career to experiment with them and see what sticks, what helps me be more productive, more aware, more awake, more creative—or simply be in a better mood.

After a period of experimentation, I tend to settle into a pattern that works well—for a while. The last few years, that has been a pretty steady, almost comically traditional day at the office, if with a somewhat relaxed schedule. I’d show up between 8:30 and 10, have a lunch break (preferably without meetings), and try to leave between 5 and 7. The details would depend on the ongoing projects: A higher workload meant longer and more intense hours; a lighter workload meant more time to read, write, and meet with folks. It was almost as if I kept the most traditional routine because I didn’t have to. I got pretty effective and efficient with my workflows. This was pretty much a manager’s schedule (as opposed to a maker’s schedule), optimized for conference calls and meetings rather than uninterrupted periods of deep work that would allow for flow.

Image: Public Domain. Image from page 517 of "Railway mechanical engineer" (1916)

But recently, especially since we had a baby, this has been a little less satisfying: I’ve been doing a lot more deep work (research, writing) that isn’t really all that compatible with a manager’s schedule, so I’ve been needing more uninterrupted time to get into flow. Also, I now need a bit more flexibility to take care of the little one or relieve M, even while she’s on parental leave (I’ll take leave a little later, too). Still, it’s not like I need to simulate an “orderly” workday for anyone: There’s still no boss to convince that I’m working when I’m not. In addition to needing more deep work time, I also want to make a point of allowing myself more time to learn and develop new skills: It feels like I’ve been plateauing on my core skills and it’s time for upgrades in adjacent branches of the skill tree. (Yes, I’m nerdy enough that I used to play pen-and-paper role-playing games.)

In other words, time for another round of experimentation.

I plan to read some more about opportunities and frameworks for optimizing for a combination of deep work and learning new skills, and will seek out the advice of friends who know more about this than I do.

In the meantime, here’s what I’ll be trying for a while:

  • Spend more time offline, especially in the morning: No checking email and social media for as long as possible in the mornings, and absolutely not before breakfast. This should help with mindfulness and give me more control over the way my day starts. I like to be proactive rather than reactive. The inbox is the natural enemy of being proactive.
  • Schedule time for reading, writing, learning. In particular, I’ll set aside 1-2 longer uninterrupted blocks per week for learning or upgrading skills, like producing podcasts, Python, machine learning basics, or even notionally boring-but-important management things like better accounting/budgeting/leadership skills.
  • More walks. I walk often; it’s the best catalyst I know for thinking through challenging problems. Recently I’ve fallen short and walked less than usual. This will change right away. Walking is the best thing ever.
  • Cluster meetings and calls in the afternoon. I’ll schedule calls and meetings in the afternoon as much as possible. It’s my least productive time in terms of focused input/output, but it’s perfect for conversations.

I hope that this might lead to concrete improvements and outcomes:

  • Stronger focus for longer periods of time, which should result in more long-form output (essays, blog posts, maybe a book or two).
  • Less reactive scheduling, and more productive use of my time.
  • More flexibility to be present for my family, as better use of my time leads to less time at the desk and better output per day.
  • Both new opportunities and improvements in my practice through new skills.

Are there any techniques or approaches you found very helpful yourself? Give me a shout, I’m curious!

Zephyr Berlin: Featured in The Craftsman


Image: The Craftsman header

The brilliant and kind Gianfranco Chicco writes a super lovely monthly newsletter called The Craftsman. For it, he meets and interviews craftsmen (and women, obviously) around the world about their projects, products, and passions.

I’m super happy, and very much humbled, that Gianfranco approached us to feature Zephyr Berlin in the March edition (read issue #006 on Medium).

Zephyr Berlin is very much a passion project of Michelle’s and mine, and we dug deep into the craft aspect when working with our designer Cecilia. I also loved that he gave a shout-out to our iterated design that features extra deep pockets, the model we internally nicknamed The Deep-Pocketed One.

Here’s the blog post over on zephyrberlin.com.

Monthnotes for February 2018


What happened in February? I’m a little short on time today so let’s keep it short and sweet:

×

The National IoT Plan in Brazil has been published by the Brazilian National Development Bank (BNDES)—and it’s so good to see our ThingsCon & Mozilla Trustmark for IoT report picked up there.

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively. For more context, see Brazil’s National IoT Plan, specifically Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

This is exactly the kind of outsized impact I always strive and hope for.

×

We’re headed for a social media winter. I think we’re arriving in the post-social media era. It’s going to be interesting to see what’s next. My money is on small, private groups (think Whatsapp chats).

×

Less formal media: For somewhat more off-the-cuff, more personal takes and pointers, come join my semi-personal newsletter, Connection Problem.

×

More formal media: For the first time in a long time, I have some things to advocate for (responsible IoT, trustmarks, etc.) and a story to tell. So I’m looking to improve my media presence beyond the occasional, fairly random interview or article. Still figuring out how to best go about it. Any pointers are welcome!

×

If you’d like to work with me in the upcoming months, please get in touch.

×

That’s it for today. Have a great March!

Welcome to the Post-Social Media Era


The last decade was the era of Social Media: Community-driven platforms like Facebook, Twitter, and even LinkedIn have completely changed the way we interact with, and perceive, the world.

(Purely anecdotally: I joined Twitter in 2006, about a year after it launched—and felt I was late to the game. Since then, I think I owe a great deal of my career to the people I met through Twitter.)

Societally, the impact of these platforms has been amazing: They have enabled communities to form, they allowed people with niche interests to find likeminded folks around the globe, and they have empowered groups to advocate and campaign for their causes globally without the need for traditional, large scale campaign infrastructure.

Social media has also made us all (with a caveat: some more than others) commentators and active participants in the global media conversation. In the process, it allowed for real-time fact-checking of, and commentary on, media and politics. For a while, it seemed this was a bottom-up revolution that propelled society toward more truth, easier access to facts and experts, and a more informed public.

Image (Public Domain): U.S. National Archives: Actual Demonstration by the Fire Department Training Station. Photographer: David Falconer.

And it has, to a degree. But at the same time, the same mechanics have also led to large-scale harassment and fake news, and have helped undermine trust in journalism (aka “mainstream media”) and in political institutions like governments and parties. Turns out tools aren’t neutral or apolitical; and even if they were, Bad Guys are really savvy at using tools for nefarious purposes.

By now, the combination and scale of fake news, harassment, and opaque platforms with their black-box algorithms are killing social media as we know it:

Social media first undermined the media’s and institutions’ credibility, and now their own. Facebook and Twitter (the platforms) are the tech world’s functional equivalent of mainstream media; Facebook and Twitter (the companies) are the institutions.

In their place, small, private groups thrive (think Whatsapp), but public social media has peaked.

We’re headed into a social media winter. The post-social era has begun.

What’s long-term success? Outsized positive impact.


For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That’s both boon & bane of my work. On one hand it means I can contribute expertise surgically into larger contexts, on the other it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations—they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan, specifically in Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.

Monthnotes for January 2018


January isn’t quite over, but since I’ll be traveling starting this weekend, I wanted to drop these #monthnotes now. A lot of time this month went into prepping an upcoming project which is likely to take up the majority of my time in 2018. More on that soon.

×

Capacity planning: This year my work capacity is slightly reduced since I want to make sure to give our new family member the face time he deserves. That said, this year’s capacity is largely accounted for, which is extra nice given it’s just January, and it’s for a thing I’m genuinely excited about. Still, I think it’s important to work on a few things in parallel because there’s always potential that unfolds from cross-pollination; so I’m up for a small number of not-huge projects in addition to what’s already going on, particularly in the first half of the year. Get in touch.

×

On Sunday, I’m off to San Francisco for a work week with the good folks at Mozilla (because reasons) and a number of meetings in the Bay Area. (Full disclosure: my partner works at Mozilla.) Last year I did some work with Mozilla and ThingsCon exploring the idea of a trustmark for IoT (our findings).

Image: commons (SDASM Archives)

Should you be in SF next week, ping me and we can see if we can manage a coffee.

×

IoT, trust & voice: More and more, I’m coming around to the idea that voice is the most important—or at least most imminent—manifestation of IoT when it comes to user data. Voice, and how it relates to trust, is what a lot of my work in 2018 will focus on.

×

User profiling in smart homes: Given my focus on voice & trust in IoT this year, I was very happy that Berlin tech & policy think tank Stiftung Neue Verantwortung invited me to a workshop on user profiling in smart homes. It was all Chatham House rules and I don’t want to dive into specifics at this point, but smart homes and voice assistants are worth a deep dive when it comes to trust—and trustworthiness—in IoT.

Connected homes and smart cities

Not least because (as I’ve been hammering home for a long time) the connected home and the smart city are the two areas that most clearly manifest the underlying tensions and issues around IoT at scale: Connected homes, because traditionally the home was considered a private space (at least if you look at the last 100 years in the West), and embedded microphones mean it isn’t anymore. And smart cities, because in public space there is no opt-out: Whatever data is collected, processed, and acted on in public space impacts all citizens, whether they want it or not. These are fundamental changes with far-reaching consequences for policy, governance, and democracy.

×

Worth your time: A few pointers to articles and presentations I found worthwhile:

  • Kate Crawford’s talk on bias in AI training data is ace: The Trouble with Bias [Youtube].
  • TechCrunch has a bit of a top-level explainer of GDPR, Europe’s General Data Protection Regulation that goes into effect in May this year. It’s been widely lauded in Europe (except by the usual suspects, like ad-land) and unsurprisingly criticized in Silicon Valley as disruptive regulation. (See what I did there?) So it came as a pleasant surprise to me that TechCrunch of all places finds GDPR to be a net positive. Worth 10 minutes of your time! [TechCrunch: WTF is GDPR?]
  • noyb.eu—My Privacy is none of your Business: Max Schrems, who became well known in European privacy circles after winning privacy-related legal battles, including one against Facebook and one that brought down the US/EU Safe Harbor Agreement, is launching a non-profit. It aims to enforce European privacy protection through collective enforcement, which is now an option because of GDPR. They’re fundraising for the org. The website looks very basic, but I’d say it’s a legit endeavor and certainly an interesting one.

×

Writing & thinking:

  • In How to build a responsible Internet of Things I lay out a few basic, top-level principles distilled from years of analyzing the IoT space—again with an eye on consumer trust.
  • On Business Models & Incentives: Some thoughts on how picking the wrong business model—and hence creating harmful incentives for an organization to potentially act against its own customers—is dangerous and can be avoided.
  • I’ve been really enjoying putting my weekly newsletter together. It’s a little more personal and interest-driven than this blog, but tackles similar issues around the interplay of tech & society. It’s called Connection Problem. You can sign up here.

I was also very happy that Kai Brach, founder of the excellent Offscreen magazine, kindly invited me to contribute to the next issue (out in April). The current one is also highly recommended!

×

Again, if you’d like to work with me in the upcoming months, please get in touch quickly so we can figure out how best to work together.

×

That’s it for January. See you in Feb!

How to build a responsible Internet of Things


Over the last few years, we have seen an explosion of new products and services that bridge the gap between the internet and the physical world: The Internet of Things (IoT for short). IoT increasingly has touch points with all aspects of our lives whether we are aware of it or not.

In the words of security researcher Bruce Schneier: “The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.”1

But IoT consists of computers, and computers are often insecure, and so our world becomes more insecure—or at the very least, more complex. And thus users of connected devices today have a lot to worry about (because smart speakers and their built-in personal digital assistants are particularly popular at the moment, we’ll use those as an example):

Could their smart speaker be hacked by criminals? Can governments listen in on their conversations? Is the device always listening, and if so, what does it do with the data? Which organizations get to access the data these assistants gather from and about them? What are the manufacturers and potential third parties going to do with that data? Which rights do users retain, and which do they give up? What happens if the company that sold the assistant goes bankrupt, or decides not to support the service any longer?

Or, phrased a little more abstractly2: Does this device do what I expect (does it function)? Does it do anything I wouldn’t normally expect (is it a Trojan horse)? Is the organization that runs the service trustworthy? Does that organization have trustworthy, reliable processes in place to protect me and my data? These are just some of the questions consumers face today—and they face them constantly.

Trust and expectations in IoT. Image: Peter Bihr/The Waving Cat

Earning (back) that user trust is essential. Not just for any organization that develops and sells connected products, but for the whole ecosystem.

Honor the spirit of the social contract

User trust needs to be earned. Too many times have users clicked “agree” on some obscure, lengthy terms of service (ToS) or end user license agreement (EULA) without understanding the underlying contract. Too many times have they waived their rights, giving empty consent. This has led to a general distrust—if not in the companies themselves, then certainly in the system. No user today feels empowered to negotiate a contractual relationship with a tech company on equal footing—because they can’t.

Whenever some scandal blows up and creates damaging PR, the companies slowly backtrack, but in too many cases they were, legally speaking, within their rights: Nobody understood the contract, yet the abstract product language suggests a certain spirit of mutual goodwill between the product company and its users—a spirit that is not honored by the letter of that contract.

So, short and sweet: Honor the spirit of the social contract that ties companies and their users together. Make the letter of the contract match that spirit, not the other way round. Earning back the users’ trust will not just make the ecosystem more healthy and robust, it will also pay huge dividends over time in brand building, retention, and, well, user trust.

Respect the user

Users aren’t just an anonymous, homogeneous mass. They are people, individuals with diverse backgrounds and interests. Building technical systems at scale means having to balance individual interests with automation and standardization.

Good product teams put in the effort to do user research and understand their users better: What are their interests, what are they trying to get out of a product and why, how might they use it? Are they trying to use it as intended, or in interesting new ways? Do they understand the tradeoffs involved in using a product? These are all questions that basic but solid user research would easily cover, and then some. This understanding is a first step towards respecting the user.

There’s more to it, of course: Offering good customer service, being transparent about user choices, allowing users to control their own data. This isn’t an exhaustive list, and even the most extensive checklist wouldn’t do any good in this case: Respect isn’t a list of actions, it’s a mindset to apply to a relationship.

Offer strong privacy & data protection

Privacy and data protection is a tricky area, and one where screwing up is easy (and particularly damaging for all parties involved).

Protecting user data is essential. But what that means is not always obvious. Here are some things that user data might need to be protected from:

  • Criminal hacking
  • Software bugs that leak data
  • Unwarranted government surveillance
  • Commercial third parties
  • The monetization team
  • Certain business models

Some of these fall squarely within the responsibility of the security team. Others are based on the legal arrangements around how the organization is allowed (read: allows itself) to use user data: the terms of service. Others yet require business incentives to be aligned with users’ interests.

The issues at stake aren’t easy to solve. There are no silver bullets, and there are grey areas that are fuzzy and complex.

In some cases, like privacy, there are even cultural and regional differences. For example, to paint with a broad brush, privacy protection has fundamentally different meanings in the US than it does in Europe. While in the United States, privacy tends to mean that consumers are protected from government surveillance, in Europe the focus is on protecting user data from commercial exploitation.

Whichever it may be—and I’d argue it needs to be both—any organization that handles sensitive user data should commit to the strongest level of privacy and data protection. And it should clearly communicate that commitment and its limits to users up front.

Make it safe and secure

It should go without saying (but alas, doesn’t) that any device that connects to the internet and collects personal data needs to be reliably safe and secure. This includes aspects ranging from the design process (privacy by design, security by design) to manufacturing to safe storage and processing of data to strong policies that protect data and users against harm and exploitation. But it doesn’t end there: The end-of-life stage of connected products is especially important, too. If an organization stops maintaining the service and ceases to ship security patches, or if the contract with the user doesn’t protect against data spills when the company is acquired or liquidated, then data that was safe for years can all of a sudden pose new risks.

IT security is hard enough as it is, but security of data-driven systems that interconnect and interact is so much harder. After all, the whole system is only as strong as its weakest component.

Alas, there is neither fame nor glory in building secure systems: At best, there is no scandal over breaches. At worst, there are significant costs without any glamorous announcements. Prevention in healthcare is less attractive than quick surgery to repair the damage, yet it is more effective and cheaper in the long run—and the same goes for security. So hang in there, and the users might just vote with their feet and dollars to support the safest, most secure, most trustworthy products and organizations.

Choose the right business model

A business model can make or break a company. Obviously, without a business model, a company won’t last long. But without the right business model, it’ll thrive not together with its customers but at their expense.

We see so much damage done because the wrong business models—and hence the wrong incentives—drive horrible decision making.

If a business is based on user data—as is often the case in IoT—then finding the right business model is essential. Business models, and the behaviors they incentivize, matter. More to the point, aligning the organization’s financial incentives with the users’ interests matters.

As a rule of thumb, data mining isn’t everything. Ads, and the surveillance marketing they increasingly require, have reached a point of being poisonous. If, however, an organization finds a business model that is based on protecting its users’ data, then that organization and its customers are going to have a blast of a time.

To build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—it’s absolutely essential to pick and align business models and incentives wisely.


  1. Bruce Schneier: Click Here to Kill Everyone. Available at http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html 
  2. See Peter Bihr: Trust and expectations in IoT. Available at https://thewavingcat.com/2017/06/28/trust-and-expectation-in-iot/