Before we head into the long Easter holiday weekend, here’s a quick rundown of what happened in March.
Mozilla Fellowship & an open trustmark for IoT
I’m happy to share that I’ve joined the Mozilla Fellows program (specifically the IoT fellows group, where I’ll work with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.
There will no doubt be a more formal announcement soon, but here’s the shortest of blog posts over on ThingsCon.
(As always, a full disclosure: My partner works for Mozilla.)
I had already shared some first thoughts on the IoT trustmark. Now that it’s becoming more official, we’ll have a lot more to share on its development. You can follow along here and over on the ThingsCon blog.
By the way, this needs a catchy name. Hit me up if you have one in mind we could use!
Zephyr interviews: The Craftsman, Deutsche Welle
We were humbled and delighted that Gianfranco Chicco covered Zephyr Berlin in the recent issue of his most excellent newsletter, The Craftsman. Links and some background here.
We also had an interview with Deutsche Welle. We’ll share it once it’s available online.
It’s great that this little passion project of ours is getting this attention, and we’re truly humbled by the high-quality feedback and engagement from our customers. What a lovely crowd!
Learning about Machine Learning
I’ve started Andrew Ng’s Stanford Machine Learning course on Coursera. Due to time constraints it’s slow going for me, and, as expected, it’s a bit math-heavy for my personal taste. But even if you don’t necessarily aim to implement any machine learning code yourself, there’s a lot to take away. Two thumbs up.
Notes from a couple of events on responsible tech
Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: It was about AI in the context of autonomous weapons systems. I was positively surprised to hear that
- All panelists agreed that autonomous weapons systems, if deployed at all, should only be used with humans in the loop.
- There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
- A researcher from the Bundeswehr University Munich pointed out that autonomous systems introduce instability, raising the possibility of flash wars triggered by fully autonomous systems interacting with one another (much like flash crashes in stock markets).
- In the back end of military logistics, machine learning already appears to be a big deal.
Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (abbreviated HiiG, after the German name). It was part of a fact-finding trip across various regions and tech ecosystems to figure out which issues matter most from a regulatory and policy perspective, with the findings feeding into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn’t, so that’s cool.
Both events are also covered in my newsletter, so I won’t replicate everything here; you can dig into the archives from the last few weeks.
Thinking & writing
Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.
If you’d like to work with me in the upcoming months, I have very limited availability, but I’m happy to have a chat.
That’s it for today. Have a great Easter weekend and an excellent April!