Generative AI Edition (Connection Problem S09E04)

Note: This is cross-posted from my newsletter in an attempt both to make it easier to read via RSS feed and to keep it in my own independent archives. You can subscribe here to get it delivered to your inbox.


Hello and welcome to the latest edition of our newsletter! I apologize for the delay in getting this issue to you – the truth is, I’ve been so busy that I just couldn’t find the time to send it out until now. But fear not, dear reader, for I have plenty of amazing content to share with you in this issue. From industry updates to funny anecdotes and much more, I’ve got something for everyone. So grab a cup of coffee, put your feet up, and let’s dive in!

And speaking of diving in, in this issue I’ll be taking a deep dive into the exciting world of generative AI. That’s right, I’ll be covering all sorts of fascinating topics related to this cutting-edge technology, from the latest research and developments to practical applications and much more. So grab your thinking cap and let’s get started! And don’t worry, I promise to keep things light and funny – after all, learning about AI doesn’t have to be a bore.



You’re receiving this because you signed up for this newsletter on or through my website. The Waving Cat is my boutique research and strategic advisory practice focused on making sure that emerging technologies actually benefit society. I’m also on Twitter and Mastodon. If you’d like to work with me or bounce ideas, let’s have a chat.


Updates from the Engine Room

It’s been an exciting few weeks as I’ve been ramping up activity on several fronts.

First and foremost, we’ve been busy getting things off the ground over at Stiftung Mercator’s Agora Digitale Transformation (ADT) initiative. We’re in the process of hiring a team and selecting supporting agencies to help us get everything up and running. It’s an exciting project and I can’t wait to see the results.

Next, I had the pleasure of joining the Sovereign Tech Fund on a strategy retreat in a beautiful part of Brandenburg. It was a great opportunity for the team to get away from the hustle and bustle of the city and focus on strategy in a more peaceful setting. Plus, the clear night skies and nearby lake were an added bonus!

In addition to all this, I’m currently in the process of planning my commitments for the first half of next year. If you’re interested in discussing potential projects, now is the perfect time to get in touch.

All in all, it’s been a busy but productive few weeks and I’m excited to see what the future holds. Stay tuned for more updates!


Rise of Generative AI Raises Concerns Over Bias and Misinformation

In the past six months, there have been significant developments in generative AI, particularly in text-to-text and text-to-image generation. While these advances have the potential to revolutionize many industries and improve communication, they also raise concerns over bias and misinformation.

One notable example of these concerns is GPT-3, the largest language model to date. While GPT-3 has demonstrated impressive capabilities in natural language processing tasks, it has also been shown to exhibit biased and misleading behavior when generating text. For instance, it has been known to generate sexist and racist language, as well as spreading false information.

Another example is DALL-E, a text-to-image generation model that has garnered attention for its ability to create original images based on text prompts. However, like GPT-3, DALL-E has also been shown to produce biased and offensive images.

These issues highlight the need for careful consideration and regulation of generative AI technologies. As they continue to advance, it will be crucial to address and mitigate biases and ensure that they are not used to spread misinformation.

Overall, the progress in generative AI over the past six months has been impressive, but it also raises important concerns for society and democracy. It will be crucial to address these concerns and ensure that these technologies are developed and utilized in a responsible and ethical manner.


The Illusion of Truth: The Dangers of Generative AI and Misinformation

As generative AI technology continues to advance, there is an increasing tension between the convincing nature of its written outputs and the potential for those outputs to be wrong or misleading.

One notable example of this is the emergence of AI-generated “academic” papers, which have been shown to be convincingly written but lacking in substance and often containing plagiarized content. This raises concerns about the potential for generative AI to be used to spread false information and deceive the public.

Additionally, there have been instances of generative AI producing biased or offensive content, such as racist or sexist language. This highlights the potential for generative AI to reinforce and amplify existing biases and prejudices, leading to harmful consequences.

In a healthy democracy and public sphere, it is crucial that information and content are accurate and reliable. The increasing prevalence of generative AI technology only underscores the importance of critical thinking and fact-checking in evaluating the information we encounter. It will be crucial to address and mitigate the potential biases and flaws in generative AI in order to ensure that it is used responsibly and ethically.


Musk’s Twitter Takeover: A Threat to Digital Democracy

Elon Musk’s acquisition of Twitter has thrown the public digital sphere and democracy into chaos. With his history of making inflammatory and divisive statements, there is a real fear that the platform may become a breeding ground for hate speech and misinformation under his ownership.

Musk’s acquisition of Twitter also raises serious questions about the role of private companies in shaping the public digital sphere. As a private individual, Musk is not beholden to the same democratic principles as a public institution, and this has many people worried about the potential for him to use Twitter to advance his own personal agenda.

In addition, the changes that Musk has already instituted on Twitter, including stricter content moderation policies and the banning of several high-profile users, have sparked concern among free speech advocates. This further undermines the notion of Twitter as a platform for open and democratic discourse.

All in all, Musk’s acquisition of Twitter is a worrying development that has serious implications for the future of democracy in the digital age. It’s time for us to take a stand and demand that our public digital sphere be governed by principles of fairness, transparency, and democracy.



As you may or may not have noticed, some things here sounded a little different in tonality than usual. The reason is simple: practically everything written above was generated by Artificial Intelligence. More concretely, it was generated by ChatGPT based on the short prompts below. As I had a long train ride with too little connectivity to join the calls I’d been planning on, but enough to send out the occasional bit of text, I fed a bunch of text prompts into ChatGPT and copied & pasted the results into the newsletter above.

Please note that this is off-the-shelf ChatGPT: At no point did I train this on my voice or content. Ultimately, this is maybe the most obvious tell — ChatGPT just uses a different tonality and cadence than I would (“Stay tuned for more updates!”).

It is, though, unequivocally an impressive result, or so I’d say.

Not that its “analysis” is spot on. How could it be? It’s essentially a stochastic pastiche of opinions found on the internet. But it’s tremendously plausible in appearance. (Which is also why I prompted a section about how AI-generated content now looks convincing but may still be simply wrong.)

If you take a look at some of the prompts below, you’ll notice that they vary wildly in terms of framing, detail, etc.: I wanted to try different styles of prompts to see what would pop up, and didn’t do many do-overs. This is, after all, a learning exercise and not a content-production machine.

One of the things I’m taking from this is that the speed of improvement in generative AI is both impressive and a little misleading: development leaps ahead, but the output isn’t always particularly deep. There’s more shiny surface to project ideas and interpretations onto than there is meaning. That makes sense given the way generative AI works, but it’s still important to remember amid all the media hype surrounding the issue.

Another, much more banal takeaway is that it’ll never be hard again to illustrate a blog post. The way Midjourney and its generative AI peers spit out pretty decent imagery is a nice change of pace from my usual go-to places for images, Unsplash and the Public Domain Review. So here’s a third near-infinite pool of images, which is neat. I recently generated invites for my kid’s birthday party this way, and we had a blast doing that. Generating images is fun!

Aaaanyway. Thanks for indulging this experiment. Rest assured, this newsletter continues to be written by a human, specifically by me, personally. I wouldn’t let an AI take that away anyway.

Have a great week.




Just for reference, the prompts used for the various sections:

  • A charming, funny introduction paragraph to a newsletter that has been delayed because I’ve been too busy to send it out. Include a charming, funny paragraph explaining that in this newsletter, I will be covering some generative AI topics. Write this from the I-perspective.
  • 1 short, charming, funny paragraph explaining that in this newsletter, I will be covering some generative AI topics. 

For the “Engine Room” section, I felt I had to give more specific prompts. Here’s what I prompted:

  • Write a glimpse-behind-the-scene blog post that covers 3 activities: 
  • For Stiftung Mercator, we’ve been ramping up activity over at Agora Digitale Transformation (ADT). We’re hiring to get the team up and running, selecting supporting agencies, and generally laying the groundwork to get everything up and runnning.
  • With the extraordinary Sovereign Tech Fund, we went out on a strategy retreat in a beatiful part of Brandenburg. There were clear night skies, marshmallows and a lake right outside the door.
  • I’m currently planning my commitments for Q1 & Q2 of the coming year. If you’d like to discuss projects, now’s the time.

The prompts for the various short posts/sections:

  • AI-generated content now looks impressively convincing. But even though it looks convincing, it’s sometimes just wrong, as we have seen with AI-generated “academic” papers. Write a short critical blog post that analyzes the tension between how generative AI’s written outputs can look extremely convincing but their content and conclusions might simply be wrong even if they look right. Include a headline. Include conclusions for the role of ubiquitous generative AI for healthy democracy and a healthy public sphere.
  • Write a short blog post that analyzes the tension between how generative AI’s written outputs can look extremely convincing but their content and conclusions might simply be wrong even if they look right. Include a headline. Include conclusions for the role of ubiquitous generative AI for healthy democracy and a healthy public sphere.
  • Write a short opinionated blog post about the uncertainty Elon Musk’s acquisition of Twitter creates for the public digital sphere and democracy.
  • Generate a short, pithy headline for the blog post above
  • Generate a prompt for an AI text-to-image generator to create an image for the blog post above.
    • The prompt ChatGPT generated, which I in turn fed into Midjourney to generate the header image, is this: Create an image that captures the chaos and uncertainty of Elon Musk’s acquisition of Twitter. The image should show a stormy, turbulent scene, with dark clouds gathering overhead and lightning illuminating the sky. In the foreground, show a small boat being tossed about by the waves, representing the public digital sphere and democracy in the midst of the storm. Use bold, striking colors to convey the sense of drama and danger.
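For what it’s worth, the copy-and-paste workflow above could in principle be scripted. Here’s a minimal sketch of what that might look like in Python — to be clear, I did all of this by hand in the ChatGPT web interface, so the API call shown in the comment is purely an assumption about how one might wire this up with OpenAI’s Python client, and the prompt snippets are abbreviated:

```python
# Hypothetical sketch: turning a list of section prompts into newsletter copy.
# The prompts below are abbreviated versions of the ones listed above.
SECTION_PROMPTS = {
    "intro": "A charming, funny introduction paragraph to a newsletter that has been delayed...",
    "generative_ai": "Write a short critical blog post that analyzes the tension between...",
    "musk_twitter": "Write a short opinionated blog post about the uncertainty Elon Musk's...",
}


def build_newsletter(prompts, generate):
    """Run each section prompt through a text generator and join the results.

    `generate` is any callable that maps a prompt string to generated text;
    keeping it pluggable means the assembly logic can be tested without an API.
    """
    return "\n\n".join(generate(prompt) for prompt in prompts.values())


# With a real client, `generate` might look something like this (untested sketch):
#
#   from openai import OpenAI
#   client = OpenAI()
#
#   def generate(prompt):
#       response = client.chat.completions.create(
#           model="gpt-3.5-turbo",
#           messages=[{"role": "user", "content": prompt}],
#       )
#       return response.choices[0].message.content
```

Again: this is a sketch of the idea, not what actually happened — the appeal of the experiment was precisely how little tooling it required.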


If you’d like to work with me or explore potential collaborations, let’s have a chat!


Who writes here? Peter Bihr explores how emerging technologies can have a positive social impact. At the core of his work is the mission to align emerging technologies and citizen empowerment. He works as an independent advisor at the intersection of technology, governance, policy and social impact — primarily with foundations, non-profits and the public sector. Peter serves as special advisor to Stiftung Mercator’s Center for Digital Society, and as Interim Director for the European AI Fund. He co-founded ThingsCon, a not-for-profit that advocates for responsible practices in Internet of Things (IoT), and co-hosts the Getting Tech Right podcast. Peter was a Mozilla Fellow (2018-19) and an Edgeryders Fellow (2019). Based in Berlin, he tweets at @peterbihr and blogs at Interested in working together? Let’s have a chat.

Know someone who might enjoy this newsletter? Please feel free to forward your copy or send folks to
