Think Before You AI: The Pitfalls of Automated Content Creation
Catching the AI Wave, Missing the Swim Trunks
Much has been written about the work being done to prevent generative AI apps such as ChatGPT, Claude, and others from generating “bad” content: biased, bigoted, pornographic, or exploitative material. Where to draw the line is an interesting question, and as a woman who has lived in various parts of central and southern Europe, I can tell you it’s a line that varies by culture!
Pause for a moment and think about what you would find too offensive for a computer to generate. Think about it from the perspective of whoever might view the content… you, your kids, your mom or grandmother, your religious leader.
Now look at this image, generated by DALL-E 3 via ChatGPT 4 for my article yesterday.
ChatGPT and DALL-E 3 Sucked Me In
Here’s how I got to that image: a synopsis of my dialogue with ChatGPT (my prompts in bold, explanations and notes in italics):
I'm writing an article titled "Don't Be Left Behind: AI Apps Are Transforming Nonprofits Now." Its text is below. Suggest cover images that would fit the theme of the title. {Then I pasted the text from yesterday’s article, linked here}
ChatGPT/DALL-E replied with five suggested prompts representing different image themes and asked if I wanted to proceed with any. I replied: Yes, #2, #4, and #5.
It generated a few images. I replied, Generate more like #3.
More images, and I felt that we were getting closer. I replied, How about some images that use the surfing theme of #1 without the people on top of the wave. And can you make it photorealistic with multiracial people.
Very close now … four pretty good images. I replied one last time, Generate more like #4.
DALL-E responded with four new images. I chose the third one, whose prompt (generated by DALL-E and reported back to me) was “The beach ambiance is enhanced by waves shimmering with tech icons. Multiracial surfers with their boards converse, symbolizing the harmony of humanity and technological progress.”
That exchange took a bit over five minutes, and I was totally into it! The cycles of back and forth helped me refine what I was looking for in terms of image content, and the time DALL-E needed to generate each batch (around 30 seconds in the middle of a Monday afternoon) gave me a moment to step back and admire or critique the aesthetics.
In short, they sucked me in. I was totally absorbed in seeing what new ideas would come with each round of images and pondering how I might guide the exchange toward finding “just the right one.”
I missed the, um, missing swimsuits…!
The Mistake I Made
Where I went wrong (and it’s an easy trap to fall into when you’re caught up in an AI chat dialogue) is that I didn’t pause to look closely at the AI-generated image before sharing it.
Shame on me!
Cautionary Tales About AI Gone Wrong
My exposure (heh heh, get it? exposure?!) of this image to the small StrefaTECH audience is mildly embarrassing, perhaps slightly entertaining, but quite benign.
There have been other examples of AI gone wrong that went viral, though, with potentially much more severe consequences. The AI community has colloquially dubbed these errors “hallucinations,” a term that downplays their potential severity. A few notorious examples from the last few months¹:
The National Eating Disorders Association (NEDA) had to take down its AI-powered chatbot, dubbed “Tessa,” after some users reported negative experiences with it. The chatbot was giving harmful and irrelevant advice to people coming to it for support, such as urging one user to count calories and try to lose weight after the user told the tool that they had an eating disorder.
In May 2023, a lawyer named Steven Schwartz submitted a brief to the Southern District of New York that cited at least six nonexistent cases fabricated by OpenAI’s ChatGPT. The judge in the case, Kevin Castel, called the situation “an unprecedented circumstance” and ordered Schwartz to show cause why he shouldn’t be sanctioned for citing fake cases and using a false and fraudulent notarization.
In a promotional video released by Google in February 2023, Google’s AI chatbot Bard incorrectly claimed that the James Webb Space Telescope took the first image of a planet outside our solar system.
The Bottom Line
… Pause …
… Take a deep breath or a short break …
… Verify …
… Look more closely …
Are you sure it’s accurate?
Are you positive there’s nothing objectionable??
… Check just one more time before hitting Send/Post/Save …
But don’t run away, put your head in the sand, or give up!
These AI technologies are here to stay, they’re improving rapidly, and they’ll work best for those who practice using them … and who develop habits that keep them from circulating images of surfers in their birthday suits.
¹ I found these examples by using Bing, which I chose because it’s connected to the internet (so it can cite examples from this year) and provides citations. My initial prompt: “Give 5 examples of where AI hallucinations or content containing pornography/biases have gone viral in 2023. Describe each in 1-2 sentences and give a link to a citation describing the example.” As is often the case with AI chatbots, I had to continue the conversation to get what I wanted, but it only took a few minutes.