
A while ago I had a post about playing with AI-generated paintings. This is a fast-moving field, and between the recent release of ChatGPT and working with machine learning for natural language processing tools in my day job, I thought I’d share some more insights.
First, can you even tell the difference?
Staying with pictures for a moment, can you tell if these artworks were created by artificial intelligence or a human? Go on, take the quiz and let me know how you did. Even if you’ve played a lot with the various AI art engines, I’m sure you’ll be mildly shocked.
It Gets Creepier
The ABC ran an excellent article about this: A journey inside our unimaginable future. It tells the story of Loab, a woman created with these AI models who is constantly surrounded by disturbing images straight out of the horror genre. The theme was hauntingly persistent across retries. The researchers even tried generating text to interview her (via text-based AI models).
Whether you want to see any deeper meaning in this or not, there is no doubt that these tools will have a dramatic impact on the future of what we consume as entertainment (or are forced to consume as advertisement). It really is an excellent article, and I heartily recommend you read it.
What’s really interesting, though, is how the article’s authors used the GPT-3 engine (the newly released ChatGPT is based on the newer GPT-3.5 engine) to create a “voice” for Loab. They asked it questions and let it respond.
That it certainly did. And hauntingly so.
So should we welcome our new artistic overlords?
Not so fast.
Remember that the AI has to learn from sample data — whether artworks or texts. As mentioned in the previous article, there are numerous complaints about the AI engines using copyrighted works (text and images) to learn from. The latest is Lensa, a phone app that uses AI to “beautify” your images. Except that you can sometimes see signatures and copyright marks from the original source images.
Copyright, of course, doesn’t matter until someone asserts violation and damages in court. Morgan Freeman may or may not find this amazing deepfake amusing, but artists protesting is just publicity — that is, until someone decides to deepfake a Disney princess for profit. Then we’ll learn the implications.
When it comes to natural language, picking out the original text is a bit blurrier, but the problem is conceptually similar. There is a definite regression to the mean. You can train the AI engine on a set of great examples of something to get variations of the good content, but that’s still exactly that — variations of the same thing.
This does have excellent applications. It could give tools like Grammarly or ProWritingAid better rewrite suggestions for improving the style and readability of a paragraph. I’ve seen someone build a “business email beautifier” for a sole trader: that person was a gardener setting up his own business. While he was good with plants, his communications with customers were of the “one-line, no-punctuation” type. By applying AI, his emails were “translated” into proper customer communications that clearly conveyed his message. It’s a great tool to help someone, assisting them in the areas where they struggle while letting them concentrate on their core business.
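To make that concrete, here is a minimal sketch of what such an “email beautifier” might look like under the hood. This is my own illustration, not the tool I saw: it assumes the OpenAI Python client and a GPT-3.5-class model, and the prompt wording, function name, and example input are invented for the sake of the example.

# Minimal sketch of an "email beautifier": wrap a rough, unpunctuated note
# in a rewriting prompt and let a GPT-3.5-class model tidy it up.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def beautify_email(rough_note: str) -> str:
    """Turn a terse, unpunctuated note into a polite customer email."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You rewrite rough notes from a gardener into short, "
                        "polite, clearly punctuated emails to customers. "
                        "Keep the original meaning; do not invent details."},
            {"role": "user", "content": rough_note},
        ],
        temperature=0.3,  # low temperature: we want tidying, not creativity
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(beautify_email("cant do tuesday rain forecast will come thursday same price"))

The point is that the model is doing a well-bounded, low-stakes job here: tidying tone and punctuation, not generating content from nothing.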
On the other hand, when someone asked ChatGPT how AI can help Product Managers, it gave all the right answers… which anyone who has been in a corporate environment for longer than five minutes could tell were empty buzzwords. It’s no good telling someone that AI could improve their communications to help clarify the product vision when they’re stuck between an angry executive and a demanding customer.
But when it comes to creative outlets, this would be an even bigger failure. That’s where the regression to the mean, or simply regurgitating older content, comes in. Because the AI is trained on a sample set of content that is defined as “good”, it will try to emulate that specific content. Unique voices would be blended and faded, and original ideas rehashed.
I could wax polemical about middle-grade lit (where a series is specifically built to be repetitive, to help children build knowledge via familiarity with its structure) or trashy airport romances that are distressingly all the same (and whose authors no doubt make so much more money than I do). I’ll also freely admit that there is something to literary theory, that story structure is important, that clarity of prose matters.
But I don’t think the AI tools we have available yet are truly generative. A prompt is not the same as a story. In the same way that you can give a prompt to the image bots and get a blended image back, you can give a prompt to a chatbot and get blended text back. But it’s blended; it’s simply a rehashing and rearranging of previous content.
Every author — heck, every start-up founder — knows that ideas are cheap; it’s execution that matters. It’s about finding your own voice, creating twists that aren’t formulaic, expressing your imagined worlds in a way that connects you to your readers. A good editor will help you clean up structure and prose while retaining your voice. A bot tool will just put it through the wash, leach out the colour, and “standardise” it to blandness.
Considering how many things occupy an author — from actual writing through editing to marketing — I understand the allure of “time-saving” tools, some panacea for all our problems that, if we just unlocked it, would let us happily write all day long. And, as with all pots of gold at the end of a rainbow, this isn’t quite it. While there may be useful applications for such tools in the creative space (improving on the dreaded blurb, maybe), for the core of what being an author is, for the actual creative writing, I’d urge caution before using these tools.
Help me get back to writing! Read the free short stories and novels, and leave reviews! I got the next volume, In Victrix, written and half-edited. As soon as I’m settled in the new place (oh, didn’t I mention? We’re moving interstate in December), I’ll get back to it…
I believe that the only artists who can actually say something meaningful about these tools are those who are experimenting with them. Not just having “opinions”, but really having an understanding of what kind of output they can expect when they put something into the machine.