GenAI is like an Angle Grinder

There’s a lot of froth about AI — specifically Generative AI — these days.

I have been a creator (author) for nearly a decade. I’ve also, in my day job, been building software products for 25 years, managing and launching new products for 15, and have had AI modules in my products for about 6 years (long before it was cool).

I’ve been around long enough to be a cranky old product manager, and to be frustrated by the misunderstandings on both sides.

Think of AI as an angle grinder. It’s a power tool that’s great at doing certain jobs (err, grinding angles — I’m a software guy, give me a break). Currently, there are two distinctly different problems with it:

  1. People use it to prune their roses and paint the shed.
  2. Companies build angle grinders from parts they stole from the neighbours.

The first is a hype problem, with people touting it as a panacea to solve all problems. For those of you who still believe in the tooth fairy, let me remind you that silver bullets and magic hammers just don’t exist.

The second problem is one of business ethics. It’s not that angle grinders are bad as an idea, it’s that some of the leading companies in the field are dishonest.

It’s absolutely possible to train smaller models on ethically-sourced curated content (which will likely cost less and perform better), and use them to solve problems they are well suited to.

Basically, those models are built to produce the statistically likely average response. Sometimes they pick a less likely response and we call it a hallucination, but it’s still just number crunching (AI is linear algebra at scale ;). It’s useful when you want the bland average, not so useful when you want accurate content (though that can be mitigated with some techniques), and entirely useless for creating art or communication (expressing human feelings and intent).
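Since the “statistically likely average” point is doing a lot of work here, let me sketch it in code. This is a toy, with made-up words and probabilities that have nothing to do with any real model — it only illustrates the difference between always taking the most likely continuation (the bland average) and sampling, which occasionally lands on a low-probability pick (what we’d call a hallucination):

```python
import random

# Imaginary next-word distribution after the prompt "The sky is".
# The words and probabilities are invented purely for illustration.
next_word_probs = {
    "blue": 0.70,
    "clear": 0.20,
    "grey": 0.08,
    "tartan": 0.02,  # unlikely, but not impossible
}

def pick_greedy(probs):
    """Always return the single most likely word: the bland average."""
    return max(probs, key=probs.get)

def pick_sampled(probs, rng):
    """Sample a word proportionally to its probability. Every so often
    this lands on a low-probability word like 'tartan' — a hallucination."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
print(pick_greedy(next_word_probs))  # always "blue"
print([pick_sampled(next_word_probs, rng) for _ in range(5)])
```

Note there is no intent anywhere in there — no model of the sky, no reason for choosing a word. Just weighted dice.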

If you want to understand why that is so in more depth, I urge you to read these two amazing New Yorker pieces (please! they’re amazing and important):

  • ChatGPT Is a Blurry JPEG of the Web does a wonderful job of explaining how the training condenses everything like a lossy jpeg compression, and how it’s basically returning a statistical average — not anything original.
  • Why A.I. Isn’t Going to Make Art presents an interesting distinction between skill and intelligence, and how generative models are pretty much neither. As a striking example, your child’s note with “I love you” on it is absolutely average in terms of originality or content — but that’s entirely beside the point.

You can create models that are trained and will produce technically great output — but that’s just raising the bar of the average. The most technically gorgeous image doesn’t mean anything compared to a child’s stick drawing of their family, because it lacks the intent.

Usually at this point, tech bros come up with “the prompt is the intent.” Something like this:

[Image: screenshot of such a post. Caption: if you didn’t read this ironically, something is wrong with your moral compass.]

While it’s true that you normally see only the best results (not the 50 crap ones, unless they are so wrong the originator shared them for hilarity), that isn’t the point. Creating something of value requires intent and effort.

Claiming that AI generation is the intent without the effort is like setting your alarm clock to go to the gym, but using a forklift to do the exercises. Or, to put it in terms familiar to the tech world, it’s like thinking a start-up idea is all it takes, and expecting a multi-million dollar exit without putting in the effort of building the company.

As the saying goes, genius is 1 percent inspiration and 99 percent perspiration. See, it’s in the effort of creating that you often realise the intricacies and complexities of the idea. There is little value, both to the person creating and to the consumer, in something that is superficially technically correct but lacks the depth of human engagement in its creation.

In simpler words: I write my stories because I want to tell them. I want to find out for myself where they go and how they twist. Only I can create my stories, and I can only do it through consistent effort.

Gen AI, on the other hand, just creates something that could be technically correct — but soulless. It’s having a glimmer of an idea but never following through, never thinking for yourself whether the idea has merit. It’s like accepting all the editing suggestions Word, Grammarly, ProWritingAid and their ilk make. It removes the humanity. This is also why I use human editors. Here is a screenshot of me kidding with my editor about introducing spelling mistakes and playing with slang. She knows when the characters are drunk, what voice I am aiming for, and can help me create better stories:

Part of the reason is the misunderstanding of what intelligence is. Ask any psychologist, and they could give you the textbook definition, while admitting that it’s a deep philosophical issue without a clear answer. However, it’s easy to see that intelligence isn’t just the ability to string words together into coherent sentences.

The LLM hype is born from taking a specific tool or technique and trying to apply it to general problems. On the surface the massive data sets make it look plausible, but when you dig into it you find that it does not, and cannot, solve those problems. So it’s not “create anything without effort”, it’s not “the only skill you need is prompt engineering”, and, happily, it’s not “the singularity is coming and Skynet is upon us.”

It’s simply a case of some dodgy angle grinder manufacturers riding out a trillion dollar hype cycle.


I do think the industry is changing, and I do hope that those changes are for the better. Gartner is already saying we are past the peak of the hype, even while OpenAI is admitting it can’t make money without copyright violations (poor Sam is about to be kicked out of the private space-ship club, can we pass the hat around?), the RIAA is taking AI music providers to court, and governments are introducing legislation.

It will take a while, but it’s important to understand that Gen AI can be useful as an assistive technology. It can’t — and ultimately won’t — remove the need for artists and creators. In the same way that Photoshop and Illustrator didn’t kill photography and graphic design, getting a license to them doesn’t suddenly make you a graphic designer. You need the foundational education (which involves doing things by hand and putting in the effort) in order to utilise those tools.

But I see people complain that “all AI is bad, why does my doctor use it to take notes?” You aren’t paying the doc for her note-taking skills, you’re paying her for the 20 years of experience and effort she put in at med school. If GenAI can shave a minute off trivial tasks and she can see two more patients per day — everyone wins. This is exactly what an AI-powered application (one that was built ethically and safely) would be great for.


Anyway, I hope you found this useful. Do go back and read the New Yorker articles! They are amazingly well written and do a far better job explaining some of the intricacies.

If you’d like to see me preach the same about ethics and product management principles (the right tools used to solve the right problems), you can visit my professional blog. The top three are recent columns on those subjects, while the last explains what the Hype Cycle is and the fundamental error of using it to replace human contact:

Would love to hear from you! Leave a comment with your thoughts.


Want to see what I write without AI? Check out the free short stories, or the novels — including the recently released In Victrix with its introduction by Harry Turtledove!
