I Will Not Give Up My Mistakes For Robots

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

I often discuss the impact of AI on creativity with Serdar. We’re both authors who work in technical fields, so it’s something both personal and intimate for us. You can probably guess neither of us is happy about it – and being authors, we like to discuss that, often at length.

Serdar recently did a blog post on LLMs and intelligence, and it is well worth reading, like all of his work. One thing he discusses in the post, and in our own discussions, is how LLM use treats writing as a product. That fascinates me, because there are people who want to do creative work but don’t want to be creative – they want to push a button and get a product.

I could go on about the psychology of this – and indeed I probably will in time – but these are people who want results without making mistakes of their own. You can’t decouple creativity from mistakes, false starts, false ends, and sometimes just producing utter crap. Those aren’t problems; they’re part of creativity.

Creativity is not a linear, mechanical process, as much as we sometimes want it to be. Creativity snags on edges; creativity takes strange detours that somehow get you to the destination more effectively. I’m sure you’ve seen human-made creative works that were created just a bit too mechanically, and there’s something wrong when you partake of them, a kind of metallic taste in your mind.

Part of this creative work is screwing up sometimes in epic ways. Actually, I’m sure if you’re any kind of creative, you’ve made some awful stuff, and trust me so have I.

Anyone who writes, draws, cosplays, or acts has a mental list of things they regret. They went out there, did the thing, published the book, went to the audition, and completely and utterly whiffed it. Creativity in its unpredictable glory gives us infinite things to make and infinite ways to humiliate ourselves.

Creativity requires mistakes, and sometimes you don’t know if you’re making one until you’re done with a work. To complete a work, even if it turns out to be lousy, is to fully explore your ideas. So often we have to get something out if only, upon completion, to finally understand why it was a stupid idea. That’s fine; that’s what creativity is all about.

Even the journey is necessary. To wrestle with a concept. To implement it. To get it out. Every terrible novel or lousy cosplay or mediocre piece of art is a testimony that someone could get it done and learned on the way. They might not be thrilled with the result of the journey, but at least they made it.

I think this is why some trashy works and B- or Z-grade films fascinate me. Their flawed nature reveals the author’s dreams, ambitions, and efforts. Bad as they are, there’s also a drive there you can feel and relate to.

Creativity-as-product takes away all these passionate, painful, wonderful mistakes. It takes away the informative disasters and the joy of hardheaded persistence against your own good sense. It is just pushing a button; at best you become a better button-pusher, but you don’t become more creative.

To make creative work, even if you make something awful, you need to create. You need to be that author or artist. You need to grow from the experience, even if it’s painful. It is to be, in a way, a better person for what you did – even if that better person is the one who admits “my writing is crap” and moves on to something else.

Just pushing a button and pummeling the resulting writing product into a marketing-shaped form isn’t creative. No matter how well the work sells, you run the terrible chance you won’t screw up as much as you need to.

Steven Savage

Long-term Language Misery


AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption, or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, these systems really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah, it’s a talk about AI, but I’m going to call them LLMs. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry about how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by the “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but the fact that we’ve been listening to the biological equivalent of them for so long has got to be up there.

I suppose I can hope that critiques of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

AI: Same As We Never Admitted It Was


(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI.  I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)

Fears of “AI” damaging human society are rampant as of this writing in May of 2023.  Sure, AI-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales and there are semi-laughable but disturbing political ads.  “AI” seems to be a fad, a threat, and a joke all at the same time.

But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit.  Let me note that bullshit has haunted human society for ages.

Disinformation has been with us since the first criminal lied about their whereabouts.  It has existed in propaganda and prose, skeevy gurus and political theater.  Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.

Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it.  We got really good at doing this, and now we’ve got a new tool.

So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?

Perhaps some people are idealists.  To admit AI is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart.  Once you admit that, you have to acknowledge this has always been happening, and that many institutions and individuals today have been happily propagandizing for decades.

Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations.  The internet was never going to “save” us, whatever that means.  It was just a tool, and we could have used it better.  “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.

Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re easily replaced with “AI.”  It’s a little hard to admit how much of what you do is just lying and dissembling, period.  It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.

Until we admit that society’s vulnerabilities to “AI” exist because of issues that have been with us for a while, we’re not going to deal with them.  Sure, we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.

Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit.  Now that’s a doomsday scenario.

Steven Savage