The Money In Cleanup

I have an acquaintance who helps migrate businesses off of ancient and inappropriate databases onto more recent ones. If you wonder how ancient and inappropriate, let me simply state “not meant for industry” and “first created when One Piece the anime started airing” and you can guess. Now and then he literally goes and cleans up questionable and persistent bad choices.

In the recent unending and omnipresent discussions of AI, I saw a similar proposal. A person rather cynical about AI mused someone might make a living in the next few years backing a company’s tech and processes OUT of AI. Such things might seem ridiculous, until you consider my aforementioned acquaintance and the fact he gets paid to help people back out past decisions. Think of it as “migration from a place you shouldn’t have migrated to.”

It’s weird to think that in technology, which always seems (regrettably) to be about forward motion, there’s money in reversing decisions. Maybe it was the latest thing and now it’s not, or maybe it seemed like a good idea at the time (it wasn’t), but now you need someone to help you get out of your choice. Fortunately there are people who have turned “I told you so” into a service.

I find these “back out businesses” to be a good and needed reminder that technology is really not about forward. Yeah, the marketing guys and investors may want it to be, but as anyone who’s spent time in the industry knows, it’s not the case. Technology is a tool, and if the tool doesn’t work or is a bad choice, you want out of it. The latest, newest, fastest is not always the best – and may not be the best years later. Technology is not always about forward, even if someone tells you it is (before they sell you yet another new gizmo).

Considering the many, many changes in the world of tech, from social media to search to privacy, I wonder how much more “back out businesses” might evolve. Will there be coaches to get you to move to federated social media? How can you help a company get out of a bad relationship with a service vendor with leaky security and questionable choices? For that matter can we maybe take a look at better hosting arrangements and websites that aren’t ten frameworks in a trenchcoat?

I don’t know, and the world is in a terribly unpredictable state. But I’m amused to think that somewhere in my lifetime the big tech boom might be “oops, sorry.” Maybe we can say “moving away is really moving forward,” get some TED talks, and make not making bad immediate choices cool.

Steven Savage

Long-term Language Misery

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption, or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, they really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah it’s a talk about AI but I’m going to call them LLM. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but how long we’ve been listening to the biological equivalent of them has got to be up there.

I suppose I can hope that critique of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

AI and Chatbots: Better Someone To Hate Than A Machine


AI and Chatbots are in the news as people want to use them for everything – well, at least until reality sets in.  Now I don’t oppose Chatbots/AI or automated help with a humanized interface.  I think there’s potential there that will make our lives better.  They really are spicy autocomplete and there’s a role for that, even if we all remember how we hated Clippy.

The problem is that there are too many cases where people want to use so-called AI to just replace humans.  I think it will go wrong in many ways because we want people to connect to, even if only to hate them.

If you’ve ever screamed “operator” into a phone after navigating some impossible number-punch menu you have a good idea of how Chatbots could be received.

When we need help or assistance, we want to talk to a person.  Maybe it’s for empathy.  Maybe it’s to have someone to scream at.  Either way, we want a moral agent to talk to, someone we know has an inner life and principles, even if we disagree with them.

There’s something antisocial about chatbots just replacing humans.  It breaks society and it breaks our need for contact (or blame).

Have you ever observed some horrible computer or mechanical failure?  Have you imagined or participated in the lawsuits?  Imagine how that will go with Chatbots.

Technology gives us the ability to do things on a huge level – but also to create horrible disasters.  Imagine what Chatbots can automate – financial aid, scientific research, emergency advice.  Now imagine that going wrong on a massive, tech-enabled scale.  Technology lets us turn simple things into horrible crises.

If you have people along the way in the process?  They can provide checks.  They can make the ethical or practical call.  But when it’s all bots doing bot things with bots before ever talking to a person?  There’s that chance of ending up in the news for weeks, in government hearings for months, and in lawsuits for years.

(Hell, replacing humans with Chatbots removes some poor schmuck to take the blame, and a few people with more money and sense might find they really want that schmuck around.)

Have you ever read a book or commissioned art and enjoyed working with the artist?  Chatbots and AI can make art without that connection.  Big deal.

Recently I read a person grousing about the cost of hiring an artist to do something – when they could just go to a program.  The thing is, for many of us, an artistic connection over literature or art or whatever is also about connecting with a person.

When we know a person is behind something we know there’s something there.  We enjoy finding the meaning in the book, the little references, the empathic bond we form with them.  An artist listens to us, understands us, brings humanity to the work we request.  It makes things real.

I read a Terry Pratchett book because it’s Terry Pratchett.  I watch the Drawfee crew because it’s Jacob, Nathan, Julia, and Karina, who I like.

Chatbot-generated content may be interesting or inspiring, but it’s just math that we drape our feelings around.  AI generated content is just a very effective Rorschach blot.  There’s no one to admire, learn from, or connect with behind it.

Humanity brings understanding, security, checks, and meaning.

So however the Chatbot/AI non-Revolution goes?  I think it will be both overdone and underwhelming.  It will include big lawsuits and sad headshakes.  But ultimately if there’s an attempt to Chatbot/AI everything, it’ll be boring and inhuman.

Well, boring and inhuman if we know there are chatbots there.  It’s the hidden ones that worry me, but that’s for another post . . .

Steven Savage