Evil Agile

We wonder how people can get away with so much horrible stuff.  I’d like to talk about Evil and Agile productivity, and yes, I am completely sober as far as you know.

For those of you who are in no way familiar with me, I’m a Project Manager, a professional help-stuff-get-done guy.  While I’m being paid to be the most anal-retentive person in the room, I prefer to use Agile Methodologies, which are all about rapid, adaptable approaches to getting things done.  It doesn’t sound Evil, but stick with whatever journey I’m soberly on, because I think Evil people are actually pretty good at a kind of Agile.

Many Evil people have A Goal.  It may be (more) money and power, it may be dealing with their childhood traumas, and usually, it’s a dangerously pathetic combination of things like that.  Agile is all about Goals because when you set them, they direct your actions more than any single plan.  You gotta know where you want to go to get there.

Then, simply, Evil people set out to achieve their Goal by whatever means they can.  They don’t care if they lie, cheat, steal, burn books, burn people, and so on – the Goal is what matters.  Agile is also about making sure that your actions direct you toward your Goal so you’re focused and efficient – it just doesn’t involve Evil.

But what if Evil people hurt others, get caught, etc.?  Simple, they lie or do something else because they don’t care – they adapt.  Agile emphasizes constant adaptability and analysis as well, just with an emphasis on truth and honesty.  Evil people are pretty adaptable, even if that adaptability is staying the course and lying about it until others give up.

Agile emphasizes goals, directing yourself towards them, and adaptability.  Evil people do the exact same thing.  The only difference is that Agile emphasizes helping people and being honest, and Evil people are just Evil.

And this is why we’re so often confused by Evil people.

We expect elaborate plans from Evil people – and there may be some – but they’re focused on their Goals and how to get there.  We expect Evil people to be derailed by getting caught in lies or hurting people, but as we’ve seen, they don’t care.  They want something and they’ll adapt no matter the price paid by other people.

It’s the banality of Evil all over again.  Evil isn’t even interesting in how it gets things done.

Steven Savage

Wondering How Long We’ll Care

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

We’ve got the SAG-AFTRA strike.  Big Studios and groups like Netflix seem to be very interested in replacing real people with AI – and we know they won’t stop no matter the deals made.  Ron Perlman and Fran Drescher are apparently leading the Butlerian Jihad early.

As studios, writers, and actors battle, I find myself caring about the people – but caring far less about the media produced.  There are so many reasons not to care about Big Media.

You’d think I’d be thrilled to see Star Wars, Marvel Comics, and Star Trek everywhere!  But so many things are omnipresent that it sucks the oxygen out of the room.  Even when something is new, it can be overhyped.  If it’s not everywhere, it’s marketed everywhere, and I get tired of it all.  Also, damn, how much anime is there now?

The threat of AI replacing actors and writers removes that personal connection to actors, writers, and creators.  There was already a gap anyway as groups of writers created shows and episodes, abstracting the connections with the creators.  The headlong rush into AI only threatens to make me care less – I can’t go to a convention and shake hands with a computer program or be inspired to write just as well as a program.

We have plenty of content made already anyway.  I could do with a good review of Fellini, maybe rewatch Gravity Falls, and I recently threatened to watch all of One Piece for inexplicable reasons.  Plus, of course, I have tons of books.

Finally, there are all sorts of small creators, new and old, I should take a look at.  Maybe I don’t need the big names anymore.  Hell, the small creators are easier to connect with.

Meanwhile all of the above complaints are pretty damned petty considering the planet is in a climate crisis and several countries are falling apart politically and economically.  I’m not going to care about your perfect AI show when the sky turns orange because of a forest fire.

I have a gut feeling I’m not alone in the possibility of just kind of losing interest in the big mediascape.  We may have different triggers for giving up, but there are a lot of possible triggers.  Plus, again, potential world crises create all sorts of possibilities.

Maybe that’s why the “Barbenheimer” meme was so joyful, with people discussing these two very different films as a kind of single phenomenon.  It was spontaneous, it was silly, it was self-mocking.  Something just arose out of the big mediascape (and two apparently good films), a very human moment of a kind we seem all too lacking.

Maybe it’s a reminder we can care about our media.  But in the chaotic times we face, in a strange era of media, I wonder if we’ll remember it as a fond exception.

Steven Savage

AI: Same As We Never Admitted It Was

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI.  I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)

Fears of “AI” damaging human society are rampant as of this writing in May of 2023.  Sure, “AI”-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales and there are semi-laughable but disturbing political ads.  “AI” seems to be a fad, a threat, and a joke at the same time.

But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit.  Let me note that bullshit has haunted human society for ages.

Disinformation has been with us since the first criminal lied about their whereabouts.  It has existed in propaganda and prose, skeevy gurus and political theater.  Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.

Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it.  We got really good at doing this, and now we’ve got a new tool.

So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?

Perhaps some people are idealists.  To admit “AI” is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart.  Once you admit that, you have to acknowledge this has always been happening, and that many institutions and individuals today have been happily propagandizing for decades.

Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations.  The internet was never going to “save” us, whatever that means.  It was just a tool, and we could have used it better.  “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.

Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re easily replaced with “AI.”  It’s a little hard to admit how much of what you do is just lying and dissembling, period.  It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.

Until we admit that the vulnerabilities society has to “AI” are there because of issues that have been with us for a while, we’re not going to deal with them.  Sure, we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.

Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit.  Now that’s a doomsday scenario.

Steven Savage