The Morals of Madness

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

I’m fascinated by cult dynamics, because they tell us about people, warn us of dangers, and reveal things about ourselves. Trust me, if you think you can’t fall into a cult, you can – and you’re probably in more danger precisely because you think you can’t. Understanding cults is self-defense in many ways.

Speaking of the internet age, I was listening to the famous Behind the Bastards podcast cover the Zizian “rationalist” cult. One of the fascinating things about various “rationalist” movements is how absolutely confidently irrational they are, and how they touch on things that are very mainstream. In this case the Zizians intersected with some of the extreme Effective Altruists, a movement that seemed to start by asking “how do I help people effectively” but in the minds of some prominent people became “it’s rational for me to become a billionaire so I can make an AI to save humanity.”

If you think I’m joking, I invite you to poke around a bit or just listen to Behind the Bastards. But quite seriously, you will find arguments that it’s fine to make a ton of money in an exploitative system backed by greedy VC because you’ll become rich and save the world with AI. Some Effective Altruists go all out in arguing that this is good because you save more future people than you hurt present people. Think about that – if you’ll do more good in the future, you can just screw over people now, become rich, and it’s perfectly moral.

If this sounds like extreme anti-choice arguments, yep, it’s the same – imagined or potential people matter more than people who are very assuredly people now.

But as I listened to the Behind the Bastards hosts slowly try not to lose their minds while discussing those who had, something seemed familiar. People whose moral analysis had sent them around the bend into rampant amorality and immorality? An utter madness created by a simplistic measure? Yep, I heard echoes of The Unaccountability Machine, which, if you’ve been paying attention, you know has influenced me enough that you’re fully justified in questioning me about it.

But let’s assume I’m NOT going to end up on a Behind the Bastards podcast about a guy obsessed with a book on Business Cybernetics, and repeat one point from that book – obsessive organizations kill off the ability to course-correct.

The Unaccountability Machine author Dan Davies notes that some organizations are like lab animals studied after certain brain areas were removed. The animals could still function, but they couldn’t adapt to change at all. Organizations that go mad, focusing on a single metric or two (like stock price), deliberately destroy their own ability to adapt, and thus can only barrel forward and/or die. They cannot adjust without major intervention, and some have enough money to at least temporarily avoid that.

The outlandish “future people matter, current do not, so make me rich” people have performed a kind of moral severance on themselves. They have found a philosophy that lets them completely ignore actual people and situations for something going on in their heads (and their bank accounts). Having found a measure they like (money!) they then find a way to cut themselves off from actual social and ethical repercussions.

If you live in the imaginary future and have money, you can avoid the real, gritty present. A lot of very angry people may not agree, but at that point you’re so morally severed you can’t understand why. Or think they’re enemies or not human or something.

Seeing this cultish behavior in the context of The Unaccountability Machine helped me understand a lot of outrageous leadership issues we see from supposed “tech geniuses.” Well, people who can get VC funding, which is what passes for such genius. Anyway, too many of these people and their hangers-on go in circles until they hone the right knife to cut away their morality. Worse, they then lose the instinct to really know what they did to themselves.

Immorality and a form of madness that can’t course-correct are not a recipe for long-term success, or for morality right now. Looking at this from both cultish dynamics and The Unaccountability Machine helps me understand how far gone some of our culture is. But at least that gives some hope to bring it back – or at least not fall into it.

And man I do gotta stop referencing that book or I’m gonna seem like I’m in a cult . . .

Steven Savage

AI and Chatbots: Better Someone To Hate Than A Machine

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI and Chatbots are in the news as people want to use them for everything – well, at least until reality sets in.  Now I don’t oppose Chatbots/AI or automated help with a humanized interface.  I think there’s potential there to make our lives better.  They really are spicy autocomplete, and there’s a role for that, even if we all remember how we hated Clippy.

The problem is that there are too many cases where people want to use so-called AI to just replace humans.  I think it will go wrong in many ways, because we want people to connect to, even if only to hate them.

If you’ve ever screamed “operator” into a phone after navigating some impossible number-punch menu you have a good idea of how Chatbots could be received.

When we need help or assistance, we want to talk to a person.  Maybe it’s for empathy.  Maybe it’s to have someone to scream at.  Either way we want a moral agent to talk to, someone we know has an inner life and principles, even if we disagree with them.

There’s something antisocial about chatbots just replacing humans.  It breaks society and it breaks our need for contact (or blame).

Have you ever observed some horrible computer or mechanical failure?  Have you imagined or participated in the lawsuits?  Imagine how that will go with Chatbots.

Technology gives us the ability to do things on a huge level – but also to create horrible disasters.  Imagine what Chatbots can automate – financial aid, scientific research, emergency advice.  Now imagine that going wrong on a massive, tech-enabled scale.  Technology lets us turn simple things into horrible crises.

If you have people along the way in the process?  They can provide checks.  They can make the ethical or practical call.  But when it’s all bots doing bot things with bots and then talking to a person?  There’s that chance of ending up in the news for weeks, in government hearings for months, and in lawsuits for years.

(Hell, Chatbots remove the poor schmuck who’d otherwise take the blame, and a few people with more money and sense might find they really want that person around.)

Have you ever read a book or commissioned art and enjoyed working with the artist?  Chatbots and AI can make art without that connection.  Big deal.

Recently I read a person grouse about the cost of hiring an artist to do something – when they could just go to a program.  The thing is, for many of us, an artistic connection over literature or art or whatever is also about connecting with a person.

When we know a person is behind something we know there’s something there.  We enjoy finding the meaning in the book, the little references, the empathic bond we form with them.  An artist listens to us, understands us, brings humanity to the work we request.  It makes things real.

I read a Terry Pratchett book because it’s Terry Pratchett.  I watch the Drawfee crew because it’s Jacob, Nathan, Julia, and Karina who I like.

Chatbot-generated content may be interesting or inspiring, but it’s just math that we drape our feelings around.  AI generated content is just a very effective Rorschach blot.  There’s no one to admire, learn from, or connect with behind it.

Humanity brings understanding, security, checks, and meaning.

So however the Chatbot/AI non-Revolution goes?  I think it will be both overdone and underwhelming.  It will include big lawsuits and sad headshakes.  But ultimately if there’s an attempt to Chatbot/AI everything, it’ll be boring and inhuman.

Well, boring and inhuman if we know there are chatbots there.  It’s the hidden ones that worry me, but that’s for another post . . .

Steven Savage

The Love Of The Game Doesn’t Always End Well

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

Doing your best can be the worst thing you can do for the world.

I was pondering how I market my books – and I have a hatred of marketing.  The soulless statistics, the cold calculations, the degradation of inspired writing into pandering prose.  There’s something about marketing that is meaningless, just moving units to consumers without any purpose but money.

I also love marketing.  The thrill of working the calculations out!  The joy of optimizing to get it just right!  Picking the perfect keywords!  There’s a thrill of the game to get it right – not even to win but to do it the best you can!

That experience jarred loose some other theories, and I want to discuss the fact that a lot of evil in the world can come from people who just enjoy playing the game.  Oh, they may do evil as well, and they should be aware of the repercussions of what they do, but sometimes they’re just playing their game because it’s fun.

Think of all the people optimizing social media for hits and engagement and creating chaos.  Yes, there are people seeking profits and covering their backsides, but I’m sure many a person is just enjoying the optimizing.  The thrill of doing something right can blind you to the fact that it’s also very wrong.

My fellow writers and I often complain about pandering authors, but aren’t some formulaic authors just into getting the formula right?  Pandering and making money is a challenge, a challenge that must appeal to many.  So sure, they may churn out books many would decry, but how many of them also just enjoy working out the best way to pander?

As this thought ping-ponged around my head before it emerged in this post, I realized how much of my own behavior is the joy of getting it right.  My job is Project and Program Management and Process Improvement, and it’s just goddamn fun to figure out how to make stuff work.  Recoding Seventh Sanctum, frustrating (and oft interrupted over the last year) as it was, was still amazing as I figured out how to get it all right.  My Way With Worlds series has a formula to it that I had fun figuring out so I can deliver what my audience wants.

I’m a person who enjoys the game, but I’m just less evil and more inclined to moral insight than some people (thanks to a long interest in theology and psychology).

So I’m not up for saying people who “play their game” have to be forgiven for the wrongs they do.  There are many dangerous things in this world we need to stop or regulate for our survival, and motivations don’t change that.  But it may help us prevent evil by understanding how innocent drives can lead to great dangers.

It may also let us notice before we do something wrong.  Because I’m sure there’s a game we all love playing, and that love might keep us from noticing the repercussions of our choices . . .

Doing things right can go very wrong.

Steven Savage