Different Times, Different Mes

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

One of my obsessions for a while has been asking what the world could be like if our combination of technology and culture had taken different directions. In 2024 a friend said it felt like nothing new or really good had been invented in 15 years, especially internet-wise. That has had me reviewing all the different choices and events that led us to where we are now, technically and culturally, and how things might have been better.

So I started reflecting and asking what I wanted to see. Where could things have gone differently – and gone better?

That led me to some speculations of course, such as what if there had been more social media regulation, or if certain technologies had become popular at different times. But you know what got really interesting?

Asking who I’d be if things had been different in the worlds of technology and culture.

This started with me imagining a world where the internet B.S. of today had never arrived – something I may write about. I tried to imagine myself in a world with different technologies, a world more environmentally conscious, a world where we weren’t doomscrolling. It was essentially writing speculative fiction in my head, but the mental exercise hit hard.

I could see how in some cases I’d have been the exact same kind of person, just using different technologies. I could see how I’d be different: a few twists and turns in the economy and I’d never have become a programmer. I could also see how I’d be the same, because in many cases I’d still be a Project Manager, even in the semi-Solarpunk, not-quite-utopia I imagined.

Relating a possible future to a possible me helped me grasp those trends and potentials much better.

This led me to another speculation: I began asking what my life would be like under particular technological divergences. That has proven to be a great way to understand our world as it is and what it could be.

What if Work from Home had come early (and believe me, it was seeded earlier than you think)? What if phone companies had seen things like AOL and come up with competition? What about prefab homes returning? What would it take for technologies to be different, for culture to be different, and what would I experience?

I found that imagining “being there” really helped me understand impacts – and unintended impacts. It also helped me understand a few things about myself, such as my ability to get enthused about cool stuff even when it’s kind of dumb.

I may actually write some of these ideas up and make a series of it. What can I speculate on and learn from, using my knowledge of technology and history? What can I share – and what can we discuss – about possible worlds to understand this one?

But whether I write it or not, I want you to give it a shot. Ask what “historical divergences” you can imagine, and who you’d be if they happened. Especially if it’s a better world, since you might be surprised at who you are even in a more ideal place and time.

Steven Savage

It’s The Ones We Noticed


People developing psychosis while using ChatGPT has been in the news a lot. The latest story is about an OpenAI investor who seemed to lose it in real time, leading to, shall we say, concerns. The gentleman in question seemed to spiral into thinking the world was like the famous SCP Foundation collective work.

Of course people were a little concerned. A big AI investor losing his mind isn’t exactly building confidence in the product or the company. Or, for that matter, in investing.

But let me gently suggest that the real concern is that this is the one we noticed.

This is not to say all sorts of AI bigwigs and investors are losing their minds – I think some of them have other problems or lost their minds for different reasons. This isn’t to say the majority of people using AI are going to go off into some extreme mental tangent. The problem is that AI, having been introduced recently, is going to have impacts on mental health that will be hard to recognize because this is all happening so fast.

Look, AI came on quickly. In some ways I consider that quite insidious, as it’s clear everyone jumped on board looking for the next big thing. In some ways it’s understandable because, all critiques aside (including my own), some of it is cool and interesting. But as with a lot of things, we didn’t ask what the repercussions might be, which has been a bit of a problem since around about the internal combustion engine.

So now that we have examples of people losing their minds – and developing delusions of grandeur – due to AI, what are we missing?

It might not be as bad as the cases that make the news – no founding a religion or creating some metafiction roleplay that’s too real to you. But a bit of an extra-weird belief, that strange thing you’re convinced of, something that’s not as noticeable but still goes too far. Remember all the people who got into weird conspiracies online? Yeah, well, we’ve automated that.

We’re also not looking for it, and maybe it’s time we did: what kinds of mental challenges are people developing due to AI that we’re simply not seeing?

There might not even be anything – these cases may just be unfortunate ones that stand out. But I’d really kind of like to know, especially as the technology spreads, and as you know I think it’s spreading unwisely.

Steven Savage

It’s Bad, It’s So Bad, It’s Good


All right, it’s time to talk AI again. This also means I have to use my usual disclaimer: “what we call AI has been around for a while, it’s been very useful and is useful, but we’re currently in an age of hype that’s creating a lot of crap.” Anyway, there, packed that disclaimer into one sentence, go me.

I’ve seen “AI-ish stuff” for 30 years, and the hype for it is way different this time.

Watching the latest hype for “AI” (that pile of math and language that people are cramming into everything needed or not) I started listening to the hype that also seemed to be a threat. We have to build this. We have to build this before bad guys build it. We have to build a good AI before a bad AI. This may all be dangerous anyway!

Part of current AI marketing seems to be deliberately threatening. In a lot of cases it’s the threat of AI itself, which, you know, may not be a selling point. I mean, I don’t want a tool that might blow up in my face. Also, Colossus: The Forbin Project freaked me out as a kid, and that was about competing AIs teaming up, so you’re not selling me with the threat that we have to make AI to stop AI.

But this marketing-as-threat gnawed at me. It sounded familiar, in that “man, that awful smell is familiar” type way. It also wasn’t the same as what I was used to in tech hype, and again, I’ve worked in tech for most of my life. Something was different.

Then it struck me. A lot of the “hype of the dangerous-yet-we-must-use-it” aspects of AI sounded like the lowest form of marketing aimed at men.

You know the stuff. THIS energy drink is SO dangerous, YET you’re a wimp if you don’t try it. Take this course to become a super-competitive business god – if you’re not chicken, oh, and your competitors are taking it anyway. Plus just about every influencer on the planet with irrelevant tats promising to make you “more of a man” with their online course. The kind of stuff I find insulting as hell.

Male or female, I’m sure you’re used to seeing these kinds of “insecure dude” marketing techniques. If you’re a guy, you’re probably as insulted as I am. You’d probably also like the algorithms to stop pushing them into your ads.

(Really, online ads, my prostate is fine and I’m not interested in your weird job commercials.)

Seeing the worst of AI hype as no different from faux-macho advertisements that sell useless stuff to insecure guys really makes it sit differently. That whiff of pandering and manipulation, of playing to insecurity mixed with power fantasies, is all there. The difference between the latest AI product and untested herbal potency drugs is nil.

And that tells me our current round of AI hype is far more about hype than actual product, and far more pandering than a lot of past hype. After 30+ years in IT, I’ve been insulted by a lot of marketing, and this is pretty bad.

With that realization I think I can detect and diagnose hype more easily. Out of that I can navigate the current waters better, because if your product marketing seems to be a mix of scaring and insulting me, no thanks.

Steven Savage