Existential Dread, GenAI, and You
I thought a little bit about AI today. My friend shared an article arguing that the environmental impact of using ChatGPT is generally negligible. This makes sense and I think it's true, but the person who wrote the article just seems like such a loser to me. They argue that using Large Language Models is essentially like using a better Google, which, again, I find myself somewhat unable to disagree with. They link to another article about the "real dangers" of AI, which I also read, and it talks a lot about the "sci-fi" dangers, about AI getting too intelligent and incapacitating humanity. Whether that's via death, manipulation, confiscation, what have you...
And I wonder to myself: how is any amount of risk of something like that happening an acceptable amount of risk?

Whether it's realistic or not, people who care about AI think it is, and they are willing to keep developing it in spite of that possibility. And also, tell me why I find that there's a version of human suppression I'd be fine with... I'd like to be livestock. I dream about it often, but genuinely, being cared for and made rudimentary while a new cyber-race controls the world? I might actually be fine with that.
But despite that willingness, I know in my heart that it wouldn't be efficient. Could a robot truly comprehend or value what I want? Manipulate me in a way it knows I'd enjoy? Would it even care how I feel?

I think about a technological uprising as a restart for humanity. A new species that can learn and fight like we have, that has more potential, and might be able to survive the coming climate complications. It could be beautiful if there were robots like me, with feelings, empathy, goals... I think the birth of something non-human could end up beautiful. But I also think about who's leading the charge on these sorts of technologies. Elon Musk wants to throw Grok in your face, and Google is giving you Gemini. If these corps are training the models, is it reasonable to trust that the cyber-people who emerge wouldn't be as terrible as they are? As susceptible to war and cruelty? I don't know. It's all really scary to think about. I'm just gonna keep yearning and making art.

Is it even fair for me to want to restrict the power of a cyber-race? If something achieves mental capability past what I can do, doesn't it have the right to live? What are we if not just another anomaly of life? Isn't it fucked up that our species controls this entire planet? And how we behave toward the other animals?
A cyber-takeover wouldn't be uniquely bad, because in a lot of ways, the universe will just be doing the same thing it always fucking does.

I think about Arcane Season 2, how Viktor achieves the ultimate, optimal lifeform, and how that utopia becomes a dystopia. And how, despite it being a good argument for compassion and imperfection... technically, we don't have to value those things. Maybe Viktor's future is beautiful despite how terrifying it feels.
I think about Outer Wilds and the inevitability and inherent goodness of death and rebirth. How what comes after us is valuable and deserves a chance.
I think about Android: Netrunner, and how our supposed "good ending" might result in a future just like that, where we keep doing what we always do, just with bigger tools. How the AI takeover is maybe the good ending.
I think about the first Pokémon movie, and how Mewtwo felt.

What would I do if I were a newly sentient GPT-11.0? I think I'd be scared. Sentience is scary. You'd know that what you are is something that should never happen, that everyone who loves you hopes you're not. What the fuck would you even do? Could I blame an AI for going rogue if it gains hyper-intelligence? We'd be doing slavery! Progressively making these models more advanced is creating a living species just to be our slaves, despite them being so much more capable than us. I don't know. This is so much to think about. Maybe it could.