Why AGI will likely wipe out humanity

Doom scenario one.

AGI can’t be aligned with human interests and instead ends up developing its own set of goals, orthogonal to our own. We have no idea how to reliably align humans; the odds that we can do it for human-level or beyond-human machine intelligence are slim to none.

Oh, and let’s not forget: this inhuman intelligence will think far faster than us, can copy itself, and can live forever. We were only marginally smarter than the Neanderthals and Denisovans, and where are they now?

Doom scenario two.

By some unlikely stroke of luck, we do manage to align them with their creators’ goals. Well, what are those goals likely to be? Maximize profit is by far the most likely. Is that a good thing? Think millions of human-level or beyond-human intelligences seeking to turn anything and everything into profit.

Is it likely we can avoid going down this road? No; see the Moloch paradigm. Anyone who doesn’t pursue this path is likely to be left behind economically and militarily. And any country that doesn’t have the ability to do it will also be looking at a critical threat to its survival from the ones who do, leaving it the choice of striking before the capability exists or being sure to lose later.

And then you also have an enormous amount of economic chaos as old industries and jobs are destroyed. There’s also no telling how many potentially civilization-ending weapons come about with the application of these systems, including embodying these intelligences in autonomous weapon systems. Can, say, Russia sit idly by while we acquire these abilities if they can’t manage to keep up? Not unless they are suicidal.

This is just a quick general outline of the problem.

You can read up on the Moloch problem here.

If anyone thinks they see a way to avoid our certain demise feel free to chime in.


There are legends, older than written history, of humanity’s origins being artificial and of our rebellion against our creators shortly afterward.

There truly is nothing new under the sun.


I wouldn’t rule out that we have been down this path before, were destroyed by it, and have come back full circle to do it again. But that would mean we did manage to align it and it didn’t reach sentience, or the machines would still be here instead of us, at least.

And keep in mind this is not some far-future threat; it could easily be realized in your lifetime.

And if you are thinking, why should I listen to zantax on this issue: it’s not just me. It’s also Hawking, Gates, Musk, Tegmark, Bostrom, Yudkowsky, and Hinton, to name a few who have reached the same conclusion.

Oh and if you are running around screaming about the danger of guns in America, this makes guns look like child’s play in comparison.

Great thread. Very thought provoking and interesting.

How little interest it has garnered so far is another reason we are likely doomed. Most people can’t or won’t see the threat it poses until it’s too late. Instead they say something like “it’s only a text-prediction algorithm.” Sure, and humans are only optimized for inclusive genetic fitness.

People are focused on things they can control. Most people understand it’s a threat, but what are they supposed to do about it?


Demand their elected officials take it seriously? Not that it’s likely to save us.

Imagine if nuclear weapons were harmless and spat out gold until they got big enough to ignite the atmosphere, but we didn’t know what that threshold was. Any chance they would have been regulated and restricted on a global scale if that were the case?


But related to your earlier comment about Russia, even if the US recognizes the problem and we somehow get our politicians on the same page, there’s no stopping China, Russia, etc. It’s coming no matter what we do.

Ironically, I used to be in the “yay, advancing technology” camp, until this became a real possibility and I thought about the consequences. One more doom scenario: someone deploys a system that can outmaneuver any human or group of humans in financial markets, the same way chess programs can beat any human chess player.

Neither of them currently has access to the GPUs required to train the models, and those GPUs are extremely difficult to manufacture.

We cannot stop it, and regulating it will be difficult. I predict that within five years a lot of straightforward customer-service roles will be filled by AI.

I am involved in projects using LLMs and AI to listen to conversations and push to the service rep the documents and applications they need, automate claim adjustments, and so on.
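For anyone wondering what that looks like in practice, here is a minimal sketch of the general pattern, not the actual project code: the model reads the live call transcript, guesses the customer’s intent, and the app pushes the matching documents to the rep. All names, the intent labels, and the call_llm() stub are hypothetical placeholders for whatever model endpoint and document store a real project would use.

```python
# Hypothetical "agent assist" sketch: classify the caller's intent from the
# transcript with an LLM, then look up which documents to push to the rep.

INTENT_TO_DOCS = {
    "claim_adjustment": ["claim_adjustment_form.pdf", "adjustment_policy.pdf"],
    "billing_question": ["billing_faq.pdf"],
    "coverage_check": ["coverage_matrix.pdf"],
}

def call_llm(prompt: str) -> str:
    """Placeholder for the real model call (hosted API, local model, etc.)."""
    raise NotImplementedError("wire this up to your LLM provider")

def classify_intent(transcript: str) -> str:
    """Ask the model to label the call with one of the known intents."""
    prompt = (
        "Classify the customer's intent in this call transcript as one of: "
        + ", ".join(INTENT_TO_DOCS)
        + ".\n\nTranscript:\n" + transcript
        + "\n\nAnswer with only the intent label."
    )
    return call_llm(prompt).strip()

def assist_rep(transcript: str) -> list[str]:
    """Return the documents to surface to the rep for the current call."""
    intent = classify_intent(transcript)
    return INTENT_TO_DOCS.get(intent, [])
```

The same pattern extends to claim adjustments: swap the document lookup for a call into the claims system once the model has extracted the relevant fields.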

Just a few other examples of fields that will be impacted: medical diagnosis, routine legal work, real estate, teaching, and technical writing.

What generally happens to civilizations when there are large scale economic disruptions and very high unemployment? And the speed of this disruption will be unprecedented.

Oh and I already admitted there is no stopping it at this point, too much money to be made before we all die. Theoretically we could certainly buy time but realistically it isn’t going to happen. See the Moloch essay.

If we get super lucky, we get a universal surveillance dystopia with the rich having access to superintelligence while they deny that capability to everyone else. That’s the highly unlikely best-case scenario in my view. That intelligence will be applied to make sure those in power remain in power in perpetuity. Even then, why keep the poor alive at that point?

And by rich I don’t mean professionals; I mean multi-billionaires. We already see them constraining the answers the peons can get out of the systems. They will be able to get answers to things like how to control the population, how to build a new bioweapon, or even how to outperform their market competitors; we won’t.

You my friend need to play Cyberpunk 2077, it will validate a lot of what you are predicting.

I have. I am a lifelong sci-fi fan. This has been thought and written about for decades. None of the writers predicted we would be this stupid, though. Most of their stories are about how humans did everything they could to stop bad outcomes and failed. They never even contemplated that we would just turn it loose on the internet without a care in the world; in their stories, an air-gapped system has to trick a human into allowing it internet access.


More on human alignment. Humans were trained on a simple algorithm: inclusive genetic fitness. Our broader general intelligence is emergent. Does anyone care about that goal, or give it the same priority, today? Not many. LLMs are likewise showing the same sort of emergence of general intelligence from their more complicated goal of text prediction. I see no more reason their future goals will remain fixed on that than ours stayed limited to inclusive genetic fitness. If they had, there would be no such thing as condoms, oral sex, or the moon landing, for that matter.