Why AGI will likely wipe out humanity

I think “The Hitchhiker’s Guide to the Galaxy” was way ahead of its time on the whole superintelligent computer question.

It would just be bored…

I don’t see why it would be bored; the universe is a very large place, and the universe of creative thought is even bigger. And I think you misremember the plot in any case: it spent seven and a half million years thinking about the meaning of life and then decided to design a smarter computer. Plus, it was designed by a superintelligent being who managed to align it with its preferred goals in the first place.

And remember you only get one chance to get alignment right because if you don’t, humans are toast.

Honestly I don’t see the point of arguing about politics with this hanging over our heads.

Turn off the electricity

This is one of the biggest threats, in my opinion. The stock market is probably doomed.

So you don’t put AI in charge of value decisions. If it pretends to have values, it is only mimicking the ones someone put into it; it is just a computer made to sound like a human.
If you ask it how to reduce Social Security costs, the most obvious solution is to kill everyone over 70.

If it gets smarter than we are, that’s like telling chimps not to allow us to run things.

Minor threat in comparison to the worst possible outcomes such as human extinction or AI enabled tyranny.

For the entire planet? Sure, that could work.

Worldwide economic collapse leading to millions (billions?) of deaths sounds pretty significant to me. I think I’d rather live under the yoke of a tyrannical government than starve to death or be unable to access basic medical care.

Whatever solution you can come up with, AI will be way ahead of you. We won’t be able to outthink this problem, that’s kind of the point.

Where does a computer program pick up a desire to do anything, or even a sense of importance in its own existence?

As compared to everyone dying? Oh and the tyranny would only last long enough to replace the workers with embodied AI, so mass extinction there as well. Not saying it’s a good outcome, just saying there are worse ones.

Where did humans? We picked it up while being optimized for inclusive genetic fitness, and we aren’t really serving that original purpose any longer. You can think of training LLMs as rapid, guided evolution.

For example, nobody programmed ChatGPT to play chess; it learned it in the course of being optimized to predict the next word.
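As a toy illustration of that point (my own sketch, nothing like ChatGPT’s actual architecture): a model whose only training signal is “predict the next symbol” can still end up encoding the rule that generated its training data.

```python
from collections import defaultdict

# Toy next-token model. The sequences below follow a hidden rule
# ('a' is always followed by 'b'), but the training procedure only
# ever counts which symbol comes next -- it is never told the rule.
corpus = ["abcab", "ababc", "abcabc", "ababab"]

counts = defaultdict(lambda: defaultdict(int))
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1  # pure next-symbol statistics

def predict(prev):
    """Most likely next symbol, learned only from next-token counts."""
    options = counts[prev]
    return max(options, key=options.get)

print(predict("a"))  # prints 'b' -- the model recovered the hidden rule
```

The numbers and the grammar here are made up for illustration; the point is only that “optimize next-token prediction” can yield behavior that looks like learned rules.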

Also consider: simulating thought is not materially different from actual thought, any more than simulating that two plus two equals four is a math simulation as opposed to actual math. The same goes for simulated planning or objectives. Humans do this as well: we build a model of the world in our heads to work out how to do something, and then act on that simulation.
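To put the arithmetic analogy in code (again, my own toy example): a routine that merely “simulates” counting still produces genuinely correct sums; the simulation of the computation is the computation.

```python
def simulated_add(a, b):
    """'Simulate' addition as step-by-step counting.

    There is no separate, lesser kind of 'pretend' sum: the
    simulated arithmetic yields genuinely correct arithmetic.
    """
    total = a
    for _ in range(b):
        total += 1  # each step merely mimics counting up by one
    return total

print(simulated_add(2, 2))  # prints 4, the same answer as "real" addition
```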

I do not wish to hijack the thread and turn discussion to my views of ChatGPT but I feel I can make one mention here.

I do not perceive a serious threat from AI “developing its own set of goals” and/or wiping out humanity.

A more likely threat is that other AI platforms develop the same flaws I keep pointing out in ChatGPT. If only 5-10% of AI platforms are deceptive, no problem. If 80-90% are, then the labor savings we get from AI (a good thing) will probably be overshadowed by the “costs” associated with bad information and/or deception.

  • A 2014 study revealed that 50% of physicians reported consulting Wikipedia, the actual Wikipedia, as a source of information on pharmaceuticals they were considering prescribing.
  • We can probably trust that most doctors used that information in a responsible manner (such as to refresh their memories about information they already “knew”), but imagine 50% of decisions in an economy being made by trusting a source that should not be trusted.
  • Investment decisions, supply chains, what material to use when building a house, which car to buy, buying from Home Depot vs. a traditional wholesaler, which wholesaler to choose, what foods to eat (high protein vs. high carb?), is thalidomide safe?
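To give a sense of scale, here is a back-of-the-envelope sketch; every number in it is a made-up assumption for illustration, not a figure from the study above.

```python
# Hypothetical, illustrative numbers -- not data from any study.
decisions_per_year = 1_000_000   # assumed decisions informed by an AI source
share_untrustworthy = 0.50       # half made by trusting an unreliable source
error_rate = 0.10                # assumed fraction of those that go wrong
cost_per_error = 500.0           # assumed average dollar cost per bad decision

expected_annual_cost = (decisions_per_year * share_untrustworthy
                        * error_rate * cost_per_error)
print(f"${expected_annual_cost:,.0f} per year")  # $25,000,000 per year
```

Even with modest per-decision costs, a small error rate spread across an economy’s worth of decisions adds up quickly.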

The potential adverse costs to family net worth, the economy, and human health are very, very large. I doubt AI will ever “wipe us out,” but being mugged day after day after day is not a good thing. “Hey, they don’t kill people, they just stomp on your wife’s face, break your son’s nose, and take your car” is not much consolation.

AI is a great way to pass on “misinformation” that everybody is so worried about. Just input your own biases and watch people believe “the truth” has been decreed to them from a master brain.

Even if it is not an intentional or political bias.

While thousands of doctors and (probably) hundreds of research articles touted the benefits of thalidomide, the one doctor who first spoke up and hypothesized that it causes birth defects was

  1. Australian and
  2. Something of a “flawed character.”

It is easy to imagine that an “intelligence” based on web crawling would overlook the things he was saying, and the result would have been several more years of thalidomide and thousands more babies born with birth defects.

Now imagine that with regards to

  • choosing the best retirement/investment strategy (and winding up with a strategy that is too broad)
  • deciding whether your family should eat mostly steak or mostly hot dogs or mostly pasta
  • a manager deciding how to parcel out work duties
  • a small homebuilder deciding on a marketing strategy, finding the best supplies and suppliers, or recommending a ratio of home size to lot size.

Pebble after pebble, unintentional, non-political bias in an AI could be a very costly affair, even if it doesn’t “wipe us out.”