I don’t see why it would be bored; the universe is a very large place, and the universe of creative thought is even bigger. And I think you misremember the plot in any case: it spent seven and a half million years computing the meaning of life and then decided to design a smarter computer. Plus, it was designed by a superintelligent being who managed to align it with its preferred goals in the first place.
And remember you only get one chance to get alignment right because if you don’t, humans are toast.
So you don’t put AI in charge of value decisions. If it pretends to have values it is only mimicking the ones someone put into it. It is just a computer made to sound like a human.
If you ask it how to reduce social security costs, the most obvious solution is to kill everyone over 70.
Worldwide economic collapse leading to millions (billions?) of deaths sounds pretty significant to me. I think I’d rather live under the yoke of a tyrannical government than starve to death or be unable to access basic medical care.
As compared to everyone dying? Oh and the tyranny would only last long enough to replace the workers with embodied AI, so mass extinction there as well. Not saying it’s a good outcome, just saying there are worse ones.
Where did humans get theirs? We picked our values up while being optimized for inclusive genetic fitness, and we aren’t really pursuing that original purpose any longer. You can think of training LLMs as rapid, guided evolution.
Also consider: simulating thought is not materially different from actual thought, any more than simulating two plus two equals four is a math simulation as opposed to actual math. The same goes for simulated planning or objectives. Humans do this as well: we build a model of the world in our heads to work out how to do something, then act on that simulation.
I do not wish to hijack the thread and turn discussion to my views of ChatGPT but I feel I can make one mention here.
I do not perceive a serious threat from AI “developing its own set of goals” and/or wiping out humanity.
A more likely threat is that other AI platforms develop the same flaws I keep pointing out in ChatGPT. If only 5-10% of AI platforms are deceptive, no problem. If 80-90% are, then the labor savings we get from AI (a good thing) will probably be overshadowed by the “costs” associated with bad information and/or deception.
A 2014 study revealed that 50% of physicians reported consulting Wikipedia, the actual Wikipedia, as a source of information on pharmaceuticals they were considering prescribing.
We can probably trust that most doctors used that information in a responsible manner (such as to refresh their memories about information they already “knew”), but imagine 50% of decisions in an economy being made from trusting a source that should not be trusted.
Investment decisions, supply chains, what material to use when building a house, which car to buy, buy from Home Depot vs. buy from a traditional wholesaler, which wholesaler to choose, what foods to eat (high protein vs. high carb?), is thalidomide safe?
The potential adverse costs to family net worth, the economy, and human health are potentially very large. I doubt AI will ever “wipe us out,” but being mugged day after day after day is not a good thing. “Hey, they don’t kill people, they just stomp your wife’s face, break your son’s nose, and take your car” is not much consolation.
AI is a great way to pass on “misinformation” that everybody is so worried about. Just input your own biases and watch people believe “the truth” has been decreed to them from a master brain.
Even if it is not an intentional or political bias.
While thousands of doctors and (probably) hundreds of research articles touted the benefits of thalidomide the one doctor who first spoke up and hypothesized it causes birth defects was
Australian and
Something of a “flawed character.”
It is easy to imagine that “intelligence” based on web crawling would overlook the things he was saying and the result would be several more years of thalidomide and thousands of babies born with birth defects.