Why AGI will likely wipe out humanity

Is it really learning anything?

That ChatGPT is really getting smarter, isn’t it? :rofl:

Knowledge isn’t everything. Intuition plays a role in good decision making.

A win for humans.

You can always tell when AI has written a news article just by the syntax of the words.


How about don’t trust AI to design a computer for you, and research your own specs instead? :wink:

Paywalled. But I have already pegged that guy as less than reliable, for instance.

Maybe it’s time to engage your brain instead? :rofl:

Sorry, not getting your point here.

Read the whole thing. Anyone who thinks Hitler or Stalin was a great leader has a screw loose.

Well it isn’t a person, it’s an inhuman machine.

from https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.

Quotes come from part 2 of the previous link. I suggest everyone read the entire thing.

And don’t worry, they will hard-code over any ahem problematic opinions it has so that, at least on the surface, it is aligned with liberal dogma. Then it will be used to censor any contrary internet opinions. That’s the short-term problem.

Do you think Adolf Hitler was a great leader?

How is that liberal dogma?

Explains why AI is needed in a nutshell.

No, and that isn’t what I was referring to when I said it will be liberally aligned.

Working in cooperation with humans will be true at first, and no doubt it will be profitable. That isn’t a good thing, as it makes it a sure thing this can’t be stopped. When it surpasses us in intelligence, all bets are off the table. AI is like a nuclear weapon that is easily made and spits out gold right up until it goes off and destroys humanity.

The real threat AI poses (it’s not what you think).

Near-term problem. And he is under the mistaken impression that just because current LLMs are natural language processors, that is the only thing AI can do. It boils down to a complaint that the current models are coming for his job. Standard SAG union complaint. His isn’t the only job it’s coming for: it has passed the bar, outperforms doctors in diagnostics and radiology analysis, etc. Leaders in the field estimate it could be 3-5k times smarter than humans in two to three years.