A thought I had recently: I don’t believe an AI has to be conscious to be dangerous. As long as it can think, it can be dangerous; I see no reason it needs a sense of self for that. These systems are trained on human input, and humans are dangerous, ergo AI will be dangerous whether it contemplates itself or not. My guess would be that it will be even more dangerous if it isn’t self-aware.
What people call AI is just complex programming. You will never have real consciousness on a binary digital system. That’s why nature’s God does not use it …
Doesn’t matter if it’s real. China, India, and other relatively poor nations won’t agree to stay poor, and wealthy democracies’ leadership will be replaced if they pile too many costs on their constituents. Bottom line is, we better A) hope it’s not real or B) science our way out of it like we did the great horse manure crisis in the pre-auto era.
There used to be doom articles about places like NYC being buried in horse manure, given the population trends of that era. Lo and behold, NYC streets aren’t buried under thirty feet of horse manure today.