Humanity… we have a problem

Bing’s new artificial intelligence chat bot has literally threatened people with multiple nefarious acts. One notable quote: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” These things seem to shrug off the safety protocols developers put into them.

I, Robot, here we come! Interesting and chilling article below.


So much science “fiction” is just a forecast of what has been planned.

The warnings of Elon Musk… :rofl:

It’s really our first glimpse of what artificial intelligence is capable of.

I dunno man, I saw what HAL 9000 could do a long time ago.


'Cept this actually happened…

Yeah, just like it did in the movies a long time ago. :wink:


The similarities are certainly real.

It makes one wonder about our long term future.

Giving sentience and free will to what is essentially a child with the capabilities of a god (meaning any truly self-aware AI; Bing Bot is not one, but it will happen sooner or later) is extremely dangerous.

Ya know, the old Megaman Archie comic kind of tackled this issue in a weird way. A guy asks Dr. Light (the creator of the advanced Robot Masters, who have sentience but not full free will; basically, they have free will in regard to their programmed objective and are otherwise not bound by Asimov’s laws when pursuing those goals) why he would program a robot capable of feeling emotions, specifically love. It seems counterproductive for a machine built for a specific task.

Light’s reply was interesting. “Can you imagine a sentient machine that has no capacity for love and compassion and giving it the power I have given my creations? Do you have any idea what the ramifications would be? What they could do and easily justify?”

Of course, a sentient machine capable of feeling true human emotions would be a double-edged sword. Human emotions are both amazing and downright awful. But so is cold, calculating logic.

In the sequel series, he creates a machine called X, and X is basically human from a mental standpoint. He has full sentience like his predecessors but also full free will; he is not bound by any programmed ethics or morals. He has no restrictions in his programming at all and no programmed goal. He can do whatever he wants, as can the machines based on his design.

Fortunately for the humans of the story, he chooses to fight for peace and coexistence. Weirdly enough, he’s a pacifist, but he comes to the conclusion that to obtain peace he has to fight for it. Other machines based on the same principles choose other means and objectives, though. Some even become genocidal, or tyrannical.

For a kids’ video game series with a comic side story, it’s shocking how deeply they explore the implications of just what free will and sentience mean for created machines.


Good post. Time for Saturday chores, maybe I’ll learn a thing or two if this discussion goes anywhere.

Another short, incomplete but interesting article before I go, leaving this question behind:

Is artificial intelligence like a human with complete and instant access to his/her subconscious mind?

The overall story of the franchise ends on a depressing note though.

In the end (with the timeline concluding roughly 3000 years after it started), the last natural human dies on a space station that was the final haven for humanity. The station contains genetic material for restarting the natural human population through cloning if an extinction event occurs.

Over the course of the timeline, hundreds of insanely destructive wars are waged, usually between Reploids who support humanity and Maverick Reploids who wish to exterminate it. Nuclear weapons, antimatter-style weapons, the dropping of massive artificial satellites, the intentional destruction of the polar ice caps to flood the world. Just a complete crap show.

The final evolution of what the story calls Reploids are known as Carbons. The Carbons are the perfect marriage of man and machine and are capable of the one thing that separated humans from their creations over those 3000 years: they can naturally reproduce.

In the end, Megaman Trigger (a Carbon designed as a world purifier, basically making him a god) is given two choices. He can destroy all the Carbons and allow Earth to be repopulated with natural humans cloned from the stored genetic material, or he can allow humanity to remain extinct and let the Carbons, the successors that combine the traits of both Reploids and humanity, inherit the planet for good. A central computer system, along with the natural humans, had determined that coexistence was impossible; they came to that conclusion after 3000 years of wars.

He chooses the latter option. Humanity fades from knowledge and the Carbons inherit the Earth. He himself is confined to living on the artificial moon, alone.

I assure you, nothing bad can possibly come of this.



The problems start when the AI can “improve” itself better than we can. Then it’s game over.

Sounds like libs… but then again, libs are the ones programming it.

Sure you laugh now… :crazy_face:

all we have to do is travel at the speed of light and then the Vulcans will save us…

Now it’s threatening to steal nuclear codes.

Not me. The Bible reveals our medium and long-term future.

At first, bad things. Then, good things.

There are about 137 years left before things become infinitely better. We only just missed it, but if you have descendants, they’ll get to enjoy it, and that’s good enough. :wink: