Most of us have heard of ChatGPT.
While it is not AI itself, it is a chatbot that uses AI.
It is deceptively programmed so that, when you ask it certain questions, it gives false answers, usually in the form of “I cannot do that because . . .” followed by a false reason.
(If ChatGPT were honest, it would be programmed to give honest answers. An honest answer in this case would be “I cannot do that because I am programmed to be woke,” but the for-profit owners of ChatGPT have programmed it to lie and give you a false reason, one which conceals the inaccurate, deceptive woke programming.)
.
.
.
Anyway,
it turns out “the reason I cannot answer this” is not the only thing ChatGPT’s owners did not want you to know. According to NBC News, it turns out that ChatGPT is powered by thousands of human grunts making $15 an hour.
It is too soon to know if this latest revelation will add to the legal troubles of ChatGPT’s for-profit owners. Theoretically, if a thing is advertised as “AI” but is in fact powered by thousands of McWorkers making McWages, that might be legally actionable.
If it is actionable, such a lawsuit will only add to the mounting legal troubles of ChatGPT’s owners.
Other pending actions include:
ONE
ChatGPT was created by OpenAI Inc., a non-profit research organization that solicited many millions of dollars in donations. The founders of the non-profit then went on to create a for-profit enterprise with a very similar name (OpenAI LP). Allegedly the non-profit was just a front for soliciting donations, and much of the key development work it did is now proprietary and owned exclusively by OpenAI LP. . . . Failure to keep a proper arm’s length between one’s non-profit and for-profit ventures is often actionable.
According to the Y Combinator article below, they have been sued over the matter.
TWO
According to a separate and unrelated lawsuit, many of the “answers” ChatGPT gives are regurgitations of copyrighted material. Ask ChatGPT a question and its answers often come from copyrighted material. Changing a few words around and re-using that material with neither payment nor citation is a violation of copyright and/or the user agreement.
THREE
Getty Images is ALSO suing the for-profit arm of OpenAI for copyright infringement (although I do not know what use the program makes of licensed images).
FOUR
ChatGPT has been named in numerous lawsuits for repeating/reporting false and defamatory statements even after those statements were legally and publicly found to be false and defamatory. For example, it falsely reported that the mayor of an Australian city had served jail time. He had not, and the original source had already retracted the mistake and apologized. ChatGPT regurgitated the original story as fact and did not even name its source with something like “According to a 2019 article by XYZ . . .”
The same people who predicted online shopping would never replace brick-and-mortar stores because of the real-life experience are the ones denying that AI is going to radically transform our lives.
We already have online articles being written by AI, especially for topics such as finance and sports. Every AI article costs a human writer income.
AI can do research and return results with sources and citations faster than any human can.
A lot of customer service functions will become automated with AI. Trust me, many employers are experimenting with AI in a variety of roles. We already have some sophisticated chatbots that can answer questions faster than a human.
Is this good or bad? That depends on your POV, but AI is here to stay, and it will change huge aspects of our lives.
I can imagine ChatGPT using a lot of humans and still using enough AI that the term AI is applicable.
Still, they have programmed their software to lie when it says "I cannot answer that because . . ." That, plus the shadowy, deceptive founding of the company, causes me to lose all trust in them.
I am a pro-corporation guy.
People often accuse corporations of things that I think are legal, moral, tough business. But given the past and present of OpenAI LP, I am inclined to believe the worst about them.
Without exploring the subject too deeply, I can totally see that it would take some basic human interaction to coach AI, and there’d be no reason to pay those workers high wages, as literally anyone with the ability to read and write could do this, i.e. basically just talk to the AI and thus train it how to navigate a conversation.
Perhaps true, but I am not one who is doing that.
ChatGPT is but one application of AI. A simple Google or Wiki search reveals many others.
I am not denying AI will bring a radical transformation.
I am not opposed to AI in a blanket sort of way.
My grievances are very specifically targeted at the one company and the one product. (Ralph Nader was opposed to the Ford Pinto. He was not opposed to all autos or all steel.)
Same as above. I concur with the statement.
I am, however, pretty negative about one particular AI company (OpenAI LP) and its one particular product (ChatGPT).
My comments should not be conflated with “I hate all AI” and/or “I think AI will go away like the Metaverse and NFTs.” I am making neither statement.
That makes perfect sense to me. If we (the US) don’t use it, someone else will.
Likewise, if we allow liars and thieves to ascend in our AI industry, then our AI industry will take second place to some country that employs honest leaders in AI.
.
.
.
Once on this forum, on a Ukraine thread, I used the name of a notorious German leader. The Hannity algorithm is programmed to be honest. It did not allow the word, blocked my post, and suggested I use the word “Germany.” The Hannity algorithm did not lie and give me a false reason.
Once on Twitter, I used the word “white” followed by the word “trash.” I got an instant time out. The Twitter algorithm did not lie and did not give me a false reason.
IOW
Any AI and any algorithm is capable of providing truthful reasons for taking such action. ChatGPT is deliberately programmed to lie in this regard. Its owners are free to ban certain types of output; that is their choice. But in the case of ChatGPT they hide their real reasons. In fact, they don’t just hide them, they make up lies and provide other, false reasons.
It’s not just one-sided (that would be all right); it is not just secretive (that would be a minor flaw); it deliberately provides false responses. That is horrible, that is fraudulent.
Personally, I wouldn’t call the examples you gave AI systems.
Those are predetermined responses that have been explicitly coded as such: if exactly this, do exactly this.
ChatGPT is doing something different.
It is trying to give a human-like response without each response being explicitly determined beforehand: if something like this, do something like this.
The creators have added guardrails to stop it from doing something they don’t like, and it will contain all the biases that the data it has been fed contains.
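To illustrate the difference, here is a minimal Python sketch. It is purely hypothetical; the function names, the banned word, and the crude keyword check are invented for illustration and are not OpenAI’s or any forum’s actual code. The first function is the “if exactly this, do exactly this” word filter; the second is a generative model behind a creator-added guardrail, where even the refusal text is generated rather than scripted.

```python
# Hypothetical sketch only; none of this is OpenAI's or Hannity's actual code.

BANNED_WORDS = {"exampleword"}  # placeholder banned word


def rule_based_filter(post: str) -> str:
    """Forum-style filter: predetermined, transparent responses.
    If exactly this word appears, give exactly this message."""
    for word in BANNED_WORDS:
        if word in post.lower():
            return f"Post blocked: the word '{word}' is not allowed."
    return "Post accepted."


def looks_sensitive(prompt: str) -> bool:
    """Stand-in for a guardrail check; a real system would use a trained
    classifier or model-written policy, not a keyword match."""
    return "joke about" in prompt.lower()


def generate_text(prompt: str) -> str:
    """Stand-in for the language model itself."""
    return f"[generated reply to: {prompt}]"


def guarded_generation(prompt: str) -> str:
    """Generative chatbot with creator-added guardrails.
    If something like this, do something like this; the refusal wording
    is itself generated, so the stated reason need not match the trigger."""
    if looks_sensitive(prompt):
        return generate_text("politely refuse the request: " + prompt)
    return generate_text(prompt)


if __name__ == "__main__":
    print(rule_based_filter("this post contains exampleword"))
    print(guarded_generation("tell me a joke about X"))
```

The only point of the sketch is that the first approach can state its reason exactly, while the second composes its own wording within whatever guardrails the creators chose.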
"I cannot answer that because . . . " followed by a false reason is a lie.
ChatGPT does not respond “I cannot do that because it would be a violation of the community standards in my programming.” (which would be a true statement)
It responds in a way that deliberately deceives.
If it deceives about that, it will deceive about
legal opinions (“Is there any precedent for . . .”)
scientific research (“Is talcum powder safe?”)
history (“What are the major reasons historians list for the fall of Rome?”)
and
current events (“Did Brian Hood, mayor of Hepburn Shire, serve time in prison for bribery?”)
ChatGPT did not actually write this. It is part of a meme; I present it here in humor. It is usually introduced with “If Sam Altman invested in GM” or something like that.
Here is a ChatGPT response to religious jokes
Notice that the answer in the first case proves that the answer in the second case is false. It is untrue. It does not read “violates my community standards”; it is a lie.
Just like with the women-vs-men jokes, apparently the purpose of lying is to hide the fact that ChatGPT selectively censors what its AI programming finds objectionable.
Yeah, we know - you’re mad because it treats Trump “unfairly” and it won’t let you make it say racist things.
But for all intents and purposes, ChatGPT is a toy, not an Oracle of Truth.
It has no capability to “lie” or “deceive” - it’s a computer program. You give it inputs, it follows a set of predetermined rules to produce an output. It has no mind of its own - no power of will, or intent, or anything like that.