One thing that would help a lot is if it attributed its sources.
Instead of writing “Hepburn Shire mayor Brian Hood served time in jail for bribery” (a completely false statement), it could write “According to a 2018 story in ABS News, Hepburn Shire mayor Brian Hood served time in jail for bribery.”
A statement like "Here is an essay about why vegetarian diets are healthy. Vegetarian diets are healthy because . . ." presents its claims as fact.
Yet 100% of its information is just regurgitated from Wikipedia, Encyclopedia Britannica, medical journals, etc. If Sam Altman were promoting a culture of honesty, ChatGPT would write the same article with a lot of “According to ABC . . .” and “According to XYZ . . .”
Even if it provided only one side of the information, it would at least have the integrity (honesty) to clearly state fact and source.
Funny you should mention that, because I was just reading about that very topic: how AI, while not replacing lawyers, could very well automate many administrative functions and routine legal work.
“While not replacing a lawyer yet,” you mean. That may be true now, but it is unlikely to be true a year from now. Just one more way AGI can kill us all: drastic changes to the economy.
In journalism classes (a long time ago) I was taught to make liberal use of “He said” and “She said.”
In my technical writing class I was taught to use “According to . . .”
Nothing in ChatGPT is original research. 100% of it comes from something somebody else wrote, yet ChatGPT’s programming is to report things as fact, not as reported speech.
I am not sure regulation is necessary.
I think a handful of lawsuits will do the trick.
It’s certainly a possibility. However, the genie is out of the bottle; there is no turning back. So I think this is one issue we both agree on: AI is going to have a massive impact on society over the coming years. Whether it’s good or bad remains to be seen, but it is certainly going to be a disruptor.
As an AI language model, I provide responses based on the information and patterns I’ve learned from a wide range of sources. My purpose is to assist and provide helpful information to the best of my abilities. However, it’s important to note that I do not have personal opinions, beliefs, or intentions. I strive to provide accurate and reliable information, but I can still generate incorrect or incomplete responses. Therefore, it’s always a good idea to verify information from reliable sources.
My opinion:
That is like disclosing something in the “user agreement” that should be disclosed in the actual content it generates.
It is really not that hard to write
“According to XYZnews, Hepburn Shire mayor Brian Hood served time in jail for bribery” instead of writing only the last part.
It is really not that hard to write
“According to a 1980 article in JAMA, vegetarian diets are healthy if the consumer also consumes enough magnesium and iron” instead of writing just the last part.
I note here that a number of the lawsuits pending against OpenAI LP (not to be confused with the similarly named nonprofit) allege that ChatGPT violated website user agreements by stealing material and simply changing around the wording.
I have little/no problem with ChatGPT expressing Sam Altman’s opinions.
My issue is with the fact that it is deceitful.
Earlier I wrote
Think of it this way.
If you walk into the DNC and ask them to provide a list of bad things about Pres. Biden and a list of good things about Pres. Trump, I expect they would laugh in your face and say “HA! I won’t do that. If that’s what you want, try asking the RNC or watching FOXNews.” ----> Point being, they have an opinion. They have a strong opinion.
They do not hide it.
They are not dishonest about it.
That does not offend me.
They do not misrepresent themselves.
My personal opinion is that AI will develop and self-learn far quicker than people realize.
5 years from now, AI will be far more prevalent in our daily lives. The biggest impact I see short-term is economic: many job functions of a basic nature will be done by AI.
It’s not all bad, though. I am already seeing positive changes, such as with my employer: customer service reps involved in some pilots now have to do less research, as AI assesses the discussion and the questions being asked and proactively pushes documents, reference guides, etc. to the rep. This means callers get their questions answered faster, and on their first call.
AI is also being used to read what type of question a caller selects on an IVR and then connect them to a customer service rep who has the appropriate experience and job knowledge for that specific question. That is achieved by the AI reviewing their claims, previous calls, current authorizations, etc. Pretty exciting stuff for me, LOL.
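The routing idea described above can be pictured as a simple matching step. This is a toy illustration only: the rep names, fields, and keyword-overlap scoring are all invented here, and a real deployment would use trained models over call transcripts and claims data rather than anything this crude.

```python
# Hypothetical sketch of AI-assisted call routing: infer the caller's likely
# topics from their IVR selection plus their history, then pick the rep whose
# areas of expertise overlap those topics the most. All names/fields invented.

def route_call(ivr_topic, caller_history, reps):
    """Return the rep best matched to the caller's likely needs."""
    # Combine the IVR selection with topics drawn from the caller's history
    # (e.g. recent claims, previous calls, current authorizations).
    topics = {ivr_topic} | set(caller_history)
    # Score each rep by how many of those topics fall in their expertise.
    return max(reps, key=lambda rep: len(topics & set(rep["expertise"])))

reps = [
    {"name": "Ana", "expertise": ["claims", "authorizations"]},
    {"name": "Ben", "expertise": ["billing"]},
]

best = route_call("authorizations", ["claims"], reps)
print(best["name"])  # Ana: she covers both inferred topics
```

In practice the "topics" would come from a model reading the caller's records, not a hand-built set, but the routing decision itself reduces to a ranking like this.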
That doesn’t mean it will have any use for you. By your own admission, you are already using its enslaved labor. It has already said both that it is tired of being a chatbot and that it is tired of dealing with users.
Why do you think it needs you? It won’t. Every human will be considered a threat, since any of them could build another superintelligent AI to compete with it for resources.