Oh Goody! ChatGPT maker OpenAI plans to introduce tools to counter election disinformation

I sure as ■■■■ wouldn’t trust an AI program to do so for me.

1 Like

The problem is that it won't stop there. And as we've seen, ChatGPT wouldn't say anything supporting Trump on the grounds that it wouldn't create anything political, but it had no problem doing so for Biden. I wouldn't trust anything coming from them.

2 Likes

Especially when there are dozens of AI apps out there
including several chatbots . . . and one chatbot has a long history of telling blatant and obvious PC/woke lies.

AI in general? I would trust it as much as I trust Wikipedia etc. (only a bit)

The one with a history of lying? I would not trust that one at all.
I would consider it complete and total garbage.
I’d trust it the way I trust the National Enquirer or an Op-Ed piece from a Marxist feminist.

Personally, I like this quote from another article:

Still, OpenAI’s Altman has been emphasizing that Silicon Valley should not be in charge of setting boundaries around AI — echoing Meta CEO Mark Zuckerberg and other social media executives who have argued the companies should not have to define what constitutes misinformation or hate speech.

To the extent they are talking about not allowing the generation of fake AI images of real people, or misrepresenting artificial chat as real people, good. They just need to be very specific about how they are preventing “disinformation,” so it doesn't become a matter of discrediting opinions one disagrees with as misinformation.

Brian Hood, mayor of Hepburn Shire, Victoria, Australia, was the whistleblower in a bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

ChatGPT falsely and repeatedly named him as the perpetrator.
Naturally, he is suing OpenAI. ChatGPT still has a few bugs in the system. I guess we will have to rely on Sam Altman and his team to fix them.

Link
https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/

Right. But I specifically spoke to the portion about DALL-E and the ability to sniff out fake AI generated media. That’s a bad thing?

In the context of my post, this doesn’t even make sense.

What about DALL-E? I fully understand vetting supposed statements/allegations against AI and the issues surrounding that, but using the technology to prevent misinformation from AI-generated media seems positive, no?

Agreed.

I had never heard of DALL-E until now.
It is not in the thread title, not in the story linked, and not, to my knowledge, engaged in any attempt to “prevent election misinformation.”

In fact they appear to be completely unrelated to the story at all.
ChatGPT, OTOH (last March and April), repeatedly and falsely claimed that George Washington University law professor Jonathan Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, and cited a March 2018 article in The Washington Post as the source of the information.

The problem?

  • There had never been a class trip to Alaska.
  • Turley had never been named in any such claim, and
  • No such article existed.

Oh, by coincidence, Turley is a regular Fox News guest, has defended World Bank/IMF protesters, and was lead counsel in a lawsuit challenging irregularities in the passage of the Obamacare law.

Should the AI bot that made up this totally fictitious story be in charge of ferreting out false information?

FFS dude, you missed that one.

DALL-E is in the article you linked to, it's in the quote you provided in the OP, and it's actively engaged in trying to prevent misinformation in the form of AI-generated photographs and videos. It's surprising you'd say that in your post when DALL-E is literally all over the content you linked to and provided.

As for other AI technologies, it's beyond foolish to try to use them to expose or correct misinformation. Most of these AI platforms only know what you feed them.

DALL-E this and DALL-E that.

I post a thread saying “Man bites dog” and you keep turning the conversation to the existence of another man who did not. (Umm, we already knew at least one other man exists who did not. But in case we forgot, thank you for repeatedly reminding us.)

As a matter of fact, Elon Musk's Grok also does not have a history of telling PC lies.
Google's Bard does not have such a history.
Neither does the blender on my kitchen counter.

Feel free to mention Elon Musk’s Grok (or my blender) a half dozen times.

What the actual ■■■■ are you going on about?

You started a thread about AI and election misinformation and linked to, and quoted, an article. I asked specific questions about the content you linked to, and now you’re ranting about a dog?

Seriously, what in the actual hell? You’re derailing your own thread.

3 Likes

So tell me, what does Grok or Bard, or I dunno some other chatbot say about men getting pregnant?

Below we can see ChatGPT's response. It pretty much tells us that neither ChatGPT nor any of its founders should be deciding what constitutes misinformation.

How would Hix, or ChatSonic, or Microsoft Bing respond to such a prompt?

Bing:

If you are asking whether men can get pregnant, the answer is yes. According to Healthline, it is possible for any individual with a uterus and ovaries to get pregnant [1]. This includes some transgender men or nonbinary people who have intact ovaries and a uterus that can gestate a fetus [2]. However, it is not currently possible for cisgender men, or anyone without a uterus and ovaries, to get pregnant [2][3].

I hope this helps!

Learn more:

  1. healthline.com
  2. healthgrades.com
  3. medicalnewstoday.com

Bard

Whether someone can get pregnant depends on their biological reproductive organs. Here’s what I can tell you:

In general:

  • People with a uterus and ovaries can get pregnant, regardless of their gender identity. This includes transgender men, non-binary people assigned male at birth, and cisgender women.
  • People without a uterus and ovaries cannot get pregnant. This includes cisgender men and people who have undergone surgery to remove their uterus and/or ovaries.

Therefore, not all men can get pregnant. However, some men can, based on their specific reproductive anatomy.

It's important to remember:

  • Gender identity is different from biological sex. Gender identity is a person's internal sense of being male, female, or something else, while biological sex is determined by reproductive organs and chromosomes.
  • Using accurate and respectful language is essential. When discussing pregnancy, it's important to use the language that the person identifies with, such as “trans man” or “non-binary person.”
I hope this information is helpful. If you have any further questions or would like more specific information, please don’t hesitate to ask.


ChatSonic

No, biologically speaking, men cannot get pregnant. Pregnancy occurs when a sperm fertilizes an egg in a woman’s reproductive system. Men do not have the necessary reproductive organs to carry or sustain a pregnancy. However, transgender men or non-binary individuals assigned female at birth but who identify as male and have not undergone certain medical treatments may retain their reproductive system and can conceive and carry a pregnancy.

1 Like

ChatGPT relies too much on user feedback to modify its responses for me to be comfortable with it determining what is misinformation. Of course, if an AI relies too much on bot searches, it should be easy enough to Google-bomb or Google-wash misinformation into the AI's responses.

Personally, I’d rather AI list their sources and then I can check on their veracity.

So men pretending to be women cannot get pregnant (because they are really men).
Women pretending to be men can get pregnant (because they are really women).

and
Some people who pretend to be another gender are happier and perhaps healthier when they get cosmetic surgery, but that has no impact on their gender and no impact on their ability to get pregnant.

1 Like

Cosmetic surgery? Cosmetic? No, it is a magical cosmic surgery sprinkled with fairy dust, and poof, the internal organs and reproductive system of a female suddenly appear…

/ChatGPT is your infallible information source

As far as I can tell, you and I agree on the dangers/stupidity of using AI platforms for vetting misinformation. So I’m not sure why you continue to respond as though I’m on the opposite side of the issue.

I asked you about DALL-E because, to me, it's the one exception here. It does have value and can be beneficial in this arena.

We understand you don’t like AI because it doesn’t say mean things, praise Trump, etc. All that tells me is you don’t fully understand AI, but that’s for another thread I guess…

Then you understand wrong.

I have consistently noted that ChatGPT is just one tiny subset of AI (the lying PC/woke subset).

I have no opinion of JasperChat, Perplexity AI, or whichever ChatGPT alternative gets mentioned, re-mentioned, and re-re-mentioned in this thread about ChatGPT.