I guess there are a couple of things I’d say here.

First, OpenAI is pretty transparent about how the results it returns should be interpreted in terms of accuracy or bias; see their Terms of Use (scroll down to the ‘Content’ section). They state that results may not be factual, that opinions may be offered (and may sometimes be offensive), that results should be vetted against other sources of information, including human judgment, etc.

Secondly, it’s a little absurd to say ChatGPT has a history of lying. ChatGPT, like any other AI platform, does not have the capacity to lie or tell the truth; these systems make no such distinction. They simply return output based on the information they were trained on.