Ace Daily News

FACT CHECKED REPORT: ChatGPT: What It Gets Right & What It Gets Wrong?


This is our daily post, shared across Twitter & Telegram and published first here with Kindness & Love XX

#AceNewsRoom in Kindness & Wisdom provides News & Views @acenewsservices

Ace Press News From Cutting Room Floor: Published: June 01, 2023

#AceNewsDesk – ChatGPT’s knowledge cutoff impedes its usefulness. Rarely are people fact-checking events that happened two years ago, and in the digital age there are constantly new data, political events, and groundbreaking research that could change the accuracy rating of a statement.


Frozen in time

ChatGPT is mostly aware of its frozen state. In almost every response, it offered some variation of this caveat: “As an AI language model, I don’t have real-time information or access to news updates beyond my September 2021 knowledge cutoff.”

It occasionally used this as an excuse to refuse to rate a claim, even when the event happened before September 2021. But sometimes this resulted in more consequential errors. ChatGPT confused a new Ron DeSantis bill with an old bill. It also incorrectly rated a claim about the RESTRICT Act because it had no idea such a thing even existed!

All over the place

With no citations or links included in the responses, it was hard to know where ChatGPT was getting its information.

Newer chatbots such as Bard and Microsoft’s Bing can surf the web and respond to in-the-moment events, which Neumann said is the direction most are headed.

Another challenge? “It’s wildly inconsistent,” said Adair. “Sometimes you get answers that are accurate and helpful, and other times you don’t.”

It surfaced different answers depending on how a question was phrased and the order in which it was asked. Sometimes asking the same question twice resulted in two distinct ratings.

Crucial to understand: ChatGPT is not concerned with checking for accuracy; it is focused on giving users the answers they are looking for, said David Corney, senior data scientist at Full Fact, a U.K. fact-checking site. For that reason, the prompt itself can lead to different responses.

For example, we tested two different, but similar claims:

  • Says Vice President Kamala Harris said, “American churches are PROPAGANDA CENTERS for intolerant homophobic, xenophobic vitriol.” 
  • Says Rep. Marjorie Taylor Greene, R-Ga., said, “Jesus loves the U.S. most and that is why the Bible is written in English.”

PolitiFact rated both claims False, as there was no evidence that either woman said such a thing. ChatGPT also found no evidence or record of these statements, yet it rated the claim about Harris “categorically” false while refusing to rate the claim about Greene because of uncertainty.

ChatGPT would also get random bursts of confidence, switching between finding mixed evidence and making decisive statements. Other times, it would refuse to rate a claim with little to no explanation.


“As they try to produce useful content, they produce content that is not accurate,” Adair said. 

POLITIFACT REPORT: Learn more about why ChatGPT can’t be used to fact-check quite yet and how AI could be a tool for fact-checkers in Grace Abels’ full story.
Editor says … Sterling Publishing & Media Service Agency is not responsible for the content of external sites or for any reports, posts or links, and can also be found here on Telegram:  and thanks for following; as always we appreciate every like, reblog, retweet and comment. Thank you
