Because while these models are trained on internet data, the internet itself is full of misinformation, and on top of that, the learning algorithms are still in their infancy. As promising as they are, they still have a lot of issues that can produce a lot of false answers.
On the other hand, in my experience ChatGPT and similar AIs don't seem to lie nearly as much as some news articles claim.
I even tested this recently when there were news articles claiming that ChatGPT almost always gets math wrong. I started asking it math questions of varying difficulty, and it only got one somewhat wrong, and that was due to a mistake I could see even a human making.
And if I just ask it something basic, like who was the 30th president of the USA, it usually gets that right. It mainly seems to have issues with more logic-related questions, because the AI itself isn't really designed to be logical; it's designed to be conversational.