Great question! Think of ChatGPT like this: it’s a big collection of learned associations between words. So when I’ve browsed Reddit for far longer than intended (as usual, dangit), I might notice that “abusive” and “relationship” (as well as gyms and lawyers, apparently) are often paired and together carry a negative connotation, whereas “relationship” can also be a positive thing in combination with other words. “Abusive”, however, is almost never positive in any combination. Imagine a massive collection that has combinations and connotations like that stored inside it, ready to understand sentences through that lens. That’s ChatGPT. Except, besides understanding language, it can also generate it.
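(Purely illustrative, if a toy example helps: you can picture those stored associations as a lookup table of word pairs and their “vibe”. The real model stores nothing this simple, just millions of numeric weights, but the idea is similar.)

```python
# Toy illustration only: the real model learns millions of numeric weights,
# not a tiny lookup table, but the basic idea of stored associations is similar.
pair_connotation = {
    ("abusive", "relationship"): -0.9,  # strongly negative together
    ("healthy", "relationship"): 0.8,   # positive together
    ("abusive", "gym"): -0.7,           # also negative, apparently
}

def connotation(word_a, word_b):
    """Look up the learned 'feeling' for a pair of words (0.0 if never seen together)."""
    return pair_connotation.get((word_a, word_b), 0.0)

print(connotation("abusive", "relationship"))  # -0.9
print(connotation("abusive", "puppy"))         # 0.0 -> no association learned
```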
So how does a model generate words? Well, basically, you do the same trick as before when trying to understand language, except now you’ve been taught combinations of words in RESPONSE to sentences. You don’t just try to comprehend them: you’re trying to say meaningful things as a reaction to what users say or ask. The folks over at OpenAI trained ChatGPT to do just that, feeding it prompts and giving it feedback on the quality of its responses. Makes sense, right? If you want to learn German, someone has to tell you “Nein. Das sagen wir so nicht.” (“No. That’s not how we say it.”) for you to ponder blankly until you finally get the highly coveted “Gut. Jetzt lass mich in Ruhe.” (“Good. Now leave me alone.”) every now and then; same goes for the model.
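If a rough sketch helps, you can picture that feedback loop like this. This is nothing like OpenAI’s actual training pipeline (the prompt, the canned replies, and the weight-nudging rule are all made up for the example); it just shows the general idea of “replies that got a thumbs up become more likely”.

```python
import random

prompt = "What is the capital of France?"
# Hypothetical canned replies; a real model generates text instead of picking from a list.
candidates = ["I don't know.", "Paris is the capital of France.", "Bananas are blue."]
weights = [1.0, 1.0, 1.0]  # all replies start equally likely

def respond():
    """Pick a reply, favouring ones that earned good feedback before."""
    return random.choices(range(len(candidates)), weights=weights)[0]

def give_feedback(choice, good):
    """Human feedback nudges that reply's weight up or down."""
    weights[choice] *= 1.5 if good else 0.5

# One round of 'training': the human only likes the correct answer.
picked = respond()
give_feedback(picked, good=(candidates[picked] == "Paris is the capital of France."))
print(candidates[picked], weights)
```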
What’s important to understand, then, is that the model doesn’t actually have any comprehension of what it’s generating. Its outputs are just learned combinations of words that work well given the examples it has been shown. It can generate some VERY accurate things all on its own from all the patterns it has learned, but the further it moves away from what it was taught, the less reference ChatGPT has for correcting mistakes. After all, it can’t learn much beyond what it was taught before being deployed to you. It isn’t so much lying as simply grasping at straws, stringing together words that should theoretically belong together to give you a very compelling story that’s, well, nonsense, because it simply hasn’t learned a correct answer to your prompt.
I hope that helps it make a little more sense to you!
You know predictive text when you write on your phone? ChatGPT is basically that x10000. It’s just writing a sentence by selecting the next most likely word, over and over again. Because it is more complex than your phone’s predictive text, it can keep track of your entire conversation, stay on topic, and write impressively.
When it thinks you need a reference next, it’s just “guessing” the URL, the same as it’s been guessing the text the entire time (this is literally all it can do: guess the next word*). It knows most URLs start with “https://” so it puts that, then it guesses a likely domain name for whatever the topic is about, say “www.somenewssource.com”, then it guesses a path, “/on_topic_source.html”. So now you have a realistic-looking URL as your reference.
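If you want to see that “pick the most likely next piece” loop written out, here’s a minimal sketch. The pieces and the probabilities are completely made up, and a real model works over tens of thousands of tiny word fragments rather than whole chunks like these, but the mechanism is the same: keep appending the likeliest next piece, never checking whether the result actually exists.

```python
# Made-up 'next piece' probabilities for a fake reference URL.
# A real model has learned patterns like these from real text; it does not
# check whether the finished URL points anywhere.
next_piece = {
    "<start>":                "https://",
    "https://":               "www.somenewssource.com",
    "www.somenewssource.com": "/on_topic_source.html",
}
piece_probability = {
    "https://": 0.95,
    "www.somenewssource.com": 0.6,
    "/on_topic_source.html": 0.7,
}

def guess_url():
    """Keep appending the most likely next piece until there are no more options."""
    context, url = "<start>", ""
    while context in next_piece:
        piece = next_piece[context]          # the 'most likely' continuation
        url += piece
        context = piece
    return url

print(guess_url())  # https://www.somenewssource.com/on_topic_source.html -> plausible, but invented
```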
So it’s not that it’s lying, just that it’s made up.
*They seem to be developing plugins to give it more functionality, so in the future it may be able to search the web for actual references 🙂
Not lying, just half-formed connections in a massive jumble of linked information. The apology is more of a hard-coded response than a genuine “thank you for providing direct negative feedback on this erroneous response, we will use this to improve in the future”.
It’s like if someone said, “I remember consuming media, probably a paper, from a scientist, Neil deGrasse Tyson, or Einstein, and it was talking about finches in the Galapagos and how the birds prove evolution is what killed the dodo.” You might be able to piece together that they saw Neil on JRE talking about Darwin and the dodo’s lack of natural predators leaving it unequipped when humans started eating them. That person might even apologize for being wrong, but they didn’t lie to you.
When you ask ChatGPT something, it doesn’t just come up with one single answer. It’s more like:
You: “Who is the president of the USA?”
ChatGPT: (magic)
ChatGPT: (90% Biden, 9% Trump, 1% Obama, fuck it, I’ll go with Biden)
ChatGPT: “Biden”
Shit happens when, instead of 90% vs 9%, the question is more complex and ChatGPT’s magic only lets it find answers with low certainty. It will still choose the most likely one, but if that one happens to have only a 1% chance of being right, it will most likely be wrong.
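If a tiny sketch helps, here’s that “go with the most likely answer” step in code. The candidate answers and percentages are invented for the example; the point is that the model commits to the top pick whether the top pick is 90% or 3%.

```python
def pick_answer(scores):
    """Return the highest-scoring answer and its probability (the 'fuck it, I'll go with it' step)."""
    answer = max(scores, key=scores.get)
    return answer, scores[answer]

# Easy question: one answer dominates, so the top pick is probably right.
easy = {"Biden": 0.90, "Trump": 0.09, "Obama": 0.01}

# Obscure question: nothing stands out, but the model still commits to the top pick.
hard = {"made-up paper A": 0.03, "made-up paper B": 0.02, "made-up paper C": 0.01}

print(pick_answer(easy))  # ('Biden', 0.9)
print(pick_answer(hard))  # ('made-up paper A', 0.03) -> stated just as confidently
```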