Are language models like Gemini or ChatGPT leveraging complex code or is the technological advance based on aggregating and processing data?

312 views · Other · Technology

Speaking to the Advanced Gemini AI, I can’t help but feel I’m no longer interacting with a mere language model. It seems to form analyses and share opinions in a way that feels unreal. I’m trying to understand the underlying tech and the level of complexity involved. Was the barrier until now the ability to write the code, or is it more that we’ve now hit a critical mass in data collection and in the technological ability to process it?


4 Answers

Anonymous 0 Comments

> Was the barrier until now the ability to write the code, or is it more that we’ve now hit a critical mass in data collection and in the technological ability to process it?

The term [Machine Learning was coined in 1959](https://en.wikipedia.org/wiki/Machine_learning#History), so it’s been around for a while.

The recent breakthrough (ChatGPT, in the 2020s) is really about hardware: specifically, advances in and greater availability of GPUs and chips purpose-built for the kinds of math that underlie LLMs.

> It seems to form analyses and share opinions in a way that feels unreal.

It’s not opinions; it’s a statistical model of a response. That might sound like a nitpicky difference, but it’s important to understand that these things aren’t magic (Clarke’s third law notwithstanding): it’s math, LOTS AND LOTS of really advanced math.
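To make “a statistical model of a response” concrete: at each step the model assigns a score to every candidate next word, and a softmax turns those scores into probabilities. The words and scores below are invented for illustration; a real model scores tens of thousands of vocabulary entries with learned weights.

```python
import math

def softmax(scores):
    """Turn raw scores (logits) into probabilities that sum to 1."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores a model might assign after the prompt "The sky is"
logits = {"blue": 4.0, "falling": 1.5, "green": 0.5}
probs = softmax(logits)

# The "response" is just the statistically most likely continuation
best = max(probs, key=probs.get)
```

There is no opinion anywhere in that process, only a probability distribution and a pick from it.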

I hope that helps.

Anonymous 0 Comments

Okay, let me try an actual explain-like-I’m-five:

Older chatbots that you might remember from years ago were based more on programmers being very clever about telling the bot how to respond to super common things. So if you said “hi” to it, the bot’s actual programming would look through a list of programmed responses and say “oh hello!”, or enough variations of it that it felt like it was coming up with the response on its own. With time, those bots got more and more complex in order to handle different questions, but at the end of the day, each response was carefully written by SOMEBODY and stored in a database of responses.
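A minimal sketch of that older, hand-written approach; the rules and replies here are invented for illustration:

```python
import random

# Every reply below was written by a human in advance;
# the "bot" only looks them up.
CANNED_REPLIES = {
    "hi": ["oh hello!", "hey there!", "hi, how can I help?"],
    "how are you": ["I'm doing great, thanks for asking!"],
    "bye": ["goodbye!", "see you later!"],
}

def old_style_bot(user_message):
    key = user_message.lower().strip("?!. ")
    if key in CANNED_REPLIES:
        # Picking a random variation makes it feel less scripted
        return random.choice(CANNED_REPLIES[key])
    # Anything the authors didn't anticipate simply fails
    return "Sorry, I don't understand."
```

The weakness is obvious: the moment you ask something nobody pre-wrote a reply for, the illusion collapses.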

As you seem to be aware, the new chatbots work entirely differently, in that nobody pre-wrote the responses. Instead, the bot works by repeatedly determining what the next word should be as it constructs its sentence, taking into account the context of everything you’ve asked it so far and what the current question is. It’s sort of like the autocomplete on your phone, except way more calculated and precise.
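That word-by-word loop can be sketched in a few lines. Here `next_word` is a hypothetical stand-in for the real model, which would use billions of learned weights to score every word in its vocabulary; in this toy it is just a lookup table so the sketch runs:

```python
def next_word(context):
    """Stand-in for the real model: predict one word from the context.
    A trained LLM would compute probabilities over its whole vocabulary."""
    canned = {"the": "cat", "cat": "sat", "sat": "down"}
    return canned.get(context[-1], "<end>")

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words)   # predict from everything so far
        if word == "<end>":
            break
        words.append(word)        # the prediction joins the context
    return " ".join(words)

# generate("the") -> "the cat sat down"
```

The key point is the loop: each predicted word is fed back in as context for predicting the next one.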

The way it determines the most likely next word is by referring to all of its training, which essentially amounts to studying what real humans have said across the entire internet. So ultimately you get a chatbot that sort of echoes what most people have said in the past, in the way that most people communicate on the internet. And the result is PRETTY GOOD, maybe even surprisingly good. In fact, it’s so surprisingly good that a lot of scientists are now taking it seriously as a huge discovery that might lead to something genuinely intelligent. As of now, most scientists consider ChatGPT and similar chatbots a major stepping stone toward true AI, but not true AI itself. Sort of like a really, really cool party trick.
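In a toy version, “studying what real humans have said” can be boiled down to counting. A bigram model tallies which word follows which in its training text, then predicts the most common follower. The three-sentence “corpus” below is made up for illustration; a real model trains on vastly more text and learns far richer patterns than word pairs:

```python
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For every word, count which words followed it in the training text
follower_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follower_counts[current][following] += 1

def predict(word):
    """Return the word that most often followed `word` in training."""
    return follower_counts[word].most_common(1)[0][0]
```

Ask it what comes after “sat” and it echoes its training data: “on”, because that is what the humans in its corpus said. Scaled up enormously, that echo is what makes the output read so fluently.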

Anonymous 0 Comments

The architecture they use (“transformers”) is relatively new (first proposed in 2017), but it’s not particularly complicated. People just hadn’t realized before then that this particular arrangement could work so well if you have a lot of data.
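For the curious, the core operation really is compact. Below is a bare-bones, pure-Python sketch of scaled dot-product attention, the building block at the heart of transformers; a real model does this with huge learned weight matrices, many attention heads, and many stacked layers:

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query produces a weighted
    average of the value vectors, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key (scaled dot products)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Mix the value vectors according to those weights
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

That is essentially it: dot products, a softmax, and a weighted average. The surprise of 2017 was how well stacks of this simple operation work once you feed them enough data.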

Anonymous 0 Comments

So, does ChatGPT understand what we are asking it and the responses it provides, or does it work like the Chinese Room problem?