The LLM isn’t poring over all that data every time you make a query. Instead, the data is analyzed once during a training phase, and that produces a model. The model is basically a huge set of numbers describing the relationships between words. The model is what you actually interact with, and it contains a compressed representation of the data that was analyzed during training. Once you have the model, answering queries is pretty quick, but training it can take days or weeks.
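A toy sketch of that two-phase split (this is nothing like a real LLM, just an illustration of "slow training once, fast lookups after"): a tiny model that counts which word tends to follow which, built from a made-up corpus.

```python
from collections import defaultdict

def train(corpus):
    """The slow, one-time phase: scan all the data and record
    which word follows which. The counts are the 'model'."""
    model = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def query(model, word):
    """The fast phase: no re-reading of the data, just a
    lookup in the already-learned relationships."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
]
model = train(corpus)        # done once; for real LLMs this takes days
print(query(model, "the"))   # instant: most common word after "the"
```

Real LLMs replace the counting with billions of learned parameters, but the shape is the same: training is the expensive part, querying the finished model is cheap.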