training and fine tuning a Large Language Model (LLM)

Let’s suppose you are learning how to speak. You need to understand the concepts of a house, a bicycle, anger… It takes you a lot of time to grasp those concepts and then attach words to them. That is like training an LLM. However, if you then want to learn a new language, you don’t have to redo all that work. You can start from what you already know and simply learn that “a house” is “une maison” in French. It goes much faster. This is fine-tuning.

Applied to machine learning in general, training means using a huge dataset made of the inputs your model will see (the features, e.g. the first word, the second word…) together with the outputs you want. You pass the features through your model, which generates a prediction, and you measure the error it made. You then have methods to work out where that error comes from and how to correct the model. You repeat the process until it is accurate enough. Fine-tuning is almost the same, but you start from an already trained model (a pre-trained model) and generally use a smaller dataset.
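The loop described above (predict, measure the error, correct, repeat) can be sketched in a few lines of plain Python. This is only an illustration: a one-weight linear model stands in for the LLM, and the `train` function, dataset values, and learning rate are all made up for the example, not taken from any real training setup.

```python
# Toy version of the training loop: predict, compute the error,
# nudge the weight to reduce it, repeat.
def train(data, w=0.0, lr=0.1, steps=100):
    """data: list of (feature, target) pairs. Returns the learned weight."""
    for _ in range(steps):
        for x, y in data:
            pred = w * x          # forward pass: make a prediction
            error = pred - y      # how wrong was it?
            w -= lr * error * x   # correct the weight (gradient step)
    return w

# "Training": learn y = 2x starting from nothing (w = 0.0).
dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w_trained = train(dataset)

# "Fine-tuning": start from the already-trained weight and adapt it
# to a related task (y = 2.2x) with a smaller dataset and fewer steps.
small_dataset = [(1.0, 2.2), (2.0, 4.4)]
w_finetuned = train(small_dataset, w=w_trained, steps=20)
```

The fine-tuning call converges quickly precisely because it starts near a good answer instead of from zero, which is the whole point of reusing a pre-trained model.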
