LLMs can be specialized to a task without any fine-tuning, with the prompt acting as a form of in-context learning. How does that work, and what seems to happen internally in the network?
It’s fascinating! When an LLM is prompted without fine-tuning, none of its weights change. The instructions and examples you put in the prompt simply become part of the input, and the attention layers let every token the model generates condition on those examples, so the network pattern-matches against the demonstrations in its context window and continues them as if it had learned the task on the spot.
Internally, research on in-context learning suggests the forward pass can act like an implicit, temporary learning step over the prompt, with certain attention heads copying and completing patterns from the examples, but the exact mechanism is still being studied.
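To make that concrete, here is a minimal sketch of what “specializing without fine-tuning” looks like in practice: the task is defined entirely by the text of the prompt, and the model itself is never updated. The sentiment task is just an illustration, and `llm_complete` is a hypothetical placeholder for whatever completion API you happen to use, not a real library call.

```python
# Minimal sketch of in-context learning: the "learning" lives entirely in the
# prompt text, not in any weight update.

def build_few_shot_prompt(examples, query):
    """Assemble labeled examples plus a new query into a single prompt string."""
    lines = [f"Review: {text}\nSentiment: {label}\n" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM completion call (e.g. an HTTP API)."""
    raise NotImplementedError("Replace with a call to your model of choice.")


if __name__ == "__main__":
    examples = [
        ("The plot dragged and the acting was wooden.", "negative"),
        ("A joyful, beautifully shot film.", "positive"),
    ]
    prompt = build_few_shot_prompt(examples, "I couldn't stop smiling the whole time.")
    print(prompt)  # The model only ever sees this text; its weights never change.
    # answer = llm_complete(prompt)  # would typically come back as "positive"
```

Swapping in different examples changes the “task” instantly, which is the whole point: the specialization is carried by the context, not by retraining.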
Anonymous
Comments
My understanding of the technology, and one of the “scary” things about it, is that even the people who created these models cannot fully explain how they do what they do. I don’t think there is currently a complete answer to this question.