What happens when LLMs are prompted without fine-tuning?


LLMs can be specialized to a task without any fine-tuning, where the prompt acts as a form of in-context learning. How does that work, and what seems to happen internally in the network?
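To make the question concrete, here is a minimal sketch of the kind of prompting I mean, using the Hugging Face transformers text-generation pipeline (the model, task, and labels are only illustrative): the weights are never updated, and the task is conveyed entirely by example pairs placed in the prompt.

```python
# Minimal sketch of few-shot prompting / in-context learning, assuming the
# Hugging Face "transformers" library. The model name and task are illustrative;
# a small model like gpt2 may not follow the pattern reliably, but the mechanism
# is the same for larger models: no weights are updated, and the "specialization"
# comes entirely from the examples packed into the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The task (sentiment labeling) is demonstrated by a few input/output pairs
# written directly into the prompt, followed by the query we want answered.
prompt = (
    "Review: The food was wonderful. Sentiment: positive\n"
    "Review: The service was painfully slow. Sentiment: negative\n"
    "Review: I would absolutely come back again. Sentiment: positive\n"
    "Review: The room was dirty and cramped. Sentiment:"
)

# The model simply continues the text; if it has picked up the pattern from
# the prompt, the continuation should begin with a sentiment label.
output = generator(prompt, max_new_tokens=3, do_sample=False)
print(output[0]["generated_text"][len(prompt):])
```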


2 Answers

Anonymous

My understanding of the technology, and one of the “scary” things about it, is that even the people who created these models cannot tell you exactly how they do what they do. I don’t think there is currently a definitive answer to this question.
