Do Text Embedding Models Understand Meaning or Definitions?

I read an article a few months back about how text embedding models create a sort of “vector space” to represent the meaning of human language. But it wasn’t clear to me whether these models are trained to understand the meaning of words or just their definitions. Is it currently possible to extract meaning into vectors relative to the “vector space” defined by a particular text embedding model? (There is a rough code sketch of what I mean at the end.)

Assume that, for humans, meaning has two levels: one concrete and one meta.

Concrete – Amy looked at the sky and she was happy.

Meta – Amy looked at the sky, her eyes squinted, and shortly after, the corners of her lips tilted upwards…almost involuntarily.

Meaning, in this context, is closer to what the meta version conveys.
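
To make the question concrete, here is a minimal sketch of what I mean by “extracting meaning into vectors”, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (both are just examples I picked, not anything the article specified): embed the concrete sentence, the meta sentence, and an unrelated sentence, then compare their cosine similarities in the model’s vector space.

```python
# Rough sketch only: sentence-transformers and the model name are example
# choices, not a claim about how any particular article did it.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")

concrete = "Amy looked at the sky and she was happy."
meta = ("Amy looked at the sky, her eyes squinted, and shortly after, "
        "the corners of her lips tilted upwards, almost involuntarily.")
unrelated = "The invoice is due at the end of the month."  # made-up filler sentence

# Each sentence becomes one point (vector) in the model's vector space.
embeddings = model.encode([concrete, meta, unrelated])

# Cosine similarity: values closer to 1 mean the model places the
# two sentences nearer to each other in that space.
print("concrete vs meta:     ", float(cos_sim(embeddings[0], embeddings[1])))
print("concrete vs unrelated:", float(cos_sim(embeddings[0], embeddings[2])))
```

If the model captures meaning rather than surface definitions, I would expect the concrete and meta sentences to land much closer to each other than either does to the unrelated one, even though they share few words. Is that a fair way to test it?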
