I work with AI, and people saying “we don’t know how these things work” is a pet peeve of mine. These things are entirely procedural, and we do know how they work: with a sufficient grasp of linear algebra and enough patience, you could build these algorithms with nothing more than pen & paper if you really wanted to. A more nuanced way to put it is that we can’t logically explain *why* they work.
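To make the “procedural” point concrete, here’s a minimal sketch (in Python, with entirely made-up numbers) of what one neural-network layer actually does: a matrix multiply, a vector add, and a simple thresholding step, all arithmetic you could grind through by hand.

```python
# A single neural-network layer is just arithmetic you could do on paper.
# All numbers here are invented for illustration.
import numpy as np

x = np.array([0.5, -1.0, 2.0])          # hypothetical input vector
W = np.array([[0.1, 0.3, -0.2],
              [0.4, -0.5, 0.6]])        # hypothetical weight matrix
b = np.array([0.05, -0.1])              # hypothetical bias vector

# One layer = matrix multiply, add bias, apply a simple nonlinearity.
pre_activation = W @ x + b
output = np.maximum(pre_activation, 0)  # ReLU: negatives become 0
print(output)
```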
There’s an axiom in classical statistical analysis that says *“correlation doesn’t equal causation”*, and coming up with a logical explanation for why a relationship between two things is causative is an important step in any project. What the fields of machine learning & AI do is essentially chuck that axiom out the window. What matters to AI is finding correlations that make good predictions, so if there’s a correlation between two things that’s reliable enough to make good predictions, who cares if it has no logical explanation?
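Here’s a toy sketch of that idea (synthetic data, every number invented for illustration): a hidden factor drives both a feature and a target, so the feature merely correlates with the target rather than causing it, yet an ordinary least-squares fit on that feature still predicts the target almost perfectly.

```python
# Toy example: the feature doesn't cause the target, a hidden factor
# drives both -- but the correlation is reliable, so predictions are good.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=1000)                           # unobserved common cause
feature = hidden + rng.normal(scale=0.1, size=1000)      # correlated, not causal
target = 2 * hidden + rng.normal(scale=0.1, size=1000)

# Fit a straight line: target ≈ slope * feature + intercept
slope, intercept = np.polyfit(feature, target, 1)
predictions = slope * feature + intercept

correlation = np.corrcoef(predictions, target)[0, 1]
print(f"prediction/target correlation: {correlation:.3f}")  # close to 1
```

No causal story is needed anywhere in that fit; the model is judged only on whether its predictions line up with reality.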