It’s not so much that they don’t know how it works; it’s that they can’t predict the output. When you use engineering software like Ansys to figure something out, you roughly know what the result will be before you run it. With AI, the magic is that you don’t know what it will output. If you ask an AI “What is the most aerodynamic shape for a car that can hold 5 passengers?”, you can’t predict the answer in advance, which is why we don’t really “understand” it. It’s more akin to discovery.
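A rough way to picture the difference (a toy sketch in Python, not how either kind of system actually works internally; the function names and the canned answers are made up for illustration):

```python
import random

# Engineering software is broadly deterministic: the same inputs
# always produce the same, predictable output.
def drag_force(density, velocity, drag_coeff, area):
    # Standard drag equation: F = 1/2 * rho * v^2 * Cd * A
    return 0.5 * density * velocity**2 * drag_coeff * area

# Same call, same answer, every single run.
print(drag_force(1.225, 30.0, 0.30, 2.2))  # always 363.825

# A generative model's output is *sampled* from a probability
# distribution, so even with an identical prompt you don't know
# the answer in advance. (Hypothetical stand-in, not a real model.)
def toy_generate(prompt):
    candidates = [
        "a teardrop profile",
        "a tapered fastback",
        "a boxfish-inspired body",
    ]
    return random.choice(candidates)

print(toy_generate("Most aerodynamic 5-seater shape?"))  # varies run to run
```

The first function behaves like the Ansys case: you could work out the number by hand before running it. The second behaves like the AI case: the interesting part is precisely that you have to run it to find out what comes back.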