I came across LLVM, and it seems its developers are in a constant struggle to optimize its IR (intermediate representation) into machine code, and they are always losing this battle in some way: either by introducing weird bugs, or by finding themselves in a checkmate position, unable to apply certain optimizations because those would break invariants.
Why is it not possible to get this right? Is it just because CPUs are complex, or is there something more fundamental going on? After all, people have been trying to optimize C for x86 for quite a while now…