How do database engines manage to be so much faster at lookup / aggregation than your code?
It's like the card catalogs libraries used to keep: an index of the library's contents that let you narrow things down without searching the whole library to find one specific book.
Database engines build indexes that remember where things are stored, so records can be retrieved without scanning everything. They also use foreign keys, which are references linking related records, so the engine can follow those links (usually through an index) instead of searching from scratch for each one.
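A toy sketch of that idea (a made-up table, not any real engine's internals): an index is just a structure that remembers where each key's row lives, so a lookup can jump straight there instead of scanning every row.

```python
# Hypothetical "table": a list of rows.
rows = [
    {"id": 17, "name": "alice"},
    {"id": 3,  "name": "bob"},
    {"id": 42, "name": "carol"},
]

# Without an index: check every row until the id matches (O(N)).
def find_by_scan(rows, target_id):
    for row in rows:
        if row["id"] == target_id:
            return row
    return None

# Build an index once: id -> position in the row list.
index = {row["id"]: pos for pos, row in enumerate(rows)}

# With the index: one lookup instead of a scan (O(1) on average).
def find_by_index(rows, index, target_id):
    pos = index.get(target_id)
    return rows[pos] if pos is not None else None

print(find_by_scan(rows, 42))           # prints {'id': 42, 'name': 'carol'}
print(find_by_index(rows, index, 42))   # prints {'id': 42, 'name': 'carol'}
```

Real engines use B-trees or hash structures on disk rather than an in-memory dict, but the trade is the same: pay a little extra work on every write to keep the index current, and lookups stop depending on table size.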
They are also faster because they are usually implemented in compiled, low-level languages that need far fewer layers of translation before the computer can execute them.
Probably because they are written by people who’ve spent a lot of time figuring out the best ways to solve this problem, rather than just throwing some code at the wall.
SQL Server uses a database engine that is built around a computer science data structure called a B-tree.
The old way of searching required stepping through every single element in a list until you reached the key you were looking for. In the worst case, with N elements, the thing you wanted would be at the very end of the list, requiring you to walk through all N elements.
A B-tree is logically structured in an ordered way, so you can traverse a tree of N elements and find the thing you were looking for in around log N steps. For very large lists, log N is dramatically smaller than N.
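Because a B-tree keeps its keys in order, each step of a lookup can discard most of the remaining data, just like binary search on a sorted list. A rough sketch of the step-count difference, with binary search standing in for a real B-tree traversal:

```python
# Linear scan: in the worst case, touches all N elements.
def linear_search(sorted_keys, target):
    steps = 0
    for key in sorted_keys:
        steps += 1
        if key == target:
            break
    return steps

# Binary search on sorted data: halves the remaining range each step,
# so it needs about log2(N) comparisons.
def binary_search(sorted_keys, target):
    lo, hi, steps = 0, len(sorted_keys) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_keys[mid] == target:
            break
        elif sorted_keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

keys = list(range(1_000_000))
print(linear_search(keys, 999_999))   # prints 1000000
print(binary_search(keys, 999_999))   # prints 20
```

An actual B-tree node holds many keys so the tree stays shallow and each node maps to one disk page, but the payoff is the same logarithmic step count shown here.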
Depending on the DB implementation, it’s either because you’re accessing an in-memory database (saving I/O time) or because of indexing/caching. There are other possibilities (lower-level languages have way, way less overhead — orders of magnitude less), but those two will get you most of the way to explaining the difference.