A core is a kitchen. It has a sink, a cutting board, a refrigerator, a stove with one or two burners depending on the model of CPU, an oven, and all the other things that a kitchen has. But only 1, maybe 2, of each thing. In a real CPU these would be access to RAM/cache, an arithmetic unit, a floating-point unit, a multiplication/division unit, a vector math unit, and so on.
A thread is a cook/chef: the person actually doing the work with the tools, following the instructions in the recipe to make what’s ordered. In the CPU, a thread executes instructions and has a small amount of memory of its own (registers), used by software for very short-term storage.
More than one cook in the kitchen does speed up cooking, but as I said, there’s only 1 of each type of machine or resource in the kitchen. They must share. So the speed of cooking doesn’t literally double, but it does improve noticeably. However, it also means each cook works slightly slower than they would alone, since they have to wait their turn for shared tools.
And in the world of computer security, people have discovered this sharing can be abused to learn information. If you order one cook to make something that requires constant access to the frying pan, and you notice the job took about twice as long as normal, you can infer that the *other* cook needed the frying pan a lot as well, which may be information you can use.
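Here’s a toy sketch of that idea in Python, just to make the timing intuition concrete. It is **not** a real attack (real ones target specific shared units, use very precise timers, and make sure they’re co-resident on the same physical core as the victim); it only shows the basic inference: time a fixed job, and if it suddenly takes much longer than your baseline, something else is probably competing for the same resources. The 1.5× threshold and the iteration count are arbitrary numbers picked purely for illustration.

```python
import time

def time_workload(iterations=5_000_000):
    """Do a fixed amount of floating-point work and return the elapsed seconds."""
    start = time.perf_counter()
    x = 1.0
    for _ in range(iterations):
        x = x * 1.0000001  # keeps the floating-point unit (the "frying pan") busy
    return time.perf_counter() - start

baseline = time_workload()   # measured when the machine is otherwise idle
later = time_workload()      # measured again while the "victim" may be running

if later > 1.5 * baseline:   # arbitrary threshold, purely for illustration
    print(f"Slowed from {baseline:.3f}s to {later:.3f}s -- something else "
          "is probably hammering the same unit on this core.")
else:
    print(f"No obvious contention ({baseline:.3f}s vs {later:.3f}s).")
```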
Each core must have at least 1 thread. For the same number of cores, more threads per core is generally better, since more work can get done overall. But if you already have more than enough cores for your workload, turning off hyperthreading and running just 1 thread per core may speed each individual thread up a bit.
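If you’re curious whether your own machine has hyperthreading/SMT turned on, here’s a quick check. This sketch assumes the third-party psutil package for the physical-core count (the standard library only reports logical CPUs); actually turning SMT on or off is normally done in the BIOS/UEFI, not from a script like this.

```python
import os

import psutil  # third-party package: pip install psutil

logical = os.cpu_count()                    # hardware threads the OS can schedule on
physical = psutil.cpu_count(logical=False)  # actual cores (the "kitchens")

print(f"{physical} physical cores, {logical} hardware threads")
if logical and physical and logical > physical:
    print(f"SMT looks enabled: {logical // physical} threads per core.")
else:
    print("One thread per core -- SMT is off or not supported.")
```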