Computers process information represented using bits (short for binary digit: 0s and 1s). Data is stored as sequences of these bits, and "32 bit" and "64 bit" refer to the *width* of the chunks a processor works with natively (its registers, and usually its memory addresses). So a 32 bit system handles data in 32 bit chunks. Most systems are now 64 bit, but note that doubling the width doesn't just double the range: every extra bit doubles the number of possible values, so a 64 bit value can represent 2^64 distinct patterns versus 2^32 for a 32 bit one, about 4 billion times as many. That's also why 32 bit systems top out around 4 GB of addressable RAM, while 64 bit systems can address vastly more.
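A quick sketch in Python (just for illustration, not tied to any particular system) showing how the count of representable values grows with bit width:

```python
# An n-bit pattern has 2**n possible values, so doubling the
# width squares the count rather than merely doubling it.

BITS_32 = 32
BITS_64 = 64

patterns_32 = 2 ** BITS_32  # 4,294,967,296 distinct values
patterns_64 = 2 ** BITS_64  # 18,446,744,073,709,551,616 distinct values

print(f"32-bit patterns: {patterns_32:,}")
print(f"64-bit patterns: {patterns_64:,}")
print(f"Ratio: {patterns_64 // patterns_32:,}x")  # 2**32, not 2x
```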
Edit: as many people pointed out, my original answer wasn't 100% accurate, so I've corrected it above. Sorry about that, OP.