In real life, there are countless possible wavelengths of visible light, countless more possible combinations of those wavelengths, and countless ways an object might absorb each wavelength. For example, you might have an amber LED that shines a single wavelength of light, and a mixture of red and green light that looks about the same color to your eye. But hold a pink object and a grey object under each light, and they might look the same under the amber light yet different under the mixed red and green light.
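To make that concrete, here is a toy Python sketch of how two completely different spectra can stimulate the eye identically. The cone sensitivity numbers are made up purely for illustration, not real measurements:

```python
# Toy cone sensitivities (made-up numbers) at three wavelengths:
# 590 nm (amber), 630 nm (red), 530 nm (green).
SENSITIVITY = {
    "L": {590: 0.8, 630: 0.9, 530: 0.6},
    "M": {590: 0.5, 630: 0.2, 530: 0.9},
    "S": {590: 0.0, 630: 0.0, 530: 0.0},  # blue cones barely respond here
}

def cone_response(spectrum):
    """Total stimulation of each cone type by a light spectrum,
    given as {wavelength_nm: intensity}."""
    return {
        cone: round(sum(sens[wl] * power for wl, power in spectrum.items()), 3)
        for cone, sens in SENSITIVITY.items()
    }

amber = {590: 1.0}                    # single-wavelength amber LED
red_green = {630: 0.609, 530: 0.420}  # a particular red + green mix

print(cone_response(amber))      # {'L': 0.8, 'M': 0.5, 'S': 0.0}
print(cone_response(red_green))  # {'L': 0.8, 'M': 0.5, 'S': 0.0} -- looks identical!
```

A pink or grey object reflects those wavelengths by different amounts, which is why the match can break once the light bounces off a surface before reaching your eye.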
Our eyes perceive color through three types of cone cells. Each type is sensitive to a range of colors, but the three types are most sensitive to red, green, and blue respectively. If a camera had an identical spectral response to your eye, and your monitor could perfectly saturate just one type of cone cell at a time, you might be able to accurately mimic any color. But the light-producing elements in a display are not perfectly saturated, so the range of colors they can show is limited (this range is called the color gamut), and different display technologies have different gamuts. There are also a limited number of brightness steps for each color, and the brightness range of a display is smaller than real life (less dynamic range). On top of that, the spectral response of a camera sensor is not identical to the spectral response of the cone cells in your eye.
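As a rough sketch of the "limited brightness steps" part: a typical display stores each color channel as 8 bits, so there are only 256 possible levels per channel, and nearby real-world brightnesses collapse onto the same step. The function below is just an illustration of that idea:

```python
def quantize_8bit(level):
    """Map a brightness in [0.0, 1.0] to the nearest of 256 display steps."""
    return round(level * 255)

print(quantize_8bit(0.500))  # 128
print(quantize_8bit(0.503))  # 128 -- a different brightness, same step
print(quantize_8bit(0.505))  # 129
```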
All of this adds up to a complicated mess, and it is why computers and print use different color systems: RGB for screens, and CMYK, Pantone, and others for print.
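As a very rough illustration of how different the two systems are, here is the common naive RGB-to-CMYK formula in Python. RGB adds light to a black screen while CMYK subtracts it from white paper, and real print workflows use calibrated color profiles rather than anything this simple:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion; r, g, b are in [0.0, 1.0]."""
    k = 1 - max(r, g, b)           # how much black ink is needed
    if k == 1.0:                   # pure black: no room for other inks
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```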