When a model’s accuracy on its training data is very high, it may be “overfit,” meaning it is only accurate on the data it was trained with. In other words, it doesn’t generalize: it can’t deal with data it hasn’t seen before.
There are methods to catch this, such as evaluating the trained model on a separate set of data it never saw during training (a test set). However, if you pick from a set of trained models based on their test results, you’re effectively using the test data as part of the training process, which defeats the purpose. That’s why, in practice, a third “validation” set is often used for choosing between models, with the test set saved for one final check.
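For anyone curious what this looks like in practice, here’s a minimal sketch (assuming Python with scikit-learn and a made-up synthetic dataset): a large gap between training accuracy and test accuracy is the classic symptom of overfitting.

```python
# A minimal sketch of spotting overfitting, assuming scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for real data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unconstrained decision tree can memorize the training set.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# A big gap between these two numbers means the model memorized rather than learned.
print("train accuracy:", model.score(X_train, y_train))  # often close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower if overfit
```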