Hopefully someone more qualified to answer this comes by, but my understanding is that training for longer, even when the measured accuracy looks lower, is often better simply because the model likely ends up with a more robust set of parameters to work from.
Imagine an ML model that tries to determine whether a number is prime without doing a sieve, and your random number generator keeps accidentally churning out multiples of 2, 3 and 5. The model tries some approach, lands on “always false” as one solution, and that keeps working…until it hits a real prime and fails. Now, do you want to stop at 100% accuracy because “all numbers are probably not prime,” or do you want to keep training it on more data to see if it comes up with a better solution?
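A minimal sketch of that idea (my own illustration, not anything from a real training setup): if every “training” number is a multiple of 2, 3 or 5, a degenerate model that always answers “not prime” scores perfectly, and only a test set containing real primes exposes it.

```python
import random

def is_prime(n):
    """Ground-truth check, used only for labeling."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def always_not_prime(n):
    """The degenerate 'model': predicts not-prime no matter the input."""
    return False

# Biased "training data": every number is a multiple of 2, 3 or 5
# (times a factor >= 2), so none of them are actually prime.
random.seed(0)
train = [random.choice([2, 3, 5]) * random.randint(2, 1000) for _ in range(10_000)]
train_acc = sum(always_not_prime(n) == is_prime(n) for n in train) / len(train)
print(f"accuracy on biased training data: {train_acc:.0%}")  # 100%

# The same 'model' falls apart as soon as real primes show up.
test = [97, 101, 103, 998, 999, 1000]
test_acc = sum(always_not_prime(n) == is_prime(n) for n in test) / len(test)
print(f"accuracy on a test set with primes: {test_acc:.0%}")  # 50%
```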