I don’t actually know how it is done in practice, but, if I were a programmer, I’d estimate it as follows:
1) Determine the file or update size. Let's say 100 MB.
2) Determine the copy (or install) speed in MB/s. If you don't know it, then after a few seconds sample how much data has been copied so far relative to the size determined in step 1. Let's say my copy speed is 10 MB/s.
3) File size / copy speed = time left to copy: 10 seconds in this case. (A rough sketch of this idea is below.)
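A minimal sketch of those three steps in Python (the numbers and the `estimate_seconds_left` helper are just for illustration, not how any real copy dialog actually does it):

```python
def estimate_seconds_left(total_bytes, copied_bytes, elapsed_seconds):
    """Step 3: bytes still to copy divided by the observed copy speed."""
    if copied_bytes <= 0 or elapsed_seconds <= 0:
        return None                            # nothing measured yet
    speed = copied_bytes / elapsed_seconds     # step 2: bytes per second so far
    return (total_bytes - copied_bytes) / speed

# Step 1: 100 MB to copy.  After 1 second, 10 MB are done (10 MB/s),
# so the whole job is ~10 s and ~9 s of it are still left.
MB = 1024 * 1024
print(estimate_seconds_left(100 * MB, 10 * MB, 1.0))  # -> 9.0
```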
That’s a really good question; it’s more difficult than it sounds!
Something like a directory copy isn't too difficult, as the estimated time is directly related to the number of files and their sizes. And although the estimate will vary greatly depending on the hardware and bandwidth the software is running on, it is fairly simple to time the first, say, 5% of the copy and then extrapolate.
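In Python, that "time the first 5% and extrapolate" idea for a plain directory copy might look roughly like this (a sketch only; `os.walk` and `shutil.copy2` are just a convenient way to show it, and the 5% threshold is arbitrary):

```python
import os
import shutil
import time

def copy_tree_with_estimate(src, dst):
    # The total work is the sum of all file sizes under src.
    files = [os.path.join(root, name)
             for root, _dirs, names in os.walk(src)
             for name in names]
    total = sum(os.path.getsize(f) for f in files)

    done = 0
    start = time.monotonic()
    for path in files:
        target = os.path.join(dst, os.path.relpath(path, src))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(path, target)
        done += os.path.getsize(path)

        # Once roughly 5% is copied, extrapolate from the elapsed time.
        if done >= total * 0.05:
            elapsed = time.monotonic() - start
            remaining = elapsed * (total - done) / done
            print(f"{done / total:5.1%} copied, about {remaining:.0f}s left")
```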
Back in the day I was responsible for writing a Windows installation program, and although much of its job was to copy files around, it was a bit more complex as there were multiple stages involved (unzipping, updating the registry, checking for dependencies and DLL versions, etc.).
After a few days' work, I was very proud that I had honed the algorithm for estimating the time remaining to the point where it simply counted down the seconds remaining without any noticeable stuttering (basically by regularly remeasuring progress and re-extrapolating through all installation stages).
My pride was short-lived, as I soon heard the whole team complaining, 'Why is it taking so long to install nowadays?' Sure enough, my accurate 'time remaining' feature was now responsible for half the installation time. I was quickly instructed to replace it with a less accurate but more time-efficient version. 🙁
It varies a bit, but the simplest case: let's say you want to copy one big file. You record how big it is, you record the time you start, then every so often you check how much you've done and how long it's been. E.g., I've done 10% of the work and it's been 5 seconds since I started, so the total time is probably gonna be about 50s (or 45s remaining).
You might do things like only take into account the average speed over the last n seconds, or fudge the numbers in some way you discovered makes them look nicer (i.e. avoid the time-remaining timer going up), or other tricks. But the basic idea will still be to compare how much you've done to how much you have left to do.
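Here's roughly what those two tricks might look like in Python: only the last few progress samples feed the speed estimate, and the displayed number is never allowed to go back up (the window length and the clamping rule are arbitrary choices, not any particular OS's algorithm):

```python
import time
from collections import deque

class EtaSmoother:
    """Estimate time remaining from only the most recent progress samples."""

    def __init__(self, total_bytes, window=10):
        self.total = total_bytes
        self.samples = deque(maxlen=window)    # recent (timestamp, bytes_done) pairs
        self.last_eta = None

    def update(self, bytes_done):
        self.samples.append((time.monotonic(), bytes_done))
        if len(self.samples) < 2:
            return None                        # need two samples to measure a speed
        (t0, b0), (t1, b1) = self.samples[0], self.samples[-1]
        if t1 <= t0 or b1 <= b0:
            return self.last_eta               # stalled: keep showing the old number
        speed = (b1 - b0) / (t1 - t0)          # average speed over the window only
        eta = (self.total - b1) / speed
        if self.last_eta is not None:
            eta = min(eta, self.last_eta)      # cosmetic fudge: never let it go up
        self.last_eta = eta
        return eta
```

A real copy loop would just call `update(bytes_copied_so_far)` every second or so and format the result as "about N seconds remaining".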
The simple answer is that they don't! They guess. A computer isn't able to know how long it will take to perform an action before it does that action; it doesn't have foresight. The best it can do is make an educated guess.
There are a bunch of different metrics we could use to do this. In general, we're going to be timing how long it takes to do X percent of the operation and then extrapolating: if it took 5 minutes to do 10%, we can guess it'll take 50 minutes for 100%, and 50 - 5 = 45 minutes are left. The problem is in the definition of "percentage of the operation". What are we measuring, exactly?
In older systems, the easy answer was that a program contains more than one file. So if we are installing a program made up of 1,500 files, we can just say that 500 files is one third of the program installed, and use that for timing. The problem, of course, is that files aren't all the same size. This is why loading bars on older software often get stuck at certain percentages: if a program is 1,000 MB across a thousand files, but one of them is 500 MB, this is a very poor metric. It is, however, easy to implement, and it doesn't require the computer to work very hard to give you an estimate.
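As a toy illustration in Python of why the file-count metric stalls (the file sizes here are made up, and I've used ten files instead of a thousand to keep the output short):

```python
# 1,000 MB across ten files, but the last file alone is 500 MB.
sizes_mb = [500 / 9] * 9 + [500]

copied_mb = 0
for i, size in enumerate(sizes_mb, start=1):
    copied_mb += size
    by_count = i / len(sizes_mb)               # "files done" progress
    by_bytes = copied_mb / sum(sizes_mb)       # how much work was really done
    print(f"file {i:2d}: count-based bar shows {by_count:4.0%}, "
          f"actual data copied {by_bytes:4.0%}")

# With the big file last, the bar jumps to 90% quickly and then sits
# there for as long as it took to reach 90% in the first place.
```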
More complex solutions measure file size directly: we know that our install size is 1,000 MB, and if we've installed 500 MB we know we're halfway done. The problem is that this is a more difficult thing to measure, requiring the computer to continuously check how much has been written to disk. This can make installation slower, though on more modern machines that is not much of a concern. It is, however, still not a perfect metric: some parts of the program may be slower to write to disk because of where on the disk they're being saved, slowdowns from other programs, or just random chance.
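A size-based sketch, assuming the simplest case of installing a single file by copying it in chunks and counting the bytes as they're written (the chunk size and function name are just illustrative):

```python
import os

def copy_with_byte_progress(src, dst, chunk_size=1024 * 1024):
    """Copy src to dst one chunk at a time, reporting size-based progress."""
    total = max(os.path.getsize(src), 1)       # avoid dividing by zero on empty files
    written = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
            written += len(chunk)
            print(f"\r{written / total:6.1%} installed", end="", flush=True)
    print()
```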
A mixture of these methods, and others, can be used. In the end, as long as we can give a "good enough" guess, it doesn't need to be 100% accurate.