Why do loading times and loading % on computers jump around rather than smoothly increasing the numbers?

I’m playing a game and the loading screen sits at 0% for 10 seconds, then jumps to 95% and sits there for another 8 seconds, and then the last few percent tick up about once a second until it reaches 100%.

Computers have been like this forever. Isn’t there a better way to do loading times/completion % so it’s more accurate?

9 Answers

Anonymous 0 Comments

Blame it on modern hardware and software!

If you go back to really, really old hardware, things are predictable. The computer is only ever doing one thing. The computer does exactly what’s asked. The hardware is dumb and predictable. The software is dumb and predictable.

Today none of that holds true. Your game doesn’t run on the actual hardware, but on top of Windows/Mac/Linux, which have their own ideas. For performance reasons they lie to the programs running on them, for instance saying “I wrote your saved game” immediately while actually doing it later, but only if there’s enough free memory available. There are other things going on in the background, like auto-updates, antivirus scanning, and file indexing. All of those consume resources in different amounts at different times. So applications can’t reliably predict performance anymore; it’s out of their hands.

Even hardware is unpredictable. We have things like SSDs and shingled (SMR) hard disks, both of which do internal housekeeping that the computer can’t see coming. An old hard disk might write at 1MB/s, completely consistently. A modern SMR drive will write at 500MB/s while its RAM cache has room, then drop to 100MB/s once it’s writing to its CMR cache area, then suddenly stall if it runs out of that too. That’s completely out of a game’s hands, and basically unpredictable.

Anonymous 0 Comments

Progress indicators usually cover a wide variety of steps. Each of those steps might depend on different hardware and have different bottlenecks. It’s just not worth spending the time and effort calculating exactly how long something will take based on your hardware just for a smooth progress indicator.

Anonymous 0 Comments

Let’s say you’re doing a single thing, folding laundry. You have 10 pieces of clothing to fold, so when you’ve folded 3 of them you are 30% done. Simple, right?

Now let’s say you’re doing two different things, folding laundry and washing dishes. Folding a piece of clothing is a lot faster than washing a single dish, so when you’ve folded 4 of the 10 pieces and washed 0 of the 10 dishes, are you 20% done? Or do you try to adjust for the length of each task and say that you’re 15% done?

When you combine different tasks it’s simply hard to give an accurate picture of how done you are, and that’s assuming you even know how long each task takes. Maybe one of the dishes was really dirty and took three times as long to wash, and that ruins your entire estimate.
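
A rough sketch of that weighting in Python (the per-item time estimates are invented so the numbers line up with the example above):

```python
# Hypothetical numbers, invented for illustration: folding one piece of
# clothing takes ~1.5 minutes, washing one dish takes ~2.5 minutes.
tasks = [
    {"name": "fold laundry", "items": 10, "done": 4, "minutes_per_item": 1.5},
    {"name": "wash dishes",  "items": 10, "done": 0, "minutes_per_item": 2.5},
]

# Naive progress: count items and ignore how long each one takes.
done_items = sum(t["done"] for t in tasks)
total_items = sum(t["items"] for t in tasks)
naive = 100 * done_items / total_items            # 20%

# Weighted progress: weight each item by its estimated time.
done_time = sum(t["done"] * t["minutes_per_item"] for t in tasks)
total_time = sum(t["items"] * t["minutes_per_item"] for t in tasks)
weighted = 100 * done_time / total_time           # 15%

print(f"naive: {naive:.0f}%  weighted: {weighted:.0f}%")
```

And the weighted number is only as good as the time estimates, which is exactly the part the dirty dish ruins.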

Anonymous 0 Comments

Because it isn’t a single operation, and different operations take different time to complete.

Even for something as simple as transferring files, how do you divide up the bar? If you transfer 1000 files ranging from 1MB to 10GB, do you count each file as 0.1%, or do you weight the bar by how many bytes each file contains?

If you did it by file, it would be obviously wrong because the 10GB file would not take the same amount of time as the 1MB file. However, if you do it by space, then the operations that are constant with each file (such as looking up where it is on the disk) are not properly represented.
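
A quick sketch of the difference, with invented file sizes (999 small files plus one huge one):

```python
MB, GB = 10**6, 10**9

# Hypothetical transfer: 1000 files, one of which dwarfs the rest.
file_sizes = [1 * MB] * 999 + [10 * GB]
transferred = file_sizes[:999]        # the 999 small files are already done

# Count by file: every file is worth 0.1% regardless of its size.
by_file = 100 * len(transferred) / len(file_sizes)

# Count by bytes: the big file dominates, and the constant per-file work
# (lookups, opening and closing files) isn't represented at all.
by_bytes = 100 * sum(transferred) / sum(file_sizes)

print(f"by file: {by_file:.1f}%   by bytes: {by_bytes:.1f}%")
# by file: 99.9%   by bytes: 9.1%
```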

Anonymous 0 Comments

I’ve programmed many of these.

Say I want to paint 100 intricate boxes on the screen, which should take several seconds total, and I have a progress bar. That’s easy. For each box painted, I update the progress bar by 1%. The computer’s time to paint each box is fixed, and there’s a known number of boxes, so my progress bar will be very, very exact.
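
In code, that easy case is basically just a counter. A minimal sketch, where paint_box is a stand-in for whatever fixed-cost drawing work is being done:

```python
NUM_BOXES = 100

def paint_box(i):
    ...  # stand-in for the fixed-cost work of drawing box i

for i in range(NUM_BOXES):
    paint_box(i)
    percent = (i + 1) * 100 // NUM_BOXES   # each box is worth exactly 1%
    print(f"\rLoading: {percent}%", end="", flush=True)
print()
```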

But real-world problems creep in. Say you’re doing 1,000 different tasks when loading a game. Some tasks take a long time, some are quick. You update the progress bar at the completion of each task. The bar will thus sit still while a big task runs and then jump when it finishes, and tick along quickly through the small ones. It would be a lot of effort to measure completion times for every task and fudge the progress bar logic with that data, just to give someone a smooth progress bar. It would be especially useless on PC games, where you don’t know the hardware and don’t know anything about completion times on a specific user’s computer. Maybe he’s using an old 5,400 rpm hard disk instead of a basic SSD, so he has a loading bottleneck you don’t know about. Or he has an NVMe drive, so storage runs faster than you anticipated.

The worst progress bars are for downloading. You know the total amount of data to download, but not the download speed. So you start the download, and after a little bit you check how much data has come down. You do the math (data remaining divided by current speed), estimate how far along the download should be time-wise, and move the progress bar there. But download speeds aren’t constant; they fluctuate widely (due to your end, the Internet in general, and their end). So the download speeds up, you do your math, and you extend the progress bar by a big jump because at this speed you’ll be done much sooner. Then the download slows down, you do your math, and you really should move the progress bar backwards, but nobody wants to see that, so you hold it in place until the amount of data actually downloaded catches up with its position. Given all that, download progress bars are often quite erratic. You can smooth them somewhat by not reacting too strongly to each new speed measurement and looking at the trend instead, but they’re still going to be a bit erratic.
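
That “look at the trend instead” part is often done with something like an exponential moving average of the measured speed. A sketch with made-up speed samples and an arbitrary smoothing factor:

```python
# Invented per-second speed samples (MB/s) with a big spike and a big dip.
samples = [5.0, 5.2, 20.0, 19.5, 2.0, 2.2, 8.0]

alpha = 0.2                     # small alpha = react slowly to changes
smoothed = samples[0]
for raw in samples[1:]:
    smoothed = alpha * raw + (1 - alpha) * smoothed
    # Estimate time remaining (and bar position) from `smoothed`, not `raw`,
    # so a single fast or slow second doesn't yank the bar around.
    print(f"raw {raw:5.1f} MB/s -> smoothed {smoothed:5.1f} MB/s")
```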

Basically, real-world unknowns make it hard to provide a nice, smooth, and accurate progress bar.

Anonymous 0 Comments

I remember once reading a post from a developer here on Reddit. On busy days, when lots of people had to load an in-house developed app, it would sometimes take a few minutes for the assets and database queries to complete, and that would usually result in a lot of tech support calls complaining that the app had crashed.

So the dev decided to code a fake loading screen that would creep up a few percent every few seconds, make a couple of rapid jumps every 15 to 20 seconds, then go back to creeping up a percent at a time (you get the idea). By the time the animation ran its course the app had usually finished loading, and the fake loading screen would close. No more support calls to their help desk.
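
I obviously don’t have that dev’s code, but the idea only takes a few lines. A hypothetical sketch of a purely cosmetic loading bar:

```python
import random
import time

def fake_progress(is_real_work_done, cap=95):
    """Animate a made-up percentage until the real work reports it's done."""
    percent = 0
    while not is_real_work_done():
        percent = min(cap, percent + random.randint(1, 5))   # creep upward
        print(f"\rLoading: {percent}%", end="", flush=True)
        time.sleep(random.uniform(0.5, 2.0))                 # arbitrary pacing
    print("\rLoading: 100%")
```

The bar never passes 95% on its own, so it can’t “finish” before the real loading actually does.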

Because of that I now think the majority of loading screens are fake.

Anonymous 0 Comments

The computer doesn’t know what % it’s at. It just knows it has a series of checkpoints where it will increment the % complete bar.

Think of it like this: you’re baking a cake, and the process has a progress bar.

* Get all the ingredients out – 10%

* Mix all wet ingredients – 20%

* Mix all dry ingredients – 30%

* Mix all together – 40%

* Pour into cake pan – 50%

* Put pan in oven – 60%

* Take pan out of oven – 70%

* Remove cake from pan – 80%

* Frost cake – 90%

* Serve cake – 100%

This progress bar could get up to 60% in just a few minutes, because everything up to putting the pan in the oven is quick. But then it would stick at 60% for the 30 minutes the cake needs to bake. Then it would stick again at 70% because you have to wait 20 minutes for the pan to cool before removing the cake. Then it would finish from there pretty quickly.
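
That kind of checkpoint bar is usually just a list of steps with a percentage attached to each. A sketch using the cake steps as stand-ins for real loading work:

```python
# Checkpoint-style progress: the bar only moves when a step finishes,
# no matter how long that step actually takes.
steps = [
    ("Get all the ingredients out", 10),
    ("Mix all wet ingredients", 20),
    ("Mix all dry ingredients", 30),
    ("Mix all together", 40),
    ("Pour into cake pan", 50),
    ("Put pan in oven", 60),
    ("Take pan out of oven", 70),    # the bar sits at 60% for the whole bake
    ("Remove cake from pan", 80),    # ...and at 70% while the pan cools
    ("Frost cake", 90),
    ("Serve cake", 100),
]

def do_step(name):
    ...  # stand-in for the real work; some steps take seconds, some take minutes

for name, percent in steps:
    do_step(name)
    print(f"{percent}% - {name}")
```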

Anonymous 0 Comments

Because the progress bar is completely fictional. It’s based on a software developer’s *guess* about how long any particular step in a workflow should take. What’s more, there are zero stakes in making the progress bar accurate, so progress bars are very rarely benchmarked against reality, and the differences in capability across hardware (storage, memory, CPU, system bus, network, etc.) make a one-size-fits-all benchmark virtually impossible.

The actual utility of the progress bar is to demonstrate to the user that the process hasn’t completely seized up.

Anonymous 0 Comments

A decent way of thinking about it is that the number is a percentage of necessary tasks that have been done, not a percentage of the time it’s going to take to load completely.

Some tasks can be accomplished quickly, others take longer, so the bar jumps around. The reason it’s measured this way is that the developer knows which tasks need doing, but cannot know how long they will take on different machines.