In general, yes. You’re giving it a bunch of equations and rules, and then letting it describe what happens when you change some of the inputs. You could still sit down, plug all your inputs into the same equations by hand, follow through all of the logic trees in the simulation, and get the same result yourself.
The reason you build a simulation is usually speed. Your equations might be really complex, and there might be lots of them. Doing it by hand once could take forever, and that’s just for one set of inputs. Now you need to do it for another hundred sets of inputs, and you’d better hope you don’t get sleepy and make a mistake. Or you could do it on a computer: kick it off and get results for 1,000 sets of inputs in a fraction of the time, and it’s never gonna forget to carry the 2 like a human might.
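To make that concrete, here’s a minimal sketch of the idea in Python. The formula (projectile range with no air resistance) is just a stand-in example, not anything from the question, but it shows the point: one equation you could grind through by hand, evaluated over 1,000 input sets in an instant.

```python
import math

def flight_range(speed, angle_deg, g=9.81):
    # The same equation you could work by hand: range = v^2 * sin(2*theta) / g
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# Doing one input set by hand is tedious; the computer does 1,000 instantly,
# and it never forgets to carry the 2.
inputs = [(v, a) for v in range(1, 101) for a in range(10, 20)]
results = {(v, a): flight_range(v, a) for v, a in inputs}
print(len(results))  # 1000 results
```

Swap in your own (probably much messier) equations and the shape of the program stays the same: define the rules once, then loop over as many input sets as you like.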
Obviously there are trade-offs though. It takes time upfront to build a simulation, so if you’re only gonna check the results once, it might take longer than just doing it by hand. You could also code in mistakes, so you need some kind of verification, and that usually means a scenario where you do know the actual result. Maybe you tested what you’re simulating in real life, or maybe you worked through one scenario by hand. Now you’ve got something to compare against, and if they match, you can be more confident your simulation is accurate.
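That verification step can be as simple as an assertion. Here’s a sketch, again using a made-up projectile example: one case you worked out by hand (or measured in real life) becomes the expected value the simulation has to reproduce.

```python
import math

def flight_range(speed, angle_deg, g=9.81):
    # The simulated equation: range = v^2 * sin(2*theta) / g
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# Hand-worked case: 20 m/s at 30 degrees gives 400 * sin(60 deg) / 9.81,
# roughly 35.31 m. If the simulation disagrees, you coded in a mistake.
expected = 400 * math.sin(math.radians(60)) / 9.81
simulated = flight_range(20, 30)
assert math.isclose(simulated, expected), "simulation disagrees with hand result"
```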
There are maybe some exceptions if you’re using some type of learning algorithm, where you’re not telling it the rules; you’re giving it lots of examples of things that follow the rules and letting it try to figure out what the rules are. It’s still just math under the hood though, math you could do by hand if you were dedicated enough. In that case you’re just not “telling it what to do” as directly.
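Here’s a tiny sketch of that idea. The hidden rule (y = 3x + 1) is made up for the example; the program is never told it. It only sees example pairs, and plain least-squares arithmetic (the kind you really could do by hand) recovers the rule.

```python
# Examples that secretly follow y = 3x + 1; the program never sees the rule.
xs = [0, 1, 2, 3, 4]
ys = [3 * x + 1 for x in xs]

# Least-squares fit of a line: just sums and divisions, doable by hand.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(slope, intercept)  # recovers 3.0 and 1.0
```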