I used the book “Fundamentals of Classical Thermodynamics” by Van Wylen and Sonntag for my thermodynamics classes in college. The first chapter discussed the magnitude of relativistic effects compared to the magnitude of the general effects that would be covered through most of the book. The point was that these effects were so small from an engineering perspective that they could safely be neglected.
A simulation is just another kind of engineering problem where care must be taken to identify elements that will have meaningful effects on the outcome and those that won’t. Adding more and more effects to the simulation, if their magnitudes are continually smaller, yields diminishing returns: each new effect matters less than the last.
I spent five years working with a team simulating the logistics of aircraft reliability and maintenance, and the team members revisited the modeled effects continuously. There was even a proposal by the customer to include a whole host of additional elements in the simulation, and by a whole host I mean every tangential thing you could think of: the personnel doing the paperwork in the support units, the equipment and service bays needed to perform specialized operations associated with maintenance of certain parts, and more. The base simulation considered about eight major elements, but this expansion would have added in the neighborhood of thirty more. It was ambitious, it was interesting, and it was even plausible.
The first problem was that the customer didn’t ultimately want to pay for it, in part because the software framework was based on almost fifty years of GPSS spaghetti code that was fairly brittle and difficult to modify. If the whole thing had been reimplemented from scratch using a discrete-event simulation framework like the one I’ve been working on, it could have been fairly tractable, and this may have happened over the last couple of years. I would certainly have no trouble doing it with a small team in a reasonable time.
The second problem was that it would have been extremely difficult to get the data to accurately represent the various elements for each situation, since each simulation configuration could involve different locations, equipment and facilities, number and type of personnel, number and type of aircraft, and so on.
I’ve mentioned previously that the accuracy of the elements we did model was limited by our inability to model some of the decisions, usually having to do with scheduling and priority optimizations, that the human managers within the system could make. Imagine now trying to model a system where the range of human decisions is all but unlimited.
Modeling human action in a limited sense, as in the aircraft maintenance simulation, works because the range of actions considered is strictly limited. We also did simulations of people evacuating from buildings and even open spaces like the National Mall in Washington, DC. Those worked because actors in the simulation only had a few choices: go this way or that way down the hall, turn left or right at the T, take this stairway or the next, move to this node or that. The number of possible actions was limited.
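The kind of node-to-node movement described above can be sketched in a few lines. This is only an illustration, not the actual evacuation model: the floor plan, node names, and tie-breaking rule are all invented for the example. The point is that the agent’s entire decision space at any moment is the handful of adjacent nodes.

```python
import random
from collections import deque

# Hypothetical floor plan as a graph: each node is a hallway junction
# and "exit" is the goal. The layout is illustrative only.
floor = {
    "office": ["hall"],
    "hall": ["office", "tee"],
    "tee": ["hall", "stair_a", "stair_b"],
    "stair_a": ["tee", "exit"],
    "stair_b": ["tee", "exit"],
    "exit": [],
}

def distances_to(goal, graph):
    """BFS distances from every node to the goal (searching edges in reverse)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        for src, nbrs in graph.items():
            if node in nbrs and src not in dist:
                dist[src] = dist[node] + 1
                queue.append(src)
    return dist

def evacuate(start, graph, rng):
    """Step toward the exit; ties (e.g. the two stairways) are broken
    at random -- that choice is the agent's whole decision space."""
    dist = distances_to("exit", graph)
    path, node = [start], start
    while node != "exit":
        best = min(dist[n] for n in graph[node])
        node = rng.choice([n for n in graph[node] if dist[n] == best])
        path.append(node)
    return path

print(evacuate("office", floor, random.Random(1)))
```

Because the choice set at each node is tiny and enumerable, behavior like this is straightforward to parameterize and validate, which is exactly what breaks down in the economic case discussed next.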
People have long tried to model the economic actions of humans, and this has met with limited success. While there has been some interesting work done in game theory, simulations that try to model a wider range of actions are all but useless. Some economists assert that they are essentially impossible. The problems are twofold. One is that such simulations necessarily limit the number of choices the simulated actors can make when the legitimate range of choices they could make is effectively infinite. This is especially true when considering substitution effects (if the price of apples goes up, will any individual agent buy fewer, buy more, buy none at all, buy peaches instead, or bananas, or strawberries, or tennis rackets, or GI Joe action figures, or carpenter’s squares, or trips to Walla Walla, Washington, or some combination of all of those? Go ahead and list everything. Go ahead… I’ll wait!). The second problem is that even if the modeler could identify every possible choice for every actor, it would be impossible to collect and incorporate data to describe them all in a meaningful way. You can try to measure what people have done in the past, but this is necessarily incomplete and there is no guarantee they will behave the same way in the future. You can take surveys of people’s stated preferences, but it is known that people’s actions rarely if ever reflect what they say they want to do beyond limited circumstances. This is true even if people are trying to be as honest and accurate as possible. It’s just the way people work.
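A toy calculation makes the explosion concrete. Using the (deliberately arbitrary) substitutes listed above, every non-empty combination is a distinct response the model would have to enumerate and parameterize, and that is before quantities, timing, or budget effects enter at all:

```python
from itertools import combinations

# The list is arbitrary on purpose -- that is the point of the
# substitution-effect example above.
substitutes = ["peaches", "bananas", "strawberries", "tennis rackets",
               "GI Joe action figures", "carpenter's squares",
               "a trip to Walla Walla"]

# Every non-empty subset is a distinct possible response: 2**n - 1 of them.
responses = [c for r in range(1, len(substitutes) + 1)
             for c in combinations(substitutes, r)]
print(len(responses))  # 2**7 - 1 = 127
```

Seven substitutes already yield 127 qualitatively different responses for one agent reacting to one price change; the evacuation agent above never faced more than three.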
Isaac Asimov’s classic Foundation series of books imagined that future humans had spread across the entire galaxy and that an empire spanning it all was in the process of collapsing. Enter a mathematician named Hari Seldon, discussed but not seen in the original trilogy, since it described events after his passing; his character was a main protagonist in some of the later books. The story arc is based on the idea that Seldon came up with a way to simulate the course of human action on a very large scale. It made for terrific science fiction but lousy math.
In either the second or third book of the original trilogy, the organization set up by Seldon to use the simulations, intended to guide human history in a way that would minimize the effects of the disintegration of the galactic empire, begins to encounter major problems in its attempts to shape events. It turns out that a single individual arises, never named but referring to himself as “The Mule,” who is able to read minds, and this gives him extraordinary leverage over people and events. He comes out of nowhere and threatens to disrupt everything. Fortunately for the citizens of the galaxy, he is eventually neutralized.
Sometimes you just have to accept that there are things you can’t calculate. In simulation terms there isn’t only one Mule. In open-ended situations, everyone is The Mule.