A Simple Discrete-Event Simulation: Part 91

I started thinking about the design of processes that require resources and realized there are a lot of ways I could go with this, so I thought I’d back up and think it through in writing.

Let’s begin with a process that requires resources in a scenario where there is a single pool from which any number of identical resources can always be drawn and are available to the process immediately. In this case an entity arrives in the process, the required resources are requested and arrive immediately, and the process can therefore begin immediately. This involves calling the advance function with the process duration as a parameter, which places a new event in the future events queue. During this time the process component will be in an in-process state. Assuming the process handles only one entity at a time, it will concurrently be in a closed state, meaning no other entities will be able to enter. When the forked process event is processed (i.e., it reaches the front of the future events queue), the resources immediately return to their designated pool, and the process enters a state where it begins trying to forward the entity to a subsequent component.
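This baseline case can be sketched in a few lines of Python. This is only an illustration of the idea, not a definitive implementation; the names (Simulation, Process, advance, arrive, finish) and the state labels are my assumptions, not anything fixed by the design.

```python
import heapq

class Simulation:
    """Minimal event loop; advance() places events in the future events queue."""
    def __init__(self):
        self.now = 0.0
        self.future_events = []  # min-heap ordered by (time, sequence)
        self._seq = 0            # tie-breaker for simultaneous events

    def advance(self, delay, action):
        # Fork an event to fire `delay` time units from now.
        heapq.heappush(self.future_events, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self.future_events:
            self.now, _, action = heapq.heappop(self.future_events)
            action()

class Process:
    """One-entity-at-a-time process drawing on an always-available pool."""
    def __init__(self, sim, duration):
        self.sim, self.duration, self.state = sim, duration, "open"

    def arrive(self, entity):
        # Resources arrive immediately, so the process starts at once and
        # the component closes to further entities for the duration.
        self.state = "in process / closed"
        self.sim.advance(self.duration, lambda: self.finish(entity))

    def finish(self, entity):
        # Resources return to their pool; begin trying to forward the entity.
        self.state = "forwarding"

sim = Simulation()
proc = Process(sim, duration=5.0)
proc.arrive("entity-1")
sim.run()
```

Running this advances the clock to 5.0 and leaves the process trying to forward its entity, which is the whole life cycle described above in miniature.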

Now let’s start adding complications. I don’t think it’s possible to address them in a formal logical order because the considerations that arise are intertwined, so I’ll just work through them in the best way I can.

One complication is that the resources may take a finite amount of time to arrive once they are requested. There are two ways to handle this. The transfer delay can be added to the process time (assuming it can be calculated directly; if the movement of resources is modeled in detail then it is governed by that model) and the transfer and process can be handled as a single event. Alternatively, the transfer delay and process can be handled as sequential, chained events. The bookkeeping functions can record the data in any way that makes sense. When the process is complete, assuming the time for the resources to return to their pool is similarly finite and non-zero, the entity can move directly to its next state or operation and the resources can be processed separately.
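The two options can be compared side by side with a bare-bones scheduler. This is a sketch under my own assumptions about names; the point is only that option A collapses transfer and process into one event, while option B chains two events so the bookkeeping can record the moment the resources arrive.

```python
import heapq

events, now, seq = [], 0.0, 0

def schedule(delay, action):
    # Place an event in the future events queue `delay` units from now.
    global seq
    heapq.heappush(events, (now + delay, seq, action))
    seq += 1

log = []
TRANSFER, DURATION = 2.0, 5.0

# Option A: fold the transfer delay into the process time; one event.
schedule(TRANSFER + DURATION, lambda: log.append(("done_A", now)))

# Option B: chain two events, so each phase is visible to the bookkeeping.
def resources_arrive():
    log.append(("resources_arrived", now))
    schedule(DURATION, lambda: log.append(("done_B", now)))
schedule(TRANSFER, resources_arrive)

while events:
    now, _, action = heapq.heappop(events)
    action()
```

Both options finish the process at t = 7.0; the difference is that option B also produces a record at t = 2.0 when the resources arrive.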

The next thing to work out is whether the resources physically return to a central pool to be dispatched in answer to a subsequent request, or whether the resource pool is logical in nature and the resources can go directly to service the next request(s), if any are outstanding.

The pool of resources may have been drawn down so it does not have the number of resources requested (there should be no possibility that a process will request more resources than the maximum quantity the pool is defined to hold). If the entity in the process component must be processed in arrival order before anything else can happen (imagine a pure flow-through model of the type demonstrated to this point), then it will just place the request and stay in place until the resources are obtained. At this point it will have to enter a separate wait state, which means the function for processing the arrival of the entity in the process would have to terminate. The action of starting the clock on the process would have to be kicked off separately once the requested resources are received.

If the entity to be processed does not hold up any other activities (i.e., an entity in a flow-through model that can go to a holding area or secondary process, or an entity like an aircraft that sits in place and requires numerous services, which themselves can be modeled as queuing up to be worked off) then the requests can go into a queue independent of the entity. Any kind of logic can be applied, as long as it is carefully documented and followed.

The next idea to address is the mechanism for determining when the required number of resources becomes available if they were not available at the time of the request. The brute-force method is to place a test event in the current events queue that re-checks the count of items in the pool after every discrete event is processed. That approach is necessary when actions, side effects, and variable values might change unpredictably, but since the conditions under which it makes sense to perform the checks are known here, things can be done more efficiently: the list of queued requests can be scanned and serviced whenever resources return to the pool.
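The efficient scan-on-return approach, together with the wait state and the separately kicked-off start described a few paragraphs back, can be sketched as a small pool class. The names (Pool, request, release, the resume callback) are my assumptions for illustration; the arrival handler simply posts its request and terminates, and processing starts later via the callback when the grant is made.

```python
class Pool:
    """Resource pool that defers grants and scans its queue on every return."""
    def __init__(self, capacity):
        self.available = capacity
        self.waiting = []  # queued (count, resume_callback) requests, FIFO

    def request(self, count, resume):
        if self.available >= count:
            self.available -= count
            resume()                          # resources granted immediately
        else:
            self.waiting.append((count, resume))  # entity enters a wait state

    def release(self, count):
        self.available += count
        # Scan queued requests in arrival order whenever resources return,
        # servicing each one the current count can satisfy.
        while self.waiting and self.available >= self.waiting[0][0]:
            n, resume = self.waiting.pop(0)
            self.available -= n
            resume()

log = []
pool = Pool(capacity=2)
pool.request(2, lambda: log.append("A starts"))  # granted at once
pool.request(1, lambda: log.append("B starts"))  # pool drawn down; B waits
pool.release(2)                                  # A's resources return; B resumes
```

No polling event is needed: the only moment the waiting list can become serviceable is inside release(), so that is the only place it is scanned.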

Resources can represent many things. Parts needed to complete repair or assembly actions may be continually provided by some kind of supply logic. Replacement parts involved in repair actions may be drawn from local shelf stock, drawn from remote stock (shipped in after a long delay), returned from a refurbishment process, or cannibalized from another assembly (e.g., an aircraft waiting on multiple parts and services). Removed parts from repair actions may be discarded or sent to local or remote refurbishment processes. Workers may be available on a schedule that varies by time of day, and may be interrupted by illness and by breaks for meals, rest, vacation, training, and so on.

A major distinction between different kinds of resources is whether or not they are consumed during the model run. In a manufacturing assembly process new parts continually enter the system to be affixed to manufactured goods. In a repair process the number of mechanics might be fixed. A parts model in which most damaged parts are successfully refurbished but some are occasionally discarded or new parts are occasionally acquired might have elements of persistence and of flow-through and consumption.
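The persistent-versus-consumed distinction might be captured as two pool flavors. This is a sketch only, with hypothetical names; a real model with mixed refurbish/discard/acquire behavior would blend the two.

```python
class PersistentPool:
    """Resources like mechanics: a fixed head count that always cycles back."""
    def __init__(self, size):
        self.available = size
    def draw(self, n):
        self.available -= n
    def restore(self, n):
        # Persistent resources return to the pool when their task completes.
        self.available += n

class ConsumablePool:
    """Resources like new parts: drawn stock is consumed, never returned."""
    def __init__(self, stock):
        self.stock = stock
    def draw(self, n):
        self.stock -= n
    def replenish(self, n):
        # Replacements arrive via supply logic (shipments, refurbishment).
        self.stock += n

mechanics = PersistentPool(4)
mechanics.draw(2); mechanics.restore(2)  # head count unchanged over the run

parts = ConsumablePool(10)
parts.draw(3); parts.replenish(1)        # net consumption of two parts
```

A refurbishment model with occasional discards would look like a ConsumablePool whose replenish events are driven by its own draw events, with a survival probability in between.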

Another obvious thing to consider is the presence of many different resource pools. Still another is the potential to continuously adjust the order in which resource requests are processed to take advantage of opportunities where they can be more fully utilized.

Yet another is the idea that one process may issue several requests for resources in succession, or even simultaneously. For example, mechanics performing a periodic service on an aircraft may need to complete several different tasks as part of the one inspection event, and the tasks themselves may require different numbers of mechanics. Should some of the mechanics go from the pool and work off inspection tasks one after another, never returning to the pool, or should they return to the pool after every task, so requests to service other processes may be interleaved with the inspection tasks? Similarly, should all of the requests be issued at once or should they be issued sequentially in time as each previous task is completed? If the transfer time between the pool and the task is zero then it matters less, but if the transfer times are finite and non-zero then these considerations get more complicated.

The final consideration is how the resources themselves are modeled. The behavior of the moving entities in our examples so far has been driven entirely by the logic built into the components, so that’s how all resources will be modeled as well. The processing may depend on the characteristics of the resources, as it does with other moving entities.

If you can think of any other considerations I’d love to hear about them.
