A Simple Discrete-Event Simulation: Part 23

Deciding to treat the entities moving through the system as being entirely passive ended up requiring a fairly serious change to the design of the discrete-event processing mechanism. This was actually somewhat fortuitous because it pointed out a design feature I had missed.

The way I had it, the state an element would go into when next activated was stored with the element itself, so when the next item was pulled from the future events queue, the element would figure out what function to activate next by looking at the state it carried. This is fine as long as an element only has one “thread” of activities or, using SLX terminology, has only one puck going.
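
To make that concrete, here is a minimal sketch of the arrangement as I understand it; the property and method names (futureEventsQueue, element, nextState, activate) are illustrative, not necessarily what the actual code uses:

```javascript
// Minimal sketch (illustrative names): the next state lives on the element itself.
function runNextEvent(futureEventsQueue) {
  const item = futureEventsQueue.shift();   // earliest-time item, assuming the queue is kept sorted
  item.element.activate();                  // the element decides what to do...
}

// ...by switching on the state it carries:
//   activate() { switch (this.nextState) { case 'arrive': /* ... */ break; /* etc. */ } }
```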

Consider what we’re trying to do. The arrivals component is supposed to generate a new set of entities at regular intervals (we’ve been using 30 minutes in the example). Up to now, the entities generated at the beginning of each interval were active, so they had their own intelligence and would “wake themselves up” using their own future events queue hooks. However, if we make the entities completely passive, we have to change the discrete-event behavior of the arrivals component. There are several ways to do this (that I can think of in short order).

One way is to change the single timing mechanism within the arrivals component so it increments to each entity generation event and then to the next even interval boundary. This involves a tiny bit of bookkeeping but is otherwise straightforward. It is the equivalent of doing more work with just the one puck inside the arrivals component.
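
A rough sketch of that option, with invented names (scheduleEvent, send, and now stand in for whatever the framework actually provides), might look like this:

```javascript
// Sketch of option one (illustrative names): a single puck handles both the
// interval boundaries and the individual entity generations.
class ArrivalsComponent {
  constructor(sim, intervalMinutes = 30) {
    this.sim = sim;
    this.interval = intervalMinutes;
    this.pendingTimes = [];          // generation times remaining in the current interval
    this.nextBoundary = 0;           // start of the next interval
  }

  activate() {
    if (this.sim.now >= this.nextBoundary) {
      // New interval: pick the times at which entities will appear this interval.
      this.nextBoundary = this.sim.now + this.interval;
      const count = 1 + Math.floor(Math.random() * 5);
      this.pendingTimes = Array.from({ length: count },
        () => this.sim.now + Math.random() * this.interval).sort((a, b) => a - b);
    } else {
      // A generation time: emit one passive entity.
      this.sim.send({ createdAt: this.sim.now });
    }
    // The bookkeeping: wake up at the next generation time, or at the boundary.
    const next = this.pendingTimes.length ? this.pendingTimes.shift() : this.nextBoundary;
    this.sim.scheduleEvent(next, this);
  }
}
```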

Another way is to spin off one-shot objects that do nothing but generate a passive entity at the desired time. The required number of these objects would be generated at the beginning of each (30-minute) interval. This is the equivalent of using the one puck in the arrivals component and generating additional pucks that function as independent objects.
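
A sketch of that second option (same invented framework names as above) is even simpler; each generator schedules nothing further and simply expires:

```javascript
// Sketch of option two (illustrative names): tiny one-shot objects, each of
// which exists only to create one passive entity at its assigned time.
class OneShotGenerator {
  constructor(sim) { this.sim = sim; }

  activate() {
    this.sim.send({ createdAt: this.sim.now });  // create and route one passive entity
    // Nothing is rescheduled; this object simply goes out of scope afterward.
  }
}

// At each 30-minute boundary the arrivals component would schedule one generator per entity:
//   for (const t of generationTimes) {
//     sim.scheduleEvent(t, new OneShotGenerator(sim));
//   }
```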

Yet another way to do this is to spin off new future events queue items that remain associated with the arrivals component. There is no real reason why several different cycles of pucks cannot be performing different actions associated with a given mother element.

I think I’ve previously described an operation where I created a medical patient that generated reminder calls before the appointment, a number of successive activities during the actual appointment, and numerous communications with insurance companies both before and after the visit. Every one of those processes could employ a separate puck associated with the main patient element. To make matters even more complex, I think I actually modeled these activities as “appointments.” I could have modeled the basic activities as patients, each of which might generate multiple cycles of appointments, but that’s a different discussion.

The point is that an arbitrary number of future events queue items could kick off any number of activities in the switch statement defined in the activate method. It would be up to the programmer to ensure that all of the related state information for the different threads is kept in order so the different “threads” of activity don’t step on each other.

In this example it would be pretty straightforward. One puck would fire off every 30 minutes to generate new entities, while one or more secondary (internal) pucks would generate the arriving entities as required.
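
Here is a rough sketch of how that might look for the arrivals component, again with invented names; the three-argument scheduleEvent is an assumption that anticipates the design change discussed in the next paragraph, since each activation has to be told what kind of event it is handling:

```javascript
// Sketch of option three (illustrative names): several "pucks" share one
// arrivals component, and the activate method switches on the kind of event.
class ArrivalsComponent {
  constructor(sim, intervalMinutes = 30) {
    this.sim = sim;
    this.interval = intervalMinutes;
  }

  activate(state) {                              // the state arrives with the event, not the element
    switch (state) {
      case 'intervalBoundary':
        // Primary puck: reschedule itself and spin off one puck per new entity.
        this.sim.scheduleEvent(this.sim.now + this.interval, this, 'intervalBoundary');
        for (let i = 0; i < 1 + Math.floor(Math.random() * 5); i++) {
          const t = this.sim.now + Math.random() * this.interval;
          this.sim.scheduleEvent(t, this, 'generateEntity');
        }
        break;
      case 'generateEntity':
        // Secondary puck: create one passive entity and hand it downstream.
        this.sim.send({ createdAt: this.sim.now });
        break;
    }
  }
}
```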

The SLX language had a fairly graceful way of handling this, but firing off additional pucks using the mechanism we have now works the same way in principle. The one exception is that if we’re going to have multiple pucks active at the same time, they can’t all refer to a single next-state variable stored with the element. Rather, the information about what the next state is to be has to be stored with the future events queue item. SLX did this implicitly by context (blocks of code within objects are declared as separate pucks); we have to do it explicitly. That’ll be tomorrow’s code change. The next day I’ll probably have to update the “wait until” condition mechanism to work the same way. There’s no reason it shouldn’t support multiple “threads” as well.
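
As a sketch of tomorrow’s change (again with my own names, not necessarily the ones in the code), the next-state value moves from the element onto the future events queue item, and the main loop hands it back at activation time:

```javascript
// Sketch of the planned change (illustrative names): the future events queue
// item, not the element, carries the next state.
function scheduleEvent(futureEventsQueue, time, element, nextState) {
  futureEventsQueue.push({ time, element, nextState });
  futureEventsQueue.sort((a, b) => a.time - b.time);   // a real queue would insert in order
}

function runNextEvent(futureEventsQueue) {
  const item = futureEventsQueue.shift();
  // The element no longer consults its own nextState property; each "thread"
  // gets its state handed back with the event, so multiple pucks can coexist.
  item.element.activate(item.nextState);
}
```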

One final thought. I’m used to working in C-like languages where memory has to be allocated and deallocated explicitly for dynamic objects (stored on the heap). I remember being annoyed that SLX wouldn’t let you deallocate anything unless every reference to it was severed. That meant that every possible pointer to the thing had to be redirected or nulled out, which seemed like a waste of CPU cycles to me. That was a decision made by the language designer to head off problems created by unwary programmers.

I bring this up because the process I’m creating here may involve creating a lot of entities in a very short period of time, and I haven’t yet thought deeply about how JavaScript’s automatic garbage collection mechanism works. Initial reading suggests that references (pointers) to objects can be nulled out, but objects themselves can never be deleted explicitly in code. (Object properties can be deleted; Google the delete operator in JavaScript for explanations.) We instead have to rely on the garbage collection mechanism to do this automatically. For the record, I also find this to be just slightly annoying, but I understand why it’s done. Perhaps I will never get over the aversion to wasted CPU cycles that was instilled in me early on. I’m going to work on getting things running first (and who cares when these trivial demonstration models are so small), and then think about what might be getting created and destroyed.
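
For what it’s worth, a minimal illustration of the two points about JavaScript, independent of the simulation code:

```javascript
// Objects cannot be freed explicitly; dropping all references lets the garbage
// collector reclaim them on its own schedule.
let entity = { createdAt: 0, served: false };
entity = null;               // no remaining references, so the object becomes collectable

// The delete operator removes a property from an object, not the object itself.
const record = { id: 7, scratch: 'temp' };
delete record.scratch;       // record still exists; only the property is gone
```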
