“The Secret of Selling Anything,” by Harry Browne

Today I took the time to finish reading a classic book on sales by Harry Browne, entitled The Secret of Selling Anything. I read it because it is regarded as a classic in certain circles, because I have appreciated many other aspects of Browne’s work, and because I need to keep putting myself in touch with resources I haven’t previously sought out.

I will offer that the book is extremely impressive and says everything it needs to in a very short space. It would be trivial to read in a single sitting, especially if you can scan through some of the early chapters that merely restate the concepts of basic economics in the framework offered by Ludwig von Mises and Murray Rothbard. Economics, of course, is merely the study of human choice under conditions of scarcity. One first describes how individuals make choices with respect to their own competing needs, and follows by describing how multiple individuals engage in exchange in order to better meet their individual needs. In this light, trade is seen as mutually beneficial. It isn’t a zero-sum game where you’re trying to take something from an adversary, but rather a venture where you’re trying to solve a problem in cooperation with a friend.

Against this backdrop Browne walks you through the steps of getting your customer to identify what his or her problems are, and of being in the right frame of mind to actually hear and act upon the information offered. He offers a five-step process for building a sale as follows:

  1. Identify the customer’s motivation. Do this by asking.
  2. Summarize the customer’s motivation.
  3. Present your product–in terms of the customer’s motivation.
  4. Answer questions–that will arise based on the customer’s motivation.
  5. Close the sale.

Notice a common theme here? It’s all about the customer, the customer’s problems, and the solution to those problems. Each topic, like all others in the book, is presented directly and without excess complication. He does not provide six examples where one will do. The lists of considerations he invokes here and there never have more than seven or eight items, and usually fewer. He observes that problems in latter stages of the process generally come from shortcomings of earlier stages, and that it’s rarely too late to circle back and correct those shortcomings.

The latter sections of the book are devoted to inverting the traditionally negative conceptions of what sales and salespeople are all about. A salesperson is an equal trying to solve a customer’s problem. A salesperson sells things customers want and not things they don’t want. Telephone sales are not to be universally feared. See the customer’s secretaries and gatekeepers not as adversaries but as allies. Be polite, thoughtful, and honest. You won’t get every sale, but what’s the point of getting sales the wrong way? You needn’t fear the process if you understand it properly. Opportunities for sales are everywhere.

Read correctly the book is about far more than just sales. It made an exceptional impression on me because its clearly stated concepts reminded me both of things that have come so easily at times and of things I’ve forgotten to my detriment. It struck me as something I needed right when I needed it, like any classic work should. Do yourself a favor and give it a read. Maybe two.

Posted in Soft Skills

The Ultimate Limits of Simulation

Last time I discussed the factors that drive the scale of any simulation or business system. On a less practical, more theoretical, more fun note, we might ask just how far a digital simulation can go. What kind of scope and scale can it reasonably achieve?

Let’s jump right to the end game, where we see the two main limitations. The first limitation is that we don’t yet know enough about how the universe works at the quantum level, and beyond a certain point we may never know. Even if we could represent every particle in the universe we still would not know how they behave and interact. Things appear to be nondeterministic. Does that mean that the universe is a Monte Carlo simulation? Are there multiple universes? Does free will exist? How does observation change the behavior of what is being observed? Do the details we observe at smaller and smaller levels really exist all the time or only when we specifically observe them? (And so on…)

The other limitation is straightforward: atoms and the particles that make them up (and the particles that make those up) all have multiple characteristics, like mass, location, charge, velocity, direction, strong and weak interactions, charm, flavor, strangeness, spin, and so on, each of which would require representation by one or more floating point values, each of which (currently) requires a comparatively huge number of particles to represent. That is, the detail with which you could perform a digital simulation of the universe is limited by the number of particles in the universe divided by the number of particles it takes to represent each particle in the universe. And that is if the mass of the entire universe were repurposed to perform a digital simulation of the universe, which is clearly absurd. In the end, the only way to simulate the universe is with an analog simulation of the universe, which is the universe.

Are we having fun yet?

This concept is touched on in a couple of places in science fiction that I’m aware of. Doubtless there are more. One is from Charles Stross’ book Accelerando, which sees a character’s simulated fork, sent across a large swath of space in a vessel the size of a soda can, trapped in a simulated environment where he gets to contemplate things for a while. The nod to the concept at hand is a reference to the simulation’s Planck length.

 “How deep does reality go, here?” asks Sadeq. It’s almost the first question he’s asked of his own volition, and Amber takes it as a positive sign that he’s finally coming out of his shell.
 “Oh, the Planck length is about a hundredth of a millimeter in this world. Too small to see, comfortably large for the simulation engines to handle. Not like real space-time.”
 “Well, then.” Sadeq pauses. “They can zoom their reality if they need to?”
 “Yeah, fractals work in here.” Pierre nods. “I didn’t—”
 “This place is a trap,” Su Ang says emphatically.
 “No it isn’t,” Pierre replies, nettled.
 “What do you mean, a trap?” asks Amber.
 “We’ve been here a while,” says Ang. She glances at Aineko, who sprawls on the flagstones, snoozing or whatever it is that weakly superhuman AIs do when they’re emulating a sleeping cat. “After your cat broke us out of bondage, we had a look around. There are things out there that—” She shivers. “Humans can’t survive in most of the simulation spaces here. Universes with physics models that don’t support our kind of neural computing. You could migrate there, but you’d need to be ported to a whole new type of logic—by the time you did that, would you still be you?…”

The Planck length of our universe, the scale at which classical ideas about gravity and space-time cease to be valid and quantum effects dominate, is on the order of 10^-35 meters, while that of the described simulation is on the order of 10^-5 meters. That is a ratio of roughly 10^-5 / 10^-35 = 10^30, which is still a really, really big number, indicating a very, very limited simulation in the big scheme of things. Such a simulation would still be far beyond anything we humans could implement now. This is science fiction, after all.

I loved this passage the moment I read it because it captures two powerful concepts, the ultimate granularity of simulations and the granularity required to adequately represent certain activities, in such a small number of words. A reader would have to bring a lot to the book to follow it intuitively. I will note that my knowledge of quantum physics doesn’t go beyond what can be read in Discover Magazine, but it’s enough for an intelligent layman to have some minor feel for the topic. I’m confident that the description I’ve shared here wouldn’t be found unnecessarily wanting by a specialized practitioner.

Another reference to this concept comes from Star Trek: The Next Generation, namely a clever Season Six episode entitled Ship In A Bottle. It is a follow-on to a Season Two episode, Elementary, Dear Data, in which Data becomes enamored of the Sherlock Holmes mythos but finds that his machine-like memory and powers of deduction allow him to solve the genre mysteries too easily to be enjoyable. Geordi La Forge then suggests that the holodeck create a Moriarty character worthy not of Sherlock Holmes, but of Data, whereupon the holodeck creates one that turns out to be sentient. The episode is rather touching because the sentient creation renounces the violence and evil of the fictional Moriarty character and pleads for a chance to live a life in the real world. This scene shows the character’s interaction with Picard.

In this episode the character agrees to have its simulation placed in suspension until a way can be found to give it life in the outside world. The producers of the show made the episode thinking that Sherlock Holmes was in the public domain, which turned out not to be the case. It took four years to work out a way to revisit this setup in the Season Six episode. The setup is slightly different and the character causes even more trouble, but the crew finally trap him in a simulation that would keep him sufficiently engaged to not cause any trouble in the outside world. The device in which he is trapped is shown as a computer the size of a breadbox.

This scene shows the episode’s denouement, where the crew touch on the philosophical idea of living inside a simulation, and where the simulation host is shown.


Here is a close-up of the device, which is the glowing module attached to a unit that provides additional memory and presumably external power.

The computing power of the holodeck and the subsequent simulation host circumscribe the complexity and granularity of what can be simulated within. The concept isn’t acknowledged directly; it is left for the viewer to infer. It is something I picked up on because my combination of experiences makes me sensitive to it.

I’m sure there are other invocations of this concept out there. Please share any that you find.

Posted in Simulation

What Drives The Scale Of A Business Or Simulation System?

The scale of a business or simulation (continuous or discrete-event) system can be driven by a number of factors. Scale can be measured in terms of CPU cycles, communication bandwidth, disk storage, dynamic memory usage, power consumption, and perhaps other factors. The ones that simulations and business systems have in common include:

  • The number of operations: The more different activities that are modeled, and the more examples of each activity there are, the bigger the model will be. While much larger models are possible, I’ve built models that could involve up to 84 different activities, some of which included up to a dozen stations. A business process with twenty different activities but with some involving 100 employees could in some sense be regarded as a larger model.
  • The number of arrival events: Many systems process entities that arrive at the beginning of a process, traverse the process over time, and exit the process at the end. In other systems there may be no true entries and exits, but events may be generated according to a schedule, while additional events may be generated through other mechanisms (like equipment failures, repair and supply actions, and so on). I’ve worked on systems that only processed 40 items per hour at top speed and others that processed thousands.
  • The number of internal entities: The more entities that are or can be represented in a system the larger the scale. I’ve modeled furnaces that held only a dozen slabs of steel at a time and worked on document imaging systems that processed thousands of pieces of paper.
  • The complexity of representation of each entity: Some entities exhibit only simple behaviors and have only a few characteristics requiring data values, while entities and components of other systems might require large numbers of parameters to describe. In theory a system parameter will only exist if it can be changed or if events are affected by its value, so the existence of each additional parameter makes the system larger in scale.
  • The number of users: A fire-and-forget system has only one user and no interactions. A training simulator may handle input from multiple trainees simultaneously while a distributed simulation may involve many more. A large scale business system may have users throughout a building or campus, or all around the world. The more communications and interactions with users there are, the larger the system.
  • The degree of interactivity supported: An interactive system will incur more overhead than a fire-and-forget system. The more user interactions the system supports the greater the scale may be.
  • The volume of internal communications: A single, desktop system may have little or no communication between processes. I’ve worked with single-threaded, fire-and-forget models and with single-processor systems running up to a dozen concurrent, real-time processes, many of which are in communication with each other as well as external processes. Any system designed to support communication between multiple processors will add complexity. In short, scale increases with more internal communications.
  • The amount of checking against external values: Simulations and business systems may both have to check against historical and other values for ongoing validation, compliance, and for other reasons.
  • The amount of reporting required: I’ve seen systems put out multi-gigabyte files and others which record very little information. Sometimes the output is highly graphical in nature and at other times it consists only of hundreds of thousands of lines of text.
  • The amount of internal logging and bookkeeping: If a system needs to record values that are needed not for output as such but for the monitoring of the health of the system, the scale of the system increases.
  • The residence time of entities in the system: Entities can pass through some systems in a matter of real or simulated seconds while they may remain in other systems for months or years. The longer entities may reside in the system the greater the scale.

These factors apply to simulations but not business systems:

  • The amount of simulated time modeled: A system that models behavior over a longer elapsed time will take longer to run than one that models events over a shorter time. I’ve worked on systems that modeled just a few minutes of time and others that modeled up to a full year.
  • The size of time steps: If two systems simulate the same amount of elapsed time, then the one that does so in smaller time steps will, all other things being equal, take longer to run and be of a larger scale. This can apply to discrete-event simulations where many events happen quickly, and the same may be true of business systems, but mostly this applies to continuous simulations.
  • The size of elements, nodes, volumes, etc.: This applies almost exclusively to continuous simulations. Thermal simulations, computational fluid dynamic simulations, stress analysis simulations, and other types may all model the behavior of different regions of volumes of solid, liquid, or gaseous material (as well as material in other states). The more granular the system is, i.e., the more nodes or sub-volumes it defines, the greater the scale of the model.

It is theoretically possible to consume computing resources approaching infinity by making nodes and time steps smaller and smaller. That is, increasing granularity increases the resources required to complete the simulation, but can also improve the accuracy (to a point, which is a different discussion). One generally tries to use the largest values possible that will yield usable (and accurate) answers. Many factors contribute to that accuracy and usability, including number scaling, number size (say, 32-bit vs. 64-bit or larger), the characteristics of intermediate results, rounding errors, chaos effects, Courant limits, and so on. One also wants to save time and money.

One may ponder the fact that CFD simulations can include large numbers of nodes around an aircraft that may be a few inches on a side or smaller, but one intended to model global weather evolutions, even if it defined a billion nodes, would still have to define nodes that are greater than a mile square and more than a mile high. How accurate is that going to be, particularly when considering that each surface node might have to define parameters for ground cover (buildings, roads, parking lots, vegetation, ice, fresh water, salt water, mud, rock), moisture retention, permeability, reflectivity, habitation, gas and particulate emissions, ability to absorb and emit radiation on different frequencies, elevation, angle to the sun, and so on? And that’s just the ground level. We haven’t even discussed the effects of clouds, oceans, solar radiation, and other things. And that’s also with a billion nodes, which would be a pretty large model. Early models defined nodes so large they ignored features like… England. Such models may also be run over a one hundred year time span. One can imagine that’s a large undertaking under any circumstances.

These factors apply to business systems but not simulations:

  • The amount of external regulation: While simulations themselves generally aren’t subject to regulation by governing bodies (though the actions based on their results might be, and evidence may have to be shown of the steps taken to verify, validate, and perhaps accredit them), business systems may be subject to regulations of many kinds, or may represent processes that are highly regulated. Adding overhead to a system in order to meet regulatory requirements increases the scale of a system (in terms of many of the factors listed in the first section, above).

Posted in Tools and methods

Discrete-Event Simulation vs. Business Logic

I wanted to continue yesterday’s discussion by describing some differences between discrete-event simulations and systems that might be implemented to carry out business logic. The first diagram below shows a model of privately owned vehicles and commercial vehicles passing through an imaginary border inspection. The vehicles all enter the model, proceed to a primary inspection, and then continue to one or more destinations depending on the results of various inspections. Eventually all vehicles will exit the port, end up in an impound lot, or return to their country of origin. The model never needs to consider other events beyond the behavior of each individual vehicle.

We can complicate the model incrementally in a number of ways. The first might be that a privately owned or commercial vehicle needs to park so its occupants can go to yet another process. This is fairly simple because the original vehicle can only wait until that process completes. Another possible complication is that the interactions between different processing stations and the Customs Shipment Database might be included in the model. In this case the model might have to generate multiple entities which must then be coordinated. This logic is shown in the second diagram, below.

The second diagram shows only the commercial operations in the port, along with the flow of tokens through the data side of the process. I’ve tied them together in the second diagram with a tiny bit of faux BPMN symbology. I’ve also added a complication where commercial vehicles may exit the port process but still have to proceed to a bonded warehouse. Only once such a vehicle reaches that destination does its full process terminate, with the physical vehicle and its associated manifest matched up and cleared from the system.

I’ve actually done this in discrete-event simulations in the past. The SLX language allows the modeler to define multiple subentities called “pucks” that are associated with physical or logical entities. In my case the physical entities were patients in a dental practice (where the activities of the dentists, hygienists, assistants, administrative personnel, and insurance companies were also modeled) while the associated pucks were insurance claims that were processed in the office and forwarded to the insurance companies. Those claims (along with inquiries and other communications) might go through multiple cycles before being resolved. I’m not sure I ever tied the claim processes to the patient processes in any detail, although I could have. I was mainly concerned with recreating the activities of all participants in the office itself.

The interesting thing about the relationship between entities and subentities is that they can all be easily linked inside the software–if the software system is small enough. It’s a simple matter to maintain pointers to the subentities. That way, if something happens to one entity that should terminate all the others (an issue I mentioned yesterday), it should be a simple matter to locate and deal with them. The book I read on Monday and the associated reading I did on the same subjects would indicate that executing business logic as expressed by business notation is extremely difficult and not a straightforward, one-to-one proposition. I’m guessing that has something to do with the scale of systems that implement business logic. If a system is large enough, particularly if it is spread over multiple machines (let alone multiple processors on a single machine), then groups of entities cannot simply be linked using dynamic pointers, as they can be in a desktop simulation. What would have to happen instead is that all of the related entities would have to be tagged with identifying information that could be used to identify them as various data structures were scanned. That can be a time-consuming process. It’s easy to see why such applications can be difficult to scale.
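To make the contrast concrete, here is a minimal sketch in C of the desktop-simulation case, with all names invented for illustration: the physical entity holds a direct pointer to its subentity, so finding and terminating the whole group is cheap. In a large distributed business system the same step typically degrades into scanning data stores for records tagged with a shared identifier.

/* Sketch (names invented): a vehicle entity linked directly to its manifest
   subentity by pointers, as is easy in a single-process desktop simulation. */
#include <stddef.h>

typedef struct Manifest Manifest;

typedef struct Vehicle {
    int       id;
    Manifest *manifest;   /* direct link to the associated data-side token */
} Vehicle;

struct Manifest {
    Vehicle *owner;       /* back-pointer to the physical entity */
    int      cleared;     /* set when the manifest is resolved   */
};

/* If some event terminates the vehicle's process, its related subentities
   are one pointer dereference away; no scanning of data stores is needed. */
void terminate_vehicle(Vehicle *v) {
    if (v->manifest != NULL) {
        v->manifest->cleared = 1;
        v->manifest->owner   = NULL;
    }
    v->manifest = NULL;
}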

One more difference between simulations and business logic systems is that there is no need to coordinate activities in time in a business system. Things just happen when they happen. You can definitely have coordination issues of the type discussed above, but those have to do with relative time–which event happened first or last–and not absolute time. Driving events like document and transaction arrivals and user actions happen on their own. In a simulation those events may have to be built into the simulation as a driving mechanism (using schedules, Poisson functions, randomly generated events, or combinations thereof).
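On the simulation side, a common driving mechanism is a Poisson arrival process, which amounts to drawing exponentially distributed gaps between arrivals. Here is a minimal sketch in C; the rate and the number of arrivals drawn are invented for illustration.

/* Sketch: generate arrival times for a Poisson arrival process by drawing
   exponentially distributed interarrival gaps. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Time until the next arrival, for a mean rate of `rate` arrivals per hour. */
double next_gap(double rate) {
    double u = (rand() + 1.0) / ((double)RAND_MAX + 1.0);  /* uniform in (0, 1] */
    return -log(u) / rate;                                 /* exponential gap   */
}

int main(void) {
    double t = 0.0;
    double rate = 40.0;                   /* e.g., 40 arrivals per hour */
    for (int i = 0; i < 5; i++) {
        t += next_gap(rate);
        printf("arrival %d at t = %.3f hours\n", i + 1, t);
    }
    return 0;
}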

Posted in Tools and methods

Handling Complex Wait..Until Conditions

Last Wednesday I discussed some of the internal workings of discrete-event simulations. I should also mention that all of these discussions are based on a program running in a single thread that is trying to coordinate many activities. There are ways to create systems using multiple threads but for the time being I’m keeping it simple. I specifically discussed a general wait..until construct as shown in the following timing diagram:

The condition to be evaluated is the result of a logical or Boolean calculation based on one or more values. The goal is not to find an efficient way to do the evaluation–that’s trivial–but to figure out an efficient time to do it. You don’t want to do it more often than you have to, like after every event executes (as shown in the timing diagram), but ideally only when any of the relevant values change.

There are two parts to the mechanism, and evaluating the until condition is only one of them. The other is to relate the evaluation to the affected entity. If you’re coding a system on your own you can do the check where it makes sense, which means you have to identify all of the details and instances. If, however, you want a system that is more automated, an extreme example of which would be a simulation language or tool that does these things automatically, you need a standard method.

The diagram shows a collection of entities in a wait..until state. The system has to be able to identify the types of entities that can enter into such a state (there may be more than one type of entity), identify the variables involved (which may vary for every entity and state), identify where in the code or scripts or whatever those variables might be changed (they can be in any data structure in any location and may be referenced directly, indirectly, or as part of an array, variant record, or other difficult-to-identify structure), institute a mechanism that performs the evaluation when any of the variables change, and link back to the entity so its until event can be executed when the until test finally resolves to true.

There are a few ways to make the process more efficient. The system can flag tests that might be active because one or more entities is in a wait..until state so the tests aren’t performed if they will never have an effect. Changes to relevant variables can be flagged so they are only executed at the end of processing for other events. That way, if more than one relevant variable changes you only have to do the test once. If the relevant variables are themselves associated with specific entities then the mechanism has to be able to identify which entities are relevant to the state of other entities.
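Here is a minimal sketch in C of that flagging idea, with all names invented: each wait..until record carries a dirty flag that relevant assignments set, and a single sweep at the end of event processing re-tests only the flagged conditions.

/* Sketch (names invented): re-evaluate a wait..until condition only when a
   relevant variable has changed, and only once per processed event. */
#include <stdbool.h>

typedef struct {
    void *entity;               /* the entity blocked in wait..until         */
    bool (*predicate)(void *);  /* evaluates the until condition             */
    bool  active;               /* true while the entity is actually waiting */
    bool  dirty;                /* a relevant variable changed this event    */
} WaitUntil;

/* Called by any code that assigns to a variable the condition depends on. */
void touch(WaitUntil *w) {
    if (w->active)
        w->dirty = true;
}

/* Called once at the end of each event: test only the flagged conditions and
   release (reschedule) each entity whose condition finally holds. */
void sweep(WaitUntil *waits, int count, void (*release)(void *entity)) {
    for (int i = 0; i < count; i++) {
        if (waits[i].active && waits[i].dirty) {
            waits[i].dirty = false;
            if (waits[i].predicate(waits[i].entity)) {
                waits[i].active = false;
                release(waits[i].entity);   /* its until event can fire now */
            }
        }
    }
}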

I can see doing something like a two-pass compilation, the first of which would flag the variables that would have to be checked, and the second of which would set the code up to execute the checks in the right places. I can also see just brute-forcing it somehow. Clearly, many mechanisms are possible.

I’m pondering these issues because they’re interesting, but also because I am moved to think about how systems might be implemented to execute logic described by the BPMN notation I discussed yesterday. The system implementation seems much more straightforward to me, and examining it may give me some insights into how it might work in a general discrete-event simulation language or system.

Business processes may spawn numerous events or tokens which traverse different states at different times. A process ends only when all events or tokens reach end states. If all of the events or tokens (we’ll simplify and just say tokens going forward) can only reach a single end state, and we know how many there are, then where and when they need to be processed is clear. If tokens may reach different end states the coordination problem becomes more difficult. This is especially true if any one token invalidates the rest of the process. If that is the case then the system has to decide whether it would be more efficient to seek out and eliminate the other active tokens or let them go and dispose of them when they reach their end states. The implementer also has to consider whether the accounting for live tokens might affect the accuracy of KPI monitoring.
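One simple bookkeeping scheme for the easy case, where the token count is known, is sketched below in C; the names are invented and the KPI comment is only illustrative.

/* Sketch (names invented): retire a process instance once every token it
   spawned has reached an end state, noting whether any token cancelled it. */
typedef struct {
    int live_tokens;   /* tokens spawned but not yet at an end state */
    int cancelled;     /* nonzero if some token invalidated the run  */
    int closed;        /* instance fully accounted for               */
} ProcessInstance;

void token_spawned(ProcessInstance *p) {
    p->live_tokens++;
}

void token_reached_end(ProcessInstance *p, int invalidates_process) {
    if (invalidates_process)
        p->cancelled = 1;
    p->live_tokens--;
    if (p->live_tokens == 0) {
        p->closed = 1;
        /* close the instance here, reporting cancelled runs separately so
           the KPI figures are not skewed by tokens that were abandoned */
    }
}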

Tomorrow I’ll illustrate the similarities and differences between business process systems and discrete-event simulations in a couple of different ways. The more I think about it the more similarities there are.

Posted in Tools and methods

Business Process Model and Notation (BPMN 2.0)

Today I was able to complete a thorough power-read of the book Real Life BPMN by Jakob Freund and Bernd Rücker.

I have performed discovery on, analyzed, characterized, automated, modeled, simulated, documented, controlled, and improved various kinds of customer processes more or less for my entire career. I have specified, designed, written, implemented, installed, tested, documented, verified, validated, and commissioned software in a wide variety of languages, tools, environments, operating systems, and applications during that time as well. I have thought about how it all works from a very low to a very high level and how it has evolved over time from its earliest origins to current developments. With that background in mind, particularly after having devoted some thought to the details of discrete-event simulations last week, I found the book to be both fascinating and well-structured.

It is easy to think that if you can analyze one type of flow or network or logic problem you can analyze a wide variety of them. Not so. I’ve learned over time that it’s generally a good idea to begin at the beginning when looking at something different from what you’ve done before. Other kinds of experience make excellent analogues but without proper grounding they can lead you into serious misunderstandings. I had the background to understand the implications of what is described in the book as I read it, but I learned that I did not actually know what it was about going in.

The authors, who apparently also developed the BPMN standard up through version 2.0, build an explanation of what it does from its simplest elements to the most complex, and then describe the implications of each feature and what it represents to the business analyst/process modeler, the process managers, participants, and engineers, and the process improvements and automations that are supposed to follow. They do this in a very clear and understandable order and any questions that occurred to me as I read were answered at some point in the text.

The main difference between the types of business systems the BPMN is meant to describe and the analysis, modeling, and even business process reengineering I had done previously is that business processes often have multiple logical parts that have to be tracked, split, and rejoined over time in combinations of entirely rational but potentially dizzying complexity. I’ve worked with some complicated software, systems, and processes, and they have sometimes involved checking against multiple conditions with varying delays and methods of synchronization, but the picture they built up gave me a clear understanding of the challenges specific to this discipline.

The big issue is that the analysis of the business process itself, as described by the notation, is intended to live at a particular level of abstraction that doesn’t directly relate to the software in which the described system is meant to be implemented. It is meant, instead, to logically represent a (wait for it…) business process in terms of business operations. There is a particular reason for that, I think, as demonstrated by the creation of the standard after other modeling languages, techniques, and implementation systems had been in use for some time and found wanting, and the standard’s subsequent evolution through several iterations, each one incorporating improvements identified through application in the field.

The book is organized into seven chapters as follows:

  1. Introduction: Brief context in terms of vendor, application, audience, and relationships.
  2. The notation in detail: How to use the actual symbols.
  3. Strategic process models: How to use a very streamlined subset of the standard to diagram the top-level framework. It shortcuts the formal syntax in favor of clarity for a broad audience including participants and senior managers.
  4. Operational process models: How to use the notation to describe business processes in detail.
  5. Technical process flows and process automation: How and why the various software systems don’t match the models described by the notation and how to deal with that.
  6. Introducing BPMN on a broad base: Adopting BPM processes as an organization.
  7. Tips to get started: Getting up to speed as a practitioner. Very short.

Posted in Tools and methods

Discrete-Event Simulation: Looking Under The Hood

Yesterday I mentioned a few constructs that a discrete-event simulation system would have to have. They are:

  • A time-ordered future events queue where events are created and entered into the queue, and then processed one by one in time order. The clock is incremented to the next time whenever an event completes.
  • The ability to suspend a process for a specified amount of time, whereupon the process continues.
  • The ability to suspend a process until an external condition becomes true, whereupon the process continues.

The first feature is pretty basic and can be implemented in any language. The events have to have some kind of structure that gets stored in a future events queue. The future events queue can be implemented in a number of ways, though a splay tree appears to be most often chosen. If you are using a dedicated language or tool the details of this implementation will be hidden from you. If you are using a general purpose language you will have to implement this feature.

The items stored in the future events queue have to support access to two important pieces of information: the entity with which the event is associated and the time of the event. If two or more events are scheduled to take place at the same time the items may also have to support access to information that will determine priority. One possible implementation of such an item would be a structure or object consisting of a pointer to an entity, a floating point value for the time, and an integer value for the priority. Many variations on this theme are possible. The ultimate point of processing an event is to be able to jump to a desired piece of code that performs some action.
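Here is a minimal self-contained sketch of such an item and queue in C. All names are invented for illustration, and a simple sorted linked list stands in for whatever higher-performance structure (a splay tree or otherwise) a real tool would use.

/* Sketch: a future events queue as a time-ordered linked list of items, each
   holding an entity pointer, an event time, and a tie-breaking priority. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Event {
    void         *entity;    /* the entity this event belongs to         */
    double        time;      /* simulated time at which the event occurs */
    int           priority;  /* tie-breaker when times are equal         */
    struct Event *next;
} Event;

static Event *queue = NULL;  /* head of the future events queue */

/* Insert an event, keeping the list ordered by time, then priority. */
void schedule(void *entity, double time, int priority) {
    Event *e = malloc(sizeof *e);
    e->entity = entity; e->time = time; e->priority = priority;
    Event **p = &queue;
    while (*p != NULL && ((*p)->time < time ||
          ((*p)->time == time && (*p)->priority <= priority)))
        p = &(*p)->next;
    e->next = *p;
    *p = e;
}

/* Pop the earliest event, or return NULL when nothing is left to do. */
Event *next_event(void) {
    Event *e = queue;
    if (e != NULL) queue = e->next;
    return e;
}

int main(void) {
    schedule("B", 7.5, 0);
    schedule("A", 2.0, 0);
    schedule("C", 7.5, 1);   /* same time as B; processed after it */
    for (Event *e = next_event(); e != NULL; e = next_event()) {
        printf("t = %.1f  entity %s\n", e->time, (char *)e->entity);
        free(e);
    }
    return 0;
}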

Note that however long it takes for any piece of code to run, each piece of code performs calculations associated with an instant in time having zero duration. Events handled by the future events queue are associated with the beginning and end of any modeled action that takes non-zero time to complete. If the event is associated with the end of a wait state, either because a specified amount of time elapsed or because some condition finally became true, then another piece of code will run that takes zero time and places the entity into another wait state.

I usually use the term entity to describe something in a model that either acts or is acted upon, but it’s also possible to have events that are just part of the program’s mechanism. For example, a program may fire a process at regular intervals that collects statistics or generates a smoothly advancing time display, but the process isn’t associated with anything being modeled per se. In the case of a modeled entity control of the program will pass to a piece of code associated with a structure or object (you can see why object-orientation might be handy here). In the case of a purely programmatic entity control of the program might pass to a piece of code associated with a standalone procedure or function.

So what should a chunk of event code look like?

In the case of a purely programmatic entity the code might be straightforward. If the code is meant to update a time the code might simply do so (by writing to a display or an event file or both) and return. If the code is meant to gather statistics it might have to scan a list of entities to determine how many are in a given state and either reset or update a counter accordingly.

In the case of a modeled entity the behavior might be quite complex. Each entity may have a list of destinations to visit or actions to perform. The list of things to accomplish might itself be modular and variable (e.g., different types of entities might have different lists of destinations and the lists may themselves vary under different conditions). It may have to follow a set of rules in order to move from place to place. Regardless of the implementation, control of the program has to pass to a desired piece of code that will be dependent on the state of the entity with which the event is associated.

How those pieces of code get written and divided up can vary greatly with different implementations. We know that actions are carried out at various times and take specified amounts of time to complete. Remembering that all code executes in zero-time increments, the trick to designing discrete-event simulation code is to break the sections of code up across wait states. A pseudo-code implementation might look like this:
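The following sketch in C captures the idea, reusing the schedule() helper from the queue sketch above; the states, field names, and times are invented for illustration.

/* Sketch: an entity's behavior broken up across wait states. Each branch does
   its zero-time work, then schedules the event that will end the next wait. */
typedef enum { ARRIVING, TRAVELING, IN_SERVICE, DONE } State;

typedef struct {
    int   id;
    State state;
} Entity;

void schedule(void *entity, double time, int priority);  /* from the sketch above */

void handle_event(Entity *e, double now) {
    switch (e->state) {
    case ARRIVING:                     /* entity has just entered the system */
        e->state = TRAVELING;
        schedule(e, now + 2.0, 0);     /* wait state: travel takes 2.0       */
        break;
    case TRAVELING:                    /* travel just completed              */
        e->state = IN_SERVICE;
        schedule(e, now + 5.0, 0);     /* wait state: service takes 5.0      */
        break;
    case IN_SERVICE:                   /* service just completed             */
        e->state = DONE;               /* zero-time wrap-up; entity exits    */
        break;
    case DONE:
        break;
    }
}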

The point is that you have to specify actions for every possible state and its transitions to every other possible state. That may or may not be easy to do, but at least the implementation in a general purpose language or canned tool is easy to understand. What might be more difficult to understand is an implementation that allows you to write code that looks like normal, procedural code but that has waits embedded directly in it. The SLX language supports writing code in this way and I’ve spent a lot of time thinking about what must be going on under the hood.

The code looks like entire chunks of behavior for an entity are defined as single functions which can have multiple waits. When the waits complete the function’s execution continues at the next statement. I am used to a function calling convention where a current code pointer is pushed onto the stack along with parameters and local variables for the new function. When that function completes the process is unwound by popping all of those items off the stack in reverse order. As each event is peeled off the future events queue the function associated with the active entity gets called. The tricky part of this call is that it might not begin at the first statement but instead some mechanism would have to exist that allowed the code to jump to the statement just after the wait. Similarly, all of the local variables defined within the function would have to be restored from someplace other than the stack. As a practical matter the local variables would actually have to be state variables associated with the entity, and those would likely be stored with the entity’s representation on the heap.

I can’t say for sure what the underlying implementation of SLX or similar languages is, but it’s interesting to contemplate. It’s possible that the compiler reorganizes the written code to work as I described in the pseudo-code above, but the execution looks pretty continuous when you step through it in the integrated debugger so that’s probably not what happens. It is certainly interesting to think about how the mechanism would be implemented in a general purpose language. It seems to me that the jump from the beginning of the function to the continuation point could be replicated with goto statements (possibly in combination with case or switch statements). I know, I know, gotos are evil… Anyway, such a mechanism would preclude implementation in a general purpose language that does not support them (e.g., Java). Clearly a dedicated store, retrieve, and jump operation would be easy to implement in assembly language generated by a high-level language compiler or interpreter.
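One way to sketch that store-and-jump mechanism in C is the protothreads trick: keep a resume point with the entity, switch on it at the top of the function, and hide the bookkeeping behind a macro so the body reads like straight-line code with waits embedded in it. This is a guess for illustration only, not a description of how SLX actually works; schedule() is again the helper from the queue sketch above, and the names and times are invented.

/* Sketch: resumable entity code via a stored resume point. The "local"
   variable lives with the entity on the heap, and each WAIT records where to
   continue, schedules the wake-up event, and returns to the event loop. */
void schedule(void *entity, double time, int priority);  /* from the sketch above */

typedef struct {
    int    resume;      /* statement to continue from on the next call */
    double work_left;   /* a "local" persisted with the entity          */
} Job;

#define WAIT(j, now, delay)                 \
    do {                                    \
        (j)->resume = __LINE__;             \
        schedule((j), (now) + (delay), 0);  \
        return;                             \
        case __LINE__: ;                    \
    } while (0)

void job_process(Job *j, double now) {
    switch (j->resume) {
    case 0:                          /* first entry: start at the top    */
        j->work_left = 3.0;
        WAIT(j, now, 1.0);           /* reads like an inline "wait 1.0"  */
        j->work_left -= 1.0;
        WAIT(j, now, j->work_left);  /* reads like "wait work_left"      */
        j->resume = -1;              /* finished; later calls do nothing */
    }
}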

An entity may be active in the sense that it carries all of its own logic, state data, and history, or it may be passive in the sense that its behavior is determined entirely by other entities with which it interacts. Different entities in a model might be both active and passive at different times; it’s all up to the implementation.

In a flock of birds it’s probably best to implement each bird as an active entity whose behavior is determined by a (hopefully simple) set of rules based partly on its own motivations and partly based on those of one or more nearby birds.

A pool of maintenance personnel might be represented in an entirely passive way. They start off in the pool and get called away to perform some action for a period of time determined by some other entity or process, and are then returned to the pool. While statistics might be kept on the usage rate of the pool as a whole the behavior and history of each entity might be of no specific interest.

Travelers crossing a land border might have to undergo various kinds of inspections. For example, all travelers of one type might have to go to a dedicated primary inspection, whereupon some of them will go to the port exit or one or more secondary inspections. The traveler’s next destination might be determined by rules associated with the traveling entity while the time each inspection takes might be determined according to rules associated with the inspection station. This would be an example of a hybrid architecture.

Posted in Simulation

How Timing Works: The Internal Architecture of a Discrete-Event Simulation

On Monday I described the different ways time and events are handled in continuous and discrete-event simulations. Today I want to go into a little more detail about how those things work in a discrete-event simulation because the internal architecture is more complex–and more interesting.

In order to have something to simulate the model will have to be populated with entities that either change or perform actions. They may all exist at the beginning of the simulation run, they may be introduced into the system during the course of the run, or a combination of these methods can be used. If the simulation is at all interesting or complex the state of each entity will be updated many times during its existence in the model. The state of each entity can be defined by one or more characteristics of the entity, and any number of those characteristics can be examined or modified during each simulated event.

Events are generated in a number of ways. The arrival of an entity into the system is an obvious example but other examples have to do with movement, the start or completion of some action, an entity’s removal from or placement into a pool or queue, a scheduled event, or some other mechanism. Each event causes a timing entry to be generated in the model and those are stored in a time-ordered structure that holds all events that are set to occur as the simulation runs. The simulation continually picks the first entry in the event list and performs actions defined to occur at that time. Events can change any characteristic of an entity, spawn or destroy other entities, begin a process, end a process, modify some value, or place the entity in a queue or pool where it needs to wait for something.

Roughly speaking there are two things an entity can be doing at any given time. It can be waiting to complete an action in which it is involved or it can be waiting for the completion of an action in which some other entity or process is involved. Examples of the former are performing an operation or undergoing an operation. Moving from one place to another may be thought of as one of those two possibilities. Examples of the latter are waiting in a queue for entities ahead of it to complete processes or waiting for some resource to become available or waiting in a pool to be called for a scheduled event or an event generated as part of some other process.

Let’s look at each of these in turn.

Wait for an action to complete in which the entity is involved: Suppose an entity enters the system at some time. It then needs to move to some place (this can be physical or logical) and the movement will take some amount of time. Assuming the arrival event is currently being processed, the action of the simulation will be to create a new event, assign it the details of the new location, assign it a future time when the action will be completed, and place the event in the future events queue. The simulation will then stop processing the current event, retrieve the next event from the future events queue, and process that. Eventually the simulation will come to the event designating the end of the original action and process that.

Wait for an action to complete in which some other entity or process is involved: Completing an operation may result in an entity being placed in a queue or pool. In this case the entity simply waits somewhere for other events to happen. Eventually, one of those events will involve the entity in question, and some other process will begin. The change in a specific value may also spawn a new action.

The mechanisms described define a need for a few constructs to manage events. I’ll use the same ones James O. Henriksen defined in his SLX language, which is an extremely powerful, general purpose, C-based programming language that includes features to handle discrete-event mechanisms.

  • When an event completes the clock advances to the time of the next event in the future events queue. The next event is then retrieved from the head of the queue.
  • When a process needs to wait for a known period of time, the current event finishes and a new event is defined and placed in the future events queue at the known future time (current time plus elapsed time increment). When the simulation finally processes the new event the wait is complete.
  • When a condition defined by one or more variables becomes true an event is fired. This means that the program must evaluate the expression when any of the variables is modified, which can get complex. This construct may be thought of as wait… until. The program must maintain its own structure of entities waiting for specified conditions to become true. Advanced languages like SLX do this automatically. Note that the condition might be based on the availability of other entities. For example, if a pool of ten maintenance workers has been defined, and a process needs five to begin, but only three are available, the process in question will have to wait. Some other event will be processed instead and then the condition can be reevaluated.

    Events are shown as triangles while the related entities are shown as circles.

The mechanism of waiting in a queue does not need to be handled by a dedicated language construct. It is handled using the mechanisms already described.

Posted in Simulation

An Even Bigger Difference Between Continuous Simulation and Discrete-Event Simulation

Yesterday I gave an overview of how time is (typically) handled in continuous and discrete-event simulations. Today I want to discuss an even bigger difference between the two.

Discrete-event simulation is probably well-named because it describes what it does, which is model events individually exactly when they happen. Almost any calculations may be performed as each event is processed in its time, but there is no requirement that any calculus be used. It might be, but you can write some pretty hairy simulations that don’t involve the merest whiff of it. For example, I worked on a team that simulated the flight, maintenance, and logistics activities of groups of aircraft over time. The basis of our work was a model that had evolved over the course of several decades until it was considered the most complex ever created in its particular implementation language.

Continuous simulation, by contrast, is about the continual reevaluation of systems described by differential equations. While theoretically it might be possible to differentiate over any quantity (e.g., location, temperature, density), in practice the main quantity over which systems are differentiated is time. Running the reevaluations during each increment of time serves as a form of ongoing integration.

If we consider a simple system that models movement along a single axis we might have something like:

dx / dt = v

where
    x is a measure of distance along an x-axis
    t is time
    v is velocity

Integrating this over a specified period of time we can rewrite this as:

(x – x0) / (t – t0) = v

where
    x0 is the initial location
    t0 is the initial time

Rearranging yields something we can actually use:

x = x0 + v * Δt

That is, the new x location is equal to the original x location plus the (average) velocity over the interval being considered, times that interval of time. This is probably the simplest possible differential equation I could write. I think every beginning physics and math student starts off with this or something very much like it. In (let’s say C) code you would write something like:

x = x + (v * timestep);

Things can get very tricky from there very quickly. In many cases you might be dealing with nonlinear equations. At other times you might need to solve a large number of differential equations simultaneously. You might want to calculate new values using coefficients for properties that are themselves a function of the new value you’re trying to calculate. I’ll talk about a few of these over the next couple of days but this is the main point I wanted to get across.

In my personal experience I’ve found that continuous simulations can be harder to model in terms of how the activities are represented. This is not because “calculus!” but because getting all of the effects right, describing them in terms of the correct governing equations, and working out the solutions in running code can be complicated. That said, the basic control framework of a continuous simulation is almost mindless: Do some stuff. Advance the clock by a fixed amount. Repeat. I’ve found it much easier to write statements that describe the events that are modeled in discrete-event simulations, but the implementation of the discrete-event mechanism in code can be tricky. That’s why such simulations are often written in special languages devoted to this particular practice. They can be written in a more general language, but the practitioner has to know a lot more to make it work.
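That mindless control framework, sketched in C around the single-axis example above (the velocity, step size, and end time are invented):

/* Sketch: the fixed-step control loop of a continuous simulation, using the
   single-axis movement equation from the example above. */
#include <stdio.h>

int main(void) {
    double x = 0.0;            /* position                          */
    double v = 3.0;            /* velocity (held constant here)     */
    double t = 0.0;            /* simulated time                    */
    double timestep = 0.25;    /* fixed increment of simulated time */
    double t_end = 10.0;

    while (t < t_end) {
        /* "Do some stuff": integrate each governing equation over the step. */
        x = x + (v * timestep);

        /* "Advance the clock by a fixed amount." */
        t = t + timestep;

        printf("t = %5.2f  x = %6.2f\n", t, x);
    }
    return 0;
}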

It is also possible, of course, to create hybrid simulations that combine both techniques, or to write each type of model using the other’s underlying timing mechanism. One might have to be of questionable sanity or sobriety to do the latter, but it could be done.

I’ve written continuous simulations that used time steps of anywhere from 1/4 of a second to 50 seconds, but as I mentioned yesterday the range of times can be much, much larger. I’ve also worked on systems where different processes used different time steps. I’ve written and worked on discrete-event simulations where some activities were scheduled to occur at regular intervals, but those never involved integrating differential equations over time.

What these two methods have in common is that they can both be complex. Whether describing the system is conceptually easy or hard and whether the underlying implementation is easy or hard it is still incumbent upon the modeler to identify and correctly characterize the components of the system being modeled. You can encounter plenty of difficulty understanding a system and its inputs, outputs, behaviors, and interactions. Just because you know how the pieces work doesn’t guarantee that you’re going to come up with the right answer. Teams of very smart people can work on these things for years on end and still find things that are wrong, missing, improperly defined, have incorrect data, and so on.

Posted in Simulation

A Major Difference Between Continuous Simulation and Discrete-Event Simulation

I’ve mentioned continuous simulation and discrete-event simulation previously but I wanted to take some time to illustrate the differences between them. For today I’m going to keep it simple. The truth is that when you know what you’re doing, you can use either method to accomplish whatever you’d like; the methods can therefore be said to be interchangeable. That said, each method is better at handling certain classes of problems, so let’s talk about what those are.

A major difference between the two, bearing in mind that we’re keeping it simple for now, is how time is handled. In continuous simulation all processing is typically performed in time periods that begin at regular intervals of simulated time. Even if real or simulated events happen between regularly-spaced intervals they are only processed on the specified interval. In discrete-event simulation the events can happen at arbitrary times and each event is processed exactly when it occurs.

Consider the timing diagram above. Let’s imagine that we have two systems that are each initialized and have some processes taking place. Events associated with these processes are shown on their respective timelines in green. As you can see, in the continuous simulation the events occur at regular intervals, while in the discrete-event simulation the events occur at irregular intervals. It is also possible for multiple events to occur at exactly the same time in the discrete-event simulation (shown by the stacked events in green). Events occurring at the same time can be processed in order of arrival or by some sort of priority defined within the system. In the continuous simulation all events falling within an interval are processed together at the interval boundary, though they may be handled in a specified order within the block of processing done on that interval.

Now let’s consider the arrival of new entities into each system, shown in blue. In the discrete-event simulation these events are processed when they occur. In the continuous simulation these events are likely to be buffered and the processing is likely to be done on the next regularly-scheduled event interval. The same is true of user events, shown in red.

The following comments apply to both kinds of simulations except where noted.

Note that when I mention time in this discussion I am always describing simulated time (except, possibly, for the user events). In a continuous simulation the simulation clock is always advanced by a fixed amount while in a discrete-event simulation the simulation clock is advanced as far as it needs to be to get to the next event. If multiple events in a discrete-event simulation take place at the same simulated time the simulation clock will not be advanced until all those events have been processed. In computer time the event or events can be processed as quickly as possible, the clock can be advanced, and the next event or events can be processed. If simulated time moves more quickly than time moves in the real world (because the simulation is less complex, involves fewer or longer time steps, or has fewer entities than the host computer can process in real-time) then it becomes possible to insert wait states into the simulation so that it runs exactly in sync with real-time. If the simulation is more complex, involves more or shorter time steps, or has more entities than the host computer can process in real-time then there isn’t much that can be done to run the system in real-time.
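Here is a sketch in C of inserting those wait states, assuming the POSIX clock_gettime and nanosleep calls are available; the mapping of one simulated second to one wall-clock second, and all of the names, are illustrative.

/* Sketch: pace a faster-than-real-time simulation to real time by sleeping
   until the wall clock catches up with the next event's simulated time. */
#include <time.h>

static double wall_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Call before processing an event at simulated time sim_t (in seconds),
   where wall_start was sampled when the run began. */
void pace_to_real_time(double sim_t, double wall_start) {
    double target = wall_start + sim_t;   /* when this event is "due"     */
    double now    = wall_seconds();
    if (now < target) {                   /* running ahead: insert a wait */
        struct timespec gap;
        double wait = target - now;
        gap.tv_sec  = (time_t)wait;
        gap.tv_nsec = (long)((wait - gap.tv_sec) * 1e9);
        nanosleep(&gap, NULL);
    }
    /* if now >= target the run is behind real time; waiting cannot help,
       so the event is simply processed as soon as possible */
}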

The time scale of either type of simulation can vary widely. Simulations of subatomic particles might be on nearly infinitesimal time scales while simulations of geologic or astronomic processes might be on exceedingly long time scales. Simulations of human-scale activities like manufacturing processes, building evacuations, and maintenance operations might take place on a smaller range of human scales.

Some simulations take hours or days to run, even on supercomputers. Others might run almost instantaneously.

For most applications simulations will be run without wait states so they proceed as quickly as possible. Simulations for research, analysis, or design are typically run in this way. If the simulation is used for training, control, or games then the desire may be to run it in real-time.

Many simulations are not meant to handle user interactions; they are initialized and run to completion. In my experience these simulations are colloquially referred to as “fire and forget.” Simulations for research, analysis, or design are typically run in this way as well. Other types of simulations are meant to respond to interactive user or environmental input, and again these are likely to be for training, control, or gaming applications.

Note that not all systems that run in real time or respond to inputs are simulations. A simulation is intended to be a fairly specific thing, which I will discuss tomorrow.

Posted in Simulation