A Simple Discrete-Event Simulation: Part 71

Continuing yesterday’s investigations I did some experiments with defining and instantiating a few objects to see how JavaScript allocates memory for them. I used the memory inspector built into Firefox to track memory and object usage as closely as possible. JavaScript implementations may differ in the details across browsers, but I think I got a reasonable feel for what the language mechanism is doing behind the scenes.

Language and framework implementations often try to balance optimization for speed and memory usage. If a mechanism optimized for memory it might allocate heap memory for items based on exactly their known size. If the mechanism wanted to expand the size of that item dynamically it might allocate the exact space needed for a new, larger item, copy the original information over, and then fill in the new space at the end of the larger item. (“HDR” in the first element refers to the overhead the implementation stores with items that are objects. This consumes quite a few bytes.)

If a mechanism wanted to optimize for speed it might allocate a large area of memory and then let an item expand within that area as needed. If a mechanism wanted to balance speed and memory usage it might allocate an initial slug of memory that allows for some growth by an item, and then go through the process of expanding the item over time as it becomes necessary. The designers of the mechanism will balance the needs of flexibility and optimization on various resources.

I experimented with some objects I already had but then tried creating a test object that would illuminate the exact behaviors I was curious about. What I found was interesting.

It turns out that even if the mechanism “knows” how much memory needs to be allocated it still builds items in memory in an incremental fashion. The initial allocation of memory for the objects I was working with was 224 bytes. That happened when I declared an object with only a single numeric member. I then kept adding one number at a time until the inspector showed a larger increment. The allocation stayed at 224 bytes until it finally jumped to 288. Subsequent increments happened at 352, 480, 736, 1248, 2272, 4320, and 8416 bytes. If you subtract 224 from each of those values you get powers of two: 64, 128, 256, 512, 1024, 2048, 4096, and 8192. I don’t know if the pattern keeps doubling after that. In any case it looks like an initial allocation is made and then additional space is added in increasingly larger chunks of memory.

I can’t say what form the memory allocation takes, either: whether the mechanism creates the new memory, copies the old contents over, and then deallocates the old block, or whether it simply daisy-chains new slugs of memory. Given what I know about JavaScript the latter seems entirely possible. You can never count on things being allocated contiguously in memory, as I learned from working with multidimensional arrays. I also wonder what this means for the storage of variables that would span a gap between incrementally allocated slugs of memory. For example, if there are four bytes left in the current allocation and the object is to be expanded to hold an eight-byte number, would the mechanism use the four bytes at the end of the extant memory block plus the first four bytes of the new one, or would it leave the last four bytes of the initial block unfilled in favor of taking the first eight contiguous bytes in the new increment of memory? This brings up another question: does the mechanism pack different types of data items tightly or on 2-, 4-, or 8-byte boundaries? The latter option would answer the last question conclusively; there’s a lot to know.
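The observed sizes fit a simple pattern: a 224-byte base plus a power-of-two extension that doubles with each growth step. Here’s a quick sketch reconstructing my measurements (this models the numbers I observed only; the engine’s actual internals are opaque):

```javascript
// Reconstruct the observed allocation sizes: a 224-byte base plus a
// power-of-two extension that doubles with each growth step.
function observedAllocationSizes(steps) {
  const sizes = [224];               // initial allocation for a one-member object
  for (let i = 0; i < steps; i++) {
    sizes.push(224 + 64 * 2 ** i);   // 288, 352, 480, 736, ...
  }
  return sizes;
}

console.log(observedAllocationSizes(8));
// [224, 288, 352, 480, 736, 1248, 2272, 4320, 8416]
```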

Why does the allocation mechanism do this? I’m guessing it’s because the definitions of objects are so flexible the mechanism never actually knows what it’s going to need, so it always builds everything on the fly without regard for what some crazed software engineer might be trying to throw at it.

Once I understood this process I started digging into the question I really wanted to answer, which is how the declaration of member functions affects the allocation of memory for objects. Here I needed to instantiate not one object but several, to get further insight into what was happening. I didn’t trace into the initialization of the objects but only recorded the memory devoted to objects (and the count of objects) as each new statement was executed to do the instantiation. The test object included 26 number values (cleverly named this.a through this.z) which would be expected to consume 208 bytes of memory (at eight bytes per number). Allocating the first object consumed 352 bytes while subsequent instantiations each consumed 272 bytes. This suggests that the internal overhead for each object is 64 bytes and the global overhead for a type of object is an additional 80 bytes.

Next I declared 26 short functions that return the value of each of the number values. If I declared all of the methods inside the closure the memory consumed was 736 bytes for the first instantiation and 656 bytes for all allocations thereafter. If I declared all of the methods as external prototypes then the instantiation for all objects, including the first one, was only 272 bytes. That suggests to me that declaring methods as external prototypes does, indeed, reduce the memory consumption of allocated objects.
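A minimal version of the two declaration styles I compared (the names here are my own test scaffolding, not the actual test object):

```javascript
// Style 1: methods declared inside the constructor (closure style).
// Each instance gets its own function references, so per-instance
// memory grows with the number of methods.
function ClosureStyle() {
  this.a = 1;
  this.getA = function() { return this.a; };
}

// Style 2: methods declared on the prototype. All instances share a
// single function object, so per-instance memory holds only the data.
function ProtoStyle() {
  this.a = 1;
}
ProtoStyle.prototype.getA = function() { return this.a; };

const c1 = new ClosureStyle(), c2 = new ClosureStyle();
const p1 = new ProtoStyle(),   p2 = new ProtoStyle();
console.log(c1.getA === c2.getA); // false: separate function references
console.log(p1.getA === p2.getA); // true: one shared function
```

The sharing in the second style is why the first instantiation was cheaper and, more importantly, why every subsequent instance carried no per-method cost in my measurements.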

I got exactly the same results when I increased the amount of code in each of the methods, so the code management overhead appears to be independent of the size of the associated code. It’s possible that there’s a threshold effect with code just like I described for memory (i.e., if I made the code even bigger it may require more memory for each object), but that’s beyond my current need and desire to investigate.

What does this mean for the ongoing discrete-event simulation project? It means that I want to migrate the methods out of the various object constructor closures whenever possible. This gets tricky when the constructors include code which has to run as part of setting up the object (beyond simply copying parameter values to internal member values) but isn’t a huge problem. I’m also thinking that I might not even bother for the component objects that make up a simulation environment. There are usually going to be fewer components than there will be entities getting processed. Entities (can) come and go at a furious pace and it’s more important to save that memory if possible. I’ve worked with models that process multiple tens of thousands of entities and that were constrained by the host machine’s resources, so the memory consumption can definitely add up.

The current project’s entity object is fairly simple, by design (remember that we chose to build the intelligence into the environment components and not the entities — so far), so it will be a simple matter to externalize most or all of its methods. We can also dispense with most of the setters and getters, which seem to be more trouble than they’re worth in JavaScript. It doesn’t feel like “good practice” but it seems right from the balance of engineering considerations.


A Simple Discrete-Event Simulation: Part 70

Today I did some reading relevant to the To Do list item to learn how to use the prototype object pattern as opposed to the purely closure-based method I’ve been using. Suffice it to say there’s a lot of information to digest about the subtleties of objects in general within JavaScript. The details of using public, private, and privileged members are even more subtle.

Everything I’m going to write here is true for ECMAScript 5. Version 6 has added new ways to declare objects that may be easier to deal with, but I’m not using those features, to maintain wider backwards compatibility. I’m pretty sure the new elements in 6 are only additions and that everything in this discussion will continue to be true. I’m not formally using Babel or TypeScript just yet and, even more importantly, everything I write here is subject to revision over the next couple of posts (and beyond) as I learn and experiment with things.

Member variables (I usually refer to those as properties) can either be public or private. Public properties are declared within the constructor (closure) using this. notation. Private properties are declared using the standard var notation. Public properties are visible from anywhere except to private methods and can be modified, redefined, removed, and so on dynamically. Private properties (including the constructor’s parameters) are only visible within the constructor to private and privileged methods.

In this example we see that public variables are visible outside the object and do not require setters and getters. It’s also possible to add public properties to objects using an external prototype notation.

Functions (methods) can be members of objects as well, and they can be public, private, or privileged. Public methods are added externally to the constructor using the prototype notation and are visible to each other and to outside entities. They are not visible to private or privileged methods. Private methods are declared within the constructor and can only see private properties and methods. Privileged methods tie everything together. They can see private and privileged methods and are visible to public methods and each other.
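The three kinds of members look something like this in ES5 (a minimal sketch with hypothetical names, not code from the project):

```javascript
// A sketch of the three kinds of members (ES5 style).
function Thing(x) {
  var secret = x * 2;                 // private: visible only inside the constructor

  this.label = "thing";               // public property, set per instance

  this.getSecret = function() {       // privileged: closes over the private state
    return secret;
  };
}

// Public method: shared via the prototype. It cannot see `secret`
// directly but can reach it through the privileged accessor.
Thing.prototype.describe = function() {
  return this.label + ":" + this.getSecret();
};

const t = new Thing(21);
console.log(t.describe());  // "thing:42"
console.log(t.secret);      // undefined: the private variable is not a property
```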

One observation is that if we’re going to make setters and getters for private properties we’d have to write a private method for the getter and then call that from a privileged getter method, which would finally make the property’s value visible to the privileged and public methods, and then we’d have to do the same for the setter in reverse. There may be reasons to do that but I’m not sure I see any in the code I’ve been writing. If you have any guidance on this subject I’d love to hear it. That said, I can imagine other situations where it might be worthwhile to hide certain mechanisms.

Simulation codes often place a premium on speed and we’d want to avoid this kind of thunking up and down if we can. We’ve also discussed previously that memory usage can be equally important, especially in systems with lots and lots of entities. I’m thinking that if the properties are all declared as public and the methods as public also, we make the instantiated objects as small as possible since they carry only data and ensure that code isn’t duplicated if it doesn’t need to be. (And by “duplicated” I mean references to the functions being stored with the instantiations, not the entire code — I think. It’s hard to find solid documentation on what’s actually happening in the different implementations, which is annoying. That said, I’ve identified some possible leads as to how this question can be answered and will report back with my findings.) Tomorrow I’ll try to convert one of the component objects and see how it goes. We can also discuss how inheritance might be used in the component objects since there’s a lot of duplication there, but that’s not the problem we’re trying to solve right now…

One final note: In theory you’re supposed to declare a private variable called that, which is supposed to be set equal to this, in order to make the object available to private methods, but I don’t know if that requirement is still in effect. Adding it in different places didn’t seem to affect anything and omitting it doesn’t keep anything from working (in my limited tests). See this important article from 2001.
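In case it helps, here’s the shape of that pattern as I understand it (an illustrative sketch with made-up names): private functions are invoked as plain functions, so inside them this is not the instance, and capturing the instance in that makes it reachable.

```javascript
// The classic `var that = this` workaround: private helpers are called
// as plain functions, so inside them `this` would not be the instance.
function Counter(start) {
  var that = this;          // capture the instance for private helpers
  this.count = start;

  function bump() {         // private method: uses `that`, not `this`
    that.count += 1;
  }

  this.increment = function() {  // privileged wrapper exposing the helper
    bump();
    return that.count;
  };
}

const c = new Counter(10);
console.log(c.increment()); // 11
console.log(c.increment()); // 12
```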


A Simple Discrete-Event Simulation: Part 69

Today I ticked one item off the To Do list and generally streamlined other parts of the code. The list item involved modifying the Queue component so it would smoothly handle situations when the traversal time is set to zero. If you don’t recall, this is the case when modeling abstract queues like those in data processing or business systems. Even if the time isn’t technically zero in those cases an entity is theoretically ready to be retrieved from a queue the instant it’s put in there. It takes a finite amount of time to traverse a physical queue in the real world. The solution involved calling the traverseComplete function directly in the receiveEntity function rather than putting the event into the future events queue.
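The component code itself isn’t shown here, but the shape of the fix looks something like this (the function names mirror the ones mentioned above; the surrounding scaffolding is my own simplified stand-in):

```javascript
// Sketch of the zero-traversal-time fix. receiveEntity either schedules a
// traversal-complete event for later, or, when traverseTime is zero, calls
// traverseComplete immediately so the entity is ready the instant it arrives.
const futureEvents = [];

const queue = {
  traverseTime: 0,
  ready: [],
  receiveEntity(entity, now) {
    if (this.traverseTime === 0) {
      this.traverseComplete(entity);           // bypass the event queue entirely
    } else {
      futureEvents.push({ time: now + this.traverseTime,
                          action: () => this.traverseComplete(entity) });
    }
  },
  traverseComplete(entity) {
    this.ready.push(entity);                   // entity may now be retrieved
  }
};

queue.receiveEntity({ id: 1 }, 0);
console.log(queue.ready.length);       // 1: available immediately
console.log(futureEvents.length);      // 0: nothing was scheduled
```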

You can tell it’s working as intended by noticing that the first entities drop right through the big, common Queue component (Q0) and never show a red waiting-to-traverse status even when they stack up during the run.

I originally thought this was going to be more complicated but once I recognized that I had a nicely compartmentalized traverseComplete function with all of the requisite logic the fix was obvious.

The things I cleaned up started with rationalizing the way I defined the display and movement parameters for Path components. I originally had two ways of generating the path displays: one based on each component’s built-in dataGroup object and one for each component’s graphics object. It turns out that the initialization process was using a goofy mix of techniques so I changed things so the locations are all specified in the graphics object and then transferred to the core component object. The last part is important because the actual movement is calculated in the core component object.

Another thing I streamlined was the definition of the graphics objects in the form of the DisplayElements constructor. I modified it so it obviated the need to supply flags for whether the incoming and outgoing nodes exist, since this can be determined automatically based on the parent component’s type, and I also added an automatic call to its defineGlobalDefaultColors method so it doesn’t have to be invoked externally every time.

I turned off the display of the Arrivals component to save some space and sped up a couple of other items that aren’t otherwise interesting enough to describe.

Here’s the new setup code. Not that you’d notice, but it is a little shorter and cleaner.

I noticed that the arrival schedule and entry distribution arrays are only defined for seven hours (14 x 30 min or 420 min total) while the simulation is allowed to run for ten hours (20 x 30 min or 600 min total). This doesn’t appear to be an issue since all entities are injected into the system within the first seven hours, but I’ll add a To Do item to review it.

Right now we have array parameters with multiple dimensions defined by time (schedule intervals) for the arrival items and for processing time and diversion percentage based on entity type. It’s possible to do both things but it’s only practical to implement if good data can be collected to drive the process. For now we’ll keep things simple and be satisfied that we’ve demonstrated the capability.

Here’s the updated To Do List:

  • Review handling of arrival schedule and entry distribution arrays beyond period where they’re defined
  • Standardize linking and exclusivity process
  • Resolve and standardize new vs. advance issue
  • Rework drawing of Path components so correct elements are always on top
  • New property for forward exclusivity as opposed to receiving exclusivity.
  • Formalize and implement method of receiving from multiple upstream components in turn. Implementing and observing this may illuminate the behaviors that will need to be implemented for the general path solution described in one or more previous posts.
  • Rework the Queue mechanism to flexibly handle finite-traversal time and zero-traversal time configurations
  • Revisit distribution logic to make sure it’s cycling the way it should be.
  • Learn JavaScript’s prototype object pattern and re-implement everything in that in place of the closure pattern I’ve been using; I’ll want to bite that bullet before this thing goes much farther
  • Add Control, Bag, and Stack components
  • Expand function of Process components to handle multiple entities in parallel, in effect making a single component function as if it were multiple, associated ones
  • Discrete-event simulation frameworks often include the concept of sets (logical containers that can be iterated, compared for intersection, and so on), so that idea should be implemented; this would expand on things we’re doing now with lists of components and entities; the need for this was inspired by thinking about the bag data structure in general
  • Ponder the idea of implementing a combined Queue-Process component
  • Expand Path component representation so it can incorporate multiple line segments
  • Add ability to sense reaching tail end of FIFO queue based on stopping to wait on a Path component; collect statistics accordingly (possibly add wait flag to entities so they can test against the next entity in line)
  • Look into creating a zero-duration, zero-queue decision component
  • Create standardized routing mechanism (to components of different types) based on process logic (vs. distribution logic to multiple components of the same type)
  • Add a test to verify that valid routes exist to support all required paths that may be taken by different types of entities.
  • Add a test to automatically assign downstream distribution mechanisms. See if the need for this isn’t actually obviated at some point.
  • Implement mechanism to identify combinations of related components into groups (e.g., a group of tollbooths represent a single toll plaza)
  • Gather and report system-, component-, and group-level statistics
  • Add ability to stream large volumes of output information which can be post-processed and quantified via a parsing mechanism; this is necessary for advanced statistical characterization
  • Streamline the process of defining the endpoints of Path components (i.e., attach standard nodes to other components and connect to those automatically, which will greatly save on the number of x,y locations the designer must specify manually)
  • Add an edit mode that allows designers to change component properties interactively (ultimately including being able to drag them)
  • Use the new, completely external mechanism for displaying component data
  • Describe how abstract representation can be used to represent detailed layouts and interactions; include ability to flag or alarm situations which would cause conflicts in the real world that would not necessarily be captured in a model unless specifically tested for
  • Add the ability to graph outputs as part of reporting
  • Add scrolling, animated graphs to illustrate progress as simulations run
  • Include ability for users to call up component and entity status by written or graphical display interactively while runs are in progress
  • Create streamlined graphical representations of all component types; create data display for Path components
  • Add ability to display entities inside relevant non-path components
  • Abstract the current x,y locations of all elements and scale them to the screen so the user can zoom in and out
  • Employ three.js framework to render models in 3D. Also consider piping this output through the associated VR framework.
  • Improve how type is displayed for 2D entities
  • Improve how status is displayed for 2D entities
  • Modify 3D entities to reflect entity type as well as status
  • Add ability for users to interactively change things during runs
  • Add Monte Carlo mechanisms to various timing and routing events (beyond what’s already been demonstrated)
  • Allow designer to build Monte Carlo and other distributions from acquired data using standardized tools
  • Incorporate Monte Carlo dithering or explicit curve shape for distributions
  • Add ability to perform multiple runs and statistically quantify generated outputs
  • Make simulation display update at regular intervals of simulated time rather than intervals defined by individual events; also make this “speed” scalable
  • Include ability to add arbitrary graphic elements to models (labels, keys, tables, etc.)
  • Include ability to display an underlay “below” the model display (e.g., a floor plan of a modeled process)
  • Allow user to turn off display and run model in “fire-and-forget” mode where it generates results without wasting time redrawing graphics.
  • Allow user to selectively turn different display elements on and off
  • Create suite of test configurations to exercise every combination of connections and support regression testing.
  • Add ability to assign open/close schedules for components and groups
  • Add ability to introduce multiple types of entities in model with different processing rules for routing and timing
  • Add ability to combine multiple queues into a single, logical unit
  • Add ability to adapt standard base components to represent varied and specialized things (this applies mostly to Process components)
  • Add ability to save and restore model definitions (in files/XML and in databases; requires PHP/MySQL, Node.js, or other back-end capability)
  • Add ability to represent more abstract processes:
    • Reintroduce wait..until mechanism that uses the current events queue
    • Include pools of resources that are needed by different processes
    • Implement non-FIFO queues or collections that receive and forward entities based on arbitrary (user-defined) criteria
    • Include ability to accept events that happen according to fixed schedules rather than random ones (e.g., to match observed data)
    • Include the ability to change representation of entities and components to represent state changes (by color, shape, labels, flags, etc.)
    • Support input and editing of modular packages of information used to define and drive models
    • Add ability to represent BPM processes using standard BPMN notation
  • Really complex stuff
    • Develop more complex, arbitrary node-and-link representation of model, which brings up worlds of complications on its own!
    • Polish related sections of code and break them into modules which can be included on a modular basis
    • Make modules and examples distributable for use by a wider community
    • Make entities be active rather than passive, retain some intelligence in the components
    • Write documentation for modules as needed
    • Share on GitHub
    • Create dedicated project page on my website
    • Update and enhance my custom Graph component as well as the simulation framework
  • Re-implement this system in a more appropriate language like C++ or at least Java

A Simple Discrete-Event Simulation: Part 68

Last time I got the system to handle multiple types of entities and today I updated the reporting capability so it generates results for the system as a whole and for each type of entity that we might want to report on. Entities can be differentiated by many properties singly or in combination. I therefore found it necessary to structure the solution so the hand-coding required to ensure all the correct statistical accumulators get updated is concentrated into as few, easily identifiable places as possible.

The existing code that managed the various accumulators was easily modified to update an accumulator array of choice instead of just the single global one. The recordGroupStats function shown below used to write directly to the statsArray accumulator, but here we’ve modified it to write to an accumulator of choice by specifying the target accumulator array in the whichStatsArray parameter. We then created a “wrapper” function that calls the original function so it’s applied to each desired accumulator. The invocations of the original function were easy to find in the code and update to their wrapped version with a slight change of parameters.
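Since the actual listing isn’t reproduced here, this is roughly the shape of the refactor (function names follow the post; the accumulator internals and the residency-based classification are my own guesses for illustration):

```javascript
// Sketch of the accumulator refactor: recordGroupStats now takes the target
// accumulator as a parameter, and a wrapper fans one update out to the
// global accumulator plus whichever subtype accumulator matches the entity.
const statsArrays = {
  all:      { count: 0, totalTime: 0 },
  resident: { count: 0, totalTime: 0 },
  visitor:  { count: 0, totalTime: 0 }
};

function recordGroupStats(whichStatsArray, elapsed) {
  whichStatsArray.count += 1;
  whichStatsArray.totalTime += elapsed;
}

// Wrapper: classify the entity, then update every relevant accumulator.
function recordGroupStatsForEntity(entity, elapsed) {
  recordGroupStats(statsArrays.all, elapsed);
  recordGroupStats(statsArrays[entity.residency], elapsed);
}

recordGroupStatsForEntity({ residency: "visitor" }, 30);
recordGroupStatsForEntity({ residency: "resident" }, 20);
console.log(statsArrays.all.count);         // 2
console.log(statsArrays.visitor.totalTime); // 30
```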

Note that the wrapper function must refer to the relevant entity so its type and classification can be determined. This determination can be done using any combination of an entity’s properties, although this example shows a very simple division based on a single property. It’s also possible to collect the statistics in numerous different ways. In this case we broke things down by an entity’s residency but in the future we’ll also break it down by processing speed, when we segregate those entities within the simulation.

Initialization of the statistical accumulator arrays was rolled up into a function so the reset mechanism could easily make a single call instead of having to duplicate the code in two places. We can define accumulators for as many entity subtypes as we’d like.

There were a few places where I had to do something slightly more complex, like ensuring that a counter referred to by all the accumulators was only updated once, but the main work was straightforward.

The output of the report is simply repeated once for all entities and again for each entity type or subtype the programmer wanted. I added a text title for each set of accumulated data to identify its entity subtype and corrected a bug I noticed where the Max and Min values in each row weren’t being calculated correctly.

I’ve tried to build the reporting mechanism in the most general and modular way possible but there’s only so far this can be taken. The instrumentation needed to collect different kinds of data will necessarily vary based on what we might want to know. In the end all we can do is keep the code as clean as possible and understand it well.


A Simple Discrete-Event Simulation: Part 67

Today I began implementing the capability of handling different types of entities. The two main aspects of this problem are how to assign properties to an entity and how those properties affect how an entity is processed by different parts of the system.

Addressing the first aspect, I originally defined a property of entityType that was supplied as a parameter to the entity’s constructor, and that value was set to zero for all entities and thereafter ignored. This is an example of defining a custom property for each type of characteristic. This is a good option if you are hard-coding a simulation for a specific application. Another option would be to define a base class and then attach custom properties through inheritance. Still another option, which I am implementing today, involves creating a general structure to define an arbitrary number of properties for each entity. I’m doing this with the idea of creating the most abstract and general type of base framework. In this we are trading away some speed and clarity for generalizability.

I first define a global data structure that describes what all the characteristics are and what values each characteristic can take on. Conceptually it looks like this:

This can be implemented as a two-dimensional linked list, a two-dimensional array, a list of arrays, or whatever, so the depiction is rather general. In this example, and since we’re using JavaScript, we’ll use an array of arrays, where the first element of each sub-array is the name of the property (e.g., color) and the subsequent elements are the possible values of that property (e.g., “orange,” “ecru,” “chartreuse”).

This entire structure should be defined before any entity is defined. When each entity is defined, it includes a property called propertyList, which is a single-dimensioned list (array) of values for each possible characteristic. We could allow multiple values per characteristic but let’s just keep it simple, OK?

This mechanism is a bit kludgy and slow since it will often involve the use of strings, the comparison of which adds a certain amount of overhead. There are many ways to improve on the compactness and speed of the operations we’ll add (enumerated types, bitwise mapping, and so on), but this will keep things very clear and require the least amount of custom coding.

Here’s the code for the global data structure and the passive entity type. The data structure defining properties and values is defined first so that entities, when they are created, can be set up to store values for each possible property. The setProperty method is used to define an entity’s properties when it’s created and the getProperty method is used to determine the entity’s properties when it’s being processed.
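A minimal sketch of what that structure and the passive entity type might look like (the listing itself isn’t reproduced here; propertyList, setProperty, and getProperty follow the post, while the specific properties and values are placeholders):

```javascript
// Global property structure: each sub-array lists the property name first,
// then its legal values. Must be defined before any entity is created.
const entityProperties = [
  ["residency", "citizen", "LPR", "visitor"],
  ["payment",   "cash", "transponder"]
];

function Entity() {
  // One slot per defined property, initialized empty.
  this.propertyList = entityProperties.map(() => null);

  this.setProperty = function(name, value) {
    for (let i = 0; i < entityProperties.length; i++) {
      if (entityProperties[i][0] === name) {
        this.propertyList[i] = value;   // no validation yet; see the To Do notes
        return;
      }
    }
  };

  this.getProperty = function(name) {
    for (let i = 0; i < entityProperties.length; i++) {
      if (entityProperties[i][0] === name) return this.propertyList[i];
    }
    return null;
  };
}

const e = new Entity();
e.setProperty("residency", "visitor");
console.log(e.getProperty("residency")); // "visitor"
```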

Now we come to the other main aspect of this problem, which is how an entity’s characteristics affect how it is processed. In practice, in this kind of model, an entity’s type can determine its processing time (within any given component) and diversion percentage (the chance of going to each connected, downstream component when it gets forwarded from its current component). If we’re simulating a tollbooth we might choose a process time of 45 seconds for vehicles paying by cash but only 10 seconds for vehicles with automated transponders (e.g., E-ZPass). Travelers entering a country at a port of entry might be referred for secondary processing at random two percent of the time if they are a citizen of that country, or ten percent of the time if they are a non-citizen. Citizens of watchlisted countries might be referred at an even higher rate.

Let’s tackle processing times first, since that’s just a single value for now. Processing times at different Process components may be dependent on different properties. I’ve modeled 40 or 50 land border crossings and know that primary processing time might be affected by citizenship, conveyance type (i.e., car, truck, bus, pedestrian), and membership in pre-clearance programs, while the time it takes to pay tolls will depend on the method of payment. Therefore, a different method of identifying types has to be determined for each type of Process component, and the value of the process time has to be specified for each relevant combination of types. We’ll create dedicated functions to define synthetic type indices that will be used to determine the process time to use.

Let’s define some properties. The first lines define the properties and values that are possible while the function is called to assign values to each entity’s properties when it’s created. Notice also that the function assigns a color value to each entity based on its residency type (blue, purple, and yellow) and modifies it (lighter or darker) based on its processing speed. The display code has been updated so that the entities are represented as the same 5-pixel radius, red or green circle based on movement permissions we’ve had until now, but with a 3-pixel radius disk superimposed to show the type of entity. (The 3D entities haven’t yet been updated to incorporate this information.) Using lighter and darker colors is not the clearest way to differentiate easily between fast and slow entities but it demonstrates the idea. I’ve used more explicit text and graphic indicators to represent types and statuses in other simulations I’ve written, and the BorderWizard family of simulation tools I worked on used several graphical cues to represent different property values. Conveyance types (car, truck, bus, pedestrian) were represented by different shapes, residency type by different vehicle colors, toll type by hood color, and commercial vehicle type by trailer color. We’re keeping things simple for this project but the possibilities are endless.

Now we’ll create functions that generate indices based on the values of the properties. We’ll have all the entities with an E-ZPass-like credential get processed in one time, visitors without a fast credential get processed in a different time, and citizens and legal permanent residents (LPRs) without fast credentials get still a different processing time. A similar function for diversion percentages generates indices strictly based on residency type. These can be custom-generated for each type of component on a case-by-case basis.
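The synthetic-index idea can be sketched like so (the mapping follows the description above; the function names and the specific time values are illustrative, not the project’s actual code):

```javascript
// Each Process component maps the property combinations it cares about onto
// a small integer index, which then selects a process time or routing row.
function processTimeIndex(entity) {
  if (entity.payment === "transponder") return 0;  // fast credential, any residency
  if (entity.residency === "visitor")   return 1;  // visitor, no fast credential
  return 2;                                        // citizen or LPR, no credential
}

// Diversion percentages key strictly off residency type.
function diversionIndex(entity) {
  return ["citizen", "LPR", "visitor"].indexOf(entity.residency);
}

const processTimes = [10, 60, 45];   // seconds, indexed by processTimeIndex
console.log(processTimes[processTimeIndex({ payment: "transponder" })]);             // 10
console.log(processTimes[processTimeIndex({ payment: "cash", residency: "visitor" })]); // 60
console.log(diversionIndex({ residency: "LPR" }));                                   // 1
```

Inside a component, looking up a time then costs one function call and one array access, which keeps the per-event overhead small.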

So how does all this information get used? First we’ll expand the definition of the process times and routing tables we pass to the components as they’re defined. Notice that we don’t assign special traverse times to the Queue component, because that value is about how long it takes to traverse a queue (and it might be zero) and doesn’t necessarily have anything to do with the type of entity. For the routing table we’re saying that citizens don’t get diverted to secondary very often, LPRs get diverted slightly more often, and visitors get diverted quite often. (These percentages were chosen only to illustrate the effects.)
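The expanded definitions might take a shape like this. The numbers are placeholders chosen only to show the structure, and the variable names are assumptions:

```javascript
// one entry per synthetic type index produced by the process-time index function:
// fast credential, visitor without one, citizen/LPR without one
var primaryProcessTimes = [10, 45, 25];  // seconds

// fraction of each residency type diverted to secondary:
// citizens rarely, LPRs slightly more often, visitors quite often
var diversionPercentages = [0.02, 0.05, 0.25];
```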

Then we’ll have to change how the variables are accessed within the components themselves. Here’s how process time is handled within Process components; we simply use the process time element indicated by the array index we generate based on the entity’s property values, in the second and third lines from the bottom.
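The lookup itself reduces to one extra layer of indirection, which can be sketched like this (the helper name and signature are assumptions, not the project’s actual code):

```javascript
// the component picks its time using the index generated from the entity's properties
function getProcessTime(processTimes, indexFn, entity) {
  return processTimes[indexFn(entity)];
}

// usage sketch:
// var t = getProcessTime(primaryProcessTimes, primaryProcessTimeIndex, entity);
```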

Diversion percentages work the same way except they apply to all components except Arrivals, Paths, and Exits. It only matters when we’re model logic routing (option 3) and it only has an effect when an entity is available to be forwarded. Other than that we’ve simply added an extra layer of indirection. (The relevant code starts on line 37.)

So that’s all there is to it! Well, OK, it’s easy enough, but the pieces are spread around quite a lot. This first pass at adding this functionality works, as you can see by running the model, but we still need to streamline this process if possible and add some validation. For example, we should ensure that the size of each array and sub-array matches the number of possibilities that actually exist. We ultimately want to assign these values in the clearest way possible, but there is an unavoidable level of complexity that the programmer will have to deal with. If you have any suggestions for ways to simplify any of this (before I think of some), then by all means let me know. We also need to update the reporting capability to capture statistics relevant to the new subtypes we’ve defined, and we need to find a way to add type indicators to the entities in the 3D representation.
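One possible form that validation could take is a check that each table’s length matches the number of synthetic types its index function can produce (function and label names here are hypothetical):

```javascript
function validateTimeTable(table, expectedLength, label) {
  if (!Array.isArray(table) || table.length !== expectedLength) {
    throw new Error(label + ": expected " + expectedLength + " entries, found " +
                    (Array.isArray(table) ? table.length : "none"));
  }
  return true;
}

// usage sketch: validateTimeTable(primaryProcessTimes, 3, "primary process times");
```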

Posted in Simulation | Tagged , | Leave a comment

A Simple Discrete-Event Simulation: Part 66

A direct link to the standalone page is here.

I followed up today and set up the initialization of the 3D graphics so it happens only after all of the required resources have been loaded.

The process is kicked off by the function called by the body element’s onload event. This event fires when the entire page and its required resources have been successfully loaded.
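In outline, the kickoff looks something like this. The function names are assumptions (wired up as something like `<body onload="startApp()">`):

```javascript
// runs only after the page and its resources have finished loading,
// so it is safe to start the 3D setup from here
function startApp() {
  initGraphics();        // first and only place the 3D environment is touched
  enableUserControls();  // re-enable the controls the markup left disabled
}
```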

The next key step is to ensure that everything referencing the 3D graphics environment is called for the first time from within the initGraphics function. That function also starts off by getting handles to all the user controls so it can re-enable them.

Here are most of the called functions. You’ve seen the verifyModel code before so I won’t repeat it here. It’s called because it initializes the 3D elements representing the simulation components and adds them to the scene. Note that the renderer.setClearColor("#000000",1); call explicitly sets the background color of the 3D scene, so it now renders properly on my 3rd gen iPad.
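A minimal sketch of the renderer setup around that call, assuming the Three.js library is loaded and a canvas element is passed in (the wrapper function is an illustration, not the post’s actual code):

```javascript
function makeRenderer(canvas) {
  var renderer = new THREE.WebGLRenderer({ canvas: canvas });
  renderer.setClearColor("#000000", 1);  // explicit black background, fully opaque
  return renderer;
}
```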

Posted in Simulation | Tagged , | Leave a comment

A Simple Discrete-Event Simulation: Part 65

My charter for today was to figure out why the discrete-event simulation framework was not running in Opera. I had experimented with this on Thursday and Saturday of last week and traced the problem to a call in the Three.js framework file, which presumably failed because some resource was not available (the framework threw an error when it tried to refer to elements of a variable that had a value of null). The problem appeared when the internet connection was slow, which indicated that the issue arose because not all resources had yet been loaded.

When I tried to run the simulation code yesterday and today, with a livelier internet connection, it ran just fine in Opera without me having changed anything.

Nonetheless I spent some time looking into ways to delay trying to draw anything until I was sure all of the resources were loaded. There are ways to do this (see here, here, and here) but there’s a problem doing it in the code as it’s currently written. The initialization code is distributed all through the file, so it would all have to be gathered up and placed into a single function without breaking anything. The global variables would have to be declared outside of that function and then assigned values within it so they retain the required visibility. The controls allowing the user to run the simulation would have to remain disabled until the initialization is completed, whereupon they would be activated. The problem might not be so bad if I only have to worry about items having to do with initializing the canvas and the 2D and 3D context elements.
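The refactoring pattern described can be sketched like this: declare the globals outside the init function so they keep file-wide visibility, and assign them inside it once loading has completed (the names and element id are illustrative):

```javascript
// declared at file scope so the rest of the script can see them
var canvas = null;
var ctx2d = null;

function initEverything() {
  // assigned only once the page has loaded
  canvas = document.getElementById("simCanvas");
  ctx2d = canvas.getContext("2d");
  // ...the rest of the gathered-up initialization goes here...
}
```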

I spent just a few minutes looking into ways to insert a pause at the beginning of the script that would periodically check to see that the load had completed before running further. The problem is that JavaScript is single-threaded (so far, but see this exception), so there is no sleep() or wait() function that will make execution pause. The existing timing functions, setInterval() and setTimeout(), only run code when the JavaScript engine breaks to wait for user input or some other event (like the animation loop we’ve been using). Those functions may fork off an event that will run in the future, but the rest of the code keeps right on running, so that’s not the answer.
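Since there is no sleep(), the closest idiom is polling: schedule a repeating check with setInterval and run the real initialization only when loading is done. A sketch of that pattern (the function names are assumptions):

```javascript
function waitForLoad(isLoaded, onReady) {
  var timer = setInterval(function () {
    if (isLoaded()) {
      clearInterval(timer);  // stop polling once the resources are in place
      onReady();             // now it is safe to initialize and draw
    }
  }, 50);  // check every 50 ms; the rest of the script must not block meanwhile
}
```

Note that this only works because the check is deferred to the event loop; the code after the waitForLoad call still runs immediately, which is exactly the limitation described above.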

Tomorrow I’ll see what it’ll take to reorganize the code as I’ve described, or I’ll simply move on. Either way I’ll be sure to revisit the issue if I can reproduce it.

Posted in Simulation | Tagged , | Leave a comment

Reproducing A Clever Animation Product, Part 29

It took a bit of experimentation but the ultimate fix turned out to be quite simple. If the browser’s JavaScript interpreter strips off the second parameter then we can just duplicate the first one. It might not give exactly the intended result if the second parameter was not originally equal to the first but it’ll work. All parts of the demo will now work in recent versions of the Microsoft browsers as well as Firefox, Chrome, and Safari. It works in Opera as well but upon installing it and testing this blog site it turns out that the discrete-event simulation stuff does not work in Opera. I guess that will be Monday’s activity.

Here’s the updated code as described.

The reason the process didn’t create the expected animation was that, with the second parameter missing from pStart, the script had no way to interpolate between the second value of pStart and the second parameter of pEnd. The initial transform was applied correctly but subsequent ones all read something like this. IE might not do the expected thing with the second parameter, but it isn’t going to be happy trying to process a value of NaN.
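The shape of the fix can be sketched as follows: if only one scale parameter survives, duplicate it so the interpolation never sees an undefined second value and never produces NaN. (This helper is an illustration of the idea, not the project’s exact pStart/pEnd handling.)

```javascript
function normalizeScaleParams(params) {
  if (params.length === 1) {
    return [params[0], params[0]];  // reuse the x-scale for the y-scale
  }
  return params.slice();
}
```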

Posted in Tools and methods | Tagged , | Leave a comment

Reproducing A Clever Animation Product, Part 28

In keeping with this week’s theme of researching backward compatibility of the various projects I’ve worked on over the past year I decided to figure out why parts of my fast animation framework weren’t updating as expected. The short answer is that IE 11 (and presumably 10 and 9 as well) simply does not process more than one parameter passed to a scale command in a style.transform assignment. I originally saw this happen in the debugger using the transform property directly. Changing the code to use msTransform (both when hard-coded and when switched after detection) didn’t change the behavior.

Here’s the relevant code running in the IE 11 debugger, as it appears just after having executed the statement at line 957. (transformPrefix has the value of “msTransform” in this case.)

…and here is the value of the variable we have supposedly assigned:

Doing the same thing in most other browsers results in the watch values taking on the expected result with both parameters. Although the Mozilla spec says that browsers should accept an optional second parameter, IE does not do so.

The solution, therefore, is to test whether we’re running IE (9 or later?) and handle this scaling operation with only a single parameter as a special case. This means we cannot specify different scaling factors in the x- and y-directions, but hey, it is what it is and it will do something. If we really, truly need to scale the axes independently then we can create a mechanism that uses the scaleX and scaleY commands separately.
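One way to express the special case is to emit a one-parameter scale() when the ms-prefixed transform is in use. (Keying off the prefix is my assumption here; a direct IE-version check would work just as well.)

```javascript
function scaleTransform(transformPrefix, sx, sy) {
  if (transformPrefix === "msTransform") {
    return "scale(" + sx + ")";            // IE drops the second parameter anyway
  }
  return "scale(" + sx + "," + sy + ")";   // other browsers honor both factors
}
```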

I’ll implement the custom code tomorrow.

I used this code to detect which browser I was using and which prefixes I should use.

transformPrefix is the only value relevant in this code, but I’ve included the others to demonstrate that they can be used. The code I lifted from an online example (from 2012) didn’t test for the typeof result coming back as an empty string; I had to add that when I got that result in the various debuggers when testing the value “transform”. That value used to be first in the list of parameters to the GetVendorPrefix call, but I moved it to the end to ensure I captured other possible results first.
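A sketch of a GetVendorPrefix-style detector along those lines: probe a dummy element’s style object for each candidate property, treating an empty string as a valid hit, with the unprefixed name moved to the end of the candidate list. (This is my reconstruction of the approach, not the lifted code itself.)

```javascript
function GetVendorPrefix(candidates) {
  var probe = document.createElement("div");
  for (var i = 0; i < candidates.length; i++) {
    // an empty string is still "defined", which covers the case noted above
    if (typeof probe.style[candidates[i]] !== "undefined") {
      return candidates[i];
    }
  }
  return null;
}

// usage sketch, with the unprefixed name last:
// var transformPrefix = GetVendorPrefix(
//   ["msTransform", "webkitTransform", "MozTransform", "OTransform", "transform"]);
```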

Once the value of transformPrefix is set it can be used in a couple of different ways:
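Two illustrative ways to apply the detected prefix (the element and values are hypothetical): bracket notation picks the property name at runtime, while a browser that already supports the standard property can be assigned directly.

```javascript
// property name chosen at runtime from the detected prefix
function applyScale(el, transformPrefix, sx, sy) {
  el.style[transformPrefix] = "scale(" + sx + "," + sy + ")";
}

// direct assignment when the standard property is supported
function applyScaleStandard(el, sx, sy) {
  el.style.transform = "scale(" + sx + "," + sy + ")";
}
```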

It turns out that just using the transform property in most modern browsers will achieve the desired results. This must be the outcome of many years of standardization by the vendors.

Posted in Tools and methods | Tagged , , | Leave a comment

A Simple Discrete-Event Simulation: Part 64

As I’ve been developing and experimenting with JavaScript over the last year-plus I’ve been more interested in showing my experience in simulation, graphics, and analysis than I have been with cross-browser- or backward-compatibility. If it ran on my phone (iPhone 5s and now 7) and the most recent versions of Firefox and Chrome then I was good. I found a few calls in my fast animation framework project that are going to need to be enhanced to support certain browsers and I’m going to revisit those, but I found myself being particularly annoyed by the fact that my stuff was no longer displaying properly on my 3rd-generation iPad. That iPad runs iOS version 9.3.5, which is in theory the final update for that device.

iOS is currently up to version 10.2 for more modern devices and it runs everything nicely. I therefore ran my discrete-event simulation project code through Babel (using its default, built-in conversion page) to see what features from ECMAScript 6 (ES6) I might be using. This is a more recent version of the language that might not be supported by the iPad’s slightly older OS. I knew I was using default function parameters from the latest spec but wasn’t sure if I might be using anything else. I may discuss Babel’s quirks on a different day, but for now I’ll note that it didn’t appear to turn up anything else. I went through and changed all the declarations that used default parameters to more traditional forms, sent it to my server, and tried it out on the iPad.
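The kind of change involved looks like this (makeEntity is a made-up example, not one of the project’s functions):

```javascript
// ES6 version: function makeEntity(speed = 1) { return { speed: speed }; }
// traditional form that older engines understand:
function makeEntity(speed) {
  if (typeof speed === "undefined") { speed = 1; }  // emulate the default parameter
  return { speed: speed };
}
```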

It didn’t work.

The next step was to plug the iPad into my Mac and walk through the code in the remote debugger. Sure enough, I’d missed changing one instance of a default parameter. When I fixed that and uploaded it again it ran just fine on the older device. Here it is below.

A direct link to the standalone page is here.

I noticed that my 3rd-gen iPad renders the lower canvas element, which shows the 3D animation built using the Three.js framework, with a white background. I don’t think I explicitly set the color for that background; it just seems to default to black on newer devices, so I didn’t think more about it. I’ll see about setting it explicitly to black going forward.

I also noticed that the simulation and its animations run a lot slower on the older device. The iPad 3 was the first to incorporate the hi-res Retina Display but still used an older CPU, so the device was considered to be a bit strained. Most of the work of running the simulation involves generating the graphics. I’ll eventually make it so the animations can be turned off independently under user control, and also so the simulation itself can run at full speed without having to wait for calls to the animation loop. Those runs should complete pretty quickly even on older devices but, as I’ve discussed previously, that is ultimately a question of the scope and scale of a simulation.

Posted in Simulation | Tagged , , | Leave a comment