A Simple Discrete-Event Simulation: Part 87

Today I connected the display of the DisplayGroup objects to the component selection event. That is, if you click on a component it brings up that component’s DisplayGroup object. This process involves several steps.

First, the location on the canvas has to be sensed, and that has been discussed previously. However, I neglected to test the selection functionality when the scene had been dragged to a new location. That was corrected by subtracting the offsets globalBaseX and globalBaseY from the actual click location, in the same manner that DisplayElement x- and y-locations are modified.

Here’s the updated event handling code, which also shows the new way that mouse click events are handled per yesterday’s discussion.

We have to go back to defining the location and properties of each component’s DisplayGroup object, as in this example, which is the only one so far defined. It’s the last line here:

Drawing the DisplayGroup object is now more complicated, since I’ve chosen to include a pointer from the DisplayGroup’s frame to the edge of the DisplayElement’s frame. The pointer is drawn along the line between the centers of the two frames.

The intersection function determines whether two line segments intersect. Note that I said “segments” here; any two lines that aren’t parallel will ultimately intersect, so the question is whether the intersection falls within the segment(s) of interest.

This image shows how the construction lines are used to define the pointer. If there is no intersection across the frame of either or both elements then no pointer is drawn.

The following steps are taken:

  1. The center points of the DisplayGroup and DisplayElement objects are determined (see the central purple line)
  2. The intersection of the center line and the outer frame of the DisplayElement is calculated (points ix and iy)
  3. The slope of the center line is determined
  4. A line perpendicular to the center line and passing through the center point of the DisplayGroup object is constructed
  5. Points are defined twelve pixels or units from the center point in each direction along the perpendicular line (marked by the blue and orange crosses, points ax1,ay1 and ax2,ay2)
  6. The intersection of the two new lines with the border of the DisplayGroup are determined (points ix1,iy1 and ix2,iy2)
  7. A triangle from points ix,iy, ix1,iy1, and ix2,iy2 is drawn and filled in black
  8. The two outer lines are drawn in the desired border color
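Steps 3 through 5 can be sketched roughly as follows. This is an illustration with hypothetical names, not the project’s actual code, and instead of slope arithmetic it rotates the center-line direction 90 degrees to get the perpendicular, which sidesteps the vertical-line special case:

```javascript
// Sketch of steps 3-5 (hypothetical names, not the project's code): build
// the two points 12 units from the DisplayGroup's center along the line
// perpendicular to the center line.
function perpendicularOffsets(gx, gy, ex, ey, dist) {
  const dx = ex - gx;                         // center-line direction from the
  const dy = ey - gy;                         // DisplayGroup to the DisplayElement
  const len = Math.hypot(dx, dy);
  const ux = -dy / len;                       // unit vector perpendicular
  const uy = dx / len;                        // to the center line
  return {
    ax1: gx + dist * ux, ay1: gy + dist * uy, // one offset point (blue cross)
    ax2: gx - dist * ux, ay2: gy - dist * uy, // the other (orange cross)
  };
}
```

The construction lines through these offset points are then intersected with the DisplayGroup’s border to produce (ix1,iy1) and (ix2,iy2).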

This method handles pointers that cut across a corner as well as any single edge segment. It also doesn’t obscure any text if the text is drawn after everything else. I was happy when I realized I could take this shortcut and not have to worry about corners as a special case. The code for drawing the construction lines shown in the image is commented out above, but left in place so you can follow the action.

The mathematical methods for finding the intersection points are correct, but the digital implementation has the infuriating habit of not finding some intersections that actually happen. This is simply because the calculations aren’t accurate enough to always pass the hit tests. Given that we’re using 64-bit reals I find that hard to understand, but it is what it is. I’ve traced through a couple of examples and have seen it happen. I have to figure out a way around this problem but have not done so yet. In the meantime I’ve tested to ensure that the process works from all angles, including when the center line is perfectly vertical, with the DisplayGroup object both above and below the DisplayElement target.
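One common workaround for near-miss failures like this is to loosen the hit test with a small tolerance. The sketch below is illustrative only (the function and epsilon value are assumptions, not this project’s code); it accepts intersection parameters that land just outside the [0, 1] segment range:

```javascript
// Tolerance-based segment intersection sketch (hypothetical; not the
// project's actual routine). t and u are the parametric positions of the
// intersection along each segment; values slightly outside [0, 1] are
// accepted to absorb 64-bit floating-point round-off.
const EPS = 1e-9;

function segmentIntersect(x1, y1, x2, y2, x3, y3, x4, y4) {
  const denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3);
  if (Math.abs(denom) < EPS) return null;   // parallel or nearly parallel
  const t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom;
  const u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom;
  if (t < -EPS || t > 1 + EPS || u < -EPS || u > 1 + EPS) return null;
  return { x: x1 + t * (x2 - x1), y: y1 + t * (y2 - y1) };
}
```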

Odds and ends:

For now the described process only works with DisplayGroup objects that are not rotated (the process of finding intersection with the DisplayElement frame is general and would handle a rotated target object) but I’m not sure DisplayGroup objects would ever have to be rotated.

For now the described process only “works” when the target DisplayElement is a rectangular component. It would have to be modified for components that are paths, though that process should be relatively straightforward, since the center point of a line segment would be easy to define.

I had to add a parent value to the DisplayGroup object that points back to the simulation component it represents. This was needed so the code in the DisplayGroup object could trace to its parent component object and then to its DisplayElement (graphic) object.

If a component has been selected and highlighted then, for now, its DisplayGroup is drawn as well if it is defined. In the future this is going to require a separate flag since the selection process is going to lead to a menu to allow the user to take different actions or drive other actions based on varying contexts (e.g., run mode, edit mode, etc.).

This method could (and should and will) be generalized to display other types of information, with the primary example being a real-time scrolling graph of one or more state values.

The DisplayGroup objects are drawn after everything else in the 2D scene so they are always on top. I will expand the user hit testing to scan for these objects first when I implement code to allow the user to move, modify, or hide them.

Posted in Software | Tagged , , | Leave a comment

A Simple Discrete-Event Simulation: Part 86

I was working to add the next behavior when I discovered that things that used to work no longer did. After a bit of digging I realized that the problem was trying to handle click events separately from mouseup and mousedown events. It turns out that it isn’t really possible to separate them out; a click event also fires off a mousedown and a mouseup event. I therefore had to handle the click behavior in an integrated way as I do when handling touch events. The way to differentiate a click from a drag is to ensure the mouseup event occurs within a specified time window and that not too much movement has occurred.

I corrected all of that, as you’ll see directly in tomorrow’s code. It uses ideas I had stubbed in previously. I also realized I was using the wrong time stamping method and changed all of the relevant calls to use the correct one.

Posted in Software | Tagged , , , | Leave a comment

A Simple Discrete-Event Simulation: Part 85

Direct link for mobile devices.

Today I made the selection action work using touch events in addition to the mouse click event. The touch mechanism in JavaScript for mobile browsers does not currently have a click counterpart, so we have to simulate the action using the events we do have. I’ve done it in the most naive way, by simply activating the desired event if a touch event ends within a sufficiently short period of time. I’ve included the code below, with comments for what’s been added. This implementation was pretty mindless, consisting of about eight lines.
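A minimal sketch of that simulation, with the tap test factored into a pure function. The time window, distance tolerance, and handler names here are assumptions for illustration, not the project’s actual values:

```javascript
// Tap-as-click sketch (illustrative values and names, not the project's).
const TAP_MS = 300;      // assumed maximum duration of a "tap"
const TAP_DIST = 10;     // assumed maximum movement in pixels

function isTap(elapsedMs, dx, dy) {
  return elapsedMs < TAP_MS && Math.hypot(dx, dy) < TAP_DIST;
}

// Hypothetical wiring (canvas and handleSelection are placeholders):
// canvas.addEventListener("touchend", (e) => {
//   const t = e.changedTouches[0];   // touches[] is empty by touchend
//   if (isTap(performance.now() - startTime,
//             t.clientX - startX, t.clientY - startY)) {
//     handleSelection(t.clientX, t.clientY);   // same path as a mouse click
//   }
// });
```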

Interestingly, at least on my iPhone, this process continues to work without adjustment when the user zooms the screen in and out, indicating that the OS is handling that part of the scaling process natively. When zooming of the 2D image is handled separately, however (and again, I haven’t gotten to this yet), the situation is going to get more complex and will need to be handled explicitly.

I’ve left stubs in for checking to see that we haven’t dragged anything too far, and have also stubbed in the ability to not start executing a drag until it’s clear the user really means it. This could all be tuned more over time but I imagine that a number of capabilities are going to be added over time and I want to get a better feel for what the fuller list might include before doing the final polish on any one feature.

Posted in Software | Tagged , , , , | Leave a comment

A Simple Discrete-Event Simulation: Part 84

Today I implemented the ability to select graphic elements using a mouse click. This involves sensing the mouse click event and its location, variations of which we’ve already visited, and identifying the graphic element on that part of the screen. For the first pass the code only allows the user to select or deselect one or more items, and the only effect is to change an item’s color scheme. The standard neutral, ready, and waiting colors for components and paths are yellow (#FFFF22), green (#22FF22), and red (#FF2222), and selecting an element simply changes the blue values from 22 to FF (e.g., #FFFFFF, #22FFFF, #FF22FF). Deselecting an item, which is only selecting an item that has already been selected, returns the item to its original state and colors.
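The blue-channel swap can be sketched like this (the function names are illustrative, not the project’s):

```javascript
// Selection color sketch (illustrative names): swap the trailing blue
// channel between 22 (unselected) and FF (selected).
function highlightColor(hex) {
  return hex.slice(0, 5) + "FF";   // "#FFFF22" -> "#FFFFFF", etc.
}

function unhighlightColor(hex) {
  return hex.slice(0, 5) + "22";   // restore the original blue value
}
```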

The first thing we have to do is sense the (left) mouse click and the location:

Now for the actual scan of components. This code ignores components that do not have graphic representations and stops searching when it finds the first match. In cases where a selection could plausibly select more than one item it assumes that the user will change the display order of the items to make a different item come first in the component list. Visio and other drawing programs do this with the “Send to Back” and “Bring to Front” operations, neither of which have been implemented in this code (yet).

The key is obviously how the components’ graphic representations perform the required hit testing:

There’s a lot to say about this code. The main test is the two lines near the middle and everything else is a special case of sorts.

Note that this checks against rectangles defined both up and down and left and right, and not just down and to the right. The simpler test might work for most rectangular area components but would fail for Path components that run in directions other than down and right. This test performs a naive check to see if the click falls within a rectangular area (that has not been rotated). If the component is not a Path then the test is true and the work is done. If the component is a Path then the process is a little more complicated. The code first figures out the slope-intercept formula for the path in the form:

    y = mx + b

where:

    y = y coordinate, dependent variable
    m = slope of line, (y2 – y1) / (x2 – x1)
    x = x coordinate, independent variable
    b = y-intercept of line (y value when x = 0)

The code then constructs another line perpendicular to the first line, which has a slope of -1/m. Here it’s important to ensure that we don’t run this code on paths that are vertical, or have infinite slope, and you can see that lines that are close to being horizontal or vertical are handled separately.

I then substitute the second equation into the first, eliminating the y and ending up with

    m1x + b1 = m2x + b2

    m1x – m2x = b2 – b1

    x(m1 – m2) = b2 – b1

and

    x = (b2 – b1) / (m1 – m2)

Once we have the common x-coordinate we solve for the common y, and then we can use the Pythagorean Theorem to determine the shortest straight line distance from the click location to the line of interest. If the click point isn’t close enough (the setting is currently 5 pixels) then the routine returns false and the search through the component list continues.
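Putting the derivation together, the distance check might look like this sketch (hypothetical names, not the project’s code; it assumes the near-horizontal and near-vertical cases have already been routed elsewhere):

```javascript
// Point-to-line distance sketch following the derivation above
// (illustrative; assumes the path is neither vertical nor horizontal).
function distanceToLine(px, py, x1, y1, x2, y2) {
  const m1 = (y2 - y1) / (x2 - x1);   // slope of the path
  const b1 = y1 - m1 * x1;            // y-intercept of the path
  const m2 = -1 / m1;                 // slope of the perpendicular
  const b2 = py - m2 * px;            // perpendicular through the click point
  const ix = (b2 - b1) / (m1 - m2);   // common x, as derived above
  const iy = m1 * ix + b1;            // common y
  return Math.hypot(px - ix, py - iy);   // Pythagorean distance
}

// A hit might then be: distanceToLine(clickX, clickY, x1, y1, x2, y2) <= 5
```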

If lines are nearly horizontal or vertical the code simplifies things a bit. It basically tests to see whether the click point is within five pixels of the midpoint of a horizontal or vertical line that is close to perpendicular to the Path of interest. This is not limited to purely vertical or horizontal lines in order to ensure that the hit testing area is a bit larger than it would be if we were only to test within the rectangle defined by the endpoints. A purely horizontal or vertical line would yield a test rectangle of zero width, and lines that are nearly horizontal or vertical would be almost as bad. Who wants to limit the user in such a way?

Finally, the innode and outnode subcomponents of Bags are handled down at the bottom.

A few improvements are needed here. The testing for the size of the innode and outnode subcomponents of Bags needs to be parameterized. The test for rectangular components needs to be able to test for elements that have been rotated. I’ve written code for this previously and there are a couple of ways to do it. The entire process needs to be modified to handle different levels of zooming and magnification. I’ll be working through all of these processes over time.

The last bit of code has to do with highlighting and unhighlighting the components’ graphic representations, which is pretty straightforward:

The value for this.highlighted is initialized to false earlier in the code. This can all be generalized further.

Once an item is selected we can start to interact with it in various ways. We’ll do that and more going forward. In the meantime, feel free to click on the different components and see what it takes to get each of them highlighted and unhighlighted. This all works whether the simulation is running or not, for now. I chose a fairly simple set of color changes and something with a higher contrast would probably be better, but what I’ve done serves the purposes for demonstration.

I have not yet implemented a way to select moving entities, but that process would be even simpler than selecting components, at least for circular ones. More complicated entity shapes and orientations are obviously possible.

Posted in Software | Tagged , , , | Leave a comment

A Simple Discrete-Event Simulation: Part 83

Direct link for mobile devices.

Now that I have the chance to return to the Discrete-Event Simulation project the next item to work on is touch events. I had already implemented the ability to scroll the 2D display horizontally and vertically using the keyboard and mouse, and now I wanted to add the ability to do this on a phone.

One thing I was worried about was how laptop touchscreens would handle having both mouse and touch events doing the same thing. My Windows laptop has a touchscreen and I found that touch events appear to activate the functions written for the corresponding mouse events. That is, the code from February 7th would allow me to scroll the 2D image by touch on my laptop even though only mouse events were handled.

I started looking at the way jQuery handles touch events but eventually found the direct documentation here. I prefer knowing how to do things directly and from first principles rather than relying on a framework so I was pleased to find such a clear guide.

I copied the section of code that implemented the mouse action handlers and made minor adjustments to some of the functions to reference touch events and data values in place of mouse events and data values. The code for the mouse and touch handlers is shown below. I use the same global state variables and the same handler functions renamed by appending a “T” to them, for “touch.”
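The renaming pattern looks roughly like this (handler and variable names are illustrative, not the actual code):

```javascript
// Sketch of the "T"-suffix pattern (illustrative names): the touch handler
// unpacks touches[0] and defers to the same logic as the mouse handler.
let dragStartX = 0, dragStartY = 0, dragging = false;   // shared global state

function handleMouseDown(x, y) {
  dragStartX = x;
  dragStartY = y;
  dragging = true;
}

function handleMouseDownT(e) {       // "T" for the touch version
  const t = e.touches[0];            // only the first touch point is read
  handleMouseDown(t.clientX, t.clientY);
  e.preventDefault();                // keep the browser from scrolling
}
```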

I loaded the whole thing to my server, tried it out, and what do you know? It worked perfectly on the first try. I’m kind of stunned.

Reusing as much of the original logic as possible is what made this work. That said, there are a few things we need to understand.

I explicitly do not try to read touch points beyond the first. That’s why I refer directly to touches[0] all the time. If the user wants to do something more complex it’ll have to wait.

I relied on the fact that the coordinate system for touch events operates on the same orientation and scale as the one for mouse events. I don’t see any reason why this wouldn’t be the case, but you never know for sure until you verify it for yourself. I also base everything on relative moves, so as long as the different systems use the same scale and orientation everything should work as expected, regardless of the absolute coordinate values reported by the device.

This functionality works even if the device is rotated, which is nice. I didn’t have to do anything special to reinterpret the coordinates, the OS does it automatically.

This functionality works whether the page is being viewed standalone or embedded as an iframe, which also makes things easy.

The touchcancel function doesn’t seem to ever get invoked. The touch drag event has to be initiated within the proper element (the canvas) but can continue over the entire touchscreen of the device. If the finger goes off the edge of the touchable area it seems just to involve the touchend function. I’ll have to learn more about this.

I still need to learn more about how events are propagated and consumed but that should come with continuing work. In the meantime I’m happy something was easy for once!

Posted in Software | Tagged , , , | Leave a comment

Different Options for Empty Web Links

While working on yesterday’s post I noticed, apparently after far too long, that the Intro link on my site’s main page failed to run the animation when clicked. More accurately, it would run a frame or two of it and then stop. I dug around in the debugger, looked at various browser logs (which led me to fixes for a couple of other minor items), and generally drove myself batty until I traced the problem back to the source.

The original form of the link worked like this, in keeping with the format of the other links on the top navigation bar:

The issue involves firing the introClick() function, which makes the intro_div element visible and kicks off the animation. When the animation finishes it sets the display mode from “block” back to “none,” rendering the div invisible. The problem was that the animation would run a frame or three and then the whole thing would disappear. I couldn’t step through it or find anything in the log. This was all the more frustrating because it doesn’t happen when running the page from local disk, it only happens when running from a server. In any case I kept digging until I realized that the animation was being cut off because the entire page was getting reloaded. I then looked around to see what might be causing that issue. That’s when I spotted the likely culprit.

I can’t remember the exact chain of ideas that came to mind, but I know from developing the main static pages of my site that clicking on empty links causes the page to reload. For simple pages this isn’t a big deal, and it wasn’t an issue there because the links were all eventually populated. If you look at the bottom item in the above HTML, though, what do you see? That’s right, an empty link. Clicking on the “Intro” item at the right end of the menu on the main page kicked off a page reload request. It might be delayed long enough to allow the animation to run a handful of frames, but ultimately the reload was going to squash everything.

I then looked for alternative solutions. Making the text a span resulted in something that couldn’t be clicked on and needed extra CSS to make the formatting consistent. I tried a pure anchor element with custom CSS and without an href property, which worked, but the behavior of the mouse cursor was inconsistent (a text selector vs. the standard selection pointer). I thought about using a button, but making the format consistent would have been painful. I did some light reading about the pros and cons of different approaches and found none of them to be ideal, but the least bad option is shown below.

This solution leverages the simplicity and consistency of the anchor and link approach while allowing keyboard activation support and not throwing too big a monkey wrench into accessibility support. The approach works in a way akin to how astronomers describe comets, or at least their tails: “A comet is the closest thing you can have to nothing and still be something.” Using the javascript:void(0); statement for the link gives you all the benefits of a link while suppressing all of the undesirable features of an empty link or a linkless anchor. It is a something that is very close to being nothing, and that works for me.
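In concrete terms the link ends up looking something like this (the surrounding markup is illustrative; introClick() is the handler described above):

```html
<!-- The href keeps this a real, keyboard-focusable link, but void(0)
     evaluates to undefined, so no navigation or page reload occurs. -->
<a href="javascript:void(0);" onclick="introClick();">Intro</a>
```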

I’ve seen this in use over time on sites where links should work but don’t. I’m guessing that in many cases the site’s maintainers were actively working on something in the background. It’s also possible for the value to be generated as a default placeholder if some kind of automated content is not being generated correctly. I will have a much better understanding of this going forward.

Posted in Software | Tagged , , , | Leave a comment

A Bit About Website Navigation in JavaScript

I had the idea that I wanted to suppress the introductory animation on my site’s main page when navigating back to it from other pages within the site (i.e., if coming from another page in the rpchurchill.com domain). This is on the not-always-applicable theory that many surfers will arrive at the main page first and navigate from there. Seeing the animation in that context would be OK but seeing it every time they went back would get old pretty quickly. Rather than inflicting cookies on people I looked to see if there was a way in JavaScript to find out what site the user came from.

It turns out that the document object in the browser includes a value called referrer that provides the desired information under some circumstances. Click on the link below to see the excitement. Be sure to click through to the following link and go back and forth a few times. When the excitement wears off come back here.

Click here.

You may have noticed that the location of the page the user arrived from includes different information. In one case it includes the “index.html” text at the end of the source link and in the other case it does not. That’s because it reports the information provided by the link in the source page.

In both cases the reported text included the fully qualified path starting with https://rpchurchill.com/demo/... even though the href="" value only included the relative directory information, as shown in the listings for the two sample pages below.

The first file is found at https://www.rpchurchill.com/demo/random_page/index.html.

The second file is found at https://www.rpchurchill.com/demo/random_page/page_01/index.html. It’s exactly the same except for a couple of lines in the middle.

Interestingly, if the url is typed directly into the address bar the value returns an empty string. This is true if even part of the url is typed in. For example, if you go to the link above and then add /page_01 by hand the source page will be reported as an empty string.

I do a bit of testing on the return value to make sure it’s defined and whether or not it’s blank so the code knows what’s going on.
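That test can be sketched as follows (the function name is illustrative; the hostname check is one reasonable interpretation, not the site’s actual code):

```javascript
// Referrer test sketch (illustrative): decide whether the visitor arrived
// from within the site. An empty or missing referrer (typed-in URL, or an
// external site that suppresses it) counts as "not from this site."
function cameFromSameSite(referrer, siteHost) {
  if (typeof referrer !== "string" || referrer === "") return false;
  try {
    return new URL(referrer).hostname.endsWith(siteHost);
  } catch (e) {
    return false;                    // malformed referrer string
  }
}

// e.g., suppress the intro animation on internal navigation:
// if (cameFromSameSite(document.referrer, "rpchurchill.com")) { /* skip it */ }
```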

Last but not least, if you want to try this from an external website go here. The source there is reported as an empty string, which is confusing. This behavior seems consistent(ly inconsistent) across several browsers.

References here, here, and here.

Posted in Software | Tagged , , | Leave a comment

Website Update Complete

The static pages are finally complete and QCed. That’s all I have energy for. Have a great weekend!

Posted in Software | Tagged , | Leave a comment

Data Collection: Minimum Sample Size

While searching my hard drive for images representative of data collection I stumbled upon something I’ve also had in mind to look for, which is guidance on how to determine the minimum required sample size. The formulas are usually variations on this equation:

    n >= (z • σ / MOE)²

where:

    n = minimum sample size
    z = z-score (e.g., 1.96 for 95% confidence interval)
    σ = sample standard deviation
    MOE = margin of error (e.g., difference between sample and population means in units of whatever you’re measuring)

The initial sample population should be at least 30. In theory this method only applies to data that are normally distributed, which a lot of data aren’t. Process times for many activities tend to be skewed right, where most of the values cluster at the low end with a long tail of higher values.
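The formula translates directly into code; this sketch (illustrative name, not yet part of the framework) rounds up, since a sample size must be a whole number:

```javascript
// Minimum sample size sketch: n >= (z * sigma / MOE)^2, rounded up.
function minimumSampleSize(z, sigma, moe) {
  return Math.ceil(Math.pow((z * sigma) / moe, 2));
}

// e.g., 95% confidence (z = 1.96), sigma = 12, MOE = 3:
// minimumSampleSize(1.96, 12, 3) -> 62
```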

Other forms of this calculation are easily located via search.

A form of this calculation will be added to the simulation framework as the data collection capabilities are implemented. The data collection interface will provide ongoing estimates of required sample size as the sample data are collected, and will let the user know when the minimum number of data points is reached.

Posted in Tools and methods | Tagged , , | Leave a comment

GitHub — Ya Gotta Start Somewhere

I’ve had brief encounters with GitHub over time but not enough to gain any muscle memory. As I’m finalizing the last bits of static content for my website, learning how to work with GitHub makes a good side project.

https://github.com/rpchurchill

I’ve poked around in a few repositories to get the feel of what they’re doing and how it all fits together, I’ve worked through the course at Codecademy, and I’ve used it a bit from the command line, but that’s no substitute for using it consistently over time in different modes.

The first new repository I created, beyond the ones that already existed from the aforementioned encounters, is for the discrete-event simulation project. As it stands now this project is just one big file with all of the flotsam and jetsam in it from my stream-of-consciousness explorations, and a couple of supporting framework files I’ll never touch. My workflow has been to create a new version of it with every update, appended with the date in the form discrete-event-sim_yyyymmdd.html, upload it to my site, and make it available in an iframe embedded in that day’s WordPress post. If I want to make it easy to view as a standalone page on a mobile device I include a separate link to it.

This workflow has been pretty effective for me. I have all of the dated versions of the code in a directory on my computer and on my web host, so I know what’s going on, but in the longer run the main file will have to get cleaned up, divided into modules, and made available in a form amenable to being worked on by more than one person.

This part of the effort is on the project’s To Do list anyway, so I’m killing multiple birds with this stone. Over time I’ll add repositories for other projects I have worked on or am working on, and we’ll see where it goes.

Posted in Tools and methods | Tagged | Leave a comment