A Simple Discrete-Event Simulation: Part 83


Now that I have the chance to return to the Discrete-Event Simulation project, the next item to work on is touch events. I had already implemented the ability to scroll the 2D display horizontally and vertically using the keyboard and mouse, and now I wanted to add the ability to do this on a phone.

One thing I was worried about was how laptop touchscreens would handle having both mouse and touch events doing the same thing. My Windows laptop has a touchscreen and I found that touch events appear to activate the functions written for the corresponding mouse events. That is, the code from February 7th would allow me to scroll the 2D image by touch on my laptop even though only mouse events were handled.
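This behavior is the browser's doing: on an element with no touch handlers, browsers synthesize "compatibility" mouse events (mousedown, mousemove, mouseup, click) from touch input, which is why the mouse-only code already responded to touch. Once dedicated touch handlers exist, calling preventDefault() inside touchstart suppresses those synthetic mouse events so a gesture isn't processed twice. A minimal sketch of the registration pattern; `registerTouchScrolling` and the callback are my own placeholder names, not code from the project:

```javascript
// Sketch: register a touchstart handler whose preventDefault() stops the
// browser from also firing compatibility mouse events for the gesture.
// All names here are illustrative placeholders.
function registerTouchScrolling(el, onStart) {
  // { passive: false } matters in modern browsers: touch listeners on
  // many elements default to passive, and a passive listener's
  // preventDefault() is ignored.
  el.addEventListener('touchstart', (e) => {
    e.preventDefault(); // no synthetic mousedown/click for this gesture
    onStart(e.touches[0].clientX, e.touches[0].clientY);
  }, { passive: false });
}
```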

I started looking at the way jQuery handles touch events but eventually found the direct documentation here. I prefer knowing how to do things directly and from first principles rather than relying on a framework so I was pleased to find such a clear guide.

I copied the section of code that implemented the mouse action handlers and made minor adjustments to some of the functions to reference touch events and data values in place of mouse events and data values. The code for the mouse and touch handlers is shown below. I use the same global state variables and the same handler functions renamed by appending a “T” to them, for “touch.”
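To illustrate the pattern described, here is a minimal sketch of what mirroring mouse handlers into touch handlers can look like: shared global state, shared drag logic, and "T"-suffixed touch wrappers that read touches[0]. All identifiers are my own placeholders, not the actual code from the project:

```javascript
// Shared global state, as in the original mouse-only version.
let dragging = false;
let lastX = 0, lastY = 0;     // pointer position at the previous event
let offsetX = 0, offsetY = 0; // current scroll offset of the 2D view

// Shared drag logic used by both input types.
function handleDown(x, y) { dragging = true; lastX = x; lastY = y; }
function handleMove(x, y) {
  if (!dragging) return;
  // Relative moves: only the deltas matter, so the absolute coordinate
  // origin reported by the device is irrelevant.
  offsetX += x - lastX;
  offsetY += y - lastY;
  lastX = x; lastY = y;
}
function handleUp() { dragging = false; }

// --- Mouse handlers ---
function mouseDown(e) { handleDown(e.clientX, e.clientY); }
function mouseMove(e) { handleMove(e.clientX, e.clientY); }
function mouseUp(e)   { handleUp(); }

// --- Touch handlers: same logic, "T" suffix, first touch point only ---
function mouseDownT(e) {
  e.preventDefault(); // suppress the browser's synthetic mouse events
  const t = e.touches[0];
  handleDown(t.clientX, t.clientY);
}
function mouseMoveT(e) {
  const t = e.touches[0];
  handleMove(t.clientX, t.clientY);
}
function mouseUpT(e) { handleUp(); }

// Registration on the canvas would then pair each event with its handler,
// e.g. canvas.addEventListener('touchstart', mouseDownT, { passive: false });
```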

I loaded the whole thing to my server, tried it out, and what do you know? It worked perfectly on the first try. I’m kind of stunned.

Reusing as much of the original logic as possible is what made this work. That said, there are a few things we need to understand.

I explicitly do not try to read touch points beyond the first. That’s why I refer directly to touches[0] all the time. If the user wants to do something more complex, it’ll have to wait.

I relied on the fact that the coordinate system for touch events operates on the same orientation and scale as the one for mouse events. I don’t see any reason why this wouldn’t be the case, but you never know for sure until you verify it for yourself. I also base everything on relative moves, so as long as the different systems use the same scale and orientation everything should work as expected, regardless of the absolute coordinate values reported by the device.

This functionality works even if the device is rotated, which is nice. I didn’t have to do anything special to reinterpret the coordinates; the OS does it automatically.

This functionality works whether the page is being viewed standalone or embedded as an iframe, which also makes things easy.

The touchcancel function doesn’t seem to ever get invoked. The touch drag has to be initiated within the proper element (the canvas) but can continue over the entire touchscreen of the device. If the finger goes off the edge of the touchable area, it seems just to invoke the touchend function. I’ll have to learn more about this.
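The behavior observed above matches the Touch Events specification: once a touch starts on an element, subsequent touchmove and touchend events for that touch keep targeting the same element even when the finger leaves it (implicit capture), so leaving the canvas still ends in touchend. touchcancel is reserved for interruptions such as the browser or OS taking over the gesture. A small diagnostic sketch for watching the lifecycle directly; `logTouchLifecycle` is a hypothetical helper, not part of the project:

```javascript
// Diagnostic sketch: attach loggers for the full touch lifecycle to a
// given element, to observe when (if ever) touchcancel fires.
function logTouchLifecycle(el) {
  for (const type of ['touchstart', 'touchmove', 'touchend', 'touchcancel']) {
    el.addEventListener(type, (e) => {
      // e.touches lists the touch points still on the screen.
      console.log(type, 'active touches:', e.touches.length);
    });
  }
}
```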

I still need to learn more about how events are propagated and consumed but that should come with continuing work. In the meantime I’m happy something was easy for once!
