HTML5 Canvas Issue: Line Caps

I learned basic computer graphics in college and wrote my first programs on an original IBM PC with a Hercules graphics card. I remember that if you specified a start and an end pixel, both of those pixels, along with those in between, would be drawn by default. This was entirely a function of the software drivers we used (or those we wrote for class).

I later spent many years working with graphics in several of the Borland language products, including Turbo Pascal (for DOS) and Delphi and C++ Builder (for Windows). I don’t think it was true of Turbo Pascal/DOS, but the Windows-based products and their VCL class libraries had a component called TCanvas that had a particular way of drawing lines. It would draw all of the pixels in the line except the final one. It assumed that if you were drawing a series of lines, especially for a closed shape, then the next line would always draw the missing pixel. If you really, really needed to draw that last pixel you had to do it yourself. This process is illustrated as the red line is drawn from 1,1 to 1,10, the blue line from 1,10 to 10,10, and finally the green pixel at 10,10. If the lines had a thickness greater than 1.0 things worked a bit differently, but I almost always worked with one-pixel lines for various reasons.



The HTML5 canvas object uses the lineCap property to accomplish the same thing. The values are “butt”, which ends the line at the points specified, “round”, which draws a round end centered on each endpoint (of the same diameter as the thickness of the line), and “square”, which extends each end of the line by drawing an additional square whose dimensions match the line thickness. The square is also centered on the end of the line, so in truth the line is extended by a half-square. If the thickness of the line is 1.0, then setting the value to “round” has the same effect as setting it to “square”. You can find a nice explanation of all this here.
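
A minimal sketch that draws one thick line per cap style so the differences are visible (the canvas id and coordinates are arbitrary):

    // Assumes a <canvas id="canvas"> element exists on the page.
    const ctx = document.getElementById('canvas').getContext('2d');
    ctx.lineWidth = 10;
    ['butt', 'round', 'square'].forEach((cap, i) => {
      ctx.lineCap = cap;               // 'butt' is the default
      ctx.beginPath();
      ctx.moveTo(30, 30 + i * 30);
      ctx.lineTo(170, 30 + i * 30);
      ctx.stroke();                    // 'round' and 'square' reach past the endpoints
    });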

I noticed that some of the lines in my display looked a bit spotty at times, so yesterday I changed the lineCap property from the default value of “butt”, which I had used in the original demo, to “square”. In order to see the effect of changing the lineCap property I added an extra button at the bottom right of the display that toggles between the three possible states. The effect is most noticeable when the offset is set to zero and the furnace is somewhat rotated. I’ve added screen captures that illustrate the effect below.

“butt”
“round”
“square”

The point is that graphics systems, like every other kind of system, have a number of variations and details you need to keep in mind.


HTML5 Canvas Issue: Half-Pixel Offset

One of the issues I encountered when working with the HTML5 canvas element was pixel alignment. I made sure that the only size definition was given in the HTML declaration for the element itself, as in the following sketch (the id and dimensions here are placeholders, not the demo’s actual values):
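
    <!-- the size comes only from the width/height attributes; no CSS scaling -->
    <canvas id="canvas" width="600" height="400"></canvas>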

…which means that the canvas pixels will always line up with physical screen pixels. The issue is how the actual coordinates are specified when drawing on the canvas. Every other screen coordinate system I’ve used drew nice lines on even pixel boundaries; that is, you specify the pixel locations as whole numbers. Usually you translate from your internal, floating-point coordinate space to the integer pixel space by adding one half and truncating. When I did that, however, the lines ended up getting drawn across two pixels. When I added an extra half-pixel offset the lines were drawn across only a single pixel. This effect is most noticeable when the lines are horizontal or vertical. The following diagram from the Mozilla canvas tutorial illustrates the issue.

I added a button which toggles the pixel offset between 0.0 and 0.5 and redraws the figure to show the effect.
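
A minimal sketch of how the offset gets applied (the names and the canvas id are illustrative):

    // Canvas coordinates address the edges between pixels, so a one-pixel
    // vertical line drawn at x = 50 straddles two physical pixel columns.
    // Adding half a pixel centers the stroke on a single column.
    const ctx = document.getElementById('canvas').getContext('2d');
    let offset = 0.5;                  // the button toggles this to 0.0

    function drawVertical(x, y1, y2) {
      ctx.beginPath();
      ctx.moveTo(x + offset, y1);      // crisp at 0.5, smeared at 0.0
      ctx.lineTo(x + offset, y2);
      ctx.stroke();
    }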


Javascript HTML5 Canvas Pour Demo

When Bricmont was purchased by Inductotherm in 1996 I was charged with maintaining the product Inductotherm used to control its induction melting furnaces. During a lull some time later I worked up a little demo program in Delphi that could display the state of such a furnace in 3D. I thought it would be a good exercise to replicate that program using the HTML5 canvas element with graphics generated by Javascript code.

Here is the original program in action. It didn’t include the extra rotations.

The Charge button increments the mass contained in the furnace by five percent. One hundred percent mass is defined here as the mass of a volume of molten steel that fills the entire furnace. The density of the charge material is assumed to be one-third that of the molten material. Charge can only be added to the furnace when it is fully upright, when the Pour Angle is ninety degrees. Each increment of charge is capped so the total volume of material in the furnace is never greater than one hundred percent. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Melt button increments the percentage of the mass in the furnace that is melted by five percent. The volume and mass of the melted and not-yet-melted materials are recalculated. If there is no unmelted material in the furnace the button has no effect. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Tilt Down button tilts the furnace down in increments of five degrees. If the molten steel in the furnace would spill over the top lip of the furnace the volume and mass of material will be recalculated. This calculation is obviously idealized because material pouring over the lip would have some thickness. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Tilt Up button tilts the furnace back up towards vertical. The level of any molten material remaining in the furnace is recalculated.

The remaining buttons are used to rotate the furnace around the X-, Y-, and Z-axes in increments of five degrees. The rotation is always initialized from the standard pouring view. I originally included this capability, along with the ability to perform translation and scaling operations, just so I could interactively locate the display in the drawing area.

This is all something of a hack. The point wasn’t to make it pretty and perfectly rational but to demonstrate some basic operations. The 3D and display operations are all carried out using matrix operations in straight Javascript. The same is also true for what may be the worst hidden line implementation in history. It may be ugly, but it at least has the virtue of working. I didn’t bother defining the bottom surface of the furnace for the hidden line calculations in the interest of being able to see everything I was doing.
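
To give a flavor of the matrix work, here is a minimal sketch of one of the rotations in plain Javascript (the function name and point representation are my own simplifications, not the demo’s actual code):

    // Rotate a point [x, y, z] about the Z-axis by the given angle in radians.
    // The demo composes rotations like this with translation and scaling
    // operations before projecting each vertex to the 2D canvas.
    function rotateZ(point, angle) {
      const [x, y, z] = point;
      const c = Math.cos(angle);
      const s = Math.sin(angle);
      return [x * c - y * s, x * s + y * c, z];
    }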

Buttons in Delphi (which used Borland’s Visual Component Library, or VCL) automatically included a repeat feature, but HTML and Javascript do not. I therefore had to embed the button events in functions that fire an initial setTimeout(func, delay) on mouse down. That would trigger the desired event once, and if the mouse button stayed down until the timer expired (at 250 ms) it would initialize a setInterval(func, delay) event that fired every 50 ms. The active timer was cleared on mouse up. I found that new timers would occasionally get initialized before the old ones cleared, so I inserted extra clear events before every new set event.
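
A minimal sketch of the pattern (the element id, the doTilt action, and the exact delays are illustrative):

    let repeatTimer = null;

    function doTilt() { /* stand-in: tilt the furnace by one increment */ }

    function startRepeat(action) {
      stopRepeat();                 // extra clear before every new set
      action();                     // fire the event once immediately
      repeatTimer = setTimeout(() => {
        stopRepeat();
        repeatTimer = setInterval(action, 50);   // repeat every 50 ms
      }, 250);                      // only if the button is still held
    }

    function stopRepeat() {
      clearTimeout(repeatTimer);    // timeout and interval ids share one pool,
      clearInterval(repeatTimer);   // so clearing both covers either case
      repeatTimer = null;
    }

    const btn = document.getElementById('tiltDown');
    btn.addEventListener('mousedown', () => startRepeat(doTilt));
    btn.addEventListener('mouseup', stopRepeat);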

I also added handlers for touch events, though I kept things basic: the code ignores multi-touch and handles only the touchstart and touchend events. Interestingly, the touch events aren’t recognized by the version of Firefox that was current as of this writing (v44.0.1, 9 February 2016). The touch features do work in Chrome and Microsoft Edge.
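
Wiring the touch events into the same start/stop functions from the sketch above is a one-liner each; preventDefault keeps the browser from also synthesizing the mouse events:

    btn.addEventListener('touchstart', (e) => {
      e.preventDefault();           // avoid a second, synthesized mousedown
      startRepeat(doTilt);
    });
    btn.addEventListener('touchend', stopRepeat);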


Missing the Point

A recent dinner companion shared a story of his management’s plan to ensure continuing maintenance and viability of a large and rather old mainframe system he supports for a government agency. His specialty is writing, maintaining, and modifying assembler code for the specific host machines, and as you can guess, there aren’t many people left around who know how to do that kind of work. What’s more, they tend to be older, expensive, curmudgeonly, and not always up to writing code in the most modular, clear, and approachable way.

OK, I’ve seen that, and I grant it can be a problem, but the gentleman I was speaking with wanted nothing more than to rationalize everything he could, as I would. He faces two major problems.

The first is that the code is decades old and has been hacked over with little thought to rationality, structure, consistency, clarity, or much of anything else helpful. It also contains self-modifying code, which is difficult in some cases even to recognize, much less maintain. Modern processors often won’t even support such shenanigans.

The second is that the agency’s managers want to convert the whole thing to Java so they can hire a bunch of young kids right out of school to maintain it at a lower cost, and possibly with less undesirable feedback (someone who’s been around for a while is likely to be less afraid to tell you things that are true but that you nonetheless don’t want to hear). This is all well and good, though we know that nine women can’t have a baby in a month. This is another way of saying that throwing programmers at a problem isn’t always the way to solve it, and often proves to be counterproductive.

Older mainframe systems often stay in use for a long time for precisely the reasons described. The FileNet systems I wrote in the early 90s included an automated terminal component that had the ability to log into and navigate through a legacy mainframe system, then screen-scrape the results and incorporate them into the FileNet system’s (Oracle) database. If you can use such a capability to migrate all of the legacy system’s data then great, but if not, that system may remain in place.

There was a further impetus to replace or upgrade such systems in the run-up to the Y2K rollover, and I knew some folks who made good money around that time. It turned out to be a non-event mostly because critical code usually didn’t include problematic date calculations (none of the real-time control applications I was writing at the time had a problem, some Wonderware HMI archiving routines proved susceptible but were easily patched), and because most of the mainframe code was either successfully modified or the affected systems were scrapped and replaced entirely. There are legitimate reasons for some legacy systems to still be in use, but in some cases I’m sure they remain merely because of convenience or inertia.

As I noted, the agency’s managers want to implement a new system in Java, which I support, but what I don’t support is how they intend to do it. Rather than figure out how the existing system works and reengineer it so it makes sense, particularly while they still have access to people with the experience to help do so accurately and completely, they are undertaking a major effort to write a tool that will translate the code as is, self-modifying operations and all (I was told they’ve figured out how to do this). So even if they are successful with this project, which itself consumes resources and incurs a respectable amount of risk, they will then continue to be burdened with the same mountain of unmaintainable spaghetti logic they had when they started. That will consume its own resources and incur its own set of risks.

Sure, you can get new graduates who know Java, but how many are going to be able to tease apart the morass to make meaningful and timely changes? Agile and Scrum processes don’t make that kind of code more manageable any more than recasting it in a more modern, higher-level language does. (Making fixes as they are identified is better suited to a reactive, Kanban style anyway.) It’s like eating all the food in a hot air balloon in hopes of making yourself heavier so you can get it to descend. You didn’t make the balloon heavier, you just moved the food. And you likely made yourself sick in the process. Moreover, how many of those young people will stay to fight with it when all of their friends (those who could get jobs, anyway) are writing new code under more reasonable management guidance?

Sometimes it isn’t the tool or the language or the platform that is the problem, it’s the underlying logic. You can fail to solve a customer’s problem with any kind of system and you can solve a problem with almost any kind of system. Some tools and platforms are clearly better than others for certain applications, but if you aren’t solving the right problem you’re kind of missing the point.


Theory and Practice, Practice, Practice

To know and not to do is not yet to know.

This idea has been attributed to many sources. Let’s assume it is essentially Buddhist. The same idea is expressed below in terms of neuroscience.

These items are saying that you need to do things in order to really get them. There are others: Malcolm Gladwell’s 10,000 hours, getting to Carnegie Hall, and the legendary practice regimes of athletes like Jerry Rice, Cris Carter, and Ted Williams.

Of course, it isn’t just volume that builds skill. Practice doesn’t make perfect; perfect practice makes perfect. You have to figure out what to practice, and whether and when to work on your strengths versus your weaknesses. You might have to mix things up to stay fresh. (By the way, anyone who tells you that computer work doesn’t take a toll on the body hasn’t done it in earnest. People who sit too long need to mix it up more than anyone.)

A lot of other factors come into play as well. Natural talent is one. Baryshnikov worked like crazy but he would never have been who he was without some natural talent. Opportunity is another. Would Bill Gates have been a runaway success in the 1930s? Will a poor but smart and diligent young person in Africa or India be able to do the same things as a graduate of MIT? (That idea is both thrilling and scary.)

Another issue is whether one is trying to learn an entirely new skill, like learning to program for the first time, or a related skill, like learning to program in a new language after already knowing how to program in general, or learning a new API after having learned many before. Each new endeavor has its own “is-ness” but picking up related skills is far, far easier than picking them up the first time.

Finally, the breadth of the skill is important, and not all skills are equal in this regard. If you know how to play piano then learning to play guitar might be somewhat easier, but learning to play organ or harpsichord would be a lot easier still. Learning CSS is one thing; learning to leverage Sass and SCSS on top of CSS, if one already knows HTML and how to program in depth, is far less taxing. I’ve heard it said that one can employ Hadoop proficiently in about two weeks (presumably if one already has a background in data center operations).

I’ve enjoyed and will continue to enjoy getting extra reps in the new languages and tools I’m learning. They are interesting on a direct level because there are specific things you have to know, and interesting on a meta-level because of how those individual bits of knowledge fit into the larger picture.


Lack of sleep() in Javascript

I get it. I really do. Javascript is intended to work in a (mostly) single-threaded way, and allowing it to sleep could gum things up tremendously. Reading that and similar links has been beneficial and I’ll be making time for more.

What brings this up is that I was debugging a piece of Javascript code where I thought it would be clever to graphically render the different 3D hidden point tests I was running to see why some of them weren’t being tested. I could add the temporary renderings of the test point and each plane to be tested against, set a breakpoint at the end, and then click like mad through all the requisite loops. Of course, if I clicked too quickly I could cause the browser or its debugger to hang. Therefore I figured that I could eliminate the clicking by inserting sleep()-like statements that would allow me to watch the proceedings without having to do anything and without running the risk of going so fast I’d lock things up again.

However, my digging around for the fabled sleep()-like statement led me (inevitably, it seems) to a number of StackOverflow discussions on the subject. Those a) made it clear that there was no such animal and b) asserted that it would be a really, really bad idea for one to exist. I did see how I could have refactored the code to use the setTimeout(func, delay) construct, and there were a number of other suggestions and discussions of their relative merits. Since the point was to test the code as it was and not to turn it inside out, which might have obfuscated the very thing I was trying to identify, I ultimately chose to click patiently and use conditional breakpoints to save some of the work as I homed in on the problem.
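
For the record, the refactoring suggested in those threads looks something like this minimal sketch (testOnePlane and the 500 ms delay are stand-ins for my actual test code):

    // Instead of a blocking loop with a sleep() in the body, each iteration
    // schedules the next one, yielding control to the browser in between.
    function testOnePlane(plane) {
      // stand-in: render one hidden-point test so it can be watched
    }

    function stepThroughPlanes(planes, i = 0) {
      if (i >= planes.length) return;
      testOnePlane(planes[i]);
      setTimeout(() => stepThroughPlanes(planes, i + 1), 500);
    }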

I have often used sleep()-like statements for animations and real-time processes but I can appreciate the desire of the language designers to omit the capability from this particular language. I’m not always a fan of designers’ desire to keep programmers from shooting themselves in the foot. I own my foot and if I want to be able to shoot the thing off that should be my right, should it not? That said, some features are bound to get abused, choice is not infinite, and sometimes you have to just accept things the way they are. Design decisions like these assuredly save time, effort, and frustration on net across the entire user base over the life of the product, and I can respect that. I also appreciate that different programmers can work for a long time in different subject areas and not encounter the same concerns. Conversely, I appreciate as well that some subjects seem to come up over and over again no matter what people are doing.

Interestingly, the poster who started one of the longer discussion threads wanted the capability for the same reason I did, as a known hack for testing and with no intention of using it in production code. What killed me was the insistence of so many commenters that there could be no valid use for such a construct. I’m guessing that some of those strongly expressed opinions grew out of knowledge of the language’s underlying structure and its application while others arose from ignorance of possible uses for the capability. As Dennis Miller would observe, I could be wrong. In any case, as I mentioned above, I will definitely be studying the language’s underlying structure in greater detail, and expect to enjoy myself while doing so.

Oh, and the graphical debugging effort? Worked like a charm.


Simulation: Continuing Yesterday’s Analysis

Yesterday I analyzed some of the considerations involved in modeling a section of a petrochemical refining process, namely hydrodesulfurization. That is, adding hydrogen to sulfur-bearing hydrocarbons in the presence of a catalyst at an elevated temperature, so the sulfhydryl (SH) groups can be split away and separated, each replaced by a hydrogen atom.

I examined an entire system and its larger context yesterday. Today I wanted to discuss a few more detailed considerations, with an eye toward the costs and benefits of taking different approaches. Bearing in mind that many variations of processes, reaction vessels, and equipment may be involved, and assuming we aren’t doing a finite-element solution where every container or vessel and every run of pipe is divided into multiple sub-volumes:

One Node vs. Many Nodes (or sub-volumes):

  • Nature of materials and processes within the vessel: Many vessel models will contain single liquid and gas regions, but vessels with different types and arrangements of internal equipment may need to be represented individually.
  • Presence of instrumentation or sampling ports at different locations along the vessel: Training simulators are all about the conditions and properties that operators can observe and affect. If multiple instruments or sample ports are located along the long dimension of a vessel the vessel may have to be subdivided into enough separate regions to provide unique results for the individual instrument or port locations. A change in feedstock composition, catalyst effectiveness, or hydrogen feed might provide indications that the reaction is completing (or failing to complete) at different locations.
  • Amount of available computing power vs. need for model speed: Training simulators generally need to run at, or at least close to, real time. If doing so is an issue then simplifications to the model may have to be made with respect to the number of internal nodes or the size of the time step.
  • Possible failure modes: The location of leaks or ability to specify reduced catalyst effectiveness at different locations may require specification of additional nodes.

Simultaneous vs. Non-Simultaneous Solution: The pressure-flow equations, and possibly even the thermodynamic equations, may be solved simultaneously, but the solution for transport properties might not be.

I once had to model a long coil of pipe in the offgas system of a nuclear power plant. The coil was located about halfway through the system, after most of the liquid had been removed from the process flow. The purpose of the coil was to increase residence time so most of the short-half-life radiation could decay. (I was told that this section of piping was buried outside under the parking lot, but that may or may not have been a joke.) I initially tried to model the transport of radiation using a simultaneous solution and found that non-zero concentrations showed up at the discharge end of the pipe in the first time step after radiation was introduced at the charge end. The concentrations at the discharge end were small in magnitude but the character was wrong. This didn’t matter under steady-state conditions but it would matter in a meaningful transient, and that was the whole point of the simulator.

I did the pressure-flow solution using the normal techniques (simultaneous matrix) but I decided to model the thermal and radiation transport (and decay) in the pipe coil as a series of ten rotating buckets, basically a bucket brigade, assuming roughly laminar flow. That meant averaging the radiation content of the inflow for one-tenth of the volume of the pipe and assuming that the radiation content at the discharge end was constant for that one-tenth of the pipe volume. (As I think about it, I may have made a mistake by continuing to model radioactive decay as the discharge bucket emptied, which would have produced radiation levels that incorrectly sawtoothed over time. I’d have to think about how I should have handled that. Thinking about it further, I may also have been able to model the outgoing temperature as a constant equal to the ground temperature, if I had been desperate to save a few clock ticks.)
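
A minimal sketch of the bucket brigade in Javascript (the names and decay constant are illustrative, and it assumes the flow advances exactly one bucket per time step):

    // Model the pipe coil as ten buckets of equal volume. Each time step the
    // contents decay, the last bucket discharges, and averaged inflow enters.
    const DECAY_PER_STEP = 0.01;                 // illustrative decay constant

    function stepPipe(buckets, inflowActivity) {
      for (let i = 0; i < buckets.length; i++) {
        buckets[i] *= Math.exp(-DECAY_PER_STEP); // radioactive decay in place
      }
      const discharged = buckets.pop();          // activity leaving the coil
      buckets.unshift(inflowActivity);           // fresh charge-end inflow
      return discharged;
    }

    // A step change at the inlet now takes ten steps to appear at the outlet,
    // instead of showing up in the very first step as it did when the
    // transport was solved simultaneously.
    const coil = new Array(10).fill(0);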

Perfect mixing is often a good enough assumption in simultaneously solved, one-dimensional fluid models but there are other times when it clearly won’t do. In those cases you may have to slow things down, do them by hand, or model them in a different way.


Simulations: What Gets Modeled And What Doesn’t

When I’m not flogging away at code these days I’m thinking about continuous simulations and the details that get modeled within them. Specifically I’ve been reading about and thinking about operations in petrochemical refineries, and even more specifically certain classes of catalytic reactions, like those found in hydrodesulfurization processes. In such a process the (naphtha) feedstock is passed through a catalyst chamber where hydrogen is also injected. The idea is that sulfhydryl groups are split apart from their hydrocarbon bases and replaced with one hydrogen atom from an H2 molecule. The other hydrogen atom then bonds with the freed sulfhydryl group to form H2S. In short, C2H5SH + H2 => C2H6 + H2S.

In thinking about this I was less concerned about the details than about how I would go about simulating such a process. The simulation I would create is greatly dependent on the purpose for which it is to be used.

If I did not know what was going to happen when I mixed certain materials together in the presence of a certain amount of energy and at a certain pressure I would create an extremely low-level simulation that modeled the behavior of every atom. I know this is done with drug reactions of various types and my understanding is that they are often hard to get right in novel situations. Such models also require a lot of computing resources. If the chemical reactions to be modeled involve a catalyst then there are just that many more factors of chemistry and geometry that have to be worked out.

If, however, the purpose of the simulation is to train operators, to size plants, or to experiment with different operating concepts (batching, controls and instrumentation, heat recovery, safety, etc.) I would create a higher-level simulation that modeled the process in a more abstract way.

At some level, assuming that the conditions were right with respect to factors like feed chemistry, temperature, pressure, catalyst area, and the presence and mixing of sufficient reactant materials on a mass or molar basis, I would assume that the reaction works and generates or absorbs heat as designed. Within a reasonable range of operating conditions I would be able to correctly model reactions, heat transfers, flows, pressures, temperatures, and end products. I would be able to create a simulation that could be initialized to a steady running state, be closed down and purged, and be restarted and returned to the original running state. Alternatively I might start at the shut-down state, ramp up to the operating state, and then shut down again. In either case I would have to know the efficiency of any planned reactions at given conditions of temperature, pressure, and catalyst. I would have to know if different reactions happened that also needed to be modeled.

Considering the reaction described above, if different reactions happened at, say, a different temperature, then I would have to make sure those reactions were modeled in place of the ones I’d hoped for. The point is that in a macro-scale simulator I wouldn’t be modeling the reactions of the molecules from first principles, but would instead model conversions, reactions, and state changes as a function of conditions. This can be tricky for a number of reasons.
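
To make that concrete, here is a hedged sketch of what one such lumped correlation might look like (every name and coefficient is illustrative, not drawn from any real plant model):

    // Fraction of feed sulfur converted in one pass through the reactor, as a
    // function of bulk conditions: an Arrhenius-style temperature term scaled
    // by hydrogen availability and remaining catalyst effectiveness (0..1).
    function conversionFraction(tempK, h2MolarRatio, catalystFactor) {
      const A = 1.0e6;       // pre-exponential factor (illustrative)
      const Ea = 9.0e4;      // activation energy, J/mol (illustrative)
      const R = 8.314;       // gas constant, J/(mol*K)
      const kinetic = A * Math.exp(-Ea / (R * tempK));
      const h2Limit = Math.min(1, h2MolarRatio);  // starve the reaction if H2 is short
      return Math.min(1, kinetic * h2Limit * catalystFactor);
    }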

This kind of simulation might be limited in its ability to handle widely varying feed compositions. You could not, for example, feed ice cream or cornmeal into such a process and expect it to do anything meaningful. It can only model what the simulation allows for. The makeup of the initial petrochemical feed (for the whole refinery and not just the hydrodesulfurization process) would have to be completely defined for every component. The thermodynamic properties of all components would have to be known and specified for all applicable ranges of temperature, pressure, and so on, so the model would correctly represent state changes and the various separation, pumping, heat transfer, and other processes. (This turns out to be somewhat difficult; resources like this would clearly be helpful, but such data are rarely available in as complete or granular a form as those derived for water and steam.)

Such a model could handle reasonable variations in feedstock and still work as expected. It could be made to handle changes in the efficiency of the catalytic reaction by changing some characteristic of the catalyst (expressed as an area or percentage), under user control. If the catalyst somehow becomes consumed or fouled the reaction would proceed less completely or not at all.

If the simulator was intended to support training then the trainees could learn not only normal process operations and controls, but also what happens during abnormal situations. Those can include things like leaks, instrument and other equipment failures, loss of utilities, and so on. In those cases it may be less important to represent the events exactly than to get them right in character, so the trainees learn what to look for and how to identify cause and effect.

The simulator could be used to test novel operating concepts as discussed. It would be able to handle a range of variation in the feed material. It could be used for operations research by performing parametric runs to test the effects of different changes in a systematic way.

As I’ve been reading about the different aspects of refinery operations I’ve been able to relate most of the processes to elements I simulated when I worked on nuclear power plant simulators. I got to know about the steam cycle on a very deep level and also how to deal with noncondensable gases, catalytic recombiners, phase changes, different kinds of absorption and release processes, and different kinds of filtration and separation processes as well. I look forward to continuing research on the subject.


Zoot Suits and Not-So-Bad-Dancing

I started swing dancing in 2001, during the tail end of the craze that started around the time of the 1998 Gap commercial (famous among other things for popularizing the 3D still-pan effect also prominent in the seminal sci-fi film The Matrix, though the technique had been invented as early as 1993).

Swing dancing had been in a process of resurrection as far back as the 1980s, when handfuls of modern enthusiasts succeeded in locating and studying with some of the famous original dancers. In those days they danced to period music, but the dance took over bars and dance halls in the late 90s to the sounds of neo-swing acts like Brian Setzer and Big Bad Voodoo Daddy. Dancers wore zoot suits and other period clothing that could sometimes be over the top. The craze was settling back underground by 2003, and the people who were serious about it had grown to appreciate the best of the original big band and jazz music. Gone too were the zoot suits and bowling shirts, in favor of more tasteful vintage dress.

One of the venues here in DC, however, decided to have a little fun with the era by holding a neo-swing night late in 2003 or 2004. The point was to wear the most over-the-top outfits and dance to the most annoying possible music. They even had a dance contest whose goal was to bust out the worst, oversized, gawky moves anyone could remember. People had fun with it and posted the next day that most of the competitors had failed to look as horrible as they’d hoped. They’d learned too much over the years and couldn’t reproduce what it was like to dance like raw beginners. Said one observer, “Your muscle memory gave you away!”


Me with legendary Lindy Hopper Frankie Manning

Some of the 90s-00s dancers got deeply systematic in their study while many of the 20s-50s dancers did far less of that, even if they’d managed to work professionally for two decades or more. Many of the original dancers had trouble even explaining what they did. They could tell what was right or wrong but they couldn’t always put it into words that the younger generation could understand. The younger dancers, being young, had a certain amount of trouble understanding anyone. Over time, however, they learned some of each other’s language. The younger dancers learned the subtlety, feel, cooperation, and responsibility of the dance while the older dancers learned more about how to break things down systematically so they could teach and share more effectively.

Why do I bring this up? Because some days seem like…


Multiple Paths To Victory

With the release of the game Settlers of Catan in 1995, German inventor Klaus Teuber unleashed a product popular enough to introduce Americans to the Eurogame style of tabletop gaming. Per the Wikipedia entry, “A Eurogame, also called German-style board game, German game, or Euro-style game, is any of a class of tabletop games that generally have simple rules, short to medium playing times, indirect player interaction and abstract physical components. Such games emphasize strategy, downplay luck and conflict, lean towards economic rather than military themes, and usually keep all the players in the game until it ends.” The entry also describes how this class of games is contrasted with those referred to as “Ameritrash.” Interestingly, I read recently that the tradition of playing family games and board games was brought to America largely by German immigrants in the first place.

I was introduced to Settlers in about 2005 and have grown to appreciate more and more of the genre since. I particularly enjoy modular games as I’ve mentioned previously, but many of this class of strategy games are highly enjoyable. The thing that most interests me about them is that most games offer multiple ways to win. They usually involve a common end goal, like amassing a certain number of victory points, but there are generally different options for getting them. The strategy you adopt is based on how things evolve over the course of each game. This keeps the games interesting far more consistently.

Many of us played Monopoly growing up, and it was a very popular game in America. The problem with Monopoly, however, is that there’s only one way to win. You have to grind your opponents into the dirt, which can take a long time in some cases. My brother played in a Monopoly tournament at a local library when we were young, and he adopted a wheeler-dealer strategy that carried him to the finals and got him written up in the local paper. In the final game only one player was able to secure and develop a monopoly, and none of the other players would make a trade to allow anyone else to do so. The game turned into something of a forced march that probably wasn’t that enjoyable. Monopoly was also interesting because it was one of the early games to be analyzed by computer, which recommended an optimum strategy based on the likelihood of landing on certain spaces. (Buy the orange properties.)

Some games of Monopoly kept everyone going as more and more money got injected into the game over time. Those games were always fun even if you didn’t win. A rather obscure stock market game called Bluechip had the same quality. Sometimes things were pretty thin and sometimes they were flush. If all players were flush then it could be a lot of fun. If all players were thin it was less fun but at least competitive. If some players were flush and others thin, and the difference could happen in your first turn, the fun level was cranked way down. I’ve heard Monopoly described as a one-mechanic game. There may be slightly more to it depending on how you define things, but it is pretty straightforward. Bluechip was in every sense a one-dimensional game. I played it at a game convention a few years ago and saw it much differently than I had when I was younger. I saw that the strategy was very mechanical and without remorse. You simply made the biggest trade possible with the most leverage and hoped the cards and the dice didn’t work against you. Once I saw that, the game got kind of stale. Oh well, at least I have the lovely memories of playing it with my grandparents!

One final note on Bluechip is that I’m almost certain it was the inspiration for a stock market simulation game called Millionaire, which was one of the first pieces of software released for the original Macintosh. It was apparently also available on some other platforms. We played it a lot at our house but the game is all but forgotten now. There aren’t even any references online good enough to link to.

The thing about Eurogames is that, in general, if the thing you were planning on isn’t working there may be a different avenue by which you can succeed. You always have to keep track of what everyone else is up to, balance your resources, defend, build, and often trade. In the basic edition of Settlers of Catan the object is to get ten victory points. You can do that by building settlements and upgrading them to cities, by building the longest road, or by buying development cards which give you the chance to earn victory points directly or build the largest army. The course you take depends on the resources you are able to acquire and the fall of the dice. The dice and the potential randomness of the board can skew the game away from one or more players but seldom is there a runaway by a single player. The game generally remains fun, engaging, and colorful for most participants, and it generally doesn’t take too long to play. Modular expansion kits and other versions of the game provide even more mechanics and possible trade-offs.

There is still another subclass of games in the Eurogame genre, and that is the cooperative game. In games like Pandemic the players all work together against a common enemy, which in this case is the worldwide spread of a virulently communicable disease. Sometimes the players lose, but the players themselves are always on the same side.

I describe all this not because the games themselves are so thrilling, though I and many others certainly think they can be, but because they illustrate that there can be many ways to solve any possible problem. Board games and certain competitions (e.g., American football and the buggy races held annually at my alma mater) provide specific contexts for those trade-offs. The art of balancing different factors in a specific context is known as a constrained optimization problem. The link is to a formalized mathematical approach to the subject, but in many cases the constraints and possible solutions are explored by trial and error. A friend of mine from school told me that one of his professors observed that life, itself, is a constrained optimization problem.
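
For reference, the usual mathematical statement of such a problem (the symbols are generic: x is the vector of decision variables, f the objective, and g and h the inequality and equality constraints):

\[
\max_{x} \; f(x) \quad \text{subject to} \quad g_i(x) \le 0, \quad h_j(x) = 0
\]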

I’m always interested in figuring out how to identify new twists on existing problems so certain classes of engineering and other problems always get my attention. I was taught early on to save on memory and execution time while writing software, but have since come to appreciate that development time, complexity, and maintainability are equally important considerations. I worked on a team that used analytical simulations to examine the effects of modifying different aspects of the logistics of maintaining and managing aircraft. I often have to balance the needs of customers and developers, the limits imposed by the iron triangle (cost, schedule, and scope in project management or software development), or the balance between work and life.

There are always real measurements, real constraints, real consequences, and real success and failure, but as I noted yesterday the goal is to approach every problem cooperatively when possible. The real world gives you enough win-lose and lose-lose propositions. Working together with others gives you the chance to develop solutions that are win-win. The more ways you can figure out to do that, the better.
