Learn Your Customer’s Culture

Every customer you work with will have interesting variations in culture, and it’s good to get to know what they are. The differences may be merely interesting, may be motivating, or may be important to how you relate to them.

An early customer I worked with on a business process reengineering effort acknowledged that a certain amount of “slippage” was considered normal in the course of a workday. We were measuring the type, number, and duration of different activities in each type of employee’s workday and expected the total amount of process-related effort to come to something less than 480 minutes (less than eight full hours). Management understood that people needed some time for administrative and personal activities and, since the employees’ activities could be readily quantified on an objective basis, the company had a good handle on what constituted acceptable performance. I’ve heard rougher stories about how employees can be treated in environments that are relentlessly driven by performance metrics, but this company had a good attitude and appeared to support the well-being of their employees. They proved to be easy to work with in the procedural sense, though they were difficult to work with in other ways.

One of the more interesting and motivating traditions I ever encountered was when I worked with the Royal School of Artillery at its home base at Larkhill, just a couple of miles from Stonehenge (I got up and ran there every other day). The location also figured in the plot of an interesting movie, though possibly only because I spent time there.

The Royal Artillery’s interesting tradition is that the artillery pieces themselves (cannons in the old days and everything from howitzers to air defense missiles today) nominally serve as the unit colors, in place of a formal flag or guidon. Even better, you were supposed to show respect to the equipment by always running toward it and always walking away from it. My opinion on such things has changed over the years, but this remains a fascinating piece of psychology.

Industrial environments are shaded by companies’ approaches to market competition, safety, and location, among other things. I’ve been at a couple of steel mills where employees have been killed, in one case while I was working there. Management’s reaction has ranged from encouraging employees to nap during slow times to ensure they are fresh when things need to happen, to reviewing safety procedures and working to educate the employees further. Those efforts were of course in addition to trying to figure out exactly what went wrong in the first place. It is recognized that locations like caster decks are inherently dangerous, as is molten steel under any conditions.

Some companies run on a bureaucratic basis, where employees may get paid the same no matter how the business performs; in that case the employees generally move at the same speed and level of effort no matter what may be going on. In companies where employees are paid a lower base wage plus bonuses based on production and sales, the employees from top to bottom are generally far more proactive and aggressive in their efforts to keep a plant running at top speed and to fix problems as soon as they occur.

Managers in bureaucratic environments are often more laid back in their approach to getting things done and may be more demanding in terms of documentation and ease of use, while managers in more aggressive and entrepreneurial environments will demand more features and quicker completion, if possible. Ideally you, as a vendor, will always try to provide the best possible product along every axis, but I know it can be tempting to hold out on customers who are troublesome and uncooperative. You may also be tempted to get out as soon as you meet the local manager’s minimum requirements.

In some ways the more adept managers and employees you meet in the field will be able to deal with lesser deliverables in terms of polish and completeness, since they will be able to bring more understanding and interaction to their end of the process. That actually does a disservice to the better workers and companies, who should get the best of what you have to offer. Over time, of course, you should be providing the best product and service you can by leveraging your own experience and the feedback you get from customers at all levels. Ask them for feedback and listen to what they tell you. Not every idea may be a winner but they’ll come up with a lot of things you won’t think of.

Posted in Management

HTML5 Canvas Issue: Line Caps

I learned basic computer graphics in college and wrote my first programs on an original IBM PC with a Hercules graphics card. I remember that if you specified a start and end pixel, both of those pixels, along with those in between, would be drawn by default. This was entirely a function of the software drivers we used (or those we wrote for class). I later spent many years working with graphics in various Borland language products, including Turbo Pascal (for DOS) and Delphi and C++ Builder (for Windows). I don’t think it was true of Turbo Pascal/DOS, but the Windows-based products and their VCL libraries had a component called TCanvas that had a particular way of drawing lines. It would draw all of the pixels in the line except the final one. It assumed that if you were drawing a series of lines, especially for a closed shape, then the next line would always draw the missing pixel. If you really, really needed to draw that last pixel you had to do it yourself. This process is illustrated as the red line is drawn from 1,1 to 1,10, the blue line from 1,10 to 10,10, and the green pixel at 10,10. If the lines had a thickness of greater than 1.0 things worked a bit differently, but I almost always worked with one-pixel lines for various reasons.



The HTML5 canvas object uses the lineCap property to accomplish the same thing. The values are “butt”, which ends the line at the points specified, “round”, which draws a round end centered on each endpoint (of the same diameter as the thickness of the line), and “square”, which extends each end of the line by drawing an additional square whose dimensions match the line thickness. The square is also centered on the end of the line so in truth the line is extended by a half-square. If the thickness of the line is 1.0 then setting the value to “round” has the same effect as setting it to “square”. You can find a nice explanation of all this here.
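The endpoint geometry can be captured in a few lines. This is just a sketch of the rule described above (the helper name is mine, not part of the canvas API), but it makes the 1.0-thickness equivalence obvious:

```javascript
// How far each lineCap style extends a stroke beyond its endpoint.
// "butt" adds nothing; "round" and "square" both reach lineWidth / 2
// past the endpoint (a semicircle vs. a half-square), which is why
// they look the same when the line is 1.0 pixels thick.
function capExtension(lineCap, lineWidth) {
  switch (lineCap) {
    case "butt":   return 0;
    case "round":  return lineWidth / 2;
    case "square": return lineWidth / 2;
    default: throw new Error("unknown lineCap: " + lineCap);
  }
}
```

At a thickness of 1.0, “round” and “square” each extend the stroke by half a pixel at both ends, which is exactly what fills in the spotty joins.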

I noticed that some of the lines in my display looked a bit spotty at times, so yesterday I changed the lineCap property to “square” from the default value of “butt” I had used in the original demo. To see the effect of changing the lineCap property I added an extra button at the bottom right of the display that toggles among the three possible states. The effect is most noticeable when the offset is set to zero and the furnace is somewhat rotated. I’ve added screen captures that illustrate the effect below.

“butt”
“round”
“square”

The point is that graphic systems, like every other kind of system, have a number of variations and details you need to keep in mind.

Posted in Software

HTML5 Canvas Issue: Half-Pixel Offset

One of the issues I encountered when working with the HTML5 canvas element was pixel alignment. I made sure that the only size definition was given in the HTML declaration for the image itself, as in:

…which means that the canvas pixels will always line up with physical screen pixels. The issue is how the actual coordinates are specified when drawing on the canvas. Every other screen coordinate system I’ve used drew nice lines on even pixel boundaries. That is, you specify the pixel locations as whole numbers. Usually you translate between your internal, floating-point coordinate space and the integer pixel space by adding one half and truncating. When I did that, however, the lines ended up getting drawn across two pixels. When I added an extra half-pixel offset the lines were drawn across only a single pixel. This effect is most noticeable when the lines are horizontal or vertical. The following diagram from the Mozilla canvas tutorial illustrates the issue.
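The fix reduces to a one-line translation rule. A minimal sketch (the function name is my own, not part of any API):

```javascript
// Map an internal floating-point coordinate to a canvas coordinate for
// crisp one-pixel strokes. Plain rounding lands the line on a pixel
// boundary, where a 1-pixel stroke straddles two physical pixels;
// adding the extra 0.5 centers the stroke within a single pixel.
function snapToPixelCenter(coord) {
  return Math.round(coord) + 0.5;
}
```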

I added a button which toggles the pixel offset between 0.0 and 0.5 and redraws the figure to show the effect.

Posted in Software

Javascript HTML5 Canvas Pour Demo

When Bricmont was purchased by Inductotherm in 1996 I was charged with maintaining the product Inductotherm used to control its induction melting furnaces. During a lull some time later I worked up a little demo program in Delphi that could display the state of such a furnace in 3D. I thought it would be a good exercise to replicate that program using the HTML5 canvas element with graphics generated by Javascript code.

Here is the original program in action. It didn’t include the extra rotations.

The Charge button increments the mass contained in the furnace by five percent. One hundred percent mass is defined here as the mass of a volume of molten steel that fills the entire furnace. The density of the charge material is assumed to be one-third that of the molten material. Charge can only be added to the furnace when it is fully upright, when the Pour Angle is ninety degrees. Each increment of charge is capped so the total volume of material in the furnace is never greater than one hundred percent. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Melt button increments the percentage of the mass in the furnace that is melted by five percent. The volume and mass of the melted and not-yet-melted materials are recalculated. If there is no unmelted material in the furnace the button has no effect. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Tilt Down button tilts the furnace down in increments of five degrees. If the molten steel in the furnace would spill over the top lip of the furnace the volume and mass of material will be recalculated. This calculation is obviously idealized because material pouring over the lip would have some thickness. If the furnace has been rotated away from its standard pouring view it will be returned to that view.

The Tilt Up button tilts the furnace back up towards vertical. The level of any molten material remaining in the furnace is recalculated.

The remaining buttons are used to rotate the furnace around the X-, Y-, and Z-axes in increments of five degrees. The rotation is always initialized from the standard pouring view. I originally included this capability, along with the ability to perform translation and scaling operations, just so I could interactively locate the display in the drawing area.

This is all something of a hack. The point wasn’t to make it pretty and perfectly rational but to demonstrate some basic operations. The 3D and display operations are all carried out using matrix operations in straight Javascript. The same is also true for what may be the worst hidden line implementation in history. It may be ugly, but it at least has the virtue of working. I didn’t bother defining the bottom surface of the furnace for the hidden line calculations in the interest of being able to see everything I was doing.
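As an illustration of the kind of plain-Javascript matrix math involved, here is a sketch of a single rotation about the Z-axis. The names are mine, and the demo’s actual routines compose full transform chains rather than one rotation at a time:

```javascript
// Rotate a 3D point [x, y, z] about the Z-axis by an angle in degrees.
// A full pipeline would compose rotations about all three axes plus
// translation and scaling, but each step reduces to math like this.
function rotateZ(point, deg) {
  const rad = deg * Math.PI / 180;
  const c = Math.cos(rad), s = Math.sin(rad);
  const [x, y, z] = point;
  return [x * c - y * s, x * s + y * c, z];
}
```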

Buttons in Delphi (which used Borland’s Visual Component Library, or VCL) automatically included a repeat feature, but HTML and Javascript do not do so automatically. I therefore had to embed the button events in functions that fire an initial setTimeout(func, delay) on mouse down. It would trigger the desired event once, and if the mouse button stayed down until that timer expired (at 250 ms) it would initialize a setInterval(func, delay) event that would fire every 50 ms. The active timer was cleared on mouse up. I found that new timers would occasionally get initialized before the old ones cleared, so I inserted extra clear events before every new set event.
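The repeat scheme might be sketched like this; the helper and its names are illustrative, not lifted from the demo:

```javascript
// Auto-repeat for a button: fire once on mouse down, wait 250 ms, then
// repeat every 50 ms until mouse up. Clearing any existing timers
// before setting new ones guards against the overlapping-timer problem
// mentioned above.
function makeRepeater(action, initialDelay = 250, repeatInterval = 50) {
  let timeoutId = null, intervalId = null;
  function clearTimers() {
    if (timeoutId !== null) { clearTimeout(timeoutId); timeoutId = null; }
    if (intervalId !== null) { clearInterval(intervalId); intervalId = null; }
  }
  return {
    onMouseDown() {
      clearTimers();            // defensive clear before every new set
      action();                 // fire once immediately
      timeoutId = setTimeout(() => {
        intervalId = setInterval(action, repeatInterval);
      }, initialDelay);
    },
    onMouseUp: clearTimers,
  };
}
```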

I also added handlers for touch events, though I kept them basic. They don’t try to handle multi-touch events and only handle the touchstart and touchend events. Interestingly, the touch events aren’t recognized by the version of Firefox that was current as of this writing (v44.0.1, 9 February 2016). The touch features do work in Chrome and Microsoft Edge.

Posted in Software

Missing the Point

A recent dinner companion shared a story of his management’s plan to ensure continuing maintenance and viability of a large and rather old mainframe system he supports for a government agency. His specialty is writing, maintaining, and modifying assembler code for the specific hardware host machines and, as you can guess, there aren’t many people left around who already know how to do that kind of work. What’s more, they tend to be older, expensive, curmudgeonly, and not always up to writing code in the most modular, clear, and approachable way.

OK, I’ve seen that, and I grant it can be a problem, but the gentleman I was speaking with wanted nothing more than to rationalize everything he could, as I would. He faces two major problems.

The first is that the code is decades old and has been hacked over with little thought to rationality, structure, consistency, clarity, or anything else much helpful. It also contains self-modifying code which is difficult in some cases even to recognize, much less to maintain. Modern processors often won’t even support such shenanigans.

The second is that the agency’s managers want to convert the whole thing to Java so they can hire a bunch of young kids right out of school to maintain it at a lower cost, and possibly with less undesirable feedback (someone who’s been around for a while is likely to be less afraid to tell you things that are true but that you nonetheless don’t want to hear). This is all well and good, though we know that nine women can’t have a baby in a month. This is another way of saying that throwing programmers at a problem isn’t always the way to solve it, and often proves to be counterproductive.

Older, mainframe systems often stay in use for a long time because of precisely the reasons described. The FileNet systems I wrote in the early 90s included an automated terminal component that had the ability to log into and navigate through a legacy mainframe system, then screen scrape the results and incorporate them into the FileNet system’s (Oracle) database. If you can use such a capability to migrate all of the legacy system’s data then great, but if not that system may remain in place.

There was a further impetus to replace or upgrade such systems in the run-up to the Y2K rollover, and I knew some folks who made good money around that time. It turned out to be a non-event mostly because critical code usually didn’t include problematic date calculations (none of the real-time control applications I was writing at the time had a problem, some Wonderware HMI archiving routines proved susceptible but were easily patched), and because most of the mainframe code was either successfully modified or the affected systems were scrapped and replaced entirely. There are legitimate reasons for some legacy systems to still be in use, but in some cases I’m sure they remain merely because of convenience or inertia.

As I noted the agency’s managers want to implement a new system in Java, which I support, but what I don’t support is how they intend to do it. Rather than figure out how it works and reengineer it so it makes sense, particularly while they still have access to people with the experience to help do so accurately and completely, they are undertaking a major effort to write a tool that will translate the code as is, self-modifying operations and all (I was told they’ve figured out how to do this). So, even if they are successful with this project, which itself consumes resources and incurs a respectable amount of risk, they will then continue to be burdened with the same mountain of unmaintainable spaghetti logic they had when they started. That will consume its own resources and incur its own set of risks.

Sure, you can get new graduates who know Java, but how many are going to be able to tease apart the morass to make meaningful and timely changes? Agile and Scrum processes don’t make that kind of code more manageable any more than recasting it in a more modern, higher-level language does. (Making fixes as they are identified is better suited to a reactive, Kanban style anyway.) It’s like eating all the food in a hot air balloon in hopes of making yourself heavier so you can get it to descend. You didn’t make the balloon heavier, you just moved the food. And you likely made yourself sick in the process. Moreover, how many of those young people will stay to fight with it when all of their friends (those who could get jobs, anyway) are writing new code under more reasonable management guidance?

Sometimes it isn’t the tool or the language or the platform that is the problem, it’s the underlying logic. You can fail to solve a customer’s problem with any kind of system and you can solve a problem with almost any kind of system. Some tools and platforms are clearly better than others for certain applications, but if you aren’t solving the right problem you’re kind of missing the point.

Posted in Software

Theory and Practice, Practice, Practice

To know and not to do is not yet to know.

This idea has been attributed to many sources. Let’s assume it is essentially Buddhist. The same idea is expressed below in terms of neuroscience.

These items are saying that you need to do things in order to really get them. There are others: Malcolm Gladwell’s 10,000 hours, getting to Carnegie Hall, and the legendary practice regimes of athletes like Jerry Rice, Cris Carter, and Ted Williams.

Of course, it isn’t just volume that builds skill. Practice doesn’t make perfect; perfect practice makes perfect. You have to figure out what to practice, and whether and when to work on your strengths versus your weaknesses. You might have to mix things up to stay fresh. (By the way, anyone who tells you that computer work doesn’t take a toll on the body hasn’t done it in earnest. People who sit too long need to mix it up more than anyone.)

A lot of other factors come into play as well. Natural talent is one. Baryshnikov worked like crazy but he would never have been who he was without some natural talent. Opportunity is another. Would Bill Gates have been a runaway success in the 1930s? Will a poor but smart and diligent young person in Africa or India be able to do the same things as a graduate of MIT? (That idea is both thrilling and scary.)

Another issue is whether one is trying to learn an entirely new skill, like learning to program for the first time, or a related skill, like learning to program in a new language after already knowing how to program in general, or learning a new API after having learned many before. Each new endeavor has its own “is-ness” but picking up related skills is far, far easier than picking them up the first time.

Finally, the breadth of the skill is important, and not all skills are equal in this regard. If you know how to play piano then learning to play guitar might be somewhat easier, but learning to play organ or harpsichord would have to be a lot easier. Learning CSS is one thing; learning to leverage Sass and SCSS on top of CSS, if one already knows HTML and how to program in depth, is far less taxing. I’ve heard it said that one can employ Hadoop proficiently in about two weeks (presumably if one already has a background in data center operations).

I’ve enjoyed and will continue to enjoy getting extra reps in the new languages and tools I’m learning. They are interesting on a direct level because there are specific things you have to know, and interesting on a meta-level because of how those individual bits of knowledge fit into the larger picture.

Posted in Soft Skills

Lack of sleep() in Javascript

I get it. I really do. Javascript is intended to work in a (mostly) single threaded way and allowing it to sleep could gum things up tremendously. Reading that and similar links has been beneficial and I’ll be making time for more.

What brings this up is that I was debugging a piece of Javascript code where I thought it would be clever to graphically render the different 3D hidden point tests I was running to see why some of them weren’t being tested. I could add the temporary renderings of the test point and each plane to be tested against, set a breakpoint at the end, and then click like mad through all the requisite loops. Of course, if I clicked too quickly I could cause the browser or its debugger to hang. Therefore I figured that I could eliminate the clicking by inserting sleep()-like statements that would allow me to watch the proceedings without having to do anything and without running the risk of going so fast I’d lock things up again.

However, my digging around for the fabled sleep()-like statement led me (inevitably, it seems) to a number of StackOverflow discussions on the subject. Those a) made it clear that there was no such animal and b) asserted that it would be a really, really bad idea for one to exist. I did see how I could have refactored the code to use the setTimeout(func, delay) construct, and there were a number of other suggestions and discussions of their relative merits. Since the point was to test the code as it was and not to turn it inside out, which might have obfuscated the very thing I was trying to identify, I ultimately chose to click patiently and use conditional breakpoints to save some of the work as I homed in on the problem.
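For what it’s worth, newer Javascript offers a non-blocking middle ground: wrapping setTimeout in a Promise and awaiting it inside an async function. This is a sketch of that refactoring (the driver function and its names are hypothetical), not the approach I actually used:

```javascript
// A sleep()-like pause that yields to the event loop instead of
// blocking it: setTimeout wrapped in a Promise.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Hypothetical driver: render each hidden-point test, then pause so
// the result can be watched without any clicking.
async function stepThroughTests(renderFuncs, pauseMs = 200) {
  for (const render of renderFuncs) {
    render();
    await sleep(pauseMs);
  }
}
```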

I have often used sleep()-like statements for animations and real-time processes but I can appreciate the desire of the language designers to omit the capability from this particular language. I’m not always a fan of designers’ desire to keep programmers from shooting themselves in the foot. I own my foot and if I want to be able to shoot the thing off that should be my right, should it not? That said, some features are bound to get abused, choice is not infinite, and sometimes you have to just accept things the way they are. Design decisions like these assuredly save time, effort, and frustration on net across the entire user base over the life of the product, and I can respect that. I also appreciate that different programmers can work for a long time in different subject areas and not encounter the same concerns. Conversely, I appreciate as well that some subjects seem to come up over and over again no matter what people are doing.

Interestingly, the poster who started one of the longer discussion threads wanted the capability for the same reason I did, as a known hack for testing and with no intention of using it in production code. What killed me was the insistence of so many commenters that there could be no valid use for such a construct. I’m guessing that some of those strongly expressed opinions grew out of knowledge of the language’s underlying structure and its application while others arose from ignorance of possible uses for the capability. As Dennis Miller would observe, I could be wrong. In any case, as I mentioned above, I will definitely be studying the language’s underlying structure in greater detail, and expect to enjoy myself while doing so.

Oh, and the graphical debugging effort? Worked like a charm.

Posted in Software

Simulation: Continuing Yesterday’s Analysis

Yesterday I analyzed some of the considerations involved in modeling a section of a petrochemical refining process, namely hydrodesulfurization. That is, hydrogen is added to sulfur-bearing hydrocarbons in the presence of a catalyst at an elevated temperature so the sulfhydryl (SH) groups can be split away and separated, to be replaced with hydrogen atoms.

I examined an entire system and its larger context yesterday. Today I wanted to discuss a few more detailed considerations, with an eye toward the costs and benefits of taking different approaches. Bearing in mind that many variations of processes, reaction vessels, and equipment may be involved, and assuming we aren’t doing a finite-element solution where every container or vessel and every run of pipe is divided into multiple sub-volumes:

One Node vs. Many Nodes (or sub-volumes):

  • Nature of materials and processes within the vessel: Many vessel models will contain single liquid and gas regions, but vessels with different types and arrangements of internal equipment may need to be represented individually.
  • Presence of instrumentation or sampling ports at different locations along the vessel: Training simulators are all about the conditions and properties that operators can observe and affect. If multiple instruments or sample ports are located along the long dimension of a vessel the vessel may have to be subdivided into enough separate regions to provide unique results for the individual instrument or port locations. A change in feedstock composition, catalyst effectiveness, or hydrogen feed might provide indications that the reaction is completing (or failing to complete) at different locations.
  • Amount of available computing power vs. need for model speed: Training simulators need to run at, or at least close to, real time. If doing so is an issue then simplifications to the model may have to be made with respect to the number of internal nodes or the size of the time step.
  • Possible failure modes: The location of leaks or ability to specify reduced catalyst effectiveness at different locations may require specification of additional nodes.

Simultaneous vs. Non-Simultaneous Solution: The pressure-flow equations and possibly even the thermodynamic equations may be carried out using simultaneous solutions but the solution for transport properties might not be.

I once had to model a long coil of pipe in the offgas system of a nuclear power plant. The coil was located about halfway through the system after most of the liquid had been removed from the process flow. The purpose of the coil was to increase residence time so most of the short half-life radiation could decay. (I was told that this section of piping was buried outside under the parking lot, but that may or may not have been a joke.) I initially tried to model the transport of radiation using a simultaneous solution and found that non-zero concentrations showed up at the discharge end of the pipe the first time step after radiation was introduced at the charge end. The concentrations at the discharge end were small in magnitude but the character was wrong. This didn’t matter under steady state conditions but it would matter in a meaningful transient, and that was the whole point of the simulator.

I did the pressure-flow solution using the normal techniques (simultaneous matrix) but I decided to model the thermal and radiation transport (and decay) in the pipe coil as a series of ten rotating buckets, basically a bucket brigade, assuming roughly laminar flow. That meant averaging the radiation content of the inflow for one-tenth of the volume of the pipe and assuming that the radiation content at the discharge end was constant for that one-tenth of the pipe volume. (As I think about it I may have made a mistake by continuing to model radioactive decay as the discharge bucket emptied, which would have produced radiation levels that incorrectly sawtoothed over time. I’d have to think about how I should have handled that. As I think about it further I may also have been able to model the outgoing temperature as a constant equal to the ground temperature, if I had been desperate to save a few clock ticks.)
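The bucket-brigade idea is simple enough to sketch in a few lines (in Javascript here for consistency with the other posts; the original simulator was not written in it). The bucket count, decay factor, and names are illustrative:

```javascript
// Sketch of the "bucket brigade" transport delay: the pipe coil is
// split into ten buckets. Each time step, every bucket decays, the
// inlet concentration fills a new bucket at the charge end, and the
// oldest bucket becomes the discharge. Material cannot appear at the
// discharge end before it has traversed every bucket.
function makePipeCoil(nBuckets = 10, decayPerStep = 0.99) {
  const buckets = new Array(nBuckets).fill(0);
  return function step(inletConcentration) {
    for (let i = 0; i < nBuckets; i++) buckets[i] *= decayPerStep;
    buckets.push(inletConcentration);   // new material enters the coil
    return buckets.shift();             // oldest bucket discharges
  };
}
```

The key property is the transport delay: nothing reaches the discharge end until it has passed through every bucket, which is exactly what the simultaneous solution failed to guarantee.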

Perfect mixing is often a good enough assumption in simultaneously solved, one-dimensional fluid models but there are other times when it clearly won’t do. In those cases you may have to slow things down, do them by hand, or model them in a different way.

Posted in Simulation

Simulations: What Gets Modeled And What Doesn’t

When I’m not flogging away at code these days I’m thinking about continuous simulations and the details that get modeled within them. Specifically I’ve been reading about and thinking about operations in petrochemical refineries, and even more specifically certain classes of catalytic reactions, like those found in hydrodesulfurization processes. In such a process the (naphtha) feedstock is passed through a catalyst chamber where hydrogen is also injected. The idea is that sulfhydryl groups are split apart from their hydrocarbon bases and replaced with one hydrogen atom from an H2 molecule. The other hydrogen atom then bonds with the detached SH group to form H2S. In short, C2H5SH + H2 => C2H6 + H2S.
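As a quick sanity check, the reaction balances on a molar-mass basis. The arithmetic below is mine, using rounded standard atomic weights, not anything from process documentation:

```javascript
// Check the overall reaction C2H5SH + H2 => C2H6 + H2S on a molar-mass
// basis, using rounded atomic weights (C = 12.011, H = 1.008,
// S = 32.06). Mass must balance across the reaction.
const W = { C: 12.011, H: 1.008, S: 32.06 };
const mass = atoms =>
  Object.entries(atoms).reduce((m, [el, n]) => m + W[el] * n, 0);

const reactants = mass({ C: 2, H: 6, S: 1 }) + mass({ H: 2 });  // C2H5SH + H2
const products  = mass({ C: 2, H: 6 }) + mass({ H: 2, S: 1 });  // C2H6 + H2S
```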

In thinking about this I was less concerned about the details than about how I would go about simulating such a process. The simulation I would create is greatly dependent on the purpose for which it is to be used.

If I did not know what was going to happen when I mixed certain materials together in the presence of a certain amount of energy and at a certain pressure I would create an extremely low-level simulation that modeled the behavior of every atom. I know this is done with drug reactions of various types and my understanding is that they are often hard to get right in novel situations. Such models also require a lot of computing resources. If the chemical reactions to be modeled involve a catalyst then there are just that many more factors of chemistry and geometry that have to be worked out.

If, however, the purpose of the simulation is to train operators, to size plants, or to experiment with different operating concepts (batching, controls and instrumentation, heat recovery, safety, etc.) I would create a higher-level simulation that modeled the process in a more abstract way.

At some level, assuming that the conditions were right with respect to factors like feed chemistry, temperature, pressure, catalyst area, and the presence and mixing of sufficient reactant materials on a mass or molar basis, I would assume that the reaction works and generates or absorbs heat as designed. Within a reasonable range of operating conditions I would be able to correctly model reactions, heat transfers, flows, pressures, temperatures, and end products. I would be able to create a simulation that could be initialized to a steady running state, be closed down and purged, and be restarted and returned to the original running state. Alternatively I might start at the shut-down state, ramp up to the operating state, and then shut down again. In either case I would have to know the efficiency of any planned reactions at given conditions of temperature, pressure, and catalyst. I would have to know if different reactions happened that also needed to be modeled.

Considering the reaction described above, if different reactions happened at, say, a different temperature, then I would have to make sure those reactions were modeled in place of the ones I’d hoped for. The point is that in a macro-scale simulator I wouldn’t be modeling the reactions of the molecules from first principles, but would instead model conversions, reactions, and state changes as a function of conditions. This can be tricky for a number of reasons.
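In code, such a condition-driven model can be as blunt as a curve fit. Everything below — the functional form, the midpoint temperature, the slope — is a placeholder invented for illustration, not real hydrodesulfurization data:

```javascript
// Macro-scale reaction model: instead of molecular first principles,
// the fraction of feed sulfur converted is a fitted curve over
// operating conditions, scaled by catalyst condition.
function conversionFraction(tempC, catalystEffectiveness) {
  if (catalystEffectiveness <= 0) return 0;   // fouled or consumed catalyst
  // Hypothetical S-curve in temperature (placeholder midpoint and slope)
  const x = (tempC - 320) / 25;
  const sigmoid = 1 / (1 + Math.exp(-x));
  return Math.min(1, sigmoid * catalystEffectiveness);
}
```

A real simulator would fit such curves (or lookup tables) to plant or vendor data and add dependencies on pressure, flow, and feed composition, but the structure — conversion as a function of conditions — is the same.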

This kind of simulation might be limited in its ability to handle widely varying feed compositions. You could not, for example, feed ice cream or cornmeal into such a process and expect it to do anything meaningful. It can only model what the simulation allows for. The makeup of the initial petrochemical feed (for the whole refinery and not just the hydrodesulfurization process) would have to be completely defined for every component. The thermodynamic properties of all components would have to be known and specified for all applicable ranges of temperature, pressure, and so on, so the model would correctly represent state changes and the various separation, pumping, heat transfer, and other processes. (This turns out to be somewhat difficult; resources like this would clearly be helpful, but such data are rarely available in as complete or granular a form as has been derived for water and steam.)

Such a model could handle reasonable variations in feedstock and still work as expected. It could be made to handle changes in the efficiency of the catalytic reaction by changing some characteristic of the catalyst (expressed as an area or percentage), under user control. If the catalyst somehow becomes consumed or fouled the reaction would proceed less completely or not at all.
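The catalyst bookkeeping can be as simple as an effective-area factor that the user sets directly or that decays with throughput, scaling the modeled reaction completeness. A minimal sketch, with an entirely illustrative fouling rate:

```python
# Hypothetical sketch of catalyst degradation: an effective-area factor
# (1.0 = fresh, 0.0 = fully fouled) that decays with accumulated feed
# mass. The fouling constant is invented for illustration.

def degrade_catalyst(catalyst_factor, feed_kg, fouling_per_kg=1e-6):
    """Reduce the effective catalyst factor as feed mass accumulates."""
    return max(0.0, catalyst_factor - fouling_per_kg * feed_kg)

factor = 1.0
for _ in range(10):                 # ten steps of 10,000 kg each
    factor = degrade_catalyst(factor, 10_000.0)
# factor has decayed from 1.0 toward 0.9
```

Feeding that factor into the conversion calculation makes the reaction proceed less completely as the catalyst fouls, or not at all once the factor reaches zero.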

If the simulator was intended to support training then the trainees could learn not only normal process operations and controls, but also what happens during abnormal situations. Those can include things like leaks, instrument and other equipment failures, loss of utilities, and so on. In those cases it may be less important to represent the events exactly than to get them right in character, so the trainees learn what to look for and how to identify cause and effect.
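One common way to get that "right in character" behavior is to let an instructor inject faults that perturb the simulated signals plausibly — a leak bleeds off flow, a stuck sensor freezes its reading — without reproducing the physics of the failure exactly. This is a hypothetical sketch; the fault names and magnitudes are invented:

```python
# Hypothetical sketch of instructor-injected malfunctions for a training
# simulator: each active fault perturbs the plant state "in character".
# Fault names, state keys, and magnitudes are illustrative assumptions.

def apply_malfunctions(state, active_faults):
    """Return a copy of the plant state with active faults applied."""
    s = dict(state)
    if "feed_leak" in active_faults:
        s["feed_flow_kg_s"] *= 0.85          # leak bleeds off some flow
    if "stuck_temp_sensor" in active_faults:
        # sensor freezes at its last good reading
        s["indicated_temp_c"] = s.get("last_good_temp_c",
                                      s["indicated_temp_c"])
    if "loss_of_cooling" in active_faults:
        s["indicated_temp_c"] += 5.0         # temperature drifts upward
    return s

state = {"feed_flow_kg_s": 40.0, "indicated_temp_c": 320.0,
         "last_good_temp_c": 318.0}
faulted = apply_malfunctions(state, {"feed_leak", "stuck_temp_sensor"})
```

The trainee sees only the perturbed indications, and has to work backward from symptoms to cause, which is the training point.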

The simulator could be used to test novel operating concepts as discussed. It would be able to handle a range of variation in the feed material. It could be used for operations research by performing parametric runs to test the effects of different changes in a systematic way.
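A parametric run of that sort is just a systematic sweep over the model's inputs, recording an outcome for each combination so the effects can be compared. A minimal sketch, using a stand-in process model (all functions and numbers here are illustrative, not from any real study):

```python
# Hypothetical sketch of an operations-research parametric sweep: cross
# every temperature with every catalyst condition and record the result.
# treated() is a stand-in for whatever per-step model a simulator exposes.

def treated(temp_c, catalyst_factor, sulfur_in_ppm=500.0):
    """Stand-in process model: returns treated sulfur in ppm."""
    ramp = max(0.0, min((temp_c - 280.0) / 60.0, 1.0))
    return sulfur_in_ppm * (1.0 - 0.95 * ramp * catalyst_factor)

def parametric_runs(temps, catalyst_factors):
    """Run the model over the full grid of input combinations."""
    results = []
    for t in temps:
        for c in catalyst_factors:
            results.append((t, c, treated(t, c)))
    return results

runs = parametric_runs([280.0, 310.0, 340.0], [1.0, 0.5])
best = min(runs, key=lambda r: r[2])   # lowest residual sulfur
```

The same loop structure scales to any number of swept parameters, which is what makes a validated simulator so useful for testing "what if" questions cheaply.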

As I’ve been reading about the different aspects of refinery operations I’ve been able to relate most of the processes to elements I simulated when I worked on nuclear power plant simulators. I got to know about the steam cycle on a very deep level and also how to deal with noncondensable gases, catalytic recombiners, phase changes, different kinds of absorption and release processes, and different kinds of filtration and separation processes as well. I look forward to continuing research on the subject.

Posted in Simulation

Zoot Suits and Not-So-Bad-Dancing

I started swing dancing in 2001, during the tail end of the craze that started around the time of the 1998 Gap commercial (famous among other things for popularizing the 3D still-pan effect also prominent in the seminal sci-fi film The Matrix, though the technique had been invented as early as 1993).

Swing dancing had been in a process of resurrection as far back as the 1980s, when handfuls of modern enthusiasts succeeded in locating and studying with some of the famous original dancers. In those days they danced to period music, but the revival took over bars and dance halls in the late 90s to the sounds of neo-swing acts like Brian Setzer and Big Bad Voodoo Daddy. Dancers wore zoot suits and other period clothing that could sometimes be over the top. The craze was settling back underground by 2003, and the people who were serious about it had grown to appreciate the best of the original big band and jazz music. Gone too were the zoot suits and bowling shirts, in favor of more tasteful vintage dress.

One of the venues here in DC, however, decided to have a little fun with the era by holding a neo-swing night late in 2003 or 2004. The point was to wear the most over-the-top outfits and dance to the most annoying possible music. They even had a dance contest whose goal was to bust out the worst, oversized, gawky moves anyone could remember. People had fun with it and posted the next day that most of the competitors had failed to look as horrible as they’d hoped. They’d learned too much over the years and couldn’t reproduce what it was like to dance like raw beginners. Said one observer, “Your muscle memory gave you away!”


Me with legendary Lindy Hopper Frankie Manning

Some of the 90s-00s dancers got deeply systematic in their study while many of the 20s-50s dancers did far less of that, even if they’d managed to work professionally for two decades or more. Many of the original dancers had trouble even explaining what they did. They could tell what was right or wrong but they couldn’t always put it into words that the younger generation could understand. The younger dancers, being young, had a certain amount of trouble understanding anyone. Over time, however, they learned some of each other’s language. The younger dancers learned the subtlety, feel, cooperation, and responsibility of the dance while the older dancers learned more about how to break things down systematically so they could teach and share more effectively.

Why do I bring this up? Because some days seem like…

Posted in Software