How My Core Competencies Map To Working With Customer Systems

I created the figure below while I was working out the requirements for a simulation tool that would support analysis of this kind of system.

While this diagram represents a fairly specific example of the set of things I’ve been asked to analyze, model, automate, and improve over the years, if looked at more generally it can represent any type of system. I’ve made the same point here before, but in this case I want to do it with respect to specific sets of core competencies as follows. These apply to systems that can be described by similar diagrams in nuclear power plants, paper mills, building controls, inspection and security processes, enterprise business activities, and even in the software processes themselves.

Process Analyst, Business Analyst, Systems Analyst: You have to understand a system before you can analyze it, so I’ve become very adept at analyzing and understanding systems quickly. Most of the items on this list relate to process, business, or systems analysis. Part of the analysis has to do with identifying building blocks that can be defined, manipulated, and recombined in an approachable, modular, flexible way, and some of the analysis has to do with gaining specific subject matter expertise (e.g., pulp and paper, metals, controls, thermodynamics, enterprise document and information management, security operations, airport and border operations, deployment and maintenance operations, and so on).

Simulation Engineer: I’ve worked in both continuous and discrete-event simulation for most of my career, and the simulations I’ve created, contributed to, configured, and operated have been used for training, analysis, and control. I’ve written and worked with simulations and tools written in a variety of high-level languages so I understand the architecture of each method down to its core. Moreover, you have to understand a system at a deep level before you can simulate it, so this activity has always gone hand-in-hand with the analysis activity described above.

Discovery Analyst: Analyzing and simulating systems requires that you get to know what makes them up, what their scope and scale are, and what needs to be included and what doesn’t. I’ve used a variety of methods to get to know processes, from guided walk-throughs with customers and interviews with SMEs, to research through printed and online documentation, to review of engineering data and drawings. I’ve created documentation for all findings and used the information discovered to determine what parameters and decision criteria characterize and control each system.

Data Collector: Once the parameters and control and decision variables are identified for a system, the data that define them have to be collected. I’ve collected data using different methods including direct observation with timers and checksheets, video observation that was later broken down using the same type of checksheets, direct instrumentation readings, on-site measurements, electronic data capture of historical data, and calculations based on detailed research as described above.

Data Analyst: Identifying and collecting data are only the beginning of the process; the data also has to be analyzed. Both input and output data need to be rationalized, corroborated, and contextualized. You can’t trust whatever gets spit out of your discovery and collection processes; you have to read between the lines to see if it makes sense. You have to ask questions of the SMEs who know the system. You have to know not only how the data describes the system under analysis but also how it will be used by the system you are creating. You also have to ensure you understand the true provenance of all data and what transformations may have been performed on it. If data is known to be missing or incomplete, you have to find valid ways of filling in the holes from context, identifying additional sources of information including valid analogs from similar systems, or making educated guesses. I’ve done all of this and used an array of tools to do it.

Software Engineer, Scrum Developer: I’ve worked with software since I was a sophomore in college. I didn’t just learn how to do x, y, and z but, more importantly, why to do it. I learned in an excellent environment (at Carnegie Mellon University) and have continually studied developments in the field since. What drove my need to keep up most were the requirements of the many kinds of systems I’ve had to develop, modify, and interface with. I had to learn a number of languages with different structures that addressed different challenges, different paradigms, different hardware and software architectures, different operating systems and development tools, and numerous methods of communication.

Operations Research Analyst: Operations research is the study of changes in system behavior when modifications are applied. This includes trade space analysis, analysis of alternatives, sensitivity analysis, cost-benefit analysis, and various kinds of optimization. Most of my experience in this area is based on my long work in simulation, though I’ve picked up a certain understanding of statistical analysis and an appreciation of many other methods from the field.

Process Improvement Specialist: Everything we do as engineers, analysts, and managers is a form of process improvement in a sense. We’re always trying to do things better, more quickly, and with less input. This means improving outputs by preventing errors and minimizing variations (the focus of Six-Sigma analysis) and by improving efficiency (the focus of Lean analysis). I identify pathways to improvement by reorganizing existing processes and by making individual processes more efficient through various means. I identify constraints and figure out how to reduce their impact. This area of endeavor intersects with all the others.

Control Systems Engineer: Here I am specifically referring to my work with industrial control systems, written in high-level software languages, that are used to control the operation of physical processes in the real world. I have not worked with PLCs or industrial HMI packages like Wonderware, but I have worked with systems that interface to them and to sensing, output, and actuation hardware, directly.

Test Engineer: It has never been my full-time job to design and administer tests of large-scale software systems, but I have used a wide variety of methods to test, verify, validate, and gain acceptance for the many software systems I’ve created. I also participated in an eighteen-month project to perform a full VV&A on a third-party system meant to manage the entire fleet of F-18 series aircraft for the U.S. Navy and Marine Corps. The methodology that guided our efforts was developed by a senior Navy analyst. I’ve been exposed to a number of current tools and methods, some of which recreate methods I’ve previously employed in different ways on my own.

System Architect: I’ve designed, specified, implemented, and modified a wide variety of computing solutions for a number of applications on a range of hardware and operating systems in an array of environments of varying size and scope. I have performed all kinds of testing, documentation, management, and negotiations with respect to those systems. Some systems were composed of single, stand-alone processes hosted on individual desktops while others involved multiple processes and programs as well as communications with other processes and external systems. The systems have been used for analysis, control, training, and automation. Some were meant to operate in real time while others ran on a fire-and-forget basis. Some supported interactions with users while others ran to completion once started.

Enterprise Architect: The line between system and enterprise architecture is the subject of a great deal of discussion. Systems at all levels of complexity, if conceived and implemented well, should consider flexibility, recoverability, robustness, ease-of-use, and similar factors. Is it then just a question of scale? Is it a question of adaptability to change? Is it a reflection of the type of users or results? Is it a question of the tools used? Does the use of SAP, Appian, or IBM FileNet make something an enterprise system? Does it have to do with business function? Is HR or marketing more “enterprise” than manufacturing or building control? I think it’s hard to say. If the experts can’t agree then it’s clearly a difficult problem. I can say I’ve done elements of work at every scale, including enterprise transitions when I did the discovery, analysis, and implementation for FileNet document imaging systems. I would also, as I note, be very comfortable working at the plant-wide scale discussed here.

Project Manager: Having worked with so many different kinds of systems, architectures, customers, solutions, sizes of organizations, and types of colleagues I have an excellent feel for putting all the pieces together. I have served as a researcher, developer, project coordinator, site liaison, technical lead, technical manager, and project manager in many different situations. I have supported sales, security, discovery, design, implementation, test, acceptance, commissioning, service and support, and end-of-life and replacement efforts and so have seen every phase of project and product life cycles from beginning to end. Most importantly, I’ve seen things done well and seen things done poorly during my career, and I’d like to think I have the organizational skill to take a step back to ensure I consider all facets of this experience and the insight to apply it to offer the greatest possible support to all colleagues and customers.

Program Manager: A program is nothing more than a portfolio of projects, so that is largely a question of scale. I have served as a program manager with duties that included contract management, working with subcontractors (and prime contractors in other situations), multiple sites, multiple customers, invoicing, reporting, and planning. The important thing to consider when managing programs is that one has to manage one’s own limited bandwidth. I learned from studying the National Emergency Management System that when the number of reporting (departmental or functional) entities gets beyond four for any one person, an additional layer of interface should be added. That conflicts with today’s trend toward flatter, more cooperative organizations in general, but the insight remains that individuals have to consider and coordinate at an appropriate level. Program managers (and their analogs) have to know how to gain insight into the operational effectiveness of their reports, but they also have to know how to offer guidance and correction without getting into too much detail.

Scrum Product Owner: My most natural position is working with customers to identify their needs and with developers to meet those needs (while also supporting the developers’ needs). I’ve done this through my career almost from the beginning. I’ve worked in a consciously agile, iterative fashion that seeks lots of feedback and allows appropriate correction, but have not spent time working in a formal Scrum setting. I leveraged my experience to earn my Scrum certifications as a way to get up to speed on the details as quickly as possible.

ScrumMaster: I have taught people to develop new systems in new languages for new applications in new settings. I’ve worked in development efforts at various scales and with differing needs for tools and mechanisms to manage complexity and cooperation. I’ve participated in different methods of software, hardware, and process governance. I am not looking to serve as a ScrumMaster without direct experience in the role and with Scrum in general, but I appreciate what they do and can work with and support such individuals with aplomb.


Are We Entering a New Era of Computing?

I wrote last Wednesday’s piece to provide some beginning context showing how developers should be aware of the underlying activities in their target environments. I plan to do a couple of posts discussing how much duty cycle (CPU activity) is taken up by the browser, OS, and communication processes when examining all that might go on in a client-side application. Web pages with simple applications are one thing (see the basic, pure JavaScript, graphics, and DOM manipulation examples I’ve worked with up to now), but adding in numerous frameworks (jQuery, AngularJS, etc.) starts to add overhead of its own. You can write a really small web app that uses a pile of frameworks and mostly you’ll just pay for it in download time (at least initially), but you can also write a pure JavaScript app that takes everything the browser, OS, and CPU have and asks for more.

When I first started writing Level 2, model-predictive, supervisory control systems for steel furnaces the combination of applying super-fast numerical techniques with faster processors showed me that I could implement some amazing real-time control solutions that hadn’t been possible previously. That said, depending on the geometries involved, I could set the system up to consume about fifty percent of total CPU resources (memory usage was essentially fixed). Most of the duty cycle was devoted to calculating matrix solutions, so I plan to write a few tests that do the same thing to get a feel for just how much power I have available in the browsers of different platforms. I’m thinking PC, Mac, iPhone, Android, and possibly a Raspberry Pi-class device. Who knows, maybe I’ll see if I can get my old Palm Pre to work. The matrix exercise is also meant to illustrate the automatic code generation process I’ve discussed.

My point in thinking about these issues is not only to see where things are going in terms of developer mindfulness of resource consumption as the computing landscape changes, but where things might be going longer term. This article suggests we are just approaching a major turning point, though one more interesting than what I discussed last week.

I learned computing on the big DEC time sharing systems at Carnegie Mellon but also played around with little machines running BASIC. (I once did a homework assignment for my Numerical Methods class on a Radio Shack MC-10 Micro Color Computer that I picked up for all of fifty bucks. I printed the code and results out using a 4-inch-wide TP-10 thermal printer. I still have both items, as well as a 16K RAM expansion module, and as of a few years ago it all still worked. The biggest problem now is finding an analog TV to hook it to…) The more fortuitous development was working with the first generation of IBM PCs starting in my junior year, which was essentially at the dawn of the PC era. I successfully rode that wave through the early part of my career, working initially with desktop tools and moving on to networked, enterprise, and client-server systems. I used the web heavily but didn’t consciously think about developing for it until very recently, and even then my goal was less to learn everything about web tools and techniques than to learn about the processes and parameters.

I love developing software of all kinds but there are two things I love even more. One is working with software systems that control physical devices in the real world, or at least model, automate, or control complex procedural activities. The other is learning about new systems and processes so they can be characterized, parameterized, reorganized, and streamlined to solve a customer’s problem more efficiently. I can write the solution, I can manage the solution, but I really, really like to do the analysis to find out what solution is needed and iteratively work with the customer to continually review and improve the solution to meet their needs.

This kind of thing can happen in a couple of different contexts. I’ve developed systems through most of my career on a project/contract basis. Get one assignment, figure out the requirements, design it, build it, install it, get it accepted, go on to the next project. Even if I was doing the same type of project over and over again and could improve things incrementally each time the efforts were still treated as individual projects for specific customers. As such the interaction and feedback was always in a tight loop with one or just a few people. Later in my career I began to work on more of a continuing/program basis. In those cases there was a software tool or framework that was continually expanded, modified, and adapted to whatever type of analysis was called for. Those engagements might have been longer term but they were still in service of individual customers. A different way to approach things is to release software to an open market where there are many customers (for an OS, productivity tool, web service, or the like). In these cases the feedback is not from a defined set of individuals but from a diffuse set of users and communities with potentially varied needs and expectations. I worked at one company supporting that kind of customer base.

In truth the taxonomy is not so clean as I’ve described, and the situations I’ve worked in have been so varied that there were elements of each category in many of them. Some products/projects/programs have many layers of customers. For example, an organization that develops a large, business automation framework might have as “customers” the solution developers within the company, external solution providers, the end customers to whom the solutions are provided, and the customers those customers are leveraging the solution to serve.

The linked article suggests that there was a desktop computer wave and a web/mobile wave, each of which ran fifteen years-ish. The network-to-web-based wave kind of straddles those two waves. Some people have observed that the earliest computers were one-off affairs meant to solve single problems in sequence. Then they moved to completing multiple batch jobs in sequence, and then time-sharing systems began to support numerous users. Then came the PC era, which returned the focus to the desktop, after which followed the networked era which allowed collaboration. The web-and-mobile era represents a cyclic return to the era of centralized time-sharing systems, but on a much grander scale and with a concomitant requirement for simplicity from the user’s point of view.

The next wave appears to be the internet of things, which has also been called ubiquitous computing. The early devices tend to be small and standalone as people figure out what to do with them, then they get connected to each other, and then those individual devices and networks will probably coordinate with centralized compute and aggregation nodes (think Waze). The cycle will continue.

The goal, then, is to figure out how to position oneself for these future developments. The goal will be to identify applications that provide value to people; methods of parallelization, communication, and coordination that will allow this swarm of devices to provide that value in a meaningful way; methods of security that will support individual privacy and dignity; and methods of development and deployment that will help developers provide the solutions while optimizing on the right variables. It is entirely possible to find a niche developing control, analytical, communication, interface, or enterprise systems, but it’s always good to keep the wider picture in mind. Ideas inspired or updated by developments in new areas can inform everyone’s work.

It’s going to be an interesting time.


System and Enterprise Architecture

The figure below shows a notional representation of an integrated, plant-wide control system for a type of modern strip mill. I installed one- and two-line tunnel furnace Level 2 control systems in several plants just like this. I’ve highlighted the furnace Level 2 system near the center with a red border and bolded text.

The interior architecture of the Level 2 system is shown next.

I am entirely comfortable designing, implementing, and managing system architectures of this type. To the degree that the plant-wide system components up to Level 3 approach something like enterprise architecture, I am entirely comfortable working at that level as well.

When it comes to the Level 4 sections, the pure enterprise sections, I have done some types of analysis, design, and implementation while there are other parts I haven’t done. I have definitely traced data sources and transformations through such systems for review, rationalization, requirements analysis, design, and so on. I’ve also looked at a wide range of complex business processes to see how they can be automated, characterized, parameterized, and controlled. I have designed data schemas for large, business process reengineering projects. I have not worked on the design of large scale data centers, security issues, large automated build processes, and some other aspects of enterprise computing. I have not trained in ITIL, BSM, ITSM, or the like, although I have certainly designed software systems with an eye toward reliability, fault tolerance, and recoverability. I’ve also worked with systems that have defined performance and uptime guarantees.

I have definitely worked more with the logic and flow of information in larger systems while being less concerned about every facet of the implementation. I can pick up whatever parameters and constraints describe and bind such systems quickly enough to work meaningfully with specialists in those areas. I have done this kind of analysis and design work across a wide variety of different systems, enough to know what is unique about each one, enough to know what is the same, and enough to see what can be made as modular, repeatable, and flexible as possible.

I would say that I have done a lot of work in the area of system architecture but only partial work in the area of enterprise architecture.


The End of Moore’s Law, Code Bloat, and the True Nature of Efficiency

It has been suggested that the improvements in semiconductor and processor technology are approaching their natural limits in terms of how far the size of each feature can be reduced. Since quantum computing is not ready for widespread use, and since parallel computing processes are similarly not well utilized by the mainstream, it seems that the pace of developing faster hardware is due for a slowdown at the very least. To this point the improvements in hardware seem to have greatly outstripped those in software. Software has become more capable in terms of scope and scale, but at the cost of efficiency. Larger and more complex software systems have taken advantage of ever more rapid improvements in hardware, but if those improvements in hardware predicted by Moore’s “Law” slow down or pause, will the process of developing most software run into limits of its own?

There are many aspects to this question. First, there have always been applications that will push any computing hardware to its limits. Simulations are a prime example of this. Some simulations can always be made more granular in terms of time, space, or details considered, and so can always be reconfigured to soak up all available computing resources and then beg for more. The interesting thing about simulations is that, while they tend to do a lot of things a lot of times, they tend to do the same things over and over. A good simulation is very modular and uses a small set of building blocks. A good finite element code doesn’t have to be particularly big or complex; it just has to be able to repeat itself quickly. Conversely, there are plenty of smaller applications and processes that will run just fine on almost any hardware. It isn’t optimal that people have twenty or more programs running on their smartphones, but it can be done. Heck, there are probably more processes than that running in the background of most modern operating systems as it is. A lot more.

Are there applications we haven’t thought of yet that will put a new twist on this question? Are there problems in computing that we haven’t been able to tackle so far because they’re either too big or because we simply haven’t thought of them yet? Improvements in computing speed, communications, and storage have continually made new things possible that weren’t before. Think video, voice, virtual environments, sensor and control applications, cloud computing, and so on. Certainly there are tons of problems in bioinformatics and theoretical physics that would benefit from more computing power, but practical problems in business would clearly benefit as well.

There are whole classes of problems in business that fall somewhere in between a simulation of drug interactions with cellular mechanisms and the calendar app on your iPhone, to be sure. Specialized work will always be done on the high-end applications, while developers of low-end applications are less concerned with code size and speed, since almost anything that works will yield acceptable performance in most situations, and more concerned with the speed and reliability of the development process itself. Enterprise business systems themselves can get very large. They can generate specific optimization problems that are as voluminous and complex as anything a genetic researcher or theoretical physicist can dream up, but their biggest problems often have to do simply with scale. They have a lot of data and it has to be processed by a lot of machines working together. That requires a lot of coordination. It can be difficult to search through a system to find all the statuses and activities having to do with an individual transaction, which itself can spawn numerous sub-entities which all have their own processes and states. Completing a system build to perform a test of one incremental feature can take many hours, so a lot of work has to be done up front to maximize the chance of getting it right on the first try.

During the run-up to Y2K a lot of people asked why developers couldn’t have treated years as four-digit numbers from the start. A lot of money was being spent to repair or replace a bunch of systems and code when it seemed like a modicum of forethought could have obviated the need for much of it. One writer observed, however, that the decision to streamline the storage devoted to date information wasn’t just due to lack of insight into the potential problems the designs would cause, or based on the assumption that many systems wouldn’t be around long enough for it to matter. Instead, the writer calculated the cost of actually storing the information in its entirety, given the systems available at the time, the volume of data stored, and the time value of money, as opposed to storing the streamlined versions. The calculation yielded a very, very large number. Now, one can argue whether the number is exactly correct, but the size of the figure was pretty eye-opening, and it would in any case be larger than the cost of the remediation actions taken in the late 90s.
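To make the shape of that argument concrete, here is a back-of-the-envelope version in C++ with purely hypothetical inputs; the record count, storage price, discount rate, and time span are placeholders I chose for illustration, not figures from that analysis.

    // Purely illustrative arithmetic with made-up inputs; only the shape of the
    // argument matters: bytes saved then, priced at then-current storage costs,
    // compounded forward to the late 1990s.
    #include <cmath>
    #include <iostream>

    int main()
    {
        const double dateFieldsStored  = 5.0e9;    // date fields across an industry (assumed)
        const double bytesSavedPerDate = 2.0;      // "YY" instead of "YYYY"
        const double dollarsPerMB1970s = 5000.0;   // rough cost of a megabyte back then (assumed)
        const double annualRate        = 0.07;     // time value of money (assumed)
        const double years             = 28.0;     // roughly 1970 to 1998

        const double megabytesSaved = dateFieldsStored * bytesSavedPerDate / 1.0e6;
        const double savingsThen    = megabytesSaved * dollarsPerMB1970s;
        const double savingsByY2K   = savingsThen * std::pow(1.0 + annualRate, years);

        std::cout << "storage avoided: " << megabytesSaved << " MB\n"
                  << "value at the time: $" << savingsThen << "\n"
                  << "compounded to the late 1990s: $" << savingsByY2K << "\n";
        return 0;
    }

Whatever numbers get plugged in, the compounding acts on savings realized when storage was spectacularly expensive, which is what makes the comparison so lopsided.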

The point is that there a lot of ways to measure efficiency and a lot of ways to economize. There are a lot of variables to consider. The iron triangle considers cost, time, and scope, and people make trade-offs among these considerations all the time. Another set of constraints is similar. People often joke, “You can have it fast, cheap, or good. Pick two.” A corollary would be, “You can have it really fast, really cheap, or really good. Pick one.” Trade-offs are made not only across variables but across degrees of optimization within any individual variable. While cost isn’t always the overriding consideration, in the long run it always is. The question is how to optimize on it.

When I hear people complain that developers aren’t properly mindful of saving every last clock cycle when they code I have to take it with a grain of salt. Yes, it’s always a good idea to use efficient algorithms and be mindful of the resources you’re consuming and yes, it’s a good idea to know what’s going on under the hood so you have a better idea of what trade-offs you’re actually making and yes, both Pareto’s and Sturgeon’s Laws apply to software development as well as they do to many other things, but in the end things get optimized the way they need to be when they need to be. If the development of software, which has undergone plenty of changes and cycles itself over the decades, runs hard into limitations imposed by a slowdown in the development of hardware, then people will do what they need to to figure it out. They probably won’t do it much before then.

If you see the possibility to optimize something before everyone else finds the absolute need to do so, and if you can do it cost-effectively, then that’s an opportunity for you, right?


Automatically Generating Code

A programmer’s efforts to create code can be greatly enhanced by automating as many operations as possible. This process reaches its zenith when the code itself can be generated automatically. I’ve done this on a handful of occasions in both direct and indirect ways.

Curve-Fitting Tool: I’ve discussed my work with curve fitting over the course of several recent posts. The goal was always to generate functions, or at least sections of code, that could be dropped into a larger project. All of the segment functions for the properties of saturated steam and liquid water were generated by one or another of the tools I wrote. I hope to be able to expand those to generate segments and functions in a wide variety of languages going forward. I’ve also identified a number of improvements I can make to the generated code, documentation, and process of verification.
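For illustration, the generated output has roughly this shape: one function per fitted segment plus a small dispatcher that picks the right one. The coefficients, names, and validity ranges below are placeholders rather than actual output from the tool.

    // Hypothetical generated segment functions for one saturated-steam property.
    // Real coefficients and validity ranges come from the fitting tool.
    #include <iostream>
    #include <stdexcept>

    static double propSeg1(double T)        // assumed valid for 300.0 <= T < 400.0
    {
        const double a0 = 1.0, a1 = 2.0e-3, a2 = -4.0e-6;   // placeholder coefficients
        return a0 + T * (a1 + T * a2);                      // Horner form for speed
    }

    static double propSeg2(double T)        // assumed valid for 400.0 <= T <= 600.0
    {
        const double a0 = 1.2, a1 = 1.8e-3, a2 = -3.5e-6;
        return a0 + T * (a1 + T * a2);
    }

    // Generated dispatcher: select the segment that covers the requested value.
    double steamProperty(double T)
    {
        if (T >= 300.0 && T < 400.0)  return propSeg1(T);
        if (T >= 400.0 && T <= 600.0) return propSeg2(T);
        throw std::out_of_range("steamProperty: T outside fitted range");
    }

    int main()
    {
        std::cout << steamProperty(350.0) << "\n";
    }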

Automatic Matrix Solution Code Generator: While working at Bricmont I ended up doing a lot of things by hand over and over. The control systems I built all did the same things in principle but the details of each application were just different enough that generalizing and automating their creation did not seem to make sense. If I had stayed there longer I might have changed my mind. I did leverage previous work by adapting it to new situations instead of building each system from scratch. That way I was at least able to identify and implement improvements during each project.

There was one exception, however. I was able to automate the generation of the matrix solution code for each project. In general the block of code was always the same; there were only variances in the number and configuration of nodes, and those could be handled by parameters. That said, the matrix calculations probably chewed up 80% of the CPU time required on some systems, so streamlining those bits of code represented the greatest possible opportunity to improve the system’s efficiency. To that end I employed extreme loop unrolling: writing out all of the explicit calculations carried out by the tightly looped matrix solution code with the array indices expressed as constants. In that way you get rid of all calculations having to do with incrementing loop counters and calculating indirect addresses. The method saved around 30% of execution time in this application, but at the cost of requiring many, many more lines of code. The solution to a 147×8 symmetric banded matrix expanded to about 12,000 lines. The host system was way more constrained by calculation speed than it was by any kind of memory, so this was a good trade-off.

The code was generated automatically (in less than a second) by inserting a write statement after each line of code in the matrix calculation that performed any kind of multiplication or summation. The purpose of each inserted line was to write out the operation carried out in the line above in the desired target language (C++, FORTRAN, or Pascal/Delphi at that time), with any array indices written out as constants. Run the matrix code once and it writes out the loop-unrolled code in the language of choice.
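A minimal sketch of the idea in C++ follows; the real tool handled full banded elimination and several target languages, and the dimensions and names here are made up. The loop both performs each arithmetic operation and emits the equivalent source line with the indices baked in as constants.

    // Run the looped calculation once and, alongside each arithmetic statement,
    // write out the same operation with the indices expressed as constants.
    #include <cstdio>

    int main()
    {
        const int n = 6, band = 3;              // toy dimensions; the real systems were much larger
        double a[n + 2][band + 2] = {};         // banded storage; the contents don't matter here
        double factor = 0.5;                    // stand-in for the real pivot factor

        FILE* out = std::fopen("unrolled.cpp", "w");
        if (!out) return 1;
        std::fprintf(out, "// generated: unrolled forward elimination\n");

        for (int i = 1; i <= n; ++i)
        {
            for (int j = 1; j <= band; ++j)
            {
                a[i][j] -= a[i - 1][j + 1] * factor;     // the real calculation...
                std::fprintf(out,                        // ...and the same operation, emitted
                             "a[%d][%d] -= a[%d][%d] * factor;\n",
                             i, j, i - 1, j + 1);
            }
        }
        std::fclose(out);
        return 0;
    }

The emitted file is what then gets compiled into the delivered system in place of the looped version.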

Once the code that ran the vast majority of the time was made as efficient as possible, there wasn’t much value to be realized in trying to wring significant performance gains out of the rest of the code. That being the case (and knowing I always abide by guidelines that make things generally efficient), I was able to concentrate on making the remainder of the code (you know, the other 38,000 lines of a 50,000 line project) as clear, modular, organized, understandable, and maintainable as possible.

I’ll include this one as an honorable mention:

Automated Fluid Model Documentation and Coefficient Generator: The project management process used by Westinghouse was solid overall (it was even rated highly in a formal audit conducted by consultants from HP while I was there) but that doesn’t mean there weren’t problems. One monkey wrench in the system caused me to have to rewrite a particularly long document several times. After about the third time I wrote a program that allowed me to enter information about all of the components of the system to be modeled, and the system then generated text with appropriate sections, equations, variable definitions, introductory blurbs, and so on. The system also calculated the values of all of the constant coefficients that were to be used in the model (in the equations defined) and formatted them in tables where appropriate. I briefly toyed with extending the system to automate the generation of model code, but the contract ended before I got very far.

I’ve written other tools and modular systems but the system descriptions they generated were more like parametric descriptions than native code. The main system runs based on the parameters but doesn’t spit out standalone code. I’m sure there’s a good philosophical discussion of the nature of the demarcation in there somewhere.

I’ve also been involved with systems that can write code which can then be incorporated into the running system on the fly. This requires the ability to generate code that knows what variables and connections are available in the main system, can compile or interpret the resulting code or script, and can integrate the results into the main system. My original concept, intended for use in the fluid modeling tool, was for the program to be able to write out the bulk of the code for calculations of pressure, energy, flow, accumulation, concentration, transport, state changes, instrumentation, user interaction, and so on. The system would then have to be able to let the user generate additional sections of code to handle special situations in a model that can’t be covered using the standard methods. At the time I was doing this I planned to have the system write the entire standard and custom code at one time in an integrated way, and then compile and run it by hand.

Today I would try to automate the process even further. I succeeded in doing this in a small way when I wrote my system to simulate operations in medical offices. It was able to generate the required parametric input files, kick off the execution process (written in SLX, it reads input files and generates several output files), generate results, and allow the user to review the results and spawn the separate output animation process (a Proof process that reads the animation file generated by the SLX simulation run), all from within the main program and user interface (written in C++).
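The orchestration pattern itself is simple. The sketch below uses hypothetical program and file names (sim_engine, animation_viewer, and so on) rather than the actual SLX and Proof invocations, but the sequence is the same: write the parametric input, run the engine, then spawn the viewer on its output.

    // Pattern sketch only: the program and file names are placeholders.
    #include <cstdlib>
    #include <fstream>
    #include <iostream>

    int main()
    {
        // 1. Write the parametric input file the external engine expects.
        std::ofstream params("run_params.txt");
        params << "rooms 12\npatients_per_hour 9\nshift_length 8\n";
        params.close();

        // 2. Kick off the simulation engine and wait for it to finish.
        if (std::system("sim_engine run_params.txt") != 0) {
            std::cerr << "simulation run failed\n";
            return 1;
        }

        // 3. Spawn the animation viewer on the trace file the engine wrote.
        std::system("animation_viewer run_trace.atf");
        return 0;
    }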

More recently I worked with a development team that created a system to calculate staffing requirements based on a range of inspections of different types of traffic through the various land, air, and sea ports of entry. The program read in all of the provided raw arrival data from a number of sources, but calculating the number of staff required meant combining those figures with information about the number of staff and the range of durations required for each inspection process. The former information came from a range of agency collection processes while the latter information was gathered from subject matter experts and experience. These bits of information were originally combined in an Excel spreadsheet. A spreadsheet is, in fact, a kind of computer language processor, but the calculations are expressed in a declarative, dataflow, cell-oriented paradigm.

The replacement system was written in C# and provided means of identifying individual data items by name and sometimes by the location (port of entry) they applied to. These items could then be retrieved, used in various mathematical and accumulation operations, and then stored as desired. The instructions for doing so were written out as user scripts in valid C#. While C# code can be more daunting to users than spreadsheet operations, the tool took great pains to hide as much of that complexity as possible from the user. The individual writing the scripts was able to get a lot done by following the patterns shown in a minimum number of examples, while an advanced user could accomplish almost anything. Once all of the scripts were written the tool could have them all processed so the calculations could be run within the tool. C#, Microsoft’s answer to Java, is, like Java, run by compiling source code to an intermediate form that is then executed by a runtime, and that compilation step could be invoked from within the running UI. Accomplishing the same thing in a purely compiled language would involve compiling added snippets of code and merging them in as a .DLL or something similar, and they would have to be embedded in a wrapper unit and function call with a known name. It wouldn’t be impossible; it would just be a bit more limited.


Methods of Inter-Process Communication I’ve Employed

A while ago someone asked me how I’d go about making two processes communicate. My answer was to figure out what data needed to be communicated, what communication type was to be used, what protocol was to be used, and how control was to be arranged. I then stated that I would document everything that was required in each of those areas, including test cases, and review said documentation with all relevant parties to ensure they agreed with what was documented (e.g., a customer might acknowledge that their requirements were properly understood) and understood what was documented (e.g., a development team might ask questions about anything that seemed unclear, ask for support in carrying out any part of the process they might not understand or have tools for, and possibly offer suggestions for better or different ways to do it).

I then began to give examples of the many different ways I’ve accomplished such tasks throughout my career. For some reason my explanation did not make headway with this individual. The individual was either looking for a specific example (e.g., the get coordinates from postal address function in the Google Earth API using JSON, as if that or something like it was the only acceptable answer) or simply wasn’t considering the question broadly enough.

To that end, and with the goal of providing additional descriptive material which can be linked from my online resume or home page, I offer the following details about the numerous ways I’ve solved this type of communication problem in the past.

Serial (RS-232, RS-485)

I wrote control systems for a maker of reheat furnaces for the metals industry from 1994 to 2000. I wrote supervisory control systems using model-predictive simulation in high-level languages (FORTRAN, C/C++, and Pascal/Delphi) for DEC (VAX and Alpha) and PC hardware. These systems employed a number of communication mechanisms to exchange information with other plant business and control systems. Our presence in the metals industry and our capabilities led to our being acquired by Inductotherm when our company’s founder, Francis Bricmont, decided to retire in 1996.

One of the tasks Inductotherm wanted us to take over explicitly was to write a new, PC-based version of the control software for their electrical induction melting furnaces. While that was being done they also needed someone to take over support of their existing induction furnace control product, called MeltMinder 100, which was a PC-based DOS program that used serial communications to interact with several types of devices. It was written in Microsoft Visual C++, and since I was the only guy who regularly wrote high-level language software the task fell to me. I helped another team of new hires design the replacement product, MeltMinder 200, which they (inexplicably) chose to implement in Microsoft Visual Basic, but I had to handle all the mod requests and troubleshooting for the 100 version for the four years I remained with the company.

The original design of Inductotherm’s hardware employed serial communications between all devices. The MeltMinder software had to talk to a number of different components to read data from sensors and write data to control various events.

  • Inductotherm VIP (Variable Induction Power) supplies: These devices provided finely controlled power to the induction furnaces.
  • Omega D1000/2000: These small devices in hexagonal packages (we colloquially referred to them as “hockey pucks”) each provided a combination of analog and digital inputs and outputs that could be used to communicate with a range of external devices including thermocouples, tilt meters, scales, actuators, alarms, and so on. One unit provided the required I/O for each furnace.
  • 2-line dot matrix displays: These devices showed one or two lines of up to about 32 characters of text, provided a low-level shop floor interface to the system, and were also used to show the weight of material in a furnace.
  • Spectrometer interface: Some systems incorporated spectrometer readings which were used to define the chemistry (in terms of the percentage of each element present) of the current material in the furnace. The MeltMinder software could then do a linear optimization to figure out what combination of materials to add (each of which had its own, known chemistry) to achieve the target melt chemistry with the minimum amount of additions by weight.

The reliance on serial communications imposed certain limitations and complexities on MeltMinder systems. DOS-based PCs could only support a limited number of serial ports, and this necessarily limited the number of furnaces that could be controlled. The Windows NT PCs could support more serial ports, and they ended up with serial connectors that were multi-headed monstrosities. As I review Inductotherm’s offerings now it seems that they’ve updated the MeltMinder software to a version 300, and I know they also divested themselves of Bricmont and took their control software development back in-house.

American Auto-Matrix makes products for the building control industry (primarily HVAC, but they also integrate access control and other systems). Their unit controllers, area controllers, and PC software communicate using a number of different protocols, with serial being one of the chief ones. This is effective for low-bandwidth communications over long distances with few wires. TCP/IP, BACnet MS/TP, ModBus, StatBus, and proprietary communications were also used.

I worked on several modifications of the PC-based driver software. It handled TCP/IP messages across Ethernet networks (to area controllers and other PCs) and could be connected directly to serial devices for configuration and monitoring. The communication driver software was implemented as a .DLL in Microsoft Visual C++.

Serial communications were carried out as a series of messages in two possible protocols, publicly defined and supported by American Auto-Matrix. The PUP and PHP protocols defined the meaning of each byte in each possible message. Each protocol included 12-20 possible message types in different configurations, and the software computed a cyclic redundancy check (CRC) for each message sent and received.

  • PUP and PHP protocols between lines of unit and area controller devices.
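As a generic illustration of what this kind of serial work involves (this is not the actual PUP or PHP byte layout, and the CRC variant shown is just a stand-in for whatever check the real protocols specify), a framed, checksummed message can be built like this:

    // Build a framed serial message and append a check value; the receiver runs
    // the same calculation and discards the message if the values don't match.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // CRC-16/CCITT-FALSE as a stand-in check.
    static std::uint16_t crc16(const std::vector<std::uint8_t>& data)
    {
        std::uint16_t crc = 0xFFFF;
        for (std::uint8_t b : data) {
            crc ^= static_cast<std::uint16_t>(b) << 8;
            for (int i = 0; i < 8; ++i)
                crc = (crc & 0x8000) ? static_cast<std::uint16_t>((crc << 1) ^ 0x1021)
                                     : static_cast<std::uint16_t>(crc << 1);
        }
        return crc;
    }

    // Frame: [device address][message type][payload length][payload...][crc hi][crc lo]
    static std::vector<std::uint8_t> buildFrame(std::uint8_t addr, std::uint8_t type,
                                                const std::vector<std::uint8_t>& payload)
    {
        std::vector<std::uint8_t> frame{addr, type,
                                        static_cast<std::uint8_t>(payload.size())};
        frame.insert(frame.end(), payload.begin(), payload.end());
        const std::uint16_t crc = crc16(frame);
        frame.push_back(static_cast<std::uint8_t>(crc >> 8));
        frame.push_back(static_cast<std::uint8_t>(crc & 0xFF));
        return frame;                          // ready to write to the serial port
    }

    int main()
    {
        std::vector<std::uint8_t> frame = buildFrame(0x01, 0x10, {0x00, 0x2A});
        std::cout << "frame is " << frame.size() << " bytes\n";
    }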

I took a course called Real-Time Computing in the Laboratory during my junior year at Carnegie Mellon University. Our software was written in assembly language for sample development boards based on the Motorola 68000 microprocessor (that powered the original Macintosh computers). I was fascinated to learn the internal structure and working of processor chips and remember being impressed at how streamlined the Motorola chips seemed to be compared to the extant Intel (8088, 8086, 8087) chips then available.

That aside, we did a number of projects exploring low-level computing and different kinds of communications with external devices. Serial communications were used to talk to a couple of different things. I don’t remember what they all were but I remember learning about interrupt request lines and how electrical signals from external devices could trigger jumps to interrupt code that would save the current program state on the stack, read the input and process it, then restore the original program state by popping the relevant information back off the stack.

One project I do remember involved trying to get an HO scale model train (i.e., a single, small locomotive) to go through the gap between the blades of a rotating windmill (i.e., a small electric motor and a cardboard disc with chunks cut out).

  • Train sensor: I want to remember that this was a physical contact of some sort that sensed when the train was present in a given location. It would read as being continually on as long as some part of the train was on or over that point of the track.
  • Windmill blade sensor: This was an emitter-detector pair that generated signals when the signal was broken and when it was reestablished.
  • Power supply: We controlled the power output to the track by managing its duty cycle. This meant that we continually turned the power supply on and off and the frequency and proportion of time turned on determined the total power supplied to the track.
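A minimal sketch of the duty-cycle idea from that last item, written in C++ rather than the 68000 assembly we actually used, with setTrackPower() standing in for whatever output write the hardware requires:

    // Deliver a fraction of full power by switching the supply on and off within
    // each fixed period; the on/off ratio sets the average power to the track.
    #include <chrono>
    #include <thread>

    static void setTrackPower(bool on)
    {
        (void)on;   // placeholder: the real system wrote to an output register here
    }

    static void runDutyCycle(double fraction, std::chrono::milliseconds period, int cycles)
    {
        const auto onTime  = std::chrono::duration_cast<std::chrono::milliseconds>(period * fraction);
        const auto offTime = period - onTime;
        for (int i = 0; i < cycles; ++i) {
            setTrackPower(true);
            std::this_thread::sleep_for(onTime);
            setTrackPower(false);
            std::this_thread::sleep_for(offTime);
        }
    }

    int main()
    {
        runDutyCycle(0.4, std::chrono::milliseconds(20), 250);   // ~40% power for about 5 seconds
    }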

I felt this class was invaluable for teaching me about how high-level software works under the hood. We also used our knowledge of assembly language code to learn a little bit about how compilers worked, but that’s a different discussion. Suffice it to say that some of the assembler outputs were very difficult to relate back to the high-level loop, conditional, and indirect referencing code that generated them.

Parallel

The Real-Time Computing in the Laboratory class also included a project where we used a parallel port to interact with an external device. The device was an aluminum box with a row of 16 LEDs and 16 toggle switches. We started with just reading the switch positions and writing the LED on/off state but ultimately created a little game that turned on LEDs on either end of the row and turned off a shifting set of LEDs in the middle of the row. The goal of the game was for the player to continually turn on a second switch and turn off the previous switch in order to move the player LED so it didn’t touch the LEDs on either end as they shifted to and fro. This was akin to trying to drive a “car” down a twisty “road” without driving off of it.

  • Box with 16 LEDs and 16 switches.

TCP/IP messaging

My main job at Bricmont was writing Level 2 supervisory code that controlled reheat furnaces using a model-predictive control scheme. The system had to read inputs from several external systems and write outputs to several other external systems. The concept is shown in the figure below.

Some of the communications were accomplished by sending TCP/IP messages directly across the Ethernet network. Doing this required defining the meaning of each byte in the message and the size of the message. On my end I always sent and received the message as a block with a defined number of bytes. I had to pack the header bytes with the information required by the TCP/IP protocol itself and the body bytes with the information agreed to by the author of the connected system. I wrote systems directly employing this method in Borland C++ Builder.

When moving large blocks of data I often like to define variant records (also known as free unions). Sometimes the entire record can be variant and sometimes only the latter part of the record is like that. This method allows the programmer to refer to a section of memory using different handles. Individual structure variables can be read and written as needed and the entire block can be processed as a unit for speed and simplicity. Different languages make this more or less easy to do. Fortunately, FORTRAN, Pascal/Delphi, and C/C++ all make it easy. It is more difficult to do in languages like Java, but the same effects can be achieved using objects and polymorphism.

  • TCP/IP messaging to remote PC for external status display.
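A small sketch of the combined pattern: a fixed-size record with agreed byte positions that can be addressed field by field or handled as one flat block for a single send or receive. The field names and layout are hypothetical, and punning through a union is the classic trick described above rather than something modern C++ formally blesses.

    // A fixed-layout message that can be viewed either as named fields or as a
    // flat block of bytes suitable for a single socket send/receive call.
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    #pragma pack(push, 1)            // no padding: every byte position is agreed with the peer
    struct SlabMessage {
        std::uint32_t messageId;     // header fields...
        std::uint32_t messageLength;
        char          slabId[12];    // ...body fields (illustrative names)
        double        lengthMm;
        double        widthMm;
        double        tempC;
    };
    #pragma pack(pop)

    union SlabBlock {                // the "variant record" / free union view
        SlabMessage  fields;                          // read and write by name...
        std::uint8_t bytes[sizeof(SlabMessage)];      // ...or move the whole block at once
    };

    int main()
    {
        SlabBlock out{};
        out.fields.messageId     = 42;
        out.fields.messageLength = sizeof(SlabMessage);
        std::strncpy(out.fields.slabId, "A123-07", sizeof(out.fields.slabId));
        out.fields.lengthMm = 9500.0;
        out.fields.widthMm  = 1250.0;
        out.fields.tempC    = 1080.0;

        // out.bytes and sizeof(out.bytes) are exactly what would be handed to the
        // socket send call (and what the receive call fills on the other side).
        std::cout << "block size: " << sizeof(out.bytes) << " bytes\n";
    }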

The PC software and area controllers made by American Auto-Matrix also employed TCP/IP messaging. It was written in Microsoft Visual C++ as a .DLL and handled the sending and receiving of packets using an interrupt mechanism. This software required population of the header and body parts at a lower level than I had to do at Bricmont, but was not otherwise a problem.

  • Driver DLL for Auto-Pilot PC interface software: This was used to communicate with area controllers and other PC stations.

I used this method to initiate the communication process for a simulation plug-in that was otherwise controlled via the HLA protocol, as described below.

  • Initiation mechanism for automated simulation evacuation plug-in: This was used as part of Lawrence Livermore’s ACATS System. The software was written in Microsoft Visual C++ as a wrapper for a system largely written in SLX. I wrote the proof-of-concept demo using Borland C++ for the wrapper around SLX.

DECmessageQ

This message-queuing facility is provided on DEC systems with interfaces in various languages (I used both FORTRAN and C/C++) and enabled communications between the Level 2 furnace systems built by our company and the Level 2 caster and rolling mill systems built by other companies (we worked with SMS systems a lot). We also used this method to exchange data with the plantwide Level 3 system.

  • Caster Level 2 Systems: These messages were sent to the furnace control system whenever a slab was produced by the caster. We needed to know the dimensions, melt (which batch of molten steel, implying a particular chemistry and usually customer), ID (which individual slab or billet in each melt), weight, and other information about each piece as it entered the furnace.
  • Rolling Mill Level 2 Systems: These messages were sent to the rolling mill whenever a slab or billet was discharged from the furnace. The rolling mill needed to know the information sent by the caster plus things like average and location temperatures, furnace residence time, and discharge time.
  • Plant Level 3 Systems: Some information was passed directly between Level 2 systems but some was passed via the Level 3 system. Other information was passed to the Level 3 system describing fuel consumption, delays, and special events.

High-Level Architecture (HLA)

HLA is a protocol used by distributed simulations to synchronize and control events in the system. A common use of this protocol is in military engagement simulations where trainees in hardware-in-the-loop simulators, sometimes separated by great distances, can participate in mass battles. The protocol specifies actions, locations, starts and stops, and so on. The protocol is pretty minimalist but the amount of information needed to describe events in detail might be both voluminous and complex. It is an exercise that necessarily carries a lot of overhead.

  • One of the projects I managed at Regal Decision Systems was an evacuation plug-in for the ACATS system being managed by Lawrence Livermore National Labs. ACATS is a distributed simulation system that lets multiple users engage in a variety of interactions. Such engagements sometimes call for the movement of large numbers of entities (e.g., people or vehicles), but the overhead and expense of involving numerous individual users would be excessive. To that end we wrote an automated simulation of pedestrians evacuating a building. The system was initialized by forwarding a building layout defined as an IFC (Industry Foundation Classes, a BIM format) model, and then defining interior locations of building occupants. Our software then figured out where the evacuees could go and incrementally moved them toward efficient exits when given permission to do so. I wrote the initial demo code that showed how we could marry the HLA extension to the SLX programming language to do what we wanted. Interestingly, the process of initializing our system required some sort of messaging prior to initializing the HLA process, so I leveraged my previous experience to define and guide the implementation of a TCP/IP messaging process that kicked the process off. By the way, the Regal system was implemented in a combination of C++ and SLX on Windows XP while the ACATS system was implemented in C++ and other things running on Linux.

Memory Sharing

Many of the furnace control systems I wrote at Bricmont involved numerous processes all mapped to a common area of memory. The programs all had their own local data segments, but an initial program allocated a memory structure on the heap that was registered with the operating system in such a way that other programs were able to map to the same location. Since the same data structure definition was built into all of the programs that needed to share data in this way, they were all able to read and write the same locations using the same names.

The various programs run in continuous loops at different rates. The programs that communicate with external systems run a bit more quickly than the fastest rate at which they would expect to need to send or receive messages. The model process runs as quickly as it can given the computing time it requires, plus a buffer to give other programs enough duty cycle to be sufficiently served and to allow decent response from the UI.

Each program writes a timestamp to a heartbeat variable on each cycle while a monitor program repeatedly checks to see whether too much time has elapsed since each program last updated its variable. If a program fails to refresh its heartbeat, the monitor program can issue a command to terminate the offending process if it’s still running but hung, and then issue a command to restart it. As a practical matter I was never aware of any program that ever caused a hang in this way.

Each process tended to read or write information in the shared memory block in a single bulk operation during each cycle. Before reading they each check against lock variables to ensure that nothing else is writing to a shared variable at the same time. If some other process is writing to that section of memory the reading program will briefly pause, then check again until it receives permission to read. The same process is followed by programs wanting to write to shared memory. They check against the requisite read flags and only perform their writes when they have permission. These locking mechanisms ensure that processes don’t exchange data that may be inconsistent, possibly containing data from partial reads or writes.

  • DEC systems written in C/C++ and FORTRAN: Most DEC systems were installed to control tunnel furnace systems but the first one I did was for a walking beam furnace.
  • PC-based systems written in Borland C++ Builder or Borland Delphi (Pascal): Systems written for all other kinds of furnaces were hosted on PCs as soon as I felt they were powerful enough to support the computational load required of them. Being able to switch to PCs and Windows/GUI-based development tools made the systems both far cheaper and much more informative and engaging for the users.
  • Thermohydraulic models written in FORTRAN for nuclear power plant simulators on Gould/Encore SEL 32/8000-9000 series minicomputers: The SEL systems included four processors that all shared the same memory space. Different models and processes communicated using a common data map (the Data Pool) and the authors of different parts of the system had to work together to agree on what kind of interfaces they were going to arrange between the processes they wrote.
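A minimal modern sketch of the lock-and-heartbeat pattern, using POSIX shared memory calls for brevity; the actual systems used the DEC and Windows equivalents, and the structure and names here are invented. It shows one cycle of a writer: take the lock, update the block in bulk, stamp the heartbeat, release.

    #include <atomic>
    #include <chrono>
    #include <ctime>
    #include <thread>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct SharedBlock {
        std::atomic_flag writeLock;       // freshly created region is zero-filled, i.e., clear
        std::time_t      modelHeartbeat;  // the monitor process watches how old this gets
        double           zoneTemps[16];   // data every process agrees on by structure definition
        double           zoneSetpoints[16];
    };

    int main()
    {
        // Create (or attach to) the named region that every cooperating process maps.
        int fd = shm_open("/furnace_shared", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(SharedBlock)) != 0) return 1;
        void* mem = mmap(nullptr, sizeof(SharedBlock), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) return 1;
        auto* shared = static_cast<SharedBlock*>(mem);

        // Wait for permission to write, just as the readers wait for permission to read.
        while (shared->writeLock.test_and_set(std::memory_order_acquire))
            std::this_thread::sleep_for(std::chrono::milliseconds(1));

        for (int i = 0; i < 16; ++i)
            shared->zoneTemps[i] = 1100.0 + i;           // stand-in for real model output
        shared->modelHeartbeat = std::time(nullptr);     // prove this process is still alive

        shared->writeLock.clear(std::memory_order_release);

        munmap(mem, sizeof(SharedBlock));
        close(fd);
        return 0;
    }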

File Exchange

CSV, binary, and XML files could be written to or read from local hard drives and from drives mapped from remote systems so that the local OS saw them as local drives. Windows-based systems made this very simple to do, at least beginning with Windows NT. Even better, a RAM disk could sometimes be configured, so the code could treat the reads and writes like simple file operations while actually communicating with RAM rather than a physical disk. Lock files were used in place of the shared memory variables described above, but the process was the same. A lock file was written while a read or write operation on the transfer file was in progress, and then erased. Other processes needing access to the transfer data file merely needed to check for the presence of a lock file to know whether to wait or proceed.

I believe this process was used only on PC-based systems, but it may have been used on some DEC-based systems as well. The files could be written in any format, including .CSV, binary, or XML, though .CSV was the most common. Philosophically, any file transferred between and used by multiple systems is a form of inter-process communication.
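
The lock-file convention itself is simple. Here is a sketch using Node-style file calls for illustration; the file names are placeholders, not anything from the actual systems.

    // Minimal sketch of the lock-file convention, using Node's fs module.
    const fs = require('fs');

    const DATA_FILE = 'transfer.csv';
    const LOCK_FILE = 'transfer.lock';

    function writeTransferFile(lines) {
      // Announce the operation, do the work, then remove the lock.
      fs.writeFileSync(LOCK_FILE, String(Date.now()));
      try {
        fs.writeFileSync(DATA_FILE, lines.join('\n'));
      } finally {
        fs.unlinkSync(LOCK_FILE);
      }
    }

    function readTransferFile() {
      // Other processes only need to check for the lock file's presence.
      if (fs.existsSync(LOCK_FILE)) {
        return null;   // someone else is using the file; try again next cycle
      }
      return fs.readFileSync(DATA_FILE, 'utf8').split('\n');
    }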

  • Communications between Bricmont’s Level 1 and Level 2 systems often used this method.
  • At one mill we wrote out fuel usage data to an external system that tracked energy usage in the plant. That company identified ways for the customer to save energy and was paid based on a percentage of the money the recommendations saved.

At Regal Decision Systems we wrote a system to optimize the evacuation of buildings in the presence of a detectable threat. The threat detection system sensed the presence of airborne chemicals, modified the settings of the HVAC system and certain internal doors, performed a predictive fluid simulation of the location and density of threat materials over time, and wrote the result file out to a shared disk location. The evacuation guidance system imported the threat information, mapped it to several hundred locations in the building, and used the information to specify a series of directions that should be followed by building occupants in order to escape the building in minimum time and with minimal exposure to the threat. The system then wrote another file out to the lighting system specifying the preferred direction of travel down every segment of corridor and indications of whether to pass through or pass by every relevant door. The calculated threat-over-time file could run to a gigabyte in size and was formatted as a .CSV file. The lighting solution file was small and also formatted as a .CSV.

  • Evacuation Guidance System: communication between threat detector and evacuation optimizer
  • Evacuation Guidance System: communication between evacuation optimizer and egress lighting system

Web APIs

XML/AJAX and JSON/AJAJ information can be passed over the web in languages like JavaScript and PHP. This is a standard method supported by numerous web systems. I wrote and successfully executed API calls of both types while completing a Web Developer’s course on Udemy in 2015.
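
A minimal AJAX-style request for JSON data looks something like the sketch below. The URL and response shape are placeholders, not the actual endpoints from the course exercises.

    // Minimal AJAX request returning JSON (the JSON/AJAJ case).
    const xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.example.com/v1/locations?q=Pittsburgh');
    xhr.onload = function () {
      if (xhr.status === 200) {
        const data = JSON.parse(xhr.responseText);   // parse the JSON body
        console.log('First result:', data.results && data.results[0]);
      } else {
        console.error('Request failed with status', xhr.status);
      }
    };
    xhr.onerror = function () { console.error('Network error'); };
    xhr.send();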

  • Google Earth location API: I completed exercises which used JavaScript.
  • Twitter API: I completed exercises using PHP. Annoyingly, I needed to create a Twitter account to do so. I’ve always avoided Twitter, and I haven’t looked at it since.

Database Polling

Toward the end of my time at Bricmont our customers began asking us to perform inter-process communication by reading from and writing to database tables on remote systems. This changed neither the architecture nor the internal operation of our systems; we only had to interact using SQL operations rather than by exchanging files, TCP/IP packets, or other kinds of messages. We polled databases and checked timestamps and lock records to see whether there was anything new to read, and wrote when we needed to, after also checking against lock records.
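
The polling idea can be sketched as follows. The db object, table, and column names here are placeholders standing in for whatever client library and schema are actually in use, not the Oracle setup we worked against.

    // Stub standing in for a real database client's query method.
    const db = {
      async query(sql, params) { return []; }
    };

    let lastSeen = new Date(0);

    async function pollForNewRecords() {
      // Check timestamps (and, in the real system, lock records) for new rows.
      const rows = await db.query(
        'SELECT * FROM piece_data WHERE updated_at > :since ORDER BY updated_at',
        { since: lastSeen }
      );
      for (const row of rows) {
        console.log('New record from remote system:', row);
        lastSeen = row.updated_at;
      }
    }

    // Poll on a fixed cycle, just as the file- and memory-based methods did.
    setInterval(() => pollForNewRecords().catch(console.error), 2000);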

Handling the passing of messages in this way allowed databases to be used as communication systems as well as historical archiving systems. I wrote archiving functions in my Level 2 systems that worked by writing to and reading from binary files. I’m sure my successors moved to purely database-driven methods not too long after that.

  • Communications with plantwide Level 3 system: We implemented this on a system for a big walking beam furnace. We wrote the system in Borland C++. The remote databases we interacted with were Oracle databases. We learned what we needed by taking a one-week Oracle training course, which covered far more information than we needed. We may also have communicated with caster and rolling mill systems in this manner, but I remember that the most substantive information was exchanged with the Level 3 system. We probably passed notification of charge and discharge events to the remote Level 2 systems, but the ID, melt, and dimensional information was received from and sent to the Level 3 database.

Screen Scraping

I describe how I used this process here.

Dynamic Data Exchange (DDE)

This older capability of some Microsoft products (e.g., Word and Excel) enabled processes to modify contents of documents and other data repositories (Wonderware apparently had this capability as well). The products had an API that allowed them to be manipulated as if a user was interacting with them directly. One example usage was that templates for form letters could be defined, including named fields, and external programs could then control the subject programs to open the desired template, populate the defined fields using dynamic data, and save or print the resulting document. The capability enabled control of both content and formatting.

  • Generation of customized documents in FileNet WorkFlow systems: We arranged for automated creation and modification of different kinds of documents when we built document imaging systems as part of large-scale business process re-engineering efforts.

Processes can communicate in many different ways. I still feel the right way to begin is with a general view of the steps to be taken, followed by examples. And if you want examples, this description should provide plenty.

Controlling the Animation

One way to pause a running animation is simply to inhibit the action, as shown in the first example below, which is based on the following code. As you can see, window.requestAnimationFrame is still called, but setting the value of running to false merely prevents anything from getting done.
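
A minimal sketch of what that code might look like, assuming a canvas element with id demoCanvas; the click-to-toggle wiring and the drawing details are my own stand-ins for whatever the demo actually does.

    let running = true;
    let x = 0;

    const canvas = document.getElementById('demoCanvas');   // assumed element id
    const ctx = canvas.getContext('2d');

    function animate() {
      if (running) {
        ctx.fillStyle = '#004000';
        ctx.fillRect(0, 0, canvas.width, canvas.height);    // redraw the background
        ctx.fillStyle = '#00ff00';
        ctx.fillRect(x, 50, 20, 20);                        // draw the moving object
        x = (x + 1) % canvas.width;                         // advance its position
      }
      window.requestAnimationFrame(animate);                // always re-queue the call
    }

    canvas.addEventListener('click', () => { running = !running; });  // toggle the pause
    window.requestAnimationFrame(animate);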

If you want to stop the animation entirely you can invoke the window.cancelAnimationFrame function, which takes a request ID as a parameter. That ID is returned by the window.requestAnimationFrame function. The ID wasn't necessary when all we wanted to do was start the animation, but it is if you want to stop it directly. The ID is used internally by the browser in a way that supports multiple simultaneous animation requests.
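
A sketch of that pattern; the function and variable names here are mine, not necessarily the ones used in the demo.

    // Keep the ID returned by each requestAnimationFrame call so the
    // animation can be cancelled outright later.
    let requestId = null;

    function animate(timestamp) {
      // ...clear, draw, and move things here...
      requestId = window.requestAnimationFrame(animate);   // remember the latest ID
    }

    function startAnimation() {
      requestId = window.requestAnimationFrame(animate);
    }

    function stopAnimation() {
      window.cancelAnimationFrame(requestId);   // no further frames will be scheduled
    }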

Next we can work on controlling the frame rate.

Here we embed the animation call and the window.requestAnimationFrame function in a timer so the refresh is only executed when the timing wrapper allows it. I’ve set it up so you can click to cycle from 20fps to 140fps in increments of 20fps. Your results may vary by browser and hardware but I didn’t notice that the animation was able to run at full speed until the setting was bumped up to 80fps, and occasional hiccups remained even then. I know that the refresh rate on my hardware is 60fps so I’m guessing that the effective time between updates is governed by the timer setting plus the time it takes for the next refresh to happen within the browser.
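
A sketch of that timer wrapper follows; the click handler is a stand-in for however the demo actually cycles the setting.

    // requestAnimationFrame is only issued from inside a setTimeout whose
    // delay is derived from the current target frame rate.
    let targetFps = 20;

    function animate() {
      // ...draw and move things here...
      setTimeout(() => {
        window.requestAnimationFrame(animate);
      }, 1000 / targetFps);
    }

    // Click to cycle the target rate from 20fps to 140fps in steps of 20fps.
    document.addEventListener('click', () => {
      targetFps = targetFps >= 140 ? 20 : targetFps + 20;
    });

    window.requestAnimationFrame(animate);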

It appears that using this mechanism only makes sense when you want to manually slow the refresh rate down to well below the screen’s refresh rate. Also bear in mind that the browser may update less often than the hardware refresh rate if the system is busy, if individual animation cycles become too complex, or for other reasons.

More Basic Animation

Today I continued experimenting with the techniques I started using yesterday. I tried the exercise with filled circles and also after having placed additional objects on the screen. The only drawing modes that came close to supporting what I was trying to do had the undesirable side-effect of eliminating everything else on the canvas. That wasn’t going to work.

I did some more reading and decided on the brute force approach of just redrawing the entire screen for every refresh. That has the virtue of being bulletproof as far as getting exactly what you want displayed, but has the potential vices of being slow, causing flicker, and so on.

The process begins with clearing the canvas before drawing each frame. In theory one could use the context.clearRect function, but that clears the pixels to transparent, which only works if the page background showing through the canvas (typically white) is what you want. If you want any other color you might as well use the context.fillRect function and get exactly what you want. It appears to be pretty quick for areas of reasonable size. It's also possible that only a portion of the canvas would need to be redrawn, and so only a portion would have to be cleared.
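
For example, a small helper along these lines (the names are mine) clears the whole canvas to a chosen color before each frame.

    // Clear the whole canvas to a chosen background color;
    // context.clearRect would leave the pixels transparent instead.
    function clearCanvas(ctx, canvas, color) {
      ctx.fillStyle = color;
      ctx.fillRect(0, 0, canvas.width, canvas.height);
    }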

The next thing I noticed was that doing the animation using the setInterval function results in hiccups where the frame isn't redrawn. This occurred at irregular intervals, so I looked around some more and discovered the window.requestAnimationFrame function, which cooperates with the browser's own refresh cycle. As explained here, among other places, this preferred method has the advantages of automatically suspending when the canvas's tab is hidden, syncing with other things the browser is doing, and running much more smoothly.

Here is the basic code. It looks like it’s making a recursive call but I interpret this instead as akin to placing the function call event in the browser’s refresh queue.
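
A sketch along those lines, assuming a canvas element with id demoCanvas; the circle and the movement increment are just placeholders for whatever the demo draws.

    const canvas = document.getElementById('demoCanvas');   // assumed element id
    const ctx = canvas.getContext('2d');
    let x = 0;
    const y = 50;

    function drawFrame() {
      ctx.fillStyle = '#ffffff';
      ctx.fillRect(0, 0, canvas.width, canvas.height);   // redraw the whole scene
      ctx.beginPath();
      ctx.arc(x, y, 10, 0, 2 * Math.PI);                 // the moving circle
      ctx.fillStyle = '#008000';
      ctx.fill();
      x = (x + 0.5) % canvas.width;                      // small movement increments
      window.requestAnimationFrame(drawFrame);           // not recursion: re-queues the call
    }

    window.requestAnimationFrame(drawFrame);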

For today I’m stopping with this basic animation, which seems to work quite smoothly. It runs more quickly than yesterday’s experiment so I made the movement increments smaller. I’ll experiment with throttling the speed of the animation and stopping it outright tomorrow.

Beginnings of an Animation

I wrote two- and three-dimensional animations for years and am looking into various aspects of the practice beginning here. I took a course in computer graphics during my senior year in college using Microsoft Pascal on first-generation IBM PCs with Hercules graphics cards. The Hercules cards featured a resolution of 720×348, and the individual pixels had an aspect ratio of roughly 2:3 (technically 1:1.55), which meant you had to transform all of your locations if you wanted to draw a square. Writing to pixels in, say, locations 100,100 to 200,200 would give you a rectangle that displayed half again as tall as it was wide. Well, at least this was true on the monochrome monitors available at the time.

We also learned about how pixels were written to the screen. If we wrote pixels in OR mode, the default, a logical OR operation would simply draw over what was already in place. If we wrote pixels in XOR mode, a logical XOR operation would invert the desired pixels. This effect could be used in animation by drawing something in XOR mode, drawing it again in the same spot to completely undo the original draw, and then drawing it in a new location. That technique, by the way, was supported by more than just the Hercules graphics card.

With happy memories of green-on-black graphics and this simple animation idea I set out to see if the same techniques would work on the HTML5 canvas object.

It started out well enough. I’ve worked out the drawing, moving, and timing issues in previous exercises and the HTML canvas has the virtue of supporting pixels with a 1:1 aspect ratio (or at least it assumes that modern displays do as much). Replicating the XOR operation turned out to be problematic, however. The canvas element implements a more complicated set of drawing modes and none of them worked the way the old XOR mode did. Indeed, the effects turned out to be rather surprising.

In the demo below I try the undraw-move-draw-wait technique in the various drawing modes supported by the canvas element via the globalCompositeOperation value. Clicking on the button cycles the drawing mode through the eleven modes, the operation of which is explained here, among other places.
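
A sketch of how the demo works, assuming a canvas with id demoCanvas and a button with id modeButton; the mode list is the set of eleven values mentioned above, and the square, step size, and interval are placeholders.

    const modes = [
      'source-over', 'source-in', 'source-out', 'source-atop',
      'destination-over', 'destination-in', 'destination-out', 'destination-atop',
      'lighter', 'copy', 'xor'
    ];
    let modeIndex = 0;

    const canvas = document.getElementById('demoCanvas');   // assumed element id
    const ctx = canvas.getContext('2d');
    let x = 20;

    document.getElementById('modeButton').addEventListener('click', () => {
      modeIndex = (modeIndex + 1) % modes.length;            // next drawing mode
    });

    function drawSquare(atX) {
      ctx.globalCompositeOperation = modes[modeIndex];
      ctx.fillStyle = '#008000';
      ctx.fillRect(atX, 50, 20, 20);
    }

    setInterval(() => {
      drawSquare(x);                  // attempt to undraw at the old position
      x = (x + 5) % canvas.width;     // move
      drawSquare(x);                  // draw at the new position, then wait
    }, 100);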

As you can see, only a couple of the modes even come close to doing what I want and then only against a white background, or, more properly, against a completely transparent background. When framed on this page the animation often takes on the background color of the WordPress page. Some initial research indicated that the old XOR technique is problematic to the point of effectively not being reproducible. It therefore looks like animation may have to be accomplished by redrawing the whole scene, which is annoying, and brings subtleties of its own.

The Steam Table Form… Finally!

About two months ago I started making a web form that allows a user to calculate thermodynamic values for saturated water as functions of either temperature or pressure. I got sidetracked for quite a while working on the graphing capability and fixing the functions themselves, but today I finally got the form nailed down with some decent functionality.

The user can enter either a temperature or a pressure and press Submit to generate the function values, which are then displayed. Saturation pressure and temperature are fixed functions of each other, so the opposite entry field is populated along with the static fields for the other values below. I originally set it up so the form would calculate from whichever of the two entries was valid, but in practice it felt odd for the form to silently fall back to the other value when the most recently edited one was badly formatted. The form will, however, use the opposite value if the last editing action clears the current input. Appropriate messages are displayed if input values are out of range or incorrectly formatted.
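
The selection rule can be sketched as follows; the function and field names are placeholders, not the actual form code.

    // Decide which field should drive the calculation: the one edited last,
    // unless it was just cleared, in which case fall back to the other one.
    function chooseDrivingInput(tempText, pressText, lastEdited) {
      const edited = lastEdited === 'temperature' ? tempText : pressText;
      const other = lastEdited === 'temperature' ? 'pressure' : 'temperature';

      if (edited.trim() === '') {
        return other;        // the user just cleared the field; use the other one
      }
      if (isNaN(parseFloat(edited))) {
        return null;         // badly formatted: report an error, don't guess
      }
      return lastEdited;     // normal case: drive the calculation from this field
    }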

The input values cover the entire range of saturation pressures and temperatures (and the form helpfully ignores the values I created for lower ranges). Testing indicates that values are accurate to within about one percent at worst, which is generally sufficient for engineering calculations. Accuracy can be improved by applying more fits over smaller ranges. I also learned that pure polynomial fits are generally smoother than fits that use inverse, square root, and logarithmic functions.
