Interface Analysis

I’m pursuing yet another certification, this time the IIBA’s CBAP, for Certified Business Analysis Professional. I chose a 35-hour training course run as a series of interactive webinars over seven successive weekend days, and during the second session this past Sunday we ended up discussing several of the 50 techniques used by Business Analysts during each phase of an effort. Each student was asked to take a separate technique (we cover a few each session) and talk about it for ten minutes or so. I was assigned the topic of Interface Analysis.

Since I’ve spent many years working on such things I observed that I was either the best possible choice to discuss that subject — or the worst — depending on your level of enthusiasm. My enthusiasm for this particular subject is high, so I took the opportunity to describe a few things from my website (we could share screens and all interact via audio and video), mostly on this page where I describe how my experience across many industries has led me to be a well-rounded systems or process analyst in the general sense. Come to think of it, I probably spent most of my time on this page, where I describe my experience as a system architect.

I don’t remember if I had time to describe any material to the group beyond the one page but I know that I discussed a few subjects, whether explicitly or implicitly. I’ll describe my clarified thoughts here.

  • Interfaces can be used to connect:
    • processes within a single machine: via shared memory, semaphores, or other sorts of messaging
    • processes on separate machines: via serial, fiber optic, and hardwired and wireless network protocols like TCP/IP
    • processes on separate networks: via routers and long-distance connections
    • users with user interfaces of various sorts: using software GUIs and hardware switches, buttons, knobs, and so on
    • logical operations within processes: information passed between logical blocks of code as parameters, objects, or other representations
  • The electrical protocol and medium are distinct from the information protocol.
    • RS-232 and RS-485 are examples of serial protocols; these describe a certain set of electrical connections and signaling used to send and receive information. Ethernet is a standard networking protocol, with both wired and several wireless variants. Arduino boards can use their own simple serial protocols for wireless communications, distinct from WiFi and TCP/IP. Optical fiber communications have their own methodology. BACnet is commonly used for building control systems.
    • HTML and XML are examples of information protocols. These can be transmitted across many different types of hardware connections, though of course TCP/IP is the most common. American Auto-Matrix publishes a pair of protocols called PUP (Public Unitary Protocol) and PHP (Public Host Protocol) that describe standard message formats typically transmitted over RS-232 and RS-485 connections. These are used to control, configure, and record data on networks of unit controllers attached to host controllers. An almost infinite variety of standalone message formats are defined for individual applications, and I have written many such formats myself.
  • Interfaces need to ensure logical consistency across time. Mutexes are one way of ensuring this.
  • Communications need to be checked for potential errors. A system I supported for Inductotherm used CRC checks to verify the accuracy and integrity of its serial messages (a minimal sketch of this kind of check appears after this list).
  • Our course instructor felt it necessary to observe that interfaces should be kept as simple as possible. This is true, and in my enthusiasm to complete a comprehensive information dump with minimum preparation I had neglected to mention it. It should also be recognized, however, that they should be as simple as possible — but no simpler. Make sure the necessary information is relayed as required.
  • The information payload of a message (distinct from header information used in the low-level communication layer) can be compressed to greater or lesser degrees to balance message size, transmission time, user comprehensibility, and software processing time.
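
To make the error-checking point concrete, here is a minimal sketch of a CRC-16 check applied to a serial frame. The polynomial, frame layout, and function names are illustrative assumptions only, not details of the Inductotherm protocol.

```javascript
// Minimal sketch: verifying a serial frame with a CRC-16 (CCITT-style polynomial).
// The frame layout (payload followed by a two-byte CRC) is assumed for illustration.
function crc16(bytes) {
  let crc = 0xFFFF;
  for (const b of bytes) {
    crc ^= b << 8;
    for (let i = 0; i < 8; i++) {
      crc = (crc & 0x8000) ? ((crc << 1) ^ 0x1021) : (crc << 1);
      crc &= 0xFFFF;                       // keep the register to 16 bits
    }
  }
  return crc;
}

// Assume the sender appends the CRC of the payload as the last two bytes.
function frameIsValid(frame) {
  const payload = frame.slice(0, -2);
  const received = (frame[frame.length - 2] << 8) | frame[frame.length - 1];
  return crc16(payload) === received;
}

// Building a frame: compute the CRC over the payload and append it.
const payload = [0x01, 0x02, 0x03];
const crc = crc16(payload);
const frame = [...payload, crc >> 8, crc & 0xFF];
console.log(frameIsValid(frame));          // true
```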

I describe several examples of inter-process communications I’ve worked with here and here.

Addendum, February 15, 2022

Since this initial article was written I’ve touched on many additional aspects of interfaces. I’ll include a couple that involve diagrams that may illustrate the idea more clearly.

I’ll start by observing that all interfaces involve a form of communication, a basic model of which is shown here.

The next diagram shows not only the physical flow of material through a manufacturing process, but also the flow of information between the many different control and information systems that govern and support the entire process. There are quite a few as you can see, and there are even more when you consider that the level 3 and 4 systems are abstract representations of operations that themselves consist of many individual computers and networks, many of which are connected to similar systems in entirely different organizations.

The colored arrows at the bottom indicate the temperature (and sometimes state) of the physical material as it moves through each piece of large industrial equipment (the furnace can be up to 800 feet long), while the unfilled arrows show the flow of information between different instruments and computer systems. Humans interact with many of the systems shown, which constitutes yet another class of interface.

Those communications run over different protocols, across varying electrical connections, with different kinds of verification methods, and on very different timings. Some communication channels need to be able to handle many different kinds of messages, each of which may contain multiple pieces of information. The format and meaning of each of those individual values has to be identified, negotiated, implemented, and tested by the engineers who create the systems that pass the messages back and forth.
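
As an illustration of what identifying and negotiating the format and meaning of each value can look like, here is a minimal sketch of a shared message definition. Every name, unit, and range here is a hypothetical example, not a value taken from any real furnace or plant system.

```javascript
// Minimal sketch: a message definition both sides of an interface can agree on
// before anyone writes the sending or parsing code. All fields are hypothetical.
const dischargeTempMessage = {
  id: 0x21,                                      // message identifier on this channel
  fields: [
    { name: 'pieceId',      type: 'uint32',  units: null,             range: [1, 999999] },
    { name: 'surfaceTempC', type: 'float32', units: 'degC',           range: [0, 1400] },
    { name: 'coreTempC',    type: 'float32', units: 'degC',           range: [0, 1400] },
    { name: 'timestampMs',  type: 'uint64',  units: 'ms since epoch', range: null },
  ],
};

// A simple range check that both sender and receiver can run against a decoded message.
function validate(definition, decoded) {
  return definition.fields.every(f =>
    f.range === null ||
    (decoded[f.name] >= f.range[0] && decoded[f.name] <= f.range[1]));
}

console.log(validate(dischargeTempMessage,
  { pieceId: 42, surfaceTempC: 1180.5, coreTempC: 1210.0, timestampMs: 0 }));  // true
```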

Nuclear power plants may contain on the order of fifty separate fluid systems, and those interface with many different electrical systems and further models for each different type of equipment (valves, pumps, indicators, and so on). The diagram below shows the first of three pages of just one of those fifty thermohydraulic systems. Any link that ends in an incoming or outgoing arrow shows an interface with some other system. Note that this diagram was prepared in the process of creating an operator training simulator for a particular plant. Some of the connections are to other systems contained in their own piping and equipment networks, like this one, while others connect to models of rooms and other containing spaces. Since all of these models (except the core model) ran on one of the CPUs in the same computer system, sharing the same memory space and programmed in the same language, the interfaces between models were a little bit simplified. A little. The engineers still had to agree on, implement, and test the meaning, units, range of values, and memory format of every variable shared between models.

Finally, I discuss many aspects of low-level electronic communications in a series of articles here.


Buggy, Design Contradictions, and TRIZ (now ARIZ)

A Tuesday session of the Project Summit / Business Analyst World conference (June 20th) featured an interesting talk by the New Jersey Department of Health’s Victoria Roza. She described methods of design and creativity coming out of a practice called TRIZ, which I originally encountered during my Lean Six Sigma training. That training only discussed the idea in terms of the contradiction matrix that was the original innovation of the practice, but Ms. Roza described how the practice has been greatly expanded and generalized since then. The expanded practice is known as ARIZ. (Both acronyms reflect their Russian origins. The “T” is for “theory” while the “A” is for “algorithmic.”)

I look forward to doing more reading on these subjects but for now I want to take the opportunity, for Ms. Roza’s benefit and others, to describe an engineering/athletic competition that is a major tradition at Carnegie Mellon University. Officially known as Sweepstakes, the competition is a combination soapbox derby and relay race, where two pushers relay their buggies up the front hills, release them to “freeroll” down a long, winding road to a high-speed right turn (called the chute), after which three more pushers relay the buggies up the back hills to the finish line. The course is about 0.8 miles long and the record time as of 2017 is 2:02.16. This is not quite ten seconds faster than the record set my first spring on campus in 1981.

Here are some nice introductions to the subject from YouTube.

A buggy team consists of five pushers and a driver, and organizations that want to enter buggies into competition must provide other kinds of support to the event. Separate competitions are run for push teams of men, women, and alumni.

  • Driver: Small, light, probably female, nerves of steel, probably a screw loose.
  • Hill 1 pusher: Typically the strongest pusher, gives the buggy its initial momentum up the steepest hill.
  • Hill 2 pusher: Can be the slowest runner, but needs to push like a beast when releasing to freeroll.
  • Hill 3 pusher: On the teams with the best buggies this pusher may only run a few yards. On weaker teams it doesn’t matter.
  • Hill 4 pusher: Requires a combination of strength and speed to crest the final slope.
  • Hill 5 pusher: “Hill” 5 is actually flat, so this pusher should be the fastest runner but doesn’t need to be the strongest. My senior year I pushed hill 5 for our B-team, a sure indication that most of the guys in our fraternity were on the slower side. We finished in 2:47, at least twenty seconds behind the other buggies in our heat, but tying our extant record until our A-team lowered it to 2:37 about an hour later.
  • Buggy Chairperson: One representative per organization who attends meetings, coordinates activities, and ensures all duties are performed. Usually the chief designer and builder of the buggy.
  • Mechanics: Any individuals who assist the chief designer.
  • Guide Flagger: One or more people who provide targets to help the driver take the ideal line in freeroll.
  • Sweepers and Flaggers: Organizations entering buggies must provide help to sweep the course before practices and races, and flaggers to control traffic access to the course. Practices were often early in the mornings on weekends, following mixers that ran late into the night. It wasn’t uncommon for sweepers to sleep on the floor in the hallway outside the chairman’s door, so the chairman would wake the individual up while tripping over him, and make sure he got out to fulfill his duties.

As you can see from the fourth video above, the buggies themselves have gotten smaller, faster, and simpler over the years in many ways, while some aspects of the wheels and bearings have undoubtedly gone increasingly high-tech.

Buggy is a classic constrained optimization problem. Here are some of the contradictions.

  • A heavier buggy might roll downhill faster and might gain better traction in the chute but a lighter buggy is easier to push uphill. Buggies doubtless get lighter and lighter every year. The benefits of improving (sideways) traction in other ways, plus requiring less traction for lighter buggies in the chute, definitely skew the buggies towards smaller size and lighter materials. Additionally, buggies cannot change weight during the run, so they can’t, for example, drop a heavy weight at the bottom of the hill.
  • The pushbar needs to stick out to allow the pushers to move the buggy, but it generates drag at higher speeds. Builders have experimented with many different cross-sections of pushbar, but the most extreme designs involve having the pushbar fold down to a horizontal position during the freeroll. This innovation was first adopted in 1983 or 1984 with a pushbar that lowered backward so it extended out behind the vehicle. This was later seen as a safety hazard (to following buggies) so subsequent pushbars folded forward into the main body of the buggy. The question then becomes whether the drag saved is worth the weight and complexity of the (driver-powered, since there can be no energy storage, if I understand correctly) folding mechanism.
  • The crossbar on the pushbar can take different forms. A wide crossbar would create more drag but allow for a more powerful push-off with widely spread hands (think bench press) and would in general be easier to grasp. A shorter crossbar would create less drag and possibly allow a push-off that is weaker but longer if the hands are closer together, and would be a bit harder to grasp in general. I think I’ve seen pushbars eliminated entirely in favor of a tennis ball, but that’s very hard to control and all but impossible to get two hands on. It didn’t catch on.
  • Larger wheels allow for greater contact with the road and lower speeds at the bearings near the axle and hence lower friction, but smaller wheels are easier to spin up (less rotational inertia) and present a smaller cross-sectional area for reduced drag. I don’t know whether smaller wheels are easier or more difficult to balance (classic soapbox derby wheels can spin for thirty minutes) but with lower rotational inertia perhaps it matters less.
  • Drivers were traditionally males but over time became all but exclusively female. Older analyses (men are stronger, more coordinated, more willing to risk injury) are no longer applied and 80-pound, four-foot-nine, fifteen-year-olds who somehow materialize on campus (this happened when I was there) are recruited heavily by the most aggressive and successful competitors. Smaller drivers mean lighter buggies with smaller cross-sectional areas.
  • Having the drivers drive feet-first is safer but orienting them head-first makes for a smaller cross-sectional area. Safety is a huge concern. Ambulances have been known to take the entire buggy to the emergency room so the driver can be extracted under the best possible circumstances. I heard about a bad, head-on crash before my time on campus and was standing about 20 feet from a similar occurrence in 2014, I think. The safety harnesses and forward protection in modern buggies are good, but a sudden deceleration when hitting hay bales head-on at over forty miles an hour is always going to be a problem. I don’t know why the driver didn’t make the turn (perhaps the steering failed?) but it was a scary and heart-rending event to witness.
  • Steering with hands in front of the driver makes for a less comfortable setup for the driver and a potentially more limited field of view, but steering with hands at the driver’s sides makes for a wider buggy with a larger cross-sectional area.
  • Wheels with soft rubber surfaces grip better through the chute than harder ones, but generate more rolling friction over the rest of the course. Designers have addressed this problem in many ways and I’ve seen a lot of crazy wheel configurations since 1981, including one that didn’t survive its first power-slide through the chute during practice rolls one winter weekend. An interesting solution is to use two larger, softer wheels inside the buggy’s body and one smaller, harder one outside it. I know that rolling surface designs have been a huge point of experimentation over the years and I was always amused when spectators rushed over the hay bales at the bottom of the course to retrieve sections of tires that scuff off in the turn. They would all sniff the rubber bits and make knowing pronouncements on the chemical treatments that must have been applied. I suggested in a class assignment that buggies use a tire with a hard, narrow band in the middle or to one side and a softer surface to either side. The wheels would be arranged so they would tilt in a turn, which would bring the softer sections in contact with the ground. This wouldn’t be hard to do with a single front or rear wheel mounted at an angle, but would be harder to do for pairs of wheels on a fixed axle or in a classic, two-wheeled steering arrangement.
  • Speaking of wheel arrangements, classic four-wheel setups presumably offer greater stability and traction in the chute but at the cost of weight, drag, and complexity. Three-wheel arrangements offer a lot more flexibility and are typically simpler and lighter. Two-wheeled arrangements were employed in the 60s (using modified bicycle frames without fairings, which would never fly today) and again in the 80s, briefly holding the course record (not bad for a fraternity that was traditionally terrible at buggy!). The biggest problem with four-wheeled buggies was getting the caster and camber of the wheels right. Using three wheels and steering with one greatly mitigates this problem. Two-wheeled buggies were outlawed (supposedly) for safety reasons. Interestingly, they used a pair of retractable training wheels for stability when being pushed at lower speeds, and only ran on two wheels during the freeroll section of the course (including the chute).
  • Smoother bodies that cover more components prove to be more aerodynamic than the wider variety of designs that have been employed throughout the event’s history, at the price of added weight, cost, and complexity. Experience has demonstrated the value of smaller, faired bodies, to the point where the macro design of most buggies has converged, leaving differences only in the small mechanical details, the fitness of the pushers, and the skill of the drivers.

Doubtless there are other considerations, but those are the main ones I remember thinking about back in the day. Our fraternity did better at Booth and Greek Sing, the other two legs of the “Triple Crown,” than we ever did at buggy, but we had loads of fun doing all of it.


Nine SDLC Cross-Functional Areas

I met the very impressive Kim Hardy at an IIBA Meetup in Pittsburgh a few weeks ago. She is passionate about relating her insights to people, their needs and values, and how to make them effective, engaged, and happy while realizing their organizational goals. Last Friday we got together online and she walked me through an analytical framework she’s been building up as part of her “People First” Agile coaching practice. I don’t want to give away her ideas (you should seek those out on your own), but I do want to share one insight she’s developed that resonated with me strongly. I’m not suggesting that it’s entirely unique, because I think we all implicitly understand the categories she defines, but sometimes the most powerful insights come from clarifying things everyone already knows in a pithy way.

The specific clarification has to do with identifying nine cross-functional areas within the SDLC process. Ms. Hardy’s consulting practice aims to ensure that each Agile team includes members that are knowledgeable in — and hopefully passionate about — each of the nine areas. The work will usually get done even if this process isn’t followed consciously, but consciously applying it cannot help but clarify lines of responsibility, give more team members a chance to shine, and clearly identify areas that need attention.

Before proceeding I want to observe that while this breakdown scales to efforts of every size, it’s probably most important to apply explicitly on larger efforts. Larger efforts involve the most people, require the most coordination, and incur the greatest overhead in each functional area. By contrast, when I wrote model-predictive control systems for the steel industry I worked with one other person some of the time and by myself more than half the time. I (or we) had to interface with other computers within the furnace control system and with other plant systems but within the Level 2 supervisory control system I worked alone (or close to it). My sometime co-conspirator set up the hardware, VMS OS, and communication programs on DEC VAX and Alpha systems while I wrote the simulation and control code. On Windows-based systems I did it all on my own. I had to be reasonably competent in all nine areas. At that time (1994-2000) I didn’t take as high level a view as I do now. With more analytical and organizational experience I have a deep feel for all of these areas, but I have specialized more in the abstract discovery, data, and decision areas than in the technical implementation details.

The nine areas are:

  • Business Value: Supporting organizational goals
  • User Experience: Ensuring that users are empowered to execute their duties with clarity, speed, and accuracy
  • Process Performance: Executing project schedules efficiently and effectively
  • Development Process: Ensuring that developers have the tools, training, insulation, and support they need to produce excellent work
  • System Value: Creating systems that realize an organization’s business values
  • System Integrity: Ensuring systems are secure and robust
  • Implementation: Developing code that is efficient, free from defects, and addresses the organization’s values
  • Application Architecture: Ensuring systems are maintainable, modifiable, reusable, understandable, and so on, and ensuring that the business needs are addressed
  • Technical Architecture: Employing the most capable technologies possible

This classification and clarification resonated with me in particular because I’ve been trying to communicate where my current strengths and passions lie. Using this breakdown I assigned levels of preference to each area of practice as follows:

This says that the primary areas of importance to me at this stage of my career are those that solve a problem for an organization. Accordingly, I have specific explanations for why I value each area the way I do.

Business Value: I’m all about solving the right problem. Identifying an organization’s problems, needs, and requirements should come before any other consideration. Just as you don’t plug numbers into the equations until you’ve defined the equations that describe and solve the problem, you don’t implement the solution in detail until you’ve defined and “solved” it in the abstract. This means that you concentrate on the logical and operational needs of the people and organization first, and only then do you work out the specific implementation.

User Experience: I’ve been building user interfaces and animations and conveying information for a long time. I’m keenly aware of the need to let users manipulate things in an efficient, powerful, and flexible way while at the same time guiding and constraining their actions to prevent errors, maintain situational awareness, and support an organization’s business goals. I care what users can do and am passionate about seeking and incorporating their feedback, but I’m less interested in the specific technologies used to implement the capabilities that let them do it.

Process Performance: This is about project management, Agile methodologies, and maintaining cohesion, buy-in, and enthusiasm. I have experience, insights, and/or certifications in all of those areas but think my greatest value-add comes from streamlining the design and requirements process. I always try to break processes down into their simplest, most general actions and representations so complex systems can be built up from the smallest number of generalized components. This allows a technical team to construct the smallest number of code units that implement the components so systems of arbitrary complexity can be built using just a few of them. Different systems are more or less amenable to this approach, but I’ve had the good fortune to work on many applications where the approach is extremely effective. All good practitioners look for efficiencies, modularity, and opportunities for reuse.

Development Process: Caring about details of the development process is a function of being in the bit-slinging trenches every day. I mentioned that good practitioners always try to be more efficient, and coders and their coordinators tend to apply that impulse to methods that streamline their areas of concern. I love coding but at this point, for me, it’s a way to understand what the developers are doing, stay in touch with the current state of practice, and be able to relate general concepts to newer developers and non-technical individuals. I met a gentleman at a Build Night at Pittsburgh Code & Supply who walked me through the tremendous automations he had arranged for building and hosting websites directly out of GitHub. I am aware of those technologies but I don’t use them directly in my work, and I observed that his “web fu” was far, far beyond my own. I realized at that moment that I totally didn’t care that that was the case, and that I’m glad people have different skills and interests. That was totally OK (for me, anyway, if not for you; there isn’t time to learn it all). The point is to work with teams where members have those skills and passions, and ensure they get the support they need to develop and apply them in peace.

System Value: I’ve stated above that my passion for solving problems abstractly before they are addressed concretely, as well as identifying opportunities for simplification and modularity, should lead inevitably to creating systems that directly and efficiently realize business value for organizations.

System Integrity: A lot of this has to do with cybersecurity, social and organizational engineering, and control of information. It’s not that I’m not interested in these things per se, it’s just that they aren’t what I prefer to concentrate on. I contribute in this area by being aware of what’s going on, building organizational systems for governance, and ensuring that the proper specialists are engaged to address the details.

Implementation: I care about efficient, modular, clear, documented code in a general way, and in particular ways when it comes to certain classes of solutions, but I’m less about build and deployment workflows and details than I am about optimizing on the right things so a system or portfolio’s total cost of ownership is minimized. This means knowing how to balance development time against squeezing out that last bit of performance, to accept an easy-to-write and -maintain abstraction in place of a custom coding job, to weigh space/storage/bandwidth/responsiveness tradeoffs, and so on. Again, there isn’t time to learn it all.

Application Architecture: For the nth time I’m all about realizing business value by ensuring that the correct information is gathered, processed, transformed, routed, and output in the process of providing effective control and decision support. I care what goes where, when it goes there, who it goes to, and why more than I care exactly how any of that happens. The W’s are the abstractions that provide the business value. The H is the instantiation of those abstractions. I refer to these considerations as the “solution space,” as opposed to the “tool space” described in the next section on Technical Architecture.

Technical Architecture: I give this the lowest level of concentration (again, for me) because I’m interested in solving organizational problems and those of the people within an organization. React vs. Angular? There’s an answer that a dedicated coder would care about but it’s not a concentration of mine. Oracle SQL vs. MySQL vs. NoSQL? Use whatever makes you happy. I’ll help design the schemas, you optimize the best way to work with them. C# vs. Java vs. C++ vs. SLX vs. goodness knows what else? Depends on the problem. Each individual will naturally know a related subset of specific languages, frameworks, development tools, and methodologies. My method is to analyze problems at a more abstract level, and then help technical teams implement solutions using their preferred tools and techniques. Having worked with a dozen different languages and having read about even more, the one thing I appreciate is that they all arise from unique historical circumstances and are designed to solve slightly different problems.

I’ve spent a lot of time looking at placement ads in the tech industry and see that they tend to list easy-to-label technologies that can be readily screened by ATS systems. Moreover, they often ask for lists of skills, in combinations and at levels of experience that are, to be polite, wishful thinking. I therefore try to talk in terms of these cross-functional areas. It helps people and organizations build teams, manage processes, and identify opportunities in an interesting and clear way.


Addendum To Process Described in Post on Domain-Driven Design

I’ve edited the post from May 11th that describes my preferred project/VV&A methodology. I was reminded, while viewing some excellent presentations at the Project Summit / Business Analyst World conference in DC this week (at which I volunteered), that I forgot to include the step of specifying Assumptions, Capabilities, Limitations, and Risks and Impacts. I have edited that post accordingly.

Domain-Driven Design

At yesterday’s DC PHP Developer’s Community Meetup Andrew Cassell gave a really nice presentation on Domain-Driven Design. He described the major books in the field, some of the main movers and history, and what the idea is all about.

In a nutshell, it says to understand your customers’ problems in their language, talk to them in their language, and discuss your proposed and executed solutions in their language. If you’re sensing a pattern here then you’re onto something. You are supposed to be the expert in all the computer twiddlage that you do, but your customers are not, nor generally should they be. They are the experts in all the domain-specific twiddlage that they do, and you need to respect that. Then you need to learn it. You don’t have to learn it to the level that they know it, but you have to learn enough about what they’re doing, and how they describe it, to be able to give them the functionality they need.

As Andy described, programmers tend to start thinking about data structures and UI screens as soon as they start to get some input (i.e., as soon as they start listening to the customer describe what it is they do). That’s not a bad thing; it’s a reflection of their passion and expertise. But any passion needs to be controlled, and rushing to cook up a solution before you truly understand the problem is a “one-way ticket to Palookaville.”

There’s more to it, of course, and the talk covered a lot of the tactical considerations like isolating sections of software that represent different areas of business functions so they communicate using messages rather than a closer type of coupling. Also described was how statements should be streamlined and clarified so the code itself clearly expresses business logic. Doing this ensures that elements within the computing system are always in a valid state and thus don’t need to be continuously parsed and error-checked. If done correctly — and this has always been a feeling of mine — once you understand the customer’s problem domain and have mapped out their process in sufficient detail, your code practically writes itself.
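
Here is a minimal sketch of the “always in a valid state” idea, using a made-up domain object (not an example from the talk): a small value object that refuses to be constructed in a bad state, so the code that uses it never has to re-check it.

```javascript
// Minimal sketch: a value object that can only exist in a valid state.
// The domain (an underwriting case number) is a hypothetical example.
class CaseNumber {
  constructor(value) {
    if (!/^[A-Z]{2}-\d{6}$/.test(value)) {
      throw new Error(`Invalid case number: ${value}`);
    }
    this.value = value;
    Object.freeze(this);                 // immutable once constructed
  }
  toString() { return this.value; }
}

// Anywhere a CaseNumber appears, it is already known to be well-formed,
// so downstream code expresses business logic without defensive parsing.
const caseNumber = new CaseNumber('PA-004217');
console.log(`Routing ${caseNumber} to underwriting review`);
```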

I listened to the talk with great enthusiasm because I feel I’ve essentially been doing this for my whole career. I know that the method is extremely powerful and it’s helped me write a lot of amazing software, but I haven’t felt like I’ve ever gotten anybody to understand it (this is actually the reason I created this website in the first place). People look at my resume and don’t see the common thread that holds the whole thing together. They see a bunch of different jobs, industries, and tools, many of them older, and don’t think any of it applies to them.

I cannot tell you how frustrating this has been.

Each person has their own history influenced by different sets of people, places, and events, and they each create a unique synthesis because of it. I got my degree in mechanical engineering but also took a heavy dose of computing courses (it was Carnegie Mellon after all…), so it was natural that I would try to focus my computing skills on the problems I encountered working in the paper and nuclear industries. In particular, I was a process engineer who studied, designed, and improved fluid systems where a series of pipes moved liquids (and other materials) from one machine or process to another. These things were all described by drawings like those shown here.

Analyzing these systems required having a map of what went where and a description of what happened in each component of the system. There were inputs, outputs, volumes, compositions, masses, flows, temperatures, energies, surface areas, variations, energy transfers, elevation changes, geometries, and transformations at every step of the process that had to be described using language appropriate to that domain (which in these cases included thermodynamics, fluid mechanics, differential equations, knowledge of making wood pulp, chemistry, radiation, failure modes, connections, the steam cycle, many different kinds of process equipment, and other things). I first got used to working with fluid and piping systems in the paper industry but mostly did ad hoc analysis and designed a few of my own tools. When I moved to the nuclear industry it was to build full scope operator training simulators, which meant hooking complete reproductions of all the control room hardware to first principles computer models of all the fluid and electrical systems in a given nuclear power plant.

Such an undertaking requires a deep level of understanding and a huge organizational effort to get everything in order, properly labeled and interfaced, built, documented, tested, shipped, installed, verified, and accepted. In order to simulate something you need descriptions of every facet of the system and its behavior that has a material effect on its operation. In software terms this means you have to define every variable, data structure, calculation, transformation, and transfer in the system. You have to define them, name them, determine initial values and acceptable ranges for them, and test them.

If all of this didn’t happen, and with sufficient accuracy, then the thing simply wouldn’t work.

After working on projects like this I found it to be a really easy transition to doing business process reengineering using FileNet document imaging systems. In these projects the goal was to map out a company’s business processes, and then write software that allowed users to view scanned images of large volumes of paper documents in the course of their collating and evaluation work instead of the physical documents themselves. Big companies like banks, credit card processors, and insurance companies were practically deluged with paper documents back in the day, so I worked with teams of consultants who added a scanning and indexing step at the front of a company’s process, and then automated the process to remove all subsequent physical handling of the documents. The combination of eliminating physical handling steps and streamlining evaluation processes through automation support reduced system costs by up to thirty percent.

We figured out what the customers needed by having them walk us through their process, which we then documented, mapped, and quantified. This defined the As-Is state of their process. We then identified the process steps that would be eliminated or streamlined through automation, determined the overhead, requirements, and costs of the new scanning step, and documented the configuration, costs, and requirements for the new system. This defined the To-Be state. For a disability underwriting process the domain knowledge included document indices, names, addresses, companies, medical records, risk profiles, scoring systems, sorting/scoring/collating methods, and determinations of feasibility and costs. In order to build such a system we had to talk to executives and workers in every one of the company’s operations. We never talked to the customer about the behind-the-scenes geekery that we loved as programmers; we only talked about, documented, and analyzed things in the language that we learned from them.

Once we did that, the UI screens, data structures, calculations, and summations were almost trivial to implement. I’ve since done the same things in other jobs and industries, as detailed in the common thread link above.

Were the programming tools different? Yes. Was the domain knowledge different? Sure. Were there still inputs, outputs, transformations, routings, decisions, and results?

Yup. Bingo. And that’s the point. The words change but the melody remains the same. I’ve learned a lot of things during my career, but this underlying understanding of how to work with customers to figure out what they need, in their own language, so I and my colleagues can give them the best possible solution, is my superpower, if I can be said to have such a thing.

The method for establishing and documenting a mutual understanding with the customer, and then progressively advancing through the design, implementation, testing, and acceptance of the solution is this:

  • Define the Intended Use of the System: This means defining the overall goals of the proposed (new or modified) system in the business context.
  • Identify Assumptions, Capabilities, Limitations, and Risks and Impacts: Identify the scope of the project or model and define what characteristics and capabilities it will and will not consider or include. Describe the risks and impacts of those choices. (added 22 June, 2017)
  • Discover, Describe, Quantify, and Document the System: In simulation terms this would be described as building a Conceptual Model. This defines the development team and customer’s mutual understanding of the system that embodies the defined business need. This can describe an existing system, a new system, or a modified system.
  • Identify the Sources of Data for the System: They must be mapped to the conceptual model of the system and validated for accuracy, authority, and (most importantly) obtainability.
  • Define Requirements for the System: Functional requirements should be mapped to all elements of the Conceptual Model of the system, and this mapping should be comprehensive in both directions (that is, every point in both lists should be addressed). This mapping should be done using a Requirements Traceability Matrix (RTM); a minimal sketch of the idea appears after this list. Functional requirements relating not to the business logic but to the computer hardware and software environment should also be documented, as well as the system’s Non-functional requirements (which describe not what the system does but how it should “be” in terms of robustness, accuracy, maintainability, modifiability, ongoing plans for modification and governance, and so on).
  • Define the Design of the System: All of the different types of requirements listed above must be mapped (in both directions) to elements in the Design Specification. This should be done by extending the RTM.
  • Implement the System (and Document it): This is addressed by the Implementation Plan, which is guided by a combination of Waterfall and Agile techniques appropriate to the project. All items should be mapped in both directions to the System Design using the extended RTM.
  • Test the System: This process results in the Verification of the system, and its documented activities should map in both directions to the Implementation and Design elements listed in the RTM.
  • Demonstrate that the System’s Outputs are Accurate and Meet the Customer’s Needs: This step is carried out through comparisons to real-world and expected results based on historical data and the judgment of Subject Matter Experts (SMEs). Note that outputs are more than just the data generated by the system, though this is the major goal of a simulation. They also include all of the functional and non-functional requirements having to do with the computing environment and the meta-operation of the system within the business context. Completing this step results in the Validation of the system.
  • Accept or Reject the System: Determine whether the system is acceptable for the Intended Use in whole or in part, or not at all. Accepting the system results in its Accreditation (or Accreditation with Limitations if acceptance is only partial).
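
As mentioned in the requirements step above, here is a minimal sketch of the two-way traceability idea. The entries are hypothetical, and a real RTM would live in a spreadsheet or requirements tool; the point is only that every element in one artifact maps to something in the adjacent artifact, and vice versa.

```javascript
// Minimal sketch: checking one direction of a Requirements Traceability Matrix.
// (The reverse check, from requirements back to conceptual-model elements, works
// the same way.) All entries are hypothetical examples.
const rtm = [
  { element: 'Customer arrival process',  requirements: ['FR-012', 'FR-013'] },
  { element: 'Inspection queue behavior', requirements: ['FR-020'] },
  { element: 'Daily throughput report',   requirements: [] },   // gap to be resolved
];

const unmapped = rtm.filter(row => row.requirements.length === 0)
                    .map(row => row.element);

console.log(unmapped.length === 0
  ? 'Every conceptual-model element traces to at least one requirement.'
  : `Untraced elements: ${unmapped.join(', ')}`);
```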

In summary, the need is defined; the details are identified; a solution is proposed, implemented and tested; and a judgment is made as to whether or not it succeeded. The artifacts of the system can be listed like this, and they should be mapped in both directions every step of the way. If all needs are identified and addressed as being acceptable, then the system cannot help but be fit for the intended use.

  • Intended Use Statement
  • Assumptions, Capabilities, Limitations, and Risks and Impacts (added 22 June, 2017)
  • Conceptual Model
  • Data Source Document
  • Requirements Document (Functional and Non-Functional, including Maintenance and Governance)
  • Design Document
  • Implementation Plan
  • Test Plan and Results (Verification)
  • Evaluation of Outputs and Operation Plan and Results (Validation)
  • Acceptance Plan and Evaluation (Accreditation)

The level of effort that goes into each step should vary with the scope and scale of the project. I describe a heavyweight methodology I was required to use for a high-profile system used to manage fleets of aircraft for the Navy and Marine Corps here. The nuclear power plant simulators I worked on employed a very similar process, and all of the other projects I participated in over the years scaled down from there.

As a final note I’ve often described how I always like to automate processes and develop tools. To that end I have written programs that implement business logic in a block-level form, such that the blocks can be connected together in a way that, when the properties of the blocks, connections, and processed entities are defined, the business logic is instantiated automatically. This is a form of Domain-Driven Design in that the components of the computing system (the blocks, connections, controllers, entities, and so on) are purpose-built to represent a particular aspect of a given problem domain. Such systems exist for electrical circuits (SPICE), business process models (BPMN), Port-Of-Entry Modeling Systems (BorderWizard and related tools I worked on), and more. I’ve written tools like this to model nuclear plant fluid systems and certain classes of general purpose, discrete-event systems.
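
Here is a minimal sketch of that block-and-connection idea. The block names and behaviors are invented for illustration; they are not taken from BorderWizard or any of the other tools mentioned above.

```javascript
// Minimal sketch: blocks whose connections, once wired together, *are* the
// business logic. Names and behaviors are illustrative only.
class Block {
  constructor(name, processFn) {
    this.name = name;
    this.processFn = processFn;
    this.next = [];
  }
  connect(block) { this.next.push(block); return block; }   // return target to allow chaining
  accept(entity) {
    const result = this.processFn(entity);
    this.next.forEach(block => block.accept(result));
  }
}

const arrive  = new Block('arrive',  e => ({ ...e, arrivedAt: Date.now() }));
const inspect = new Block('inspect', e => ({ ...e, inspected: true }));
const depart  = new Block('depart',  e => { console.log('processed:', e); return e; });

// Defining the connections instantiates the process logic.
arrive.connect(inspect).connect(depart);
arrive.accept({ id: 1 });
```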

The discrete-event simulation framework I’m building is a form of visual or graphical programming (as distinguished from merely programming graphics). Here is an interesting discussion on the subject. It appears that visual paradigms work better at the level of business logic in a specific domain rather than trying to displace the low-level constructs of conventional languages.


Node.js

I finally finished working through the exercises in the LearnYouNode suite put out by NodeSchool, which organizes events in cities all over the country. They have a whole pile of different self-directed workshops. It was a decent enough introduction to the subject and gets you to do things like basic file I/O and server operations (answering HTTP requests and so on). It really helped me get used to asynchronous operations and JavaScript callbacks in general. I’ve also worked through some of a Udemy course on Node.js and will have to finish that up before too long. I’m guessing that it’ll be much easier to handle having completed the LearnYouNode workshop.
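
Here is a small example in the spirit of those exercises (not copied from any of them), showing the asynchronous, error-first callback style the workshop drills into you.

```javascript
// Minimal sketch: asynchronous file I/O with a Node-style error-first callback.
const fs = require('fs');

function countLines(path, callback) {
  fs.readFile(path, 'utf8', (err, contents) => {
    if (err) return callback(err);              // propagate errors rather than throwing
    callback(null, contents.split('\n').length - 1);
  });
}

countLines('/etc/hosts', (err, count) => {
  if (err) return console.error(err);
  console.log(`lines: ${count}`);
});
```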

It’s actually not easy to find third-party web hosting services that support Node (1&1 and a lot of the standard services like GoDaddy don’t, for example), so it’s more likely to be hosted on local machines. Local machines, of course, can be as small as a Raspberry Pi (a surprisingly popular application), as average as your desktop machine, as big as a local server, or as huge as server farms that get as big as you’d like.

I’ve volunteered to serve as a mentor at the next NodeSchool hosted in Baltimore, which happens every two or three months on a Saturday from 1:00-5:00 pm. I expect I’ll start off just helping the folks working through the basic JavaScript exercises. I’ve attended a few Saturdays in DC and had fun hanging out with people and working through the exercises.


Screen Scraping

At my PHP meetup the other night some of the folks were discussing upgrade paths for content management systems, especially Drupal. They noted that there isn’t yet a good upgrade path to the most recent version from the previous version. They described all of the manual steps that would have to be taken to migrate the data from one version to the next, with the biggest technical consideration being whether or not you have direct access to the database. If that access is possible then you can go in behind the scenes and just move stuff to a new database and its new table structure and effect whatever changes and transformations are needed by hand. The same thing can be done using custom back end code. If that access is not possible then they talked about using a process known as screen scraping.

There are numerous methods for using JavaScript and other automated tools (e.g., Selenium, which I used briefly) to manipulate user screens in web-based systems, and similar tools exist for many other kinds of systems. Screens that display the desired data can be navigated to, and the data can then be read automatically on a field-by-field basis. This works when the entry and display elements are tagged with unique identifiers, which is certainly the case in HTML screens. The hassle with this method is that it’s slow and you have to have a way to ensure that you can cause all of the data in the system to be exposed. That may or may not be a simple thing to do.
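
For anyone who hasn’t seen it done, here is a minimal sketch of the field-by-field approach using the selenium-webdriver package for Node. The URL and element id are hypothetical, and a real migration would loop over many screens and records.

```javascript
// Minimal sketch: navigate to a screen and read one field by its unique identifier.
// The URL and the 'accountStatus' element id are hypothetical examples.
const { Builder, By, until } = require('selenium-webdriver');

async function scrapeAccountStatus(url) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(url);
    const field = await driver.wait(until.elementLocated(By.id('accountStatus')), 5000);
    return await field.getText();
  } finally {
    await driver.quit();                        // always release the browser session
  }
}

scrapeAccountStatus('https://example.com/accounts/12345')
  .then(status => console.log('accountStatus:', status))
  .catch(err => console.error(err));
```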

The FileNet ECM/BPM tools I used back in 1993 and 1994 incorporated mainframe terminal windows (they supported several kinds, sometimes connected directly by Telnet across TCP/IP and other times using IBM 3270 emulation on dedicated hardware) that could be screen scraped in the same way. That tool required entry and display fields to be defined by position on an 80×25-character screen. I don’t recall attempting to do complete migrations using that method personally, but I imagine that others must have done so. The capability was more often used to supplement ongoing activities, although over time such a procedure can effect a reasonably complete soft migration.

I think this reflects the idea that the central issues in computing haven’t changed much since its earliest days. As discussed in the book Facts and Fallacies of Software Engineering, every new innovation is hailed as being massively transformative but over time proves to yield some marginal improvement in some limited problem domain. The biggest areas of endeavor now seem to be managing complexity and balancing the fixed and ongoing costs of information technology across every life cycle phase, weighing performance, reliability, and storage to achieve the lowest systemic cost of ownership consistent with required performance.

People recruit based on long checklists of specific tools because it’s easy to do, seems objective, and is amenable to automation. The question, however, is whether you really want to recruit for specific tools or for the ability to solve the larger problems at a higher level.

The bottom line is that the more things change, the more they stay the same.


The Confederation Bridge

This weekend I finally made it to my twelfth Canadian province or territory (only Nunavut remains) when I drove across the Confederation Bridge into Prince Edward Island (see also here). The bridge is about eight miles (12.9 kilometers) long, 40-60 meters (about 131 to 196 feet) high, surprisingly comfortable to drive across (at least in a car), and must withstand high winds and massive seasonal ice floes. It is not the longest bridge in the world, but it is the longest bridge over ice-covered water.

Dealing with the seasonal ice was the main design consideration, from what I’ve read. This concern drove the spacing between vertical supports to be wider than originally planned (250 meters per span as opposed to 175 meters). That spacing would indicate roughly 52 spans along the 12.9 kilometer crossing, but the spacing must be closer in places because other documentation indicates there are 65 vertical supports (14 and 7 in the shore approaches and 44 in the main span). The concerns about ice also drove the design of the bases of the vertical supports, which are angled so the ice rides up their lower slopes and breaks off. This exploits the fact that while ice is fairly strong in compression it is relatively weak in tension. Experience has shown that the pylons cut through the ice smoothly such that seasonal floes are not meaningfully affected.

The bridge was opened in 1997 on time and on budget only four years from the initial approval, which strikes me as pretty impressive. I also read that a scale mockup was built and tested to failure, meaning that the builders were not yet relying solely on computerized methods of design and testing. Having lived through most of the transition to computer-aided engineering I’m always fascinated by the cleverness of engineers who worked with physical models, especially in cases like the U.S. Army Corps of Engineers Bay Model in Sausalito, CA.

An older gentleman who used to call on our fraternity house to sell cleaning supplies (I was House Manager at the time) spent a morning regaling us with tales of how mechanical linkages were used to machine symmetric shapes using a template that was only half complete. The milling heads had to be 180 degrees out of phase. I don’t know if he was blowing smoke or not but he sounded credible to me at the time!

I saw a television documentary about another fascinating bit of old-timey analysis when I was in England in the summer of 1987. It described the design of European cathedrals and used photoelastic analysis of plastic cross-sections to show how stress was distributed widthwise through the columns and arches supporting the central nave and the aisles on either side, and lengthwise through the semi-circular apses and buttresses at either end. The designs were obviously highly varied but the basic ideas were fairly consistent.

There have also been numerous documentaries that describe the history of technology. Many discuss efforts to reconstruct a range of Roman siege projectile weapons and one described life in England in 1900.


I Helped Design This

In 2004 or 2005 I attended a series of meetings where I helped do traffic projections for this new port of entry (and bridge) at Calais, ME. I also built and ran models of different port designs using the BorderWizard tool that Regal Decision Systems had created. The results I reported helped to select and verify one of the design concepts, and this evening, as part of 930 miles of driving on the way to Prince Edward Island (my 12th of Canada’s 13 provinces and territories), I finally got to see it. I’ll be processed there as I cross back into the U.S. on the return trip.

It’s rare that new crossings are constructed. These days they’re more likely to shut them down, so this was a fun opportunity.
