Who Decides?

A little while ago, during our weekly Tampa IIBA certification study group meeting (you should join us!), someone asked me whether the business analyst or the project manager makes certain decisions. The specific question had to do with potential guidance on how projects should be implemented, but here I will try to answer the question more generally. (If you’re interested, the question is asked at 38:25 of the video file BABOKStudyGroup20230209.mp4 stored here. Our normal facilitator was out so I guided that week’s session.)

I offered a few answers at the time, but let’s try to analyze it even more completely.

In theory, the project manager and business analyst have different duties. The PM manages the environment in which the work is done and arranges for, tracks, and protects the resources (especially including money and people), while the BA manages the actual analysis and solution process. I touch on this idea here.

The foregoing is true if you’re talking about two practitioners of similar seniority in an organization where those roles are clearly defined. In smaller organizations (for example, where the PM and the most senior BA are the same person), in situations where things aren’t well delineated, or where you have people of unequal seniority, the decisions will be made by the most senior member or by whoever is nominated by senior management or organizational working style. It’s possible that the PM is the most senior person and guides most aspects of running engagements. I probably thought this was true through most of my career, and the idea of having people called business analysts is still comparatively new (and given the way things always change, who knows what may be in fashion or what anything may be called in twenty years?). In other cases, a team lead, primary investigator, senior engineer or scientist, or someone else in an analytical role will guide most of the engagement while the project manager stays in the background “counting the beans.” I’ve been that guy, too. The point is that the actual conditions on the ground dictate how things will be done.

Another assumption smuggled into the question is that there is only one BA (“the” PM or “the” BA). In reality, especially depending on the scope and scale of the project, a given effort may involve multiple BAs, and they may specialize in different phases of a project. Junior BAs, in particular, may chiefly be involved in the requirements phase(s), or at least may not be aware of everything that’s going on in the entire scope and life cycle of an engagement. (This is why BAs always seem to be doing such a seemingly wide variety of different things in different contexts if you talk to them. They’re applying a consistent set of skills, but they really are in a wide variety of different situations in different contexts. That’s why the BABOK is written in such a general, non-prescriptive way, and why I emphasize that my framework is extremely flexible.) This is often true of implementation SMEs and testers as well. There may be multiple PMs, too, but that is rarer, and usually only happens if one is in training or a generalized PMO (Project Management Office) is in place.

External customers may drive the majority of the process based on their own methods, experience, and needs, and only include your BAs and PMs in advisory roles. I once did a discovery and analysis project where a VP from a large customer organization worked hand-in-hand with my consulting company’s senior BA/PM to run the engagement. I’m sure they had at least some discussions about how things should be done before I ever got involved, and I was learning how to be a mid-level analyst and project coordinator at that time. If the customer is internal or if the team is working for itself, decision-making defaults back to the other considerations discussed here.

Sponsors may also have a lot to say about how things are done, and may be involved to varying degrees. This is especially true if they explicitly want to do something a bit different than the organization’s typical m.o.

Every combination and permutation of roles, titles, and methods is possible. Within all that, human nature is always a factor, and humans have wonderful qualities and all manner of foibles. The best thing to shoot for is to operate in the most friendly, cooperative, supportive, customer-helping way possible, aided always by continuous, open communication.


What To Be Aware Of Going Into A Project

Short answer: everything!

Long answer: let’s actually talk about it.

The more you understand about the things you might encounter when you try to solve a problem, the more likely you will be to make use of the right people’s skills, know what to look for during discovery and data collection, specify complete requirements, include everything needed in the design, implement completely and efficiently, and test thoroughly and appropriately.

How to Prepare to be Aware of These Items

There are multiple ways to develop awareness going into a project.

  • Personnel: “Our most valuable asset is our people!” announce motivational signs in organizational facilities so often it’s a cliché. Things become clichés, of course, because they arise from a large element of truth. The knowledge gained in the following items is still embodied in individual humans, and even if AI proves capable of performing some of this analysis (about which I remain skeptical even now, having followed developments in the field for forty years), it would still have been seeded by the knowledge and experience of people. Moreover, any results generated by AI would have to be verified and validated by people.
  • Training: This involves arming people with knowledge they will need ahead of an engagement. The most striking example of this I experienced was taking a one-week operator training course at a nuclear power plant prior to starting a project to build a site-specific training simulator for it, but this training can take many forms and concern any of the factors discussed in the remainder of this article.
  • Organizational Knowledge and Lessons Learned: Whatever knowledge is not embodied in people will (or should) be embodied in organizations’ historical documentation, policies and procedures, and existing code and physical plant. These materials will need to be reviewed and possibly reverse engineered, but the knowledge is there and can be crucial.
  • Research: Anything that isn’t already known can be learned, and from any appropriate source. I differentiate research from training by observing that training involves knowledge given while research involves independently seeking it.

Solution Components

Different aspects of solutions must be considered. There is a tendency to think about IT processes and solutions by default, but that should be resisted. Solutions, and process solutions in particular, can involve a wide variety of physical, logical, informational, and computational facets whether they are primarily physical (e.g., security inspections at border crossings and airports, manufacturing lines, business processes that involve paper documents, and retail establishments), primarily informational (e.g., news and media websites, video games, design and analytical simulations, and online discussion forums), or a mixture of the two (consider that Amazon.com, Starbucks, and border crossings combine many physical and IT elements).

  • Process Components — Abstract: Some solutions involve performing analyses that lead to specific items rather than processes, and those analyses are necessarily different. For engagements that analyze and then create new processes or modify existing ones, analysts should be aware of entities, entries, exits, process stations, queues, paths, bags (or pools), resources, states, and so on, as described here. All processes are made up of the same basic set of abstract, logical components, no matter what they do and what elements they include.
  • Physical Equipment — Non-Computing: This includes the physical process items like machines and storage components (like queues, tanks, hoppers, parking lots, and yards), the buildings in which they are housed, and the pathways between and through them. A Starbucks will involve a physical space of some kind, counters, furniture, restrooms, cold and room-temperature storage areas, coffee machines and milk foamers and slushie blenders, floors, drive-through windows, dumpsters, a loading area, office space, and so on. Land border crossings will include booths, desks, scanning machines, loading docks, and parking areas at which interviews and inspections can be conducted, offices, rest areas, roads and sidewalks and other paved areas, greenery and runoff collection areas, and signage, crossing gates, turnstiles, and traffic lights to control the movement of vehicles and people.
  • Physical Equipment — Computing: Many types of computing devices are used for different operations. Sensors, actuators, indicators, and displays can be attached to those. A Starbucks, for example, will have all of the computing equipment needed to communicate with the home office, track finances and stock levels, process online orders, and manage customer cash balances and loyalty points. A manufacturing plant will include sensors and actuators to track storage quantities and measure and control movements and operating settings. Controls and displays allow users, operators, and customers to interact with and understand the processes.
  • Operating Systems: Many computing devices will include operating systems of various types and complexities.
  • Communication Channels: Messages may be passed between entities of all kinds. They may be physical, in the form of documents; electronic, using different, agreed-upon information protocols across different types of physical connections; visual, in the form of lights and signs and arrows; and auditory, via alarms, warbles, and spoken announcements and instructions.
  • Implementation Languages: Computing systems will be implemented in different programming languages, each of which has different features, involves different trade-offs, and is suited to different purposes. I go into more detail here.
  • Algorithms and Calculations: Calculations are performed and decisions are made based on an infinite variety of considerations and methodologies. Ready-made methods and techniques can be applied in many cases, perhaps adjusted for individual situations, but in other cases they must be developed from scratch.
  • Data: Data drives every process and even governs the development of discrete products. I discuss data in many contexts here, here, and here, just for starters.
  • UI/UX: User interfaces and the user experience come in many forms. An entire field of study has grown up around the design and implementation of these in different forms. Screen-based displays are an obvious (and ubiquitous) form of interface, but they aren’t the only ones. Many types of control, feedback, and situational awareness are made possible through every kind of device imaginable. (Also see the discussion of communication channels just above.) These require every bit as much consideration, analysis, and design as screen-based interfaces do. Just think about how carefully designed the controls of automobiles and standup arcade games are, and how much research and experience has gone into creating them. Now consider the controls of a nuclear power plant, as shown in this still from the movie The China Syndrome. (I spent a few years developing and implementing thermo-hydraulic models for training simulators with control panels that looked like this, and I can tell you these things aren’t designed casually.)
  • Security: The field of cybersecurity is exploding these days but security considerations occur in many additional situations. Banks and prisons involve a high degree of security, as do schools, corporate methods and information, personally identifiable information associated with medical and financial data, transportation, hazardous materials, and infrastructure. Security is a critical aspect of military operations, and even of competitive sports. A major point of weakness involves the behavior of the human participants and (would-be) guardians of secure processes.
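The abstract process components listed above (entities, a queue, a station, a resource) can be made concrete with a small sketch. Below is a minimal one-station process model in Python, not a production simulator; the arrival and service rates are arbitrary assumptions chosen purely for illustration.

```python
import random

def simulate_station(n_entities=1000, arrival_rate=1.0, service_rate=1.2, seed=42):
    """Minimal one-station process model: entities enter, wait in a FIFO
    queue if the single resource is busy, are served, and exit. Returns
    the average time an entity spends waiting in the queue."""
    rng = random.Random(seed)
    t = 0.0                 # clock: current entity's arrival time
    resource_free_at = 0.0  # when the station's single resource frees up
    total_wait = 0.0
    for _ in range(n_entities):
        t += rng.expovariate(arrival_rate)   # entry: next entity arrives
        start = max(t, resource_free_at)     # queue: wait if the resource is busy
        total_wait += start - t              # time spent waiting in the queue
        resource_free_at = start + rng.expovariate(service_rate)  # service, then exit
    return total_wait / n_entities

print(round(simulate_station(), 2))  # average queue wait, in model time units
```

Even a toy like this exercises the same vocabulary (entries, exits, queues, resources, states) that applies to a Starbucks counter or a border-crossing booth.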

Target Industry

Industry knowledge is always important. While the skills and insights from all the other sections of this article can be applied to any new situation, having knowledge of the methods and history of the industry at hand is usually useful. This knowledge acts as a form of shorthand for communication with other practitioners and stakeholders, and can also act as a shortcut to understanding the best way to look for solutions. That said, industry knowledge can also act as a limitation on the types of analyses performed, by constraining what investigators may be willing to consider. Applying techniques and solutions from other fields can be an important source of creativity and lead to meaningful improvements.

  • Processes: When I first started working as a process engineer in the paper industry, I had to learn a ton about the craft and techniques for making paper. The industry had a unique set of parameters for describing the physical characteristics of paper, of which our company used about twenty. I not only had to learn what those were but also how the various pieces of equipment and chemical processes would change them from one step in the production process to another. I further had to learn the combinations of characteristics and processes that resulted in different types and grades of paper (consider the differences between bathroom tissue and newsprint, only as one example). This exemplifies an easy-to-understand physical process, but the same tradecraft and understanding applies to businesses of all kinds. The field of accounting (and financial analysis more generally) has its own rules, techniques, traditions, and regulations (think GAAP, or Generally Accepted Accounting Principles). The field of insurance (and actuarial studies) has a whole different oeuvre.
  • Equipment: Each industry has its characteristic types of equipment, while other types are widespread. Think of the different kinds of machines and process elements you might find in any manufacturing plant. Some will be general, like drills, CNC milling machines, heat exchangers, pumps, valves, pipes, conveyors, and hydraulic presses, while others may be unique to an industry, like looms for textiles, thermo-mechanical refiners for paper, nuclear reactor cores for power plants, and so on. Start by knowing what is already available; continue by understanding the parameters governing their design, acquisition, emplacement, operation, and maintenance; and end by knowing when you may need to employ something creative and new.
  • Sources and Sinks (Inputs and Outputs, Suppliers and Customers): Understanding the inputs and outputs of any process is indispensable. Inputs and outputs take the form of people and organizations as suppliers, customers, stakeholders, researchers, and regulators. They also take the form of raw materials, intermediate goods, and final outputs — and every kind of information imaginable.
  • Standards and Measures: Every field has its characteristic measures and metrics. Many are just common units of measure from various areas of physics, but others are more specific. Many are derived from large volumes of data, vary widely depending on what is included, and inspire vigorous, ongoing debate. The Federal Reserve, for instance, publishes (or at least used to, I think some may have been discontinued) six different measures of the money supply (named, super creatively, M1 through M6), but additional measures are defined by other groups (e.g., the Austrian True Money Supply). The measure of monetary inflation is even more contentious. My favorite measures come from the paper industry (as described in the Processes item, above). My favorite measure of all time is Canadian Standard Freeness, which essentially describes how long it takes for a pulp sample to drain under very specific conditions. Other metrics and KPIs (Key Performance Indicators) are generated by specific organizations for their own custom uses, often derived from a number of other factors. Those KPIs or MOEs (Measures of Effectiveness) may be expressed as fixed values (six or fewer rejected parts per million) or as probability values (customers must be served within fifteen minutes, eighty-five percent of the time).
  • Regulations: These may be set by governments, industry trade groups, or independent testing entities (e.g., UL). However, it should not be assumed that the existence of regulations and regulatory bodies yields optimal or even desired results. It should further not be assumed that the establishment of said regulations is the cause of any positive trends or outcomes.
  • Participants and Competitors: I worked in a couple of industries where there were only a handful of competitors. For example, I think there may only have been three companies in the world that made thermo-mechanical pulping refiners in 1989. I think only three companies made nuclear power plant simulators, as well. The universe of customers was larger but is also constantly changing. Most of the work I did in 1989 involved the production of newsprint, but with the decline of daily newspapers, that market has contracted significantly in favor of cardboard packaging stock. Knowing who the competitors are, who they serve, and how things are changing is an ongoing challenge. It is also important to understand which individuals are the important business, research, and thought leaders in any field.
  • Trends: As noted above, things have a tendency to change. Understanding how and why is important. Being able to spot potential changes before others provides the best chance to mitigate or avoid threats and capitalize on opportunities ahead of any competitors.
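The probability-style MOE quoted under Standards and Measures (customers must be served within fifteen minutes, eighty-five percent of the time) is simple to evaluate once you have observations. A short sketch, using a hypothetical sample of service times:

```python
def meets_moe(service_times_min, threshold_min=15.0, required_fraction=0.85):
    """True if at least `required_fraction` of the observed service
    times fall at or under `threshold_min` minutes."""
    within = sum(1 for t in service_times_min if t <= threshold_min)
    return within / len(service_times_min) >= required_fraction

sample = [4, 9, 12, 14, 16, 7, 11, 13, 18, 10]  # minutes; hypothetical observations
print(meets_moe(sample))  # False: only 8 of 10 (80%) are within 15 minutes
```

The hard part in practice is not this arithmetic but agreeing on the threshold, the required fraction, and how the observations are collected.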

Engagement Methodologies

As business analysts, project managers, sponsors, and other participants, we should have a good idea how to attack problems individually and, more often, in groups. Many of the same issues come up and many of the same skills and training are required to create effective solutions and realize value. These are common to almost any type of endeavor.

  • How To Run an Engagement: I have served in almost every role in a wide variety of situations, organizations, industries, and types. I have created my analysis and solution framework as the result of those long and varied experiences. Many of those involved my own successes and failures, but I learned a ton from what my employer and customer organizations and their people did well and poorly. I write separately about business analysis and project management below, and my framework primarily concentrates on the business analysis and solution creation, evaluation, and selection process, but in the larger scope of things I always think about how the two work hand in hand. Knowing how to set up an engagement and work through it in an organized way is invaluable.
  • How to Derive Value: Even though the phases of my framework are always the same, different individuals and teams can find themselves effectively working in, modifying, or preparing for work in any phase at any time, and at any level. It is not uncommon for work to be going on in all phases at the same time. The value of my framework is that it enhances the situational awareness of the participants, so they have a solid context for whatever it is they’re doing and can communicate it clearly to others. More importantly, working through the phases can lead to deriving value based on many possible types of solutions and opportunities. The opportunities identified may be by phase or by approach.
  • Business Analysis Skills: The practice of business analysis involves analyzing problems and developing solutions. It is detailed in two major handbooks. The BABOK (Business Analysis Body of Knowledge), published by the IIBA (International Institute of Business Analysis), details the practice from many points of view. It provides a framework similar to, but structured differently from mine, describes a list of fifty techniques, and provides other contexts and information for practitioners from beginner to experienced professional. The IIBA also sponsors numerous certifications in the subject. The PMI, described below, also offers a thick handbook and certification for business analysis.
  • Project Management Skills: The practice of project management involves setting up an environment and acquiring and managing the resources needed to solve a problem. As noted (and linked) above, project management may be thought of as a wrapper around the work of business analysis. The PMBOK (Project Management Body of Knowledge), published by the PMI (Project Management Institute), details the practice from many points of view.
  • Participants and Stakeholders: I include an item called participants and competitors above, which is mostly about the organizations and thought leaders. Here I think more in terms of individual practitioners and those associated with and affected by their work. Business analysis recognizes many different stakeholder roles (the BABOK lists business analyst, customer, domain SME, end user, implementation SME, operational support, project manager, regulator, sponsor, supplier, and tester while recognizing that other roles exist and that individuals may serve in more than one role). Many additional roles may exist, though many may fall into one of the two listed groups of SMEs.
  • Financial Analysis: Financial considerations are often involved in business analysis and project management. This can involve calculations like ROI, payback period, the time value of money, and all kinds of estimations, compilations, and accountings of prices and units.
  • Tradespace Analysis: Life is a constrained optimization problem. You never have enough of all the resources you want. Money is an important constraint but, believe it or not, the more onerous for us humans is ultimately time. The classic tradeoff is among the Iron Triangle constraints of time, cost, and quality (or features). The joke is that you can have it fast, cheap or good — pick two! (And if you want it really, really fast, cheap, or good, pick one.) However, an infinite number of other tradeoffs are possible. Consider the cost of computer time to support a programming language that does a lot for you and allows for faster development with fewer errors, contrasted with one that runs faster and uses fewer resources but takes longer to develop in and is more prone to errors. Labor vs. automation is another common tradeoff.
  • Technologies and Tools: These can involve all kinds of software (for word processing, spreadsheets, media, collaboration) and analytical techniques (statistics, optimization, marketing, simulation).
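Financial calculations like those named above are mechanical once the cash-flow estimates exist; the hard analytical work is producing the estimates. A sketch with entirely hypothetical numbers:

```python
def npv(rate, cash_flows):
    """Net present value: discount each yearly cash flow back to today.
    cash_flows[0] is the up-front investment, usually negative."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative,
    or None if the investment never pays back."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None

flows = [-100_000, 40_000, 40_000, 40_000]  # hypothetical project cash flows
print(round(npv(0.10, flows)))  # -526: slightly negative at a 10% discount rate
print(payback_period(flows))    # 3: pays back in year three, ignoring discounting
```

Note how the time value of money changes the verdict: the undiscounted flows break even and then some, but at a ten percent discount rate the same project is a marginal loss.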

Everything Else

In the end, working on projects and developing solutions for internal and external customers is about working with and serving people. You also develop your own skills, abilities, and reach, but hopefully that is applied to the first goal.

  • Communication and Empathy: Working with people well means caring about them and correctly understanding their needs and supporting them so they can make their best contributions. Many analysts are more thing-centered than people-centered, so this can be a struggle, but keep at it. It is the most valuable and rewarding ability and gift you can have.
  • Ability to See Connections and Commonalities: These abilities are major aspects of creativity. Another major aspect is the ability to put things together in new combinations in ways that make sense for that effort. I write about seeing and finding connections and commonalities in detail here. Like getting better at communication, these skills can be developed if they don’t come to you naturally.
  • How to Learn Anything: Last but not least, you should be willing to learn everything and anything. Every project will require you to learn its unique particulars and requirements, but as all the previous descriptions attest, there’s a lot more. You don’t have to become an expert in all this stuff, but you should at least know that it exists, so you can find the right people who do know and communicate with openness and respect.

Can you think of anything else it might be valuable to know before starting a project? I’d love to hear about it.


Why to Address a Problem in an Organized Way

When developing a solution or capability, it is usually good to make it as simple as possible — but no simpler. The key to walking the fine line between too simple and too complex (or too heavy or too expensive or whatever) is to approach problems in an organized and consistent way. This increases the chance of finding the appropriate solution not only in terms of complexity and weight, but also in terms of accuracy and correctness. You don’t want a solution that doesn’t do everything you need, but you don’t want to spend a hundred thousand dollars for a twenty-thousand-dollar problem, either.

Different problems can come up if you skip different steps in the process, or if you do them incorrectly or incompletely. The steps are as follows:

  • Intended Use (or Problem Statement): Defining the problem to be solved
  • Conceptual Model (including Discovery and Data Collection): Understanding the current situation in detail (describing the As-Is state)
  • Requirements: Identifying the specific functional and non-functional needs of the organization, users, customers, and solution (describing the abstract To-Be state)
  • Design: Describing a proposed solution, or the best among many possibilities (describing the concrete To-Be state)
  • Implementation: Creating and deploying the chosen design
  • Test and Acceptance (Verification and Validation): Making sure the solution works correctly as intended, and that it actually addresses the intended use

The phases are shown below to reflect that iteration, review, and feedback occur within each phase, in order to ensure everything is properly understood and agreed to by the relevant participants and stakeholders, and also between phases, as work in any particular phase can improve understanding of the overall problem and lead to modifications in other phases. I write about this elsewhere here and here, just for starters. I also describe how the phases are conducted in different management contexts here. For this discussion, pay particular attention to the difference between when the conceptual modeling work is done in a project involving a change to an existing situation vs. when a team is creating something entirely new. That said, most efforts within organizations are likely to involve changes or additions to what’s already there, so the descriptions that follow will assume that.

More pithily stated, the process is: define the problem, figure out what’s going on now, figure out what’s needed and by whom in some detail, propose solutions and choose one, implement and deploy the solution, and test it. Different problems arise when errors and omissions are made in different phases. Let’s look at these in order.

All items identified in each phase of a project or engagement should map to one or more items in both the previous and subsequent phases. This ensures that all items address the originally identified business need(s), that all elements of the existing system are considered, that all participants’ needs are met, that the design addresses all needs, that the implementation encompasses all elements of the design, and that all elements of the implementation (and deployment) are provisioned, tested, and accepted. Equally important, tracking items in this way ensures that effort is not directed to doing extra analysis, implementation, and testing for items that aren’t needed, because they don’t address the business need.

The tool for doing this is a Requirements Traceability Matrix. This article discusses how it is used to trace and link items across all the phases. However, there is another way to think about the traceability and mapping of requirements and other items, and that is in any form of logical, possibly hierarchical model that ensures all elements of the solution are considered as a unified and logical whole.
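The traceability check can be sketched as a simple mapping exercise. All the item IDs below are hypothetical, and a real matrix would span every adjacent phase pair, not just requirements to design:

```python
# Requirement -> design items that satisfy it (hypothetical IDs)
trace = {
    "REQ-1": ["DES-1"],
    "REQ-2": ["DES-2", "DES-3"],
    "REQ-3": [],  # nothing in the design addresses this requirement
}
design_items = {"DES-1", "DES-2", "DES-3", "DES-4"}

unmet = [req for req, targets in trace.items() if not targets]
covered = {d for targets in trace.values() for d in targets}
extra = sorted(design_items - covered)  # design work no requirement asked for

print(unmet)  # ['REQ-3'] -> a stated need the design does not address
print(extra)  # ['DES-4'] -> effort not traceable to any business need
```

The two failure modes surfaced here (unmet needs and unrequested effort) are exactly the ones the surrounding text warns about.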

Finally, while this framework seemingly includes a lot of steps and descriptive verbiage, at root it’s ultimately pretty simple, and the same rules apply for managing and conceiving the work as for the solution. In the end, the process used should be as simple as possible — but no simpler. The key is to use a consistent and organized approach that maintains and enhances communication, situational awareness, engagement, and the appropriate level of detail.

Intended Use

The simplest thing to say here is that if you don’t identify the problem correctly, you probably aren’t going to solve it. The iterative nature of the framework does allow the problem definition to be modified as work proceeds, so even if the problem isn’t defined exactly right at the beginning, it can be redefined and made correct as investigations proceed. For this exercise, however, let’s assume we do know what we’re trying to accomplish.

Conceptual Model

This mostly involves figuring out what’s going on now, in the proper amount of detail. If you omit this step, or if you do not complete it with sufficient thoroughness and accuracy, you may encounter the following problems:

  • incomplete understanding of current processes, causing you to leave things out of your analysis and solution, and not solve problems users may be having
  • incomplete understanding of current processes, causing you to “re-invent the wheel” by reimplementing capabilities you already have
  • incomplete understanding of data, so you don’t understand the completeness, quality, or usability of your data for different purposes (see here, here, and here)
  • incomplete understanding of interfaces and communication channels (including human-human, human-system, system-system, human-environment, system-environment, human-process_item, system-process_item, where a process_item could be a document, a package, a vehicle, an item being manufactured or assembled or a component thereof; and all of the foregoing can also be in your own organization or external organizations), so you don’t know who’s talking to who and why
  • incomplete understanding of different kinds of queues and storage pools, causing incorrect analysis or design of solutions
  • incomplete understanding of extant security measures and possible threats, leading to a range of potential vulnerabilities
  • incomplete understanding of potential solutions, which may inhibit alertness to things you could be paying attention to during discovery and data collection that may be germane
  • incomplete understanding of all of the above, causing potential lack of awareness of scope and scale of operations

Requirements

This involves learning what the users, customers, and organization(s) need, and even what the solution needs. This step can even include identifying methodological requirements concerning how the various steps are performed in each phase. Functional requirements describe what the solution does. Non-functional requirements describe what the solution is.

Skipping all or part of this step can lead to these problems:

  • not talking to all of the users can cause you to miss problems like ease of use, intuitiveness of the system, opportunities for streamlining operations, the difficulty or outright inability to fix mistakes, the time and complexity of tasks, design that promotes making errors instead of preventing them, and more.
  • not talking to all of the operators and maintainers can cause you to miss difficulties with documentation, modifying or repairing the system, performing backups, generating reports, diagnosing problems, restarting the system, applying updates and patches, arranging failovers and disaster recovery, notifying users of various conditions, and more.
  • not talking to the owners of external systems and organizations with whom you interact can cause you to miss communication errors, data and timing mismatches, changes to external operations, and the like.
  • not talking to vendors can cause you to miss problems like identified errors and faults, updates, recalls, updated usage guidance, end-of-support notices, training, and so on.
  • not talking to implementation and deployment SMEs can cause you to miss problems like solution requirements, deployment needs, implementability, possibilities for modularity and reuse, availability of teams and resources, fitness of use for different tools and techniques, inadequate robustness, insufficient bandwidth or storage or other capability, compatibility with different hardware and software and OS environments, and more.
  • not talking to UI/UX and other designers can cause problems with usability, consistent look and feel, branding, disability access, testing, and so on.
  • not talking to customers can cause you to miss problems like ease of use, reluctance to use new features or otherwise change, fears about security, concerns about price changes, perceived long-term viability of the organization, overall preferences and use trends, and others.

Design

Per the beginning of this post, solutions should be as simple as possible, but no simpler. Choices among many possibilities must consider many factors, all of which can be analyzed in a tradespace with all other factors. In one case, you may choose a dashboard. In another case a monthly report may suffice. A quick and dirty macro may serve the purpose in other cases. A heavier and more expensive capability that is already owned, with many experienced analysts and implementers readily available, may be chosen over a simpler alternative for the sake of consistency; conversely, a new capability might not be adopted if a simpler one will do the job while being cheaper, more approachable, and more maintainable.

There are no hard-and-fast rules for making these judgments. Each organization must apply business acumen to its own situations as they arise. As described above, the right effort should be applied to the problem to generate the right solution.

Implementation

Work in all phases of every engagement or project must proceed on three levels, as described here. The top layer considers the organization, its sub-units and departments and functions and locations, and its people. The bottom layer is the hardware and software that make the process run. This is where the technology and other physical plant comes into play. An assembly line or the equipment and building that make up a coffee shop count in this layer.

The middle layer is the abstract or application layer, and that is where the actions, decisions, calculations, governing rules, and data are described that logically drive the organization’s process. The data and operations so identified and described marry the organization and its people to whatever physical implementations are needed.

The implementation phase can involve an almost limitless number of considerations, levels, and components, all of which have to be constructed, tested, and deployed, possibly in multiple phases depending on the scope and scale of the solution being developed. Solutions must sometimes be rolled out in phases, but small or otherwise incremental capabilities can be deployed all at once. The key is to do the implementation and deployment in a way that makes sense for the solution.

If the abstract, middle layer is designed correctly, potential problems with the implementation and deployment will be minimized. Moreover, if the implementation and deployment SMEs are involved from the beginning, depending on the nature of the envisaged solution, their ongoing insights and analysis should mitigate problems at this stage.

Mitigate, that is, but not eliminate. Difficulties can always arise. The major point of this exercise is that the problem isn’t solved, or even conceived, at the implementation stage. Ideally it is logically solved before you ever get that far. As long-time computer industry analyst Jeff Duntemann once observed (riffing on Ben Franklin, as I recall), “An ounce of analysis is worth a pound of debugging.”

Test and Acceptance

Testing is meant to tell you whether the thing you built works, and whether it is the right thing to have built (to solve the identified problem and realize the sought-after value). These two operations are formally known as verification and validation.

A form of this V&V happens every time you iterate within a phase while you are working toward the most correct and complete understanding. For example, when you perform discovery and data collection in the conceptual modeling phase, you document your findings, have the subject matter experts review your documentation and tell you what you got wrong, and you make the necessary edits and resubmit for review. This process continues until the SMEs confirm that your documentation, and hence your understanding (of the current state, and potentially other solution components), is complete and correct.

A different form of it occurs when you iterate back and forth between phases. You identify gaps that need to be filled and modifications that need to be made in order for the whole to make logical and consistent sense.

The main V&V operations are performed on the implemented items. Verification operations tend to be more concrete and definable and the tests to perform them tend to be more amenable to automation. Validation operations tend to be more abstract and require expert judgment. Completion of all testing leads to a determination of non-acceptance, partial acceptance (with limitations), or full acceptance. Obviously, final validation and acceptance are unlikely to be completed if the problem has not been solved using a consistent process that considers all relevant factors.
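The difference between the two kinds of testing can be sketched in a few lines of code. The payment function and reference value below are hypothetical stand-ins for any specified behavior; the point is that verification reduces to automatable checks against a specification, while validation remains a judgment call:

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed-rate amortized payment (the 'specified' behavior)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Verification: concrete and automatable -- the implementation matches
# its specification (here checked against a known reference value).
payment = monthly_payment(100_000, 0.06, 360)
assert abs(payment - 599.55) < 0.01

# Validation: abstract -- does a fixed-rate payment calculator actually
# meet what the customer needed (variable rates? extra payments?).
# That determination comes from expert and SME review, not an assertion.
```

The verification assertion can run unattended in a test suite forever; the validation question in the final comment cannot.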

Conclusion

Do the proper analysis, through all the steps, in order, at a weight and level of effort appropriate for the problem, and that will give you the best chance to succeed. Jumping in and just implementing something is not likely to yield good results.

Posted in Tools and methods

Old-Timey Calculators: Before Electronic Computers (and Even After)

Some time around 2004, as I was bouncing around the country doing data collections and research in support of the simulation and analysis of transportation and security systems, I happened to visit the National Museum of Nuclear Science & History in the Old Town section of Albuquerque, New Mexico. The exhibit that most struck me on that occasion involved a collection of manual devices that scientists and engineers in the early days of atomic research (including on the Manhattan Project) used to perform various calculations. Having some appreciation for the histories of science, mathematics, engineering, and computing, I am always impressed at how clever the early investigators could be. They were, after all, every bit as smart as we are. It’s just that they hadn’t learned as much and didn’t have access to the tools we do.

When I revisited the area on a major road trip last October, it took me a little while to figure out that the shiny new museum was the original one in a new and much expanded location.

I naturally scoured the place to find the exhibit I had so clearly remembered, so I could write about it here, but alas to no avail. Fortunately, I decided to bug the docents and those extremely helpful individuals introduced me to the museum’s head curator, who was kind enough to listen to my story and take me back to a meeting room where she let me look through a box of items she’d pulled from non-display storage.

Those reference aids were all fascinating, I told her, but they weren’t what I was looking for. What I was looking for was a series of paper calculators like this one from a game I used to play with my mother’s parents (the version described in the link is an update of the one I played with my grandparents, which I think is superior), as shown below. A close-up of the paper calculator is shown separately. It is essentially just a glorified lookup table of multiplied values of number of shares (1-50 are shown on the other side) and the price per share. The game included two hundred shares of each company, but the makers figured people could multiply by a hundred and add when necessary.
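In software terms, that paper calculator is nothing more than a precomputed multiplication table. A toy sketch, with made-up prices (the real device tabulated 1-50 shares on one side):

```python
# Hypothetical per-share prices; the 1-50 share range matches the device.
PRICES = [25, 50, 75, 100, 125, 150]

table = {(shares, price): shares * price
         for shares in range(1, 51)
         for price in PRICES}

# Look up 40 shares at $75 per share.
assert table[(40, 75)] == 3000

# The game included 200 shares of each company; the makers assumed
# players could multiply a tabulated row and add when necessary.
assert 4 * table[(50, 75)] == 200 * 75
```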

Then the curator related that the items she had available were only part of a much larger collection that a donor had loaned to the museum for display, but he had chosen to reclaim many of the items when the museum moved.

Oh well.

Let’s see what we can dredge up, anyway.

I have a few random things I’ve collected over the years. The middle slide rule was my father’s. He used it to get his master’s in finance (and his CFA certification) in the mid-60s. I remember him showing me how to multiply on it, with the example 9 * 9 = 81. The center sliding section can be flipped over and provides several more scales. Back in the day I could tell at a glance what all the scales were used for. Now I’d probably have to think about some of them for a while… The smaller, two-cycle rule at the bottom and the circular one at the top are items I happened upon at random.
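For anyone who never handled one, the trick behind that 9 * 9 demonstration is worth sketching: a slide rule’s scales are ruled proportionally to the base-10 logarithm, so laying two lengths end to end adds logarithms, which multiplies the underlying numbers. A small sketch (the 25 cm scale length is just an assumed value):

```python
import math

def rule_position(x, scale_length=25.0):
    """Distance from the index for value x on a log-ruled scale."""
    return math.log10(x) * scale_length

# 9 x 9: set the slide's index at 9 on the body scale, then read the
# answer beneath 9 on the slide -- physically adding the two lengths.
combined = rule_position(9) + rule_position(9)
product = 10 ** (combined / 25.0)
assert abs(product - 81) < 1e-9
```

The same principle explains why a "two-cycle" rule spans two decades of log scale, and why circular rules simply wrap the scale around a disk.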

This Kodak reference has a bunch of pages with tables and rotating wheels that helped you figure out various camera settings. I never spent enough time using it, but it seemed like the kind of thing an intrepid photographic hobbyist (and inveterate gearhead in all things) ought to have in support of his Minolta X-700.

Greasy nerds (and I say that with all love and respect!) old enough to have enjoyed the heyday of classic first edition Advanced Dungeons & Dragons may remember this legendary combat calculator that was published in Dragon Magazine issue number 74. The idea was that you cut out the cardstock images, excised the viewing windows and the center hole, and stuck the layers together with a brass fastener. I think I laminated mine. Then I got really clever and made my own wheels on the back side, which showed die rolls needed to make saving throws based on character class and level, and the results of cleric characters’ attempts to repel (turn) undead monsters. Sure, all this stuff could easily be looked up in a book or a dungeon master’s screen, but this was more fun. (Never mind that I basically never used it…)

Here I’m cheating a little. (OK, a LOT!) I had a bunch of handheld calculators over the years. (I should probably do a separate post on those.) The last one I used in college was the HP-41CX, complete with card reader. My NCO school instructors at Fort Bliss had no idea what to do with me when I used it to write a program to do intersection and resection in our map reading class. The smaller-and-not-expandable HP-42S was the most powerful programmable calculator Hewlett-Packard ever released. Devices like this have since been overtaken by the power and ubiquity of computers and mobile apps. That said, because I would find it intolerable to not use Reverse Polish Notation when doing quick calculations, I have an HP-42S simulator on my iPhone, and an HP-15C simulator on my laptop.
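For readers who never used these machines, part of RPN’s charm is how simply it evaluates: operands push onto a stack, and each operator pops two values and pushes the result. A minimal evaluator (a sketch, bearing no resemblance to actual HP firmware):

```python
def eval_rpn(tokens):
    """Evaluate a sequence of RPN tokens, e.g. ['3', '4', '+']."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # note the order: second operand popped first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "3 4 + 2 *" is RPN for (3 + 4) * 2 -- no parentheses needed.
assert eval_rpn("3 4 + 2 *".split()) == 14.0
```

No parentheses, no operator precedence, no ambiguity, which is exactly why entering long calculations this way becomes addictive.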

Since we’re talking about doing things by hand, however, on with the show.

I visited the Greenwich Observatory in 1987, but after a pretty full day of knocking around London with some colleagues, I arrived just after closing, and wasn’t able to see what was inside. I managed to remedy that in 2014 (with my little stuffed mascots, in front of the main building and in front of the Prime Meridian which divides the eastern and western hemispheres). Better 27 years later than never, right? The third image shows an instrument used to measure the angle of different objects in the night sky to a very precise degree — for a mechanical device. The fourth is just a modern update (c. 1897). I didn’t get pictures of everything there, but the instruments were all interesting, and the older the better. They reminded me of a technique I learned in Mr. Thompson’s astronomy class in high school, which allowed you to accurately plot a planet’s orbit around the sun using just two observations at different positions and times. Decades and centuries of diligent observation, measurement, and analysis slowly unlocked important parts of the workings of the universe.

If you observe that those instruments were unlike the manual calculation aids I discussed above, I won’t argue, although I’m guessing that many tables of objects and their movements were compiled at Greenwich and elsewhere. One wonders how the Mayans did it, or the Egyptians, or anyone who came before (like at Göbekli Tepe).

I finally poked around the internet and stumbled across a few websites discussing several major collections of the sort of calculation devices I set out to discuss. I neglected to ask the name of the collector whose items I had seen at the original nuclear museum in Old Town Albuquerque, so I can’t verify that any of the devices described in any of the following links are the same ones I saw, but at the least you might appreciate the wide variety of devices that have been created over the years for almost any application imaginable.

Here is a nice, general article about slide rules.

The site for the Oughtred Society (named after the inventor of the slide rule, William Oughtred) links to a ton of resources on the subject.

This subpage of the above links to several prominent collections, which feature literally hundreds of devices.

This item is constructed not as a classic slide rule but as a sliding, paper lookup device of the type I most remembered seeing at the museum. Other examples are here, here, here, here, and here, but there are many circular ones dedicated to looking up values related to fission and radioactivity effects, as exemplified here, here, and here. As you can see, some of these are nothing but glorified lookup tables, but others can be used to calculate various quantities. As always, you should only plug numbers into the actual calculation when you have already solved the problem logically and understand what all the components mean.

Posted in Tools and methods

Conceptual Modeling Work May Occur In Many Contexts

During a previously-referenced recent conversation I was asked why I refer to certain research, discovery, and data collection activities as “conceptual modeling.” I do so because it is a surprisingly general standard term of art that has its own Wikipedia entry. From my point of view it especially comes up in the fields of software and simulation design, but it also comes up in the fields of neurological research and philosophy. I encountered the term in formal usage surprisingly recently, when I was using a specific framework for conducting IV&V operations for a Navy simulation product. (It’s amazing how long you can do things and use concepts without formally knowing what they’re called.) There are also a few books (see here and here) that discuss the idea to greater or lesser degrees.

The materials linked above define the term in a lot of ways, but the way I think about it most directly involves the discovery and data collection steps involved in describing the As-Is state and the components that make up the To-Be state. Indeed, the entire design phase can be thought of as a form of conceptual modeling. I usually depict the conceptual model phase as being the second thing we do in a situation where our project will involve modifying an existing system. However, if we’re building a new system from scratch, I describe the conceptual modeling phase as taking place embedded within the design phase.

Even if the project proceeds from an existing As-Is state, the work done in the design phase still involves a form of conceptual modeling.

Conceptual modeling also takes place at all three “levels” of enterprise architecture, which is comprised of the business layer, the abstract or application layer, and the implementation/deployment layer.

One other form of conceptual modeling takes place when examining opportunities from different angles. For example, becoming aware of a new piece of technology before a project or engagement even starts, and then embarking on an effort to take advantage of capabilities the new technology provides, is a form of conceptual modeling that drives the whole effort, and occurs before it. I can’t say my awareness of an inexpensive turntable device with a hole in the middle of it entirely drove me to identify Jules Verne’s Around the World in Eighty Days as a good subject for a college spring carnival booth where the year’s general theme was “Adventure,” but it certainly drove an aspect of the possible implementation that fired people’s imagination and enthusiasm.

If we want to go really wild, devising and implementing a formal test and acceptance process, whether it’s for a CI/CD-type microservices system, a mainframe/HPC system, a desktop system, a distributed control or training system, an embedded system, or even a non-IT solution, can involve different forms of conceptual modeling.

In short, conceptual modeling involves identifying and incorporating concepts in whatever contexts make sense. That means that the process of conceptual modeling can take place almost anywhere within an engagement or project. It can be one of those very general terms, like “gap analysis,” which can describe any form of identifying differences between where you are and where you want to be (which can also happen within each individual phase and across an entire effort), or “level,” as used in the Dungeons and Dragons role-playing game. There, the term could refer to the power and abilities of the players’ characters; the difficulty and power of magic spells; the difficulty and danger of different sections of a dungeon (or other environment), especially if arranged in “floors” that go ever deeper underground (or upwards in a tower or mountain, or deeper into a dense wood, or…); or even the power of monsters and other opponents. The process of identifying and continually reviewing assumptions, simplifications, and risks and impacts similarly takes place throughout an entire engagement, across all phases in any order.

Revisiting the idea that everything in my framework is iterative both within and between phases, and also the IIBA’s insistence that training materials should never provide prescriptive ways to do things (sometimes to a fault), we see that almost every activity in an engagement can be worked on at any time. I provide a “standard(ish)” order just as a way to start thinking about the entire process. Even if a given project involves doing things in a very different order from how it may often be done, having a standard departure point and template helps improve the situational awareness of analysts and other practitioners.

Posted in Tools and methods

The Greatest Illustration of Writing Accurate Instructions Ever!

For architects, business analysts, programmers, managers, testers, and everyone else. It isn’t easy to do it well…

Posted in Management

Framework Phases Across the Proposal/Bid/Sales Process

I’ve discussed the many ways the different phases can be arranged in some standard(ish) management contexts, and how the work in each phase can be broken down in multiple dimensions, but those hardly exhaust their possible arrangements. Another important source of variation has to do with work performed in getting a project in-house to begin with. It doesn’t apply in every case, and especially doesn’t typically apply for work done for internal customers, but there are a lot of possibilities when it does. I hope the following example will illustrate the idea.

In my first engineering job, as a process engineer for a company that made capital equipment and turnkey thermo-mechanical pulping lines for the paper industry, about half my job involved creating process and instrumentation diagrams (P&IDs) that supported the negotiations our senior engineers and sales staff would have with potential customers. The drawings would include the heat and material balances showing the amount of materials the system would have to process to generate the customers’ desired output, and the conditions in each part of the process. The types of equipment in each system were chosen to produce the desired paper characteristics. The sizing part was fairly straightforward, but the quality part seemed to be as much art as science, and was driven more by experience than by prediction from first principles. The completed drawings and balances would allow the next group of designers to specify the size and quantity of the equipment that would need to be included in the system, so that prices could be attached and a bid prepared.
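At its core, a material balance is just bookkeeping on flows: what goes into each unit must come out, adjusted for yield. A toy sketch with entirely illustrative numbers (real TMP lines involve many more streams, energy balances, and empirical quality factors):

```python
# All figures below are invented for illustration.
target_output_tpd = 500.0   # tonnes/day of dry pulp the customer wants
yield_fraction = 0.95       # fraction of fiber surviving refining/screening

# Dry fiber in must cover dry fiber out plus losses.
chips_in_tpd = target_output_tpd / yield_fraction
assert abs(chips_in_tpd - 526.3) < 0.1

# At 4% consistency the slurry carrying that fiber is mostly water,
# which is why the pipes, pumps, and tanks dwarf the fiber flow itself.
consistency = 0.04
total_flow_tpd = target_output_tpd / consistency
assert total_flow_tpd == 12500.0
```

Chaining calculations like these backward from the desired output, unit by unit, is essentially what those heat and material balances did.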

The process of creating each set of drawings took several iterations between me and the drafting department, and then there could be several rounds of modifying the drawings as the design and configuration of the proposed system evolved. I remember going to one plant that was actually built from drawings I did (Daishawa, Quebec City) and also doing multiple rounds of drawings for a plant in the Orinoco River region of Venezuela. I don’t know if that plant ever got built, or if so who built it. I remember that it included huge storage tanks (often called chests) that the energy-intensive pulping line(s) would fill up overnight when electricity was cheap (they called this load shedding), and which would be drawn down during the day as the actual paper machine ran continuously.

So the customer identified the requirements, a first pass at the conceptual model and design work was completed, and then and only then could the implementation begin, if our company’s bid was the winner. Once a project was actually undertaken, there would then be another round of requirements, design, and conceptual modeling activities to work out specific details of the system, equipment layouts, and so on. One can think of the sales effort as one independent project, with the submitted bid representing the work of the implementation and test phases, and the construction of the actual production line and all its equipment as another independent project, or one can think of the entirety of the work as being a single project with some unique patterns of iteration.

In the end it doesn’t actually matter. What matters is being able to recognize the correct activities to perform in each phase and what phase you are likely to be in at any given time. I know the IIBA sets great store on not being prescriptive. I think the many possible configurations of phases show that my framework is extremely flexible, and that anyone who thinks it is overly prescriptive probably doesn’t understand it yet.

Posted in Management

“Where in the Framework Do You Think About How To Add Value?”

I was recently asked this by a very intelligent and insightful individual. My expanded answer follows.

The short answer is that I differentiate between the solution and the engagement in several different ways. The solution, and the analysis performed beforehand that helps to decide whether to even pursue a solution, is where the value is generated. The engagement describes the management environment in which the solution is generated. So, if I’m talking about the management environment, I’m not likely to be talking about the solution in its own detail.

It’s amazing how long you can think about a thing and still have so many unstated assumptions in play. It is clear in my mind that the framework I’ve developed is intended to guide work in any engagement with customers and problems, enhance situational awareness within such engagements, and so on. I often speak and write about the kinds of work that are done within each phase of an engagement, but I had not done so in the conversation where this question was posed to me, so some important context was missing. I’ve also had the feeling that, if people have failed to understand what I’ve been trying to communicate with all this, they may think it is just another form of content-free consultant-y blather. My antipathy towards fluffy consultantspeak is as strong as anyone’s, so let us definitely understand the missing context.

I recently discussed the factors that can drive potential solutions. Many of those factors consider how value is evaluated and generated before and outside of any particular engagement. If those factors are not expected to drive value, through the application of prior experience, a priori reasoning, or other entrepreneurial judgement, then why would anyone even begin an engagement?

It is certainly true that projects fail for various reasons, but projects quite often succeed, too, so let’s look at how that happens.

When I did business process reengineering using FileNet document imaging systems, it was pretty well understood that properly designed and implemented systems could generate substantial cost savings. So although engagements meant to analyze, automate, and reconfigure customer operations were known to provide (a really good chance of generating) benefits going in — for certain document-heavy classes of business processes — the engagements themselves still needed to work through all the phases in detail to figure out where and how the benefits would be realized for each customer’s specific process, and to figure out the exact value of the benefit.

The specific benefits of solutions like the foregoing may be readily calculated, but it may be far more difficult to assign specific monetary values generated in other situations, for example by the creation and employment of nuclear power plant training simulators. Their use is likely to lead to more efficient and effective operations on an ongoing basis, though those savings would be difficult to identify and quantify. The biggest reason they are used, however, is to prevent catastrophic failures that result in the loss of entire plants and extended losses in the surrounding regions. Economic analysts can (and do) put prices on those things, but the goal is largely to forestall the unthinkable.

Other types of benefits may not be strictly monetary, but may instead provide psychic or quality-of-life benefits. This is probably more likely to be true for consumer products than for capital goods or process improvements. It is also true that the producer of these kinds of products must provide them at a price consumers are willing to pay, because the customers will perform their own, subjective, cost-benefit analysis, whether they do so explicitly or not.

Business analysts, project managers, executives, and individuals in every other role can contribute to analyses of whether projects and product development efforts should be undertaken. However, it’s important to be able to do the necessary costing and accounting that enables determinations of which efforts are worthwhile and which aren’t.

We can also state that doing project and product work more efficiently always drives value. This not only enables a team to deliver outputs at lower cost, and probably higher quality, but those savings may make the difference between whether customers accept those outputs or not.

The initial phase, whether we call it the intended use, problem statement, or something else, is mostly about defining the goals of the engagement and setting up the management mechanisms in preparation for doing the work. I assume that this step is only undertaken if the effort has already been judged to be beneficial.

Value is generated in the conceptual model phase differently based on when it occurs. If it is carried out at or near the beginning to determine the As-Is state for a process improvement, then generating the most thorough and accurate picture of the current state will lead to the best results in later phases. This work includes learning everything possible about extant assumptions and risks. If the conceptual model work is conducted as part of the design phase, when there is no current state and the goal is to build something entirely new, then thorough and accurate information is still important; it is just gathered in a different way.

The requirements phase is where the envisaged solution is fitted to the specific needs of the customer, and where the needs of the solution and applicable regulations and standards are folded in. The more accurately and completely the needs can be identified, the more value the solution will be able to provide.

Value can be added in the design phase through elements that embody robustness, modularity, efficiency, novelty, and effectiveness in many ways. Elements can involve improved materials, effective rearrangements, the leveraging of new scientific discoveries, updated machines and components, and more.

Improving the efficiency and effectiveness of activities in the implementation phase adds value by consuming less time and fewer resources. The resource savings can be realized both while carrying out the actual work and during the operation of the delivered solution. Alternatively, value may be enhanced by building in ways to generate more outputs with similar inputs. One possible example of this is efficient computer calculations that produce more granular, accurate, and frequent outputs. Another example is improved machining methods that produce tighter tolerances, resulting in tighter seals and reduced friction and wear, which would allow an engine to produce more power and less pollution over a longer period of time while requiring less maintenance. It should also be understood that effective deployment is an integral part of the implementation phase.

It occurs to me that the line between requirements, design, and implementation can be rather blurry. Harkening back to the last example, I would ask whether the improved ability to machine surfaces is an element of design or implementation. I’d love to hear your thoughts on this distinction.

Activities in the test (and acceptance) phase add value in at least two ways. One is simply by reducing the time and effort taken to conduct (all necessary) testing. Theoretically this applies to all forms of iterative review and incorporation of feedback embedded in every previous phase, as well. The other value-add comes from ensuring testing is thorough so that the chance of passing deficiencies on to customers is reduced to the greatest degree possible.

Finally, just as evaluation and selection activities before engagements can add value, ongoing review and consideration can add value after the initial engagement ends. This involves thinking in terms of the extended life cycle of a solution, and continually looking to improve the solution and the means of delivering it.

In conclusion, while most of the explicit value is generated by the specific analyses and decisions made before a solution effort is undertaken, during the actual work of generating a solution, and ongoing review and improvement of the solution, the environment in which the work is performed and the framework used to guide it should not be overlooked. The solution and the engagement, which again are often referred to as the product and the process, may be thought of as a pair of scissors. You need both halves working together to get the best results.

Posted in Tools and methods

Three Layers of Architecture, and Three Dimensions of Iterative Phase Loops

As I’ve been developing my engagement framework over the past few years, I have sometimes struggled to classify the exact phase in which certain activities take place. Or, more to the point, I have sometimes had difficulty contextualizing multiple different activities that may take place in the same phase. Although the framework has held up well against several years of material on the practices of business analysis, project management, Agile and Scrum, and related disciplines, there seemed to be just a small thing missing, a bit of confusion I couldn’t quite resolve. A couple of posts where I wrestled with this are here and here.

Much of this particular confusion was lifted when a gentleman named Kieth Nolen, MBA, CBAP gave a presentation on business architecture to our weekly IIBA certification study group, as part of a professional series we did on different fields and activities business analysts need to be aware of. His presentation may be viewed at or downloaded from the Tampa IIBA chapter’s shared drive here. Look in the IIBATampaBayBABOKStudyGroup directory for the file named BABOKStudyGroup20220802.mp4 (which alas I cannot link to directly).

Mr. Nolen provided a wealth of fascinating and useful information, including this table of perspectives to consider when analyzing or designing a system.

What really clicked for me was his description of a tool called ArchiMate, which enforces certain (necessary) rules on the creation of diagrams expressing business architecture designs. The thing that made it pop, especially given what had preceded it, was its segregation of elements into three major layers. The tool refers to these as the business layer, the application layer, and the technology layer, as shown in the diagram below.

As the oodles of drawings elsewhere on my website will attest, I have spent many years expressing different aspects of architecture, design, and processes from different perspectives. I’ve always had an innate understanding of how to communicate what I needed to, but it’s always helpful to keep reading, listening, and learning so I encounter more and different takes on material of interest. The longer you do something, assuming you’re on the right track, the more of what you see should fit into your existing understanding. This makes it all the more interesting when you find something that actually appears to be new. Such findings can lead to important clarifications and breakthroughs, and I believe this happened with me.

I use slightly different language than Mr. Nolen or ArchiMate do, but it’s clear where the idea came from.

The standardized representation I’ve developed for my engagement and analysis framework is below. It has to be understood that this is a highly stylized and streamlined representation of what practitioners actually do when solving a problem for an internal or external customer.

This is meant to depict the iterative nature of the work within and between each phase at a high level. Implicit in this is that many activities can occur in each phase both in parallel (if several individuals or teams are doing similar things at the same time) and serially (if one individual or team performs several successive different tasks in the same phase).

Parallel activities may be depicted this way, by showing additional iterative cycles represented in depth:

Serial activities might be thought of in this way, by showing additional iterative cycles represented horizontally:

The breakthrough came from expanding the representation in the vertical direction, as shown next:

This is where I’ve used slightly different language for the three layers. Where ArchiMate goes with business, application, and technology, I prefer what I feel are more general terms. Those are business, abstract, and implementation.

The business layer can include elements like business units or departments, people as individuals or in groups with similar responsibilities and permissions, and overall systemic responsibilities. The abstract layer includes descriptions of the processes, communication channels, data elements, activities, calculations, and decisions. Those can be determined logically through the discovery and data collection activities in the conceptual model phase, and also through various activities in the requirements, design, and implementation phases. Finally, the implementation layer (as differentiated from the implementation phase) describes all aspects of an implementation, in terms of the hardware, software, operating systems, governance and maintenance procedures and so on. While one is generally tempted to visualize IT systems first, foremost, and always in these situations, the considerations are meant to be more general than that. That is the primary reason for my using different language than what ArchiMate uses.

Moreover, I find that separating activities into layers in each phase can actually be done in an almost completely arbitrary way, and we aren’t limited to only the three layers listed. The specific insight provided by Mr. Nolen’s presentation has led me to a more general conceptual understanding of how to proceed.

So how does this represent a breakthrough for my understanding and the interpretation of my framework? Let’s look at how activities may be broken down into layers within each phase.

  • Intended Use (Problem Statement): This phase doesn’t immediately seem to break down into layers in an easy or obvious way. Problem statements and project charters are usually described at a high level with few items. These can be added to and clarified over time, as understanding increases through iteration between phases over the course of an engagement or program, but their general expression tends to be brief. Additionally, a lot of the context may be subsumed within the very nature of the proposed solution approach.
  • Conceptual Model: Discovery and data collection activities can be carried out to map and characterize existing (or new) systems along the lines of the standard three layers. Other possibilities exist, however. One example I can think of was a two-part analysis we did of a complex, large-scale business process, where we did a first pass of discovery and data collection to determine what and how much was going on and how long it took, in order to complete an initial economic justification, and then a second pass of discovery and data collection to learn the details of each activity so an automated replacement system could be developed and deployed.
  • Requirements: Several possible divisions exist here. Functional requirements describe what a system does (or will do) while non-functional requirements describe what a system is (or will be). Methodological requirements may govern how the engagement itself is to be run, how communications must occur, how things must be written, and so on. Requirements can exist for accuracy, for what must be considered in or out of scope, for what hardware must be used, and more.
  • Design: Designs can certainly cover the standard three layers, but I can think of multiple possible subdivisions within each layer. Particularly in the abstract (or application) layer, I look back to my days writing thermo-hydraulic models for nuclear power plant simulators, and see that the governing equations and the solution methods could be considered very distinct parts of the design. Multiple engineers could all agree that the governing equations are written correctly (those could actually be considered part of the conceptual model, by the way), but implement the actual performance of calculations in entirely different ways. (Development of my framework has made it clear to me that, even though Westinghouse got good scores in an evaluation of its project management practices, lack of review and socialization of methods in this area was a major weakness of the organization.)
  • Implementation: Beyond the three standard layers, the construction and deployment of a new system (or modifications of an existing system) can be seen as entirely separate subdivisions within at least the abstract and implementation layers. While I often write that I consider system maintenance and governance a subset of non-functional requirements, I also see the CI/CD or release train process for a system or capability as needing as much effort, thought, care, and feeding as any other part of a developed and deployed system. Thus, this can be seen as its own layer through all of the phases (but especially this one), or as an integral part of all other work. Either way, awareness of the need for this capability is crucial.
  • Test (and Acceptance): Testing and acceptance can occur at so many places through every phase of the process that it almost should be left (per the forward and backward linkages required in the Requirements Traceability Matrix) to be driven by the linkages to the implementation items. Conversely, the different kinds of test can be considered separately, especially when it comes to the differences between verification (does the thing we built work as intended?) and validation (did we build the right thing to address the identified need, and is it sufficiently accurate?).
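The verification/validation distinction in the last bullet can be made concrete in code. The following is a minimal sketch, not from any specific project: the unit-converter function and the reference data are hypothetical illustrations. Verification checks the built thing against its specification, while validation checks it against independently known values of the real-world quantity, within an accuracy requirement.

```python
# Hypothetical example: a pressure converter built to a written spec.
def psi_to_kpa(psi: float) -> float:
    """Convert pressure from psi to kPa (spec: multiply by 6.894757)."""
    return psi * 6.894757

# Verification: does the thing we built work as specified?
assert abs(psi_to_kpa(1.0) - 6.894757) < 1e-9
assert psi_to_kpa(0.0) == 0.0

# Validation: does it address the real need accurately enough?
# Compare against independently known reference values
# (e.g., one standard atmosphere is 14.696 psi = 101.325 kPa),
# within a hypothetical 1% accuracy requirement.
reference = [(14.696, 101.325), (30.0, 206.84)]
for psi, expected_kpa in reference:
    assert abs(psi_to_kpa(psi) - expected_kpa) < 0.01 * expected_kpa
```

A function can pass verification (it matches the spec exactly) and still fail validation if the spec itself was wrong for the need, which is exactly why the two kinds of testing are tracked separately.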

Coming to understand the possible vertical divisions within each phase makes my framework much more adaptable and powerful in my mind. I’m sure I will be considering the subtleties and ramifications of these new insights for a long time to come.

I will also share that this insight has allowed me to put clearly into words the space in which I seek to work and help people. In the end, the main area I like to think in is the middle, abstract (or application) layer. I can certainly support analysis and development of the business layer driven by that. Even more to the point, I’m not very interested at this stage in writing code or developing databases or mucking around with the details of deployment, and I’ve never been an expert at security (though I recognize its indispensability), but I’ve certainly done and learned enough to be able to work with the relevant specialists of every kind as needed.

Posted in Tools and methods | Tagged , | Leave a comment

Approach Contexts for Potential Solutions

As I’ve been pondering different aspects of my engagement framework, the management environments I’ve experienced, and the nature of the solutions I’ve helped create or that I’ve learned about through other means, it occurs to me that potential solutions are driven by a number of factors. Solutions come in many contexts, so let’s look at some of the considerations. I don’t know if any overarching classification exists, so we’ll just list what we can and see if some kind of order or hierarchy suggests itself.

  • Internal vs. external customers: How a solution is approached may differ based on whether the solution team is serving an internal or external customer. External customers must be engaged with on a more formal and structured basis, with the activities and behaviors often driven by contractual terms of various kinds. Internal customers are much more likely to be served with ad hoc solutions as opposed to more standardized solutions that may be the raison d’être of an organization serving primarily external customers.
  • Standard solution vs. open-ended solution: Most of the companies I worked for were smaller, specialized, highly technical organizations serving larger, commercial firms with somewhat standardized solutions in a vendor or consultative capacity. None of the vendors or consultancies I worked for provided truly standardized solutions. What they did was provide solutions using a certain set of tools, components, and approaches, as opposed to going in cold and trying to generate solutions from a completely (open-ended) blank slate. The solutions were always highly customized to the customers’ specific needs and situations. I think the differentiation here has to do with the degree to which the possible solution space is constrained. (The more an organization’s solution offerings are constrained, the more I tend to refer to them as hammer-and-nail companies, because if all you have is a hammer, everything looks like a nail.) That said, the range of provided solutions from a single vendor may be quite wide, but they will all tend to be related to a single area of endeavor. For example, I worked for two different companies that provided customized, turnkey production lines (or parts thereof) to industrial clients in the paper and metals industries. Both sold systems, individual pieces of equipment, service, electronic control systems, and independent analytical services for individual situations. However, they almost always related to each other in the knowledge and process areas addressed. By contrast, I know of other consultants or vendors that provide more generalized, open-ended kinds of services. I guess it’s just a matter of how the various offerings are related. Finally, constraints may take many forms. Companies like IBM will tend to serve larger organizations that can afford to pay high rates for large, complex solutions, and providers tend to scale down from there. Every organization offering solutions is constrained in some way.
  • Imposed by competition: There are a lot of reasons organizations will seek solutions, but the primary one is competition. An organization seeks and applies solutions to improve its processes and offerings on an ongoing basis, or it may (will!) suddenly find itself losing money as competitors do so and end up serving customers better. Individuals and organizations should always be looking to continually improve every aspect of their business.
  • Opportunity from new technology: The appearance of new technologies can lead to all sorts of opportunities for improvement. Some will be blindingly obvious, while others may require more thought or lateral association, or even luck. I once created a really fun and successful solution based on an item I had recently seen that was new to me. That set off a whole chain of events that led to a pretty spectacular outcome. By contrast, and I intend to write about this more in the near future, some solutions and methodologies are well understood, but await a breakthrough that makes their implementation possible. The design of the Polaris missile and the submarines that launched them, for example, allocated certain volumes to house various electronic systems that could not, at the time, actually fit in the spaces provided. The project managers counted on improvements in technology during the development process to make everything work. Similarly, the mathematical techniques for performing certain physics and Monte Carlo analyses (using continuous and discrete-event simulation techniques, respectively) were long known but awaited the development of digital computers to finally make their applications practical.
  • Automation: Automation is the process of using machines (including computers) to perform or at least assist with operations that were once done manually. I often built tools to automate calculations and even write code and documentation that would have been extremely tedious to write out or even type by hand. Similarly, I used to analyze business processes so they could use scanned images of documents instead of the paper documents themselves. The goal in those instances was to preserve the operations where humans had to apply their expertise and reasoning power in creative ways, and automate the repetitive actions the computer could be instructed to do from context. We added scanning, indexing, and archiving steps to an existing, complex business process for about ten percent of the original operating cost, and were able to automate away thirty to forty percent of the operating cost, for a savings of twenty to thirty percent overall, in addition to the elimination of a lot of human drudgery.
  • Standardization / Modularization: I must have played with almost every modular construction toy in existence growing up, so I was used to putting creative things together out of standard components from an early age. The trick to doing this as a professional analyst and designer is to recognize opportunities for creating standard components. One way to do this is by using affinity grouping (think of the exercise where you brainstorm a bunch of ideas on Post-It notes and then group them by how similar certain ones are to each other). Another way is simply to recognize when you are doing the same or very similar things over and over in different situations. Standardization can be accomplished in a number of different ways. Policies and procedures can be established that enforce consistent best practices. Or, reusable (physical or logical) components can be built that enforce standard approaches or operations for any system in which they are incorporated. It is sometimes a good idea to make the created standard components configurable, so they may be adapted to individual situations as needed. In that case, designers and users need to continually reevaluate the boundary between making something configurable and when it would be best to create a new standard component.
  • Rearrangement or streamlining: Sometimes all the right things are done for the right reasons and with the right operations, and all that is needed is to put them in a different order so they can be accomplished more quickly and efficiently. A classic case of this is when a series of production machines are located far apart from each other, necessitating a lot of transport time and resources between operations. If the production operations are all relocated to be near each other and in the correct order, the waste associated with transport and storage can be reduced or eliminated, making the overall process more efficient. More complex examples of this are possible. It may, for example, improve overall efficiency (and quality) to add a step to a process which performs a kind of standardized preparation and setup, so the value-added operations can be performed more cleanly and effectively, with less chance of error or unexpected occurrence. Another way to realize improvements is to re-route items so they interfere with each other less, and still another way is to relocate or otherwise automate certain operations. The latter approach may not lessen the amount of effort involved in completing some operations (for example, processing paperwork for import brokers in cross-border trucking operations), but will generate benefits in other locations (by reducing or eliminating certain sources of congestion at the border-crossing facilities themselves). In these cases it’s all about eliminating constraints wherever possible (and then identifying and addressing whatever becomes the new constraint…).
  • Lean vs. Six Sigma: Lean is doing more or the same with less, while Six Sigma is about eliminating variability (and hence waste) so you effectively do more with the same. Tightening up a machine or creating a jig or structuring a computer’s user interface to improve clarity and situational awareness leading to a reduction in errors and thus loss or rework is what Six Sigma is all about. Lean is about automation, standardization, and rearrangement or streamlining as described directly above.
  • Profiling (Theory of Constraints): Software developers often instrument their code to see which operations are taking the most time (or consuming the most memory or bandwidth or other relevant resource) in order to identify which code to devote time toward making more efficient. (I have written about extreme loop unrolling in some of my own work, for example.) Code profilers have been around for a long time. A number of standard techniques exist to improve the speed (and other consumption of resources) of code, and a balance must be struck between the efficiencies to be gained vs. the cost to gain them. However, this approach is applicable to far more activities than computer code. A couple of years ago I listened with interest as a very successful entrepreneur described how he noticed that his biggest expense for a particular product was in the materials used to build them. The entrepreneur then asked an expert in that class of materials if anything cheaper existed that would do the same job at a lower cost. Upon learning that the answer was yes, the entrepreneur adopted the new material, thus lowering costs significantly while retaining quality, durability, and performance. As discussed above, this is just another exploration of the Theory of Constraints.
  • Simplification: Making products simpler can make them easier to use and easier to produce. Product lines can be extended with simpler options (which may or may not be desired). Design for manufacturability is a major practice in this area. I’ve watched certain products I use be continuously redesigned so they embody simpler shapes and include fewer parts. It can take a lot of experience and insight to see how to make something simpler. One doesn’t generally find the simplest, most optimal design on the first try. It takes iteration to get something to work and then successively improve it. In a market sense, it is sometimes less a case of “getting there first-est with the most-est” (as they say about wars) than about “getting there first-est with just-enough-est.”
  • Incremental vs. The Big Kill: Many kinds of improvements are possible. Sometimes it is necessary to make a lot of small improvements to eliminate losses and small errors, and sometimes it is possible — and even necessary — to do something completely new or different and make a major improvement in one quantum leap. Individuals and organizations should continually look for ways to do both.
  • End-of-Life, replacement, and repurposing: Change is sometimes necessitated by circumstance. One possible example is that existing capabilities stop functioning due to failures, and another example is that the vendor of the capability goes out of business or otherwise withdraws support (think versions of software, fixed-duration warranties, exhaustion of spare parts, and more). In these cases the user is driven to adopt a new solution or abandon that capability entirely. It is also possible that a capability that becomes uneconomic for its original use can be repurposed for an alternative use where the economics make sense. When I worked in the paper industry in the late ’80s, the major emphasis and source of market growth was newsprint (my, how things have changed!). When modern production lines are turning out bright, strong rolls of paper, forty feet wide at sixty miles an hour, it can be impossible to run older production lines at a profit. However, it was often possible to repurpose a line, with minimal tooling and rework, to instead produce specialty grades of paper (like tissue) in a way that could still be profitable. I also heard of entire lines being purchased to be shipped to developing countries for greatly reduced capital outlay (vs. a new system), so that it made economic sense to operate there. Much of this discussion is probably more germane to the use of capital equipment rather than software, but readers should be alert to all relevant possibilities.
  • Entirely new solutions (market opportunities from both growth and development): Sometimes the opportunity to be exploited isn’t in doing the same thing better, but in doing something entirely new. These opportunities may be driven by technology, as described above, but may also be driven by economic growth, changing tastes or practices, or other reasons. “New” can mean a number of things in this context. New can mean more of the same, or increasing scale. New can mean different, or increasing scope. Writer Jane Jacobs, the noted urban theorist (among other things), differentiated between growth and development along these lines. Another form of new can mean moving to new locations, whether adding to the total stock of capability, or replacing capability lost elsewhere. However it is defined, doing new things is going to be different than improving existing things.
  • Improved management techniques: This type of improvement can take many forms. It is subsumed in a lot of the other items described above, but I will add a couple more here as a kind of a catch-all. One possible management improvement is in how maintenance is conducted. Collecting new data or leveraging existing data can allow organizations to implement or fine tune preventive or condition-based maintenance procedures to head off major failures, improve uptime, and reduce overall maintenance costs and materials. Other benefits to be realized include employee and customer support and engagement, improved communications of various kinds, improved training to make the workforce more aware of all of the foregoing, and more.
  • General technical solutions: There are large classes of problems that may be addressed in fairly standard ways, all of which can provide or improve value when judiciously applied. I wrote this article in response to an interesting presentation I saw a few years ago on the ARIZ (formerly TRIZ) method. Sometimes, of course, there are no solutions, only trade-offs. Many solutions that come out of the practice of ARIZ overlap with items discussed previously in this article.
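The profiling idea in the list above can be sketched with Python’s standard-library cProfile. This is a toy illustration (the slow and fast functions are hypothetical, not from any real project): profile first to confirm where the time actually goes, then verify that an optimized replacement produces identical results before adopting it, which mirrors the Theory of Constraints cycle of finding and relieving the binding constraint.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive: converts each integer to a string and back,
    # making the conversion calls the obvious hotspot.
    total = 0
    for i in range(n):
        total += int(str(i))
    return total

def fast_sum(n):
    # Closed-form replacement for the same computation.
    return n * (n - 1) // 2

# Profile the naive version to see which calls dominate.
profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(50_000)
profiler.disable()

# Report the top five entries by own (tottime) cost.
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("tottime").print_stats(5)

# Verify the optimized version gives identical results before swapping it in.
assert result == fast_sum(50_000)
```

The same find-the-constraint, fix, and re-measure loop applies whether the resource being consumed is CPU time, memory, bandwidth, or, as in the entrepreneur’s materials example, money.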

So, that’s everything I can think of for the time being. Can you think of anything I might add? I’d love to hear your suggestions.

Posted in Management | Tagged , , , , , | Leave a comment