Decision Analysis

Some decisions are fairly straightforward to make. This is true when the number of factors is limited and when their contributions are well defined. Other decisions, by contrast, involve more factors whose contributions are less clear. It is also possible that the desired outcomes, or the actions leading to or resulting from a decision, are not well defined, or that there is disagreement among the stakeholders.

The process of making decisions is similar to the entire life cycle of a business analysis engagement — writ small. It involves some of the same steps: defining the problem, identifying alternatives, evaluating alternatives, choosing the alternative to implement, and implementing the chosen alternative. The decision-makers and decision criteria should also be identified.

Let’s look at a few methods of making multi-factor decisions, at increasing levels of complexity. It is generally best to apply the simplest possible method that can yield a reasonably effective decision, because more time and effort are required as the complexity of the analysis increases. I have worked on long and expensive programs to build and apply simulations to support decisions of various kinds. Simulations and other algorithms themselves vary in complexity, and making or using more approachable and streamlined tools makes them more accessible, but one should still be sure to apply the most appropriate tool for the job.

  • Pros vs. Cons Analysis: This simple approach involves identifying points for and against each alternative, and choosing the one with the most pros, fewest cons, or some combination. This is a very flat approach.
  • Force Field Analysis: This is essentially a weighted form of the pro/con analysis. In this case each alternative is given a score within an agreed-upon scale for the pro or con side, and the scores are added for each option. This method is called a force field analysis because it is sometimes drawn as a (horizontal or vertical) wall or barrier with arrows of different lengths or widths pushing against it perpendicularly from either side, with larger arrows indicating considerations with more weight. The side with the greatest total weight of arrows “wins” the decision.
  • Decision Matrices: A simple form of the decision matrix assigns scores to multiple criteria for each option and adds them up to select the preferred alternative (presumably the one with the highest score). A weighted decision matrix does the same thing, but multiplies the individual criteria scores by factor weightings. A combination of these techniques was used to compile the ratings for the comparative livability of American cities in the 1985 Places Rated Almanac. See further discussion of this below.
  • Decision Tables: This technique involves defining groups of input values and the decisions to be made given different combinations of them. The input values are laid out in tables and are very amenable to being automated through a series of simple operations in computer code.
  • Decision Trees: Directed, tree structures are constructed where internal nodes represent mathematically definable sub-decisions and terminal nodes represent end results for the overall decision. The process incorporates a number of values that serve as parameters for the comparisons, and another set of working values that are compared in each step of the process.
  • Comparison Analysis: This is mentioned in the BABOK but not described. A little poking around on the web should give some insights, but I didn’t locate a single clear and consistent description for guidance.
  • Analytic Hierarchy Process (AHP): Numerous comparisons are made by multiple participants of options that are hierarchically ranked by priority across potentially multiple considerations.
  • Totally-Partially-Not: This identifies which actions or responsibilities are within a working entity’s control. An activity is totally within, say, a department’s control, partially within its control, or not at all in its control. This helps pinpoint the true responsibilities and capabilities related to the activity, which in turn can guide how to address it.
  • Multi-Criteria Decision Analysis (MCDA): An entire field of study has grown up around the study of complex, multiple-criteria problems, mostly beginning in the 1970s. Such problems are characterized by conflicting preferences and other tradeoffs, and ambiguities in the decision and criteria spaces (i.e., input and output spaces).
  • Algorithms and Simulations: Much of the material on this website discusses applications of mathematical modeling and computer simulation. There are many, many subdisciplines within this field, of which the discrete-event, stochastic simulations using Monte Carlo techniques I have worked on are just one.
  • Tradespace Analysis: Most of the above methods of analysis involve evaluating trade-offs between conflicting criteria, so there is a need to balance multiple considerations. It is often true, especially for complex decisions, that there isn’t a single optimal solution to a problem. And in any case there may not be time and resources to make the best available decision, so these methods provide a way to at least bring some consideration and rationality to the process. Decision-making is ultimately an entrepreneurial function (making decisions under conditions of uncertainty).
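As the decision-table entry above notes, these tables are very amenable to automation. Here is a minimal sketch with an invented rule set — the conditions and decisions are illustrative only, not drawn from any real system:

```python
# A minimal decision-table sketch. Each rule maps a combination of
# input conditions to a decision; None acts as a "don't care" entry.
# The rules and field names are invented for illustration.
RULES = [
    # (credit_ok, income_ok, existing_customer) -> decision
    ((True,  True,  True),  "approve"),
    ((True,  True,  False), "approve with review"),
    ((True,  False, None),  "request guarantor"),
    ((False, None,  None),  "decline"),
]

def decide(credit_ok, income_ok, existing_customer):
    """Return the decision for the first rule whose conditions match."""
    inputs = (credit_ok, income_ok, existing_customer)
    for conditions, decision in RULES:
        if all(c is None or c == v for c, v in zip(conditions, inputs)):
            return decision
    return "no rule matched"  # tables should be checked for completeness

print(decide(True, True, True))   # approve
print(decide(False, True, True))  # decline
```

A real table would be checked for completeness and consistency (every input combination covered, no two rules conflicting) before being trusted in code.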

The Places Rated Almanac

I’ve lived in a lot of places in my life, but I consider Pittsburgh to be my “spiritual” hometown. I spent many formative years and working years there, and I have a great love for the city, even with my understanding of its problems. So I and other Pittsburghers were shocked and delighted when the initial, 1985 edition of Rand McNally’s Places Rated Almanac (see also here) rated our city as the most livable in America. Not that we didn’t love it, and not that it doesn’t have its charms, but the result pointed out a few potential issues with rankings like this.

The initial work ranked the 329 largest metropolitan areas in the United States in nine different categories: ambience, housing, jobs, crime, transportation, education, health care, recreation, and climate. Pittsburgh scores well on health care because it has a ton of hospitals and a good deal of important research happens there (much of it driven by the University of Pittsburgh). It similarly gets good scores for education, probably driven by Pitt and Carnegie Mellon, among many other alternatives. I can’t remember what scores it got for transportation, but I can tell you that the topography of the place makes navigation a nightmare. Getting from place to place involves as much art as science, and often a whoooole lot of patience.

It also gets high marks for climate, even though its winters can be long, leaden, gray, mucky, and dreary. So why is that? It turns out that the authors assigned scores that favored mean temperatures closest to 65 degrees, and probably favored middling amounts of precipitation as well. Pittsburgh happens to have a mean temperature of about 65 degrees, all right, but it can be much hotter in the summer and much colder in the winter. San Francisco, which ranked second or third overall in that first edition, also has a mean temperature of about 65 degrees, but its temperature is very consistent throughout the year. So which environment would you prefer, and how do you capture that in a single metric? Alternatively, how might you create multiple metrics representing different and more nuanced evaluation criteria? How might you perform different analyses in all nine areas than what the authors did?
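To make the single-metric problem concrete, here is a sketch contrasting a mean-only climate score with one that also penalizes variability. The monthly temperatures and both scoring formulas are invented for illustration — they are not the almanac’s actual data or method:

```python
from statistics import mean, pstdev

# Two stylized cities with the same annual mean temperature (65 F):
# one with large seasonal swings, one with a very steady climate.
# These figures are invented, not real climate data.
variable_city = [40, 45, 55, 65, 75, 85, 90, 88, 78, 65, 52, 42]
steady_city   = [63, 64, 64, 65, 66, 66, 67, 67, 66, 65, 64, 63]

def mean_only_score(temps, ideal=65):
    """Penalizes only the distance of the annual mean from the ideal."""
    return 100 - abs(mean(temps) - ideal)

def variability_aware_score(temps, ideal=65):
    """Also penalizes month-to-month swings around the mean."""
    return 100 - abs(mean(temps) - ideal) - pstdev(temps)

# The mean-only metric cannot tell the two climates apart...
print(mean_only_score(variable_city), mean_only_score(steady_city))
# ...but the variability-aware metric ranks the steady climate higher.
print(variability_aware_score(steady_city) > variability_aware_score(variable_city))  # True
```

The point is not that either formula is right, but that the choice of metric encodes a preference the reader may not share.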

If I recall correctly, the authors also weighted the nine factors equally, but provided a worksheet in an appendix that allowed readers to assign different weights to the criteria they felt might be more important. I don’t know if it supported re-scoring individual areas for different preferences. I can tell you, for example, that the weather where I currently live in central Florida is a lot warmer and a lot sunnier and a lot less snowy than in Pittsburgh, and I much prefer the weather here.
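That worksheet amounts to a weighted decision matrix, and the effect of re-weighting is easy to sketch. The cities, category scores, and weights below are invented for illustration, not taken from the almanac:

```python
# Weighted decision matrix sketch. Scores and weights are invented;
# readers of the almanac would supply their own weights.
categories = ["housing", "jobs", "climate", "education"]

# Per-city scores on each category (higher is better).
scores = {
    "City A": [7, 9, 5, 9],
    "City B": [8, 6, 9, 6],
}

def rank(scores, weights):
    """Return (city, weighted total) pairs sorted best-first."""
    totals = {
        city: sum(s * w for s, w in zip(vals, weights))
        for city, vals in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

equal = [1, 1, 1, 1]          # equal weighting
climate_lover = [1, 1, 3, 1]  # a reader who prioritizes climate

print(rank(scores, equal))          # City A comes out ahead
print(rank(scores, climate_lover))  # the ranking flips to City B
```

Changing the weights alone can reverse the outcome, which is exactly why the authors let readers re-run the rankings for themselves.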

Many editions of the book were printed, in which the criteria were continually reevaluated, and that resulted in modest reshufflings of the rankings over time. Pittsburgh continued to place highly in subsequent editions, but I’m not sure it was ever judged to be number one again. More cities were added to the list over the years as different towns grew beyond the lower threshold of population required for inclusion in the survey. Interestingly, the last-place finisher in the initial ranking was Yuba City, California, prompting its officials to observe, “Yuba City isn’t evil, it just isn’t much.”

One thing you can do with methods used to make decisions is to grind through your chosen process to generate an analytic result, and then step back and see how you “feel” about it. This may be appropriate for personal decisions like choosing a place to live, but it might lead to bad outcomes for public competitions with announced criteria and scoring systems that should be adhered to.

Posted in Tools and methods

Business Analysis Embedded Within Project Management

I often describe the process of business analysis as being embedded in a process of project management. I’ve also described business analysis as an activity that takes place within a project management wrapper. Engagements (projects) may be run in many different styles, as shown below, but the mechanics of project management remain more or less the same.

What changes in the different regimes is the way the actual work gets done. And here I find it necessary to make yet another bifurcation. While I talk about business analysis as taking place across the entire engagement life cycle, through all of its phases, the function of the BA is different in each phase, and the level of effort is also different in each phase.

I think of essentially three groups as being involved in each engagement: the managers, the analysts, and the implementers (and testers). Let’s look at their duties and relative levels of participation across each of the phases. The descriptions are given as if every role were filled by different people with siloed skill sets, but individuals can clearly function in multiple roles simultaneously. I’ve done this in some instances, and it will almost inevitably be the case in smaller teams and organizations.

  1. Intended Use (Problem Definition): This is where the project’s ultimate goals are defined, resources are procured, and governance structures are established. This work is primarily done by project and program managers, product owners and managers, and the sponsors and champions. Analysts, as learning and translating machines, can serve in this phase by understanding the full life cycle of an effort and how the initial definition and goals may be modified over time. It may be that only senior analysts participate in this phase. Implementers can contribute their knowledge of their methods and solution requirements and how they need to interact with customers.
  2. Conceptual Model: This is where the analysts shine and drive the work. The managers may need to facilitate the mechanics of the discovery and data collection processes, but the analysts will be the ones to carry it out, document their findings, and review them with the customers, making changes and corrections until all parties have reached agreement. The implementers will generally be informed about events, and may participate lightly in discovery activities or do brief site visits to get a feel for who they are serving and the overall context of the work.
  3. Requirements: This works very much like the conceptual model phase, where the analysts find out what the customers need through elicitation and review and feedback. The implementers will be a little more involved to the degree that their solutions inject their own requirements into the process. Managers facilitate the time, resources, introductions, and permissions for the other participants.
  4. Design: There are two aspects to the design. The abstract design may be developed primarily by the analysts, while the more concrete aspects of the design are likely to be developed by the implementers. I often describe the requirements phase as developing the abstract To-Be state and the design as developing the concrete To-Be state, but even the “concreteness” of the design has different levels. The abstract (but concrete!) part of the design describes the procedures, equations, data items, and outputs for the solution, while the concrete (really, really concrete!) part of the design specifies how the foregoing is implemented. I know from painful experience that you can have a really good idea what you need a system to do, but being able to implement your desires correctly and effectively can be difficult, indeed. See here, here, and here for further discussion. The latter item is especially germane.
  5. Implementation: The implementers clearly do most of the work here. The analysts serve as liaisons between the implementers and customers by facilitating ongoing communication, understanding, and correction. The managers support the process and the environment in which the work is conducted.
  6. Test (and Acceptance): The implementers (and testers) also expend most of the effort in this phase. The managers facilitate and protect the environment and verify final acceptance of all items. The analysts facilitate communication between all participants and the customer, and also continually attempt to improve the flow of the entire working process.

I tend to express the phases of my analysis framework in a streamlined form of a more involved process. I start with everything that gets done:

  • Project Planning
  • Intended Use
  • Assumptions, Capabilities, and Risks and Impacts
  • Conceptual Model (As-Is State)
  • Data Sources, Collection, and Conditioning
  • Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  • Design (To-Be State: Detailed)
  • Implementation
  • Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  • Acceptance (Accreditation)
  • Project Close
  • Operation and Maintenance
  • End-of-Life and Replacement

Then I drop the management wrapping at the beginning and end (with the understanding that it not only remains but is an active participant through all phases of an engagement or project/product/system life cycle) simply because it’s not explicitly part of the business analysis oeuvre.

  • Intended Use
  • Conceptual Model (As-Is State)
  • Data Sources, Collection, and Conditioning
  • Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  • Design (To-Be State: Detailed)
  • Implementation
  • Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  • Operation and Maintenance
  • End-of-Life and Replacement

Then we simplify even further, since the data melts into the other phases and we don’t always worry about the full life cycle.

Now let’s consider the practice of project management in its own language. The Project Management Body of Knowledge (PMBOK) is the Project Management Institute’s (PMI) analog to the IIBA’s BABOK. It defines five phases of a project as follows.

The figure above shows the five phases proceeding from left to right through the course of the project. The practice embodies management of ten different areas of concern, each of which comes into play during some or all of the project’s phases. (This was true through the sixth edition of the PMBOK. The recently released seventh edition replaces the ten knowledge areas with twelve principles, including extensive coverage of Agile practices. I will update this article accordingly at some point in the future.)

The project is defined and kicked off during the initiating phase, during which the requisite stakeholders are identified and involved. The project charter is developed in this phase, shown in the integration management area in the upper left. BAs can definitely be part of the process of creating and negotiating the charter and helping to shape the project and its environment. The project charter defines the key goals of the project, the major players, and something about the process and environment.

The planning phase is where the bulk of the preparation gets done in terms of establishing the necessary aspects of the working environment and methodologies for the project. The actual work gets done in the executing phase, with the monitoring and controlling phase proceeding concurrently but devoted to monitoring, feedback, and correction separate from the actual work. The closing phase ends the project, records lessons learned, archives relevant materials, celebrates its successes (hopefully…), and releases resources for other uses. The methods and concerns in each of the ten management areas all overlap with the practice of business analysis, and BAs should absolutely be involved with that work.

In the figure below I show that, once the engagement (or effort or project or venture or whatever) is set up, most of the work of the business analysis (as well as the implementation and testing) oeuvre is accomplished during the executing phase and the monitoring and controlling phase. This includes the intended use phase (which also includes the activities in the project charter), because it may change as the result of developments, discovery, and feedback over the course of the engagement.

Don’t take the location of the phases too literally. I’m not saying the first three BA phases occur during executing and the remaining three during monitoring and controlling. Rather, I’m saying that all phases of BA work are conducted during the concurrent executing and monitoring and controlling phases. Seen in this light, the initiating, planning, and closing phases from the project management oeuvre are the “wrapper” within which the bulk of an engagement’s actual work is done.

I’ll end by emphasizing a few things again. These general concepts apply no matter what project approach may be taken (e.g., Waterfall, Agile, Scrum, Kanban, SAFe, or hybrid). Individuals may wear multiple hats depending on the organization and situation. All parties should work together, each bringing their unique strengths and abilities. Few participants are likely to participate through all phases of an engagement, but they should be made aware of the full context of their work. Greater understanding of the roles of all participants and job functions will greatly aid cooperation. And finally, and most importantly, that greater understanding will lead to greater success!

Posted in Management


Estimation

Estimation is used to try to predict future outcomes related to the iron triangle elements of time, money, and, to a lesser degree, quality (or features or performance). The BABOK essentially discusses only the first two. Estimates can be made of both costs and benefits. While all aspects of this process are in a sense entrepreneurial, the biggest component of entrepreneurial judgment is predicting future benefits, particularly for potential sales.

Any aspect of an effort or solution may be estimated for any part of its full life cycle. Examples include the time, cost, and effort (in terms of staff and materials) of any activity; capital, project, and fixed and variable costs of delivered solutions; potential benefits (e.g., sales, savings, reduced losses); and net performance (projected benefits minus projected costs).

The most important thing to know about estimation is that it tends to be more accurate when more information is available. This is especially true when making estimates about outcomes in situations very similar to ones from the past.

There are many methods of estimation including:

  • Top-down and Bottom-up: Estimates can be performed from both ends depending on what is known about the engagement and the solution (the project and the product). Breakdowns can be made from the highest levels down to more detailed levels, or aggregations can be made from detailed low-level information which is then grouped and summed.
  • Parametric Estimation: This method has a lot in common with bottom-up estimation. It attempts to multiply lesser-known input information (how many of A, B, and C) by better-known parametric information (e.g., the known prices for each individual example of A, B, and C). Levels of skill and experience can figure into such calculations as well.
  • Rough Order of Magnitude (ROM): This is basically an educated guess, based on experience, impressions, and entrepreneurial judgment. There are a few pithier names for this method!
  • Rolling Wave: This involves making continuous estimates of elements throughout an engagement, which ideally become more accurate over time as more is known and less is unknown.
  • Delphi: This technique seeks estimates from a wide variety of participants, potentially over multiple iterations, until a consensus is reached. This allows certain kinds of knowledge to be shared across the participants. As an example, think of a group of coders bidding on tasks during sprint planning. Most participants might make similar judgments of the complexity of a task, but if one or two team members make very different estimates they could share that they’re aware of a simple or existing solution to the problem that will reduce the effort required, or know about hidden requirements and other stumbling blocks that will increase the effort required. As another example, the first issue of Omni Magazine included a Delphic poll of its readership asking about when certain developments, discoveries, and accomplishments might take place. The results were published in a subsequent issue.
  • PERT: This technique asks participants to estimate best-case, expected, and worst-case outcomes, which are then averaged, with the expected outcome given a weighting of four times, i.e., result = (best + 4*expected + worst) / 6.
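A couple of the methods above reduce to one-line formulas. This sketch shows parametric estimation and the PERT three-point formula; the quantities, unit costs, and task durations are invented for illustration:

```python
# Parametric estimation: lesser-known quantities multiplied by
# better-known unit figures, then summed.
def parametric(quantities, unit_costs):
    return sum(q * c for q, c in zip(quantities, unit_costs))

# PERT three-point estimate: (best + 4 * expected + worst) / 6.
def pert(best, expected, worst):
    return (best + 4 * expected + worst) / 6

# Three of A at $100 each and two of B at $250 each:
print(parametric([3, 2], [100, 250]))  # 800

# A task estimated at best 2 days, expected 4, worst 12:
print(pert(2, 4, 12))  # 5.0 -- the long worst-case tail pulls the estimate up
```

Note how PERT’s weighting of four on the expected value keeps one pessimistic outlier from dominating, while still nudging the result toward the longer tail.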

As mentioned above, the accuracy of estimates is likely to improve when more information is available. This information can come from similar or analogous situations, historical information, expert judgment, or a combination of any or all of these.

Estimates can be given as point values or as a range, the latter of which also indicates the degree of uncertainty. A measure called the confidence interval describes the expected range of outcomes, and it is generally expressed as (1 – expected maximum error), where the expected maximum error is a percentage of the central value. For example, an estimate of 100 plus or minus 10 would indicate a confidence interval of 90%. In the case of 100 plus or minus five, the confidence would be 95%. Certain statistical and Monte Carlo techniques generate confidence intervals. In these two examples, the maximum absolute error in one direction is sometimes called the half-width, because it is half of the full range of possible outcomes (the upper and lower bounds do not have to be the same distance from the expected value). This information can come into play when determining needed sample sizes.
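The arithmetic in the two examples above can be sketched directly. Note that this follows the simplified description given here, not the formal statistical definition of a confidence interval:

```python
# Simplified confidence figure as described in the text:
# confidence = 1 - (half_width / central_value), as a percentage.
# This is an illustrative shorthand, not the formal statistical definition.
def confidence_pct(central, half_width):
    return 100 * (1 - half_width / central)

print(confidence_pct(100, 10))  # 90.0  (100 plus or minus 10)
print(confidence_pct(100, 5))   # 95.0  (100 plus or minus 5)
```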

Estimates should generally be made by those responsible for the outcome of the effort for which the estimate was performed. These can, however, be checked against estimates from additional parties.

Posted in Tools and methods

Item Tracking

Item tracking is how participants in an effort monitor what concerns, issues, and tasks are valid and need to be addressed, and who has responsibility. Items can arise in any phase of an engagement and be tracked through any other phase, including during the extended operation and maintenance phase.

Items may incorporate the following attributes, according to the BABOK. I think some of these are redundant, but tracking systems like Jira and Rally embody them by default, and can be customized to include the others. More importantly, if you look back to your own experience, you can see that most of these are implicitly present even if not formally acknowledged.

  • Item Identifier: A unique identifier that serves as a key so the item can be found.
  • Summary: A description of the issue that includes text and possibly images, diagrams, and media.
  • Category: A key that can be used to group the item with similar items.
  • Type: The kind of item. (Similar to category?) (Create as needed for your unique situation.)
  • Date Identified: Date and time the issue was raised (and introduced into the system).
  • Identified By: Name (and contact information) of individual(s) who identified or raised the issue.
  • Impact: What happens if the item is not resolved. May include specified times for milestones and completion.
  • Priority: An indication of the item’s importance and especially time requirements.
  • Resolution Date: Times by which milestones must be reached or by which the item must be resolved.
  • Owner: Who is responsible for marshaling the item through to completion.
  • Resolver: Who is responsible for resolving the item.
  • Agreed Strategy: Method for approaching or resolving the item. The BABOK presents options akin to those used in risk analysis (e.g., accept, pursue, ignore, mitigate, avoid), but others are possible.
  • Status: The current state of the item. Items may have their own life cycles (e.g., opened, assigned, in work, in testing, resolved, canceled, rejected). See below for further discussion.
  • Resolution Updates: A log of activities and status updates detailing the item’s disposition.
  • Escalation Matrix: What to do and who should do it if the item is not resolved in the allotted time.
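The attribute list above maps naturally onto a simple record type. Here is a minimal sketch; the field names and status values are one possible choice for illustration, not a standard (real tools like Jira model this far more richly):

```python
# Sketch of a tracked item based on the attribute list above.
# Field names and status values are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import datetime

STATUSES = ["opened", "assigned", "in work", "in testing", "resolved"]

@dataclass
class TrackedItem:
    item_id: str
    summary: str
    category: str
    identified_by: str
    priority: str = "medium"
    status: str = "opened"
    date_identified: datetime = field(default_factory=datetime.now)
    resolution_updates: list = field(default_factory=list)

    def advance(self, new_status, note=""):
        """Move the item along its life cycle, logging the change."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status
        self.resolution_updates.append((datetime.now(), new_status, note))

item = TrackedItem("DR-101", "Valve model lags plant data", "model", "J. Smith")
item.advance("assigned", "given to the fluids team")
print(item.status)  # assigned
```

Keeping the status log as part of the item itself is what makes the later statistics on processing and disposition possible.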

Each organization, and even each engagement, may have its own standard procedures and vocabulary for handling items through their life cycle. When I wrote models for nuclear power plant simulators at Westinghouse we usually had three or four projects going at once, and all of them named some of their work items differently. We had DRs, TRs, and PRs, for deficiency reports, trouble reports, and problem reports at the very least, depending I think on the customer’s preferred language.

I’ve written about using systems like Jira for tracking items through the entire engagement life cycle (here), but a few years later I can see that the idea should be expanded to include an independent life cycle for items within each phase of my framework, and that may be different for different phases. For example, the life cycle for implementation items might be something like assigned, in work, in bench testing, completed (and forwarded to testing). The cycle for conceptual model items might be very different, since it involves performing discovery operations through tours, interviews, research, calculations, and data collection, and then documenting the many identified elements and circulating them for review and correction. I should do a specific write-up on this.

Statistics can be compiled on the processing and disposition of items, so the engagement teams and customers can understand and improve their working methods. Care should be taken to be aware of potential variances in the complexity and requirements of each item, so any resultant findings can be interpreted accurately and fairly.

As mentioned above, items can arise and be tracked and resolved in and through all phases in an engagement’s or product’s full life cycle. In my career I’ve seen individually tracked items mostly come from testing, customer concerns, and to do lists generated by the solution teams themselves. We often called them punch lists as projects were advancing toward completion and the number of outstanding items became small enough to be listed and attacked individually. But, depending on the maturity and complexity of your organization and your effort, you’ll want to carefully consider what system you impose on a working project. You want it to be complex enough to be powerful and clarifying for all participants, but not so overwhelming that interacting with it is almost a larger burden than the actual work. That is, it should enhance the working environment, and not impede it.

What systems have you seen for tracking items?

Posted in Tools and methods

Benchmarking and Market Analysis

Benchmarking involves learning about activities and characteristics across industries, organizations, products, methodologies, and technologies to identify best practices, product options, and competitive requirements. Benchmarking may be performed by comparing the presence or absence of features (which video editing programs can burn to Blu-ray discs?) and also by comparing the magnitudes of various features (0-to-60 time).

The BABOK lists the following elements related to benchmarking.

  • Identifying what to study
  • Identifying market leaders
  • Learning what others are doing
  • Requesting information to learn about capabilities
  • Learning during plant visits
  • Performing gap analysis vs. market leaders
  • Developing proposals to implement best practices

Here are some examples of benchmarking.

  • When Ford released its Taurus model in 1986, winning Motor Trend’s Car of the Year Award, the design team had examined one hundred different aspects of other vehicles in its class to identify features to include and improve upon. I owned models from 1986 and 1990 and was always impressed that they included a covered storage compartment in the middle of the rear deck behind the back seat, an area which is almost always empty and neglected. I stored the car’s maintenance manuals there.
  • When a company I worked for was contracted to develop a building evacuation model, I conducted an extensive online literature search to learn what work had been performed previously along those lines. I turned up numerous methodologies, research papers, case studies, modeling techniques, and more. I later listed the parameters needed to specify and control the evacuation environment and moving entities, and the user interfaces needed to define and modify them.
  • The first engineering company I worked for introduced me to a really neat way of sharing information. The pulp and paper industry embodied a huge amount of empirical knowledge about the behavior and processing of wood fibers and the related equipment. My director would gather up a huge folder of reading material every two to four weeks and circulate it around to everyone in the department, complete with a checksheet to indicate that each engineer had taken the time to read through the materials. The magazines discussed some information that would more properly be considered market research, but that was a bit over my head at the time.
  • Government and other (usually large) entities will sometimes issue Requests for Information (RFIs) to learn about capabilities of potential vendors, suppliers, and consultants that may be able to help them solve certain problems.

Market Analysis involves studying customers, competitors, and the market environment to determine what opportunities exist and how to address them.

The BABOK lists the following elements related to market analysis.

  • Identifying customers and preferences
  • Identifying opportunities to increase value
  • Studying competitors and their operations
  • Examining market trends in capabilities and sales
  • Defining strategies
  • Gathering market data
  • Reviewing existing information
  • Reviewing data to reach conclusions

Here are some examples of market analysis.

  • The Kano Model of quality seeks to understand the voice of the customer (VOC). It provides a framework for measuring customer satisfaction and determining when improvement is needed. It plots features as shown below. It categorizes aspects of a product or service that are dissatisfiers, satisfiers, and delighters. Items should be prioritized by addressing dissatisfiers first, then satisfiers, and finally delighters. Think of a hotel room. Customers may expect it to be clean and have a desk and an ironing board and a blow drier, and if any of those things are missing or otherwise not right the customer will be unhappy. A hotel room is generally something where you cannot be surprised to the upside, but only to the downside. That said, free cookies, an exceptionally friendly staff, or unusually good WiFi may constitute a delighter under the right circumstances.

  • The companies I worked for usually did custom consulting and product development, but I observed that we might get more financial leverage by building a standalone product we could sell many times. The company then developed such a product. Never mind that they sold all of one unit.
  • The company I worked for that made HVAC controls sponsored an in-person conference with many of our customers to ask them what they most needed from us. Aside from occasional inside sales support, I usually wasn’t involved in general market research.
  • Costco identifies certain markets where it seeks to place new stores. As of a few years ago, they were targeting populations of a certain size, with household incomes of at least $90,000/year, with enough space to easily store large purchases of goods. They may limit their locations to be within reasonable range of their existing logistics network. The company has probably built stores in most areas that already meet their criteria, and seeks growing areas for new locations. Similarly, I’ve watched the growth of the CVS, Walgreens, and Starbucks chains over the last twenty years.
  • Professional and college sports teams scout potential players from lower leagues and occasionally other sports and activities. At one time, many NFL kickers started out playing soccer (what everyone outside the US and Canada calls football).
  • Students and families regularly consult many resources when selecting colleges and universities to attend, majors to pursue, the costs of doing so, and the availability of financial aid and scholarships. Of late (too late, in my opinion), more emphasis has been placed on analyzing the economic value of various degrees, to see if the value proposition makes sense for some fields.
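
As a minimal sketch of the Kano prioritization described above (dissatisfiers first, then satisfiers, then delighters), with hypothetical hotel-room features:

```python
# Map each Kano category to its priority order: dissatisfiers must be
# addressed first, satisfiers next, and delighters last.
KANO_ORDER = {"dissatisfier": 0, "satisfier": 1, "delighter": 2}

# Hypothetical hotel-room features tagged with Kano categories.
features = [
    ("free cookies", "delighter"),
    ("clean room", "dissatisfier"),
    ("fast WiFi", "satisfier"),
    ("working iron", "dissatisfier"),
]

def prioritize(items):
    """Order work so dissatisfiers are addressed first, delighters last."""
    return sorted(items, key=lambda f: KANO_ORDER[f[1]])

ordered = prioritize(features)
```

Because Python’s sort is stable, features within the same category keep their original relative order.
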
Posted in Tools and methods | Tagged , | Leave a comment

The Requirements Life Cycle: Management and Reuse

Just as systems, products, and engagements have life cycles, requirements do as well. It’s easy to look at a requirements traceability matrix and imagine that all requirements spring magically anew from the ether during each engagement.

Let’s look at some considerations that drive requirements creation and reuse.

  • Situation-specific requirements are the unique requirements that are identified and managed for the specific, local conditions and needs encountered in each engagement. Even if two different customers end up asking for and needing the exact same thing, the process of eliciting each of their expressed requirements is unique for that engagement. Most other types of requirements can be reused from engagement to engagement, from project to project, and from release to release.
  • Internal solution requirements are those related to the solution offered by the engagement team. Vendors, consultants, and even internal solution providers tend not to develop solutions from a completely blank slate for every engagement. They tend to apply variations on a limited range of solution components from their areas of specialization. For example, I spent most of my career working for vendors and consultants offering particular kinds of solutions, e.g., turnkey thermo-mechanical pulping systems, nuclear power plant simulators, Filenet document imaging solutions, furnaces and control systems for the metals industry, operations research simulations, and so on. Other solution teams will apply different components and solutions for different areas of endeavor. Each of those solution offerings has its own implicit requirements that the customer must understand. My company may include a series of 22,000 horsepower double-disc refiners in its solution, but it’s also understood that the customer has to provide a certain kind of support flooring, drainage, access clearance, electrical power, water for cooling and sealing and lubrication, and so on. Requirements, then, can flow in both directions (customer-to-solution team, and solution team-to-customer). Each standard component specified for a solution may carry its own standard (reused) and situation-specific (unique) requirements.
  • Implementation tool (programming language, database system, communications protocols) requirements may be specified by customers for compatibility with other systems they operate. The furnace company I worked for provided fairly consistent solutions using similar logic and calculations, but we had to implement our Level One systems using a low-level industrial UI package specified by the customer (e.g., Wonderware or Allen-Bradley), and our Level Two supervisory control systems (my specialty) had to be written in a specified programming language (usually FORTRAN, C, or C++ at that time, and often from a specified vendor, e.g., Microsoft, DEC, or Borland, though I did at least one in Delphi/Pascal when I had the choice). Similarly, our systems had to interface with other systems using customer-specified communications protocols, and also had to interface with the customer’s plantwide DBMS system (e.g., Oracle, though many others were possible).
  • Units requirements come into play when systems have to deal with different currencies or systems of measurement. When I used to write simulation-based, supervisory control systems for metals furnaces, the customers would request that some systems use English (e.g., American!) units while the remainder of systems had to use SI (metric) units.
  • User Interface and Look and Feel requirements define consistent colors, logos, controls, layouts, and components that ensure an organization’s offerings provide a consistent user experience. This helps build messaging and branding among external users and customers and helps all users by reducing training costs and times.
  • Financial requirements relate to Generally Accepted Accounting Principles (GAAP), methods of payment, currencies handled, taxes, payment terms and windows, withholding and escrow, regulations and reporting rules, guidelines for calculating fringe benefits and G&A and overhead, definitions for parameters used in modular/definable business rules, security for account and PII information and communication, storage and logging and backup of transactional data, access control for different personnel and users, and more.
  • Methodological requirements may govern the way different phases of an engagement are carried out. This is especially germane to the work of external vendors and consultants. Particularly in cases where I did discovery and data collection at medical offices, airports, and land border ports of entry, our contracts included language describing how we needed to take pictures, record video, obtain drawings and ground plans, and conduct interviews with key personnel. Numerous requirements may be specified about how testing will be conducted and standards of acceptance. One Navy program I supported required that we follow a detailed MILSPEC for performing a full-scale independent VV&A exercise. Methodological requirements are depicted on the RTM figure above as the items and lines at the bottom.
  • Ongoing system requirements come into play when existing systems are maintained and modified. Many requirements for the originally installed system are likely to apply to post-deployment modifications.
  • Non-functional requirements for system performance, levels of service, reliability, maintainability, and so on may apply across multiple efforts.
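
As a minimal sketch of handling a units requirement like the one described above, a system can store values internally in SI units and convert at the display boundary. The function and temperature here are illustrative, not drawn from the actual furnace systems described.

```python
def to_display_temp(celsius, unit_system):
    """Convert an internal SI temperature for display.

    Systems that must show English units convert at the boundary; this
    is a common approach, not necessarily the one used in the systems
    described above.
    """
    if unit_system == "SI":
        return celsius, "degC"
    if unit_system == "English":
        return celsius * 9 / 5 + 32, "degF"
    raise ValueError(f"unknown unit system: {unit_system}")

# A hypothetical furnace zone temperature displayed both ways.
value_si, unit_si = to_display_temp(1200.0, "SI")
value_en, unit_en = to_display_temp(1200.0, "English")
```

Keeping a single internal unit system and converting only at the edges avoids mixed-unit calculation errors inside the system.
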

Requirements can come from a lot of places. While my framework addresses the active identification of requirements during an engagement, many of the requirements come implicitly from the knowledge and experience of the participants, and many others come explicitly from contracts governing engagements (at least for external solution teams). Many standard contracts are continuously accreting collections of various kinds of requirements.

What additional classes and sources of requirements can you think of?

Posted in Tools and methods | Tagged , | Leave a comment

Data Mining

Data mining is the processing of large quantities of data to glean useful insights and support decision-making. Descriptive techniques, such as generating graphical depictions, allow users to identify patterns, trends, or clusters. Diagnostic techniques like decision trees or segmentation can show why patterns exist. Predictive techniques like regression or neural networks can guide predictions about future outcomes. The latter are the general purview of machine learning and (still-nascent-and-will-remain-so-for-a-long-time) AI techniques, along with simulation and other algorithms.

Data mining exercises can be described as top-down if the goal is to develop and tune an operational algorithm, or bottom-up if the goal is to discover patterns. They are said to be unsupervised if algorithms are applied blindly where investigators don’t know what they’re looking for, to see if any obvious patterns emerge. They are said to be supervised when techniques are applied to see if they turn up or confirm something specific.

This figure from my friend Richard Frederick shows these techniques in a range of increasing power and maturity. Different organizations and processes fall all along this progression.

Data comes from many sources. I describe processes of discovery and data collection in the conceptual modeling phase of my analytic framework, but data collection occurs in many other contexts as well, most notably in the operation of deployed systems. Forensic, one-off, and special investigations will tend to run as standalone efforts (possibly using my framework). Mature, deployed systems, by contrast, will collect, collate, and archive data that are processed on an ongoing basis. Development and tuning of a data mining process will be conducted on a project basis, and it will thereafter be used on an operational basis.

Development and deployment of a data mining process should follow these steps (per the BABOK).

  1. Requirements Elicitation: This is where the problem to be solved (or the decision to be made) and the approach to be taken are identified.
  2. Data Preparation: Analytical Dataset: This involves collecting, collating, and conditioning the data. If the goal is to develop and tune a specific operational algorithm, then the data has to be divided into three independent segments. One is used for the initial analysis, another for testing, and the third for final confirmation.
  3. Data Analysis: This is where most of the creative analytical work is performed. Analyses can be performed to identify the optimal values for every governing parameter, both individually and in combination with others.
  4. Modeling Techniques: A wide variety of algorithms and techniques may be applied. Many may be tried in a single analysis in order to identify the best model for deployment. Such techniques range from simple (e.g., linear regression) to very complex (e.g., neural networks), and care should be taken to ensure that the algorithms and underlying mathematics are well understood by a sufficient number of participants and stakeholders.
  5. Deployment: The developed and tuned algorithms must be integrated into the deployed system so that they absorb and process operational data and produce actionable output. They can be implemented in any language or tool appropriate to the task. Some languages are preferred for this work, but anything can be used for compatibility with the rest of the system if desired.
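
The three-way division of data in step 2 can be sketched with the standard library alone. The 60/20/20 fractions here are illustrative assumptions, not prescribed by the BABOK.

```python
import random

def split_dataset(records, train=0.6, test=0.2, seed=42):
    """Divide data into three independent segments: initial analysis,
    testing, and final confirmation. Shuffling first keeps each segment
    representative of the whole."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    return (shuffled[:n_train],                      # initial analysis
            shuffled[n_train:n_train + n_test],      # testing
            shuffled[n_train + n_test:])             # final confirmation

analysis, testing, confirmation = split_dataset(list(range(100)))
```

Keeping the three segments independent matters: tuning against the confirmation segment would leak information and overstate the model’s real-world performance.
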

The figure below shows how a data mining exercise could lead to development and tuning of an analytical capability meant to support some kind of decision, based on operational data from an existing system. It further suggests how the developed and tuned capability could be deployed to the operational system as an integrated part of its ongoing function.

There are many ways data can be mined. Let’s look at some in order of increasing complexity.

  • Regression and Curve-Fitting: These techniques allow analysts to interpolate and extrapolate based on fairly straightforward, essentially one-dimensional or two-dimensional data. For example, the number of customers served at a location may be predicted using a linear extrapolation derived from the number served from some number of prior time periods.
  • Correlations and Associations: These allow analysts to understand whether a cause-and-effect relationship exists (with the proviso that correlation is not necessarily causation) or whether potential affinities exist (if customers like A, they might like B and C), based on potentially many parallel streams of data.
  • Neural Nets and Deep Learning: These techniques allow systems to learn to sense, separate, and recognize objects and concepts based on dense but coherent streams of data. Examples include classifying sounds by frequency (different from simple high- and low-pass filters) and identifying objects in an image.
  • Semantic Processing: This involves associating data from many disparate sources based on commonalities like location, group membership, behaviors, and so on.
  • Operations Research Simulations: These potentially complex systems can help analysts design and size systems to provide a set level of service in a specified percentage of situations. For example, it may be enough to design a system that will result in serving customers with no more than a twenty-minute wait eighty percent of the time, on the theory that building a system with enough extra capacity to ensure waits are less than twenty minutes in all cases would be both expensive and wasteful.
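
The linear-extrapolation example in the first bullet can be sketched as a least-squares fit over prior periods. The customer counts here are hypothetical.

```python
def fit_line(ys):
    """Least-squares line through (0, ys[0]), (1, ys[1]), ...;
    returns (slope, intercept)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical customers served in six prior periods.
history = [120, 132, 139, 151, 160, 171]
m, b = fit_line(history)
forecast = m * len(history) + b   # extrapolate one period ahead
```

Extrapolation like this assumes the trend continues, which is exactly the kind of assumption an analyst should state explicitly when presenting the prediction.
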

Considering this from a different angle, let us look at a maintenance process. We might examine data to determine which parts fail most often so we can improve them or keep more replacements on hand. We might see whether we can extend the time between scheduled maintenance events without incurring more failures. Data from a series of real-time sensor systems installed on a machine, in conjunction with historical data, might be able to warn of impending failure so operations can be halted and repairs effected before a major failure occurs. Numerous sources of data can be scanned to identify issues not seen via other means (social media discussions of certain vehicles, sales of consumables by season, rescue calls concentrated in certain regions, locations, or environments).

Numerous resources provide additional information, including this Wikipedia entry.

Posted in Tools and methods | Tagged , , , , | Leave a comment

State Modeling

What do we mean by state?

PvT state diagram

We commonly think of the states of matter as solid, liquid, and gaseous, and we know that these are dependent on temperature. For water at atmospheric pressure, we refer to these states as ice, liquid water, and steam, but every substance can assume the same states, even though they may do so at very different temperatures (and pressures). If we know the temperature, pressure, and specific volume of a substance, we also know (something about) its specific energy, and that uniquely defines the state of the substance.

Other states of matter are possible, with plasma being probably the best known. You can think of plasma as the next hottest state above gaseous, where the electrons have too much energy to remain in orbit around their nuclei. You can think of the Bose-Einstein condensate as a “fifth state,” the next coldest below solid, where none of the particles have enough energy to hold together as atoms. These states and many others are described in this Wikipedia article.

The behavior of the substance depends on its state, and the state of the substance depends on other characteristics, which are not themselves states.

States in Object Models

If we think of water as an object, it can have many characteristics. It can have the properties of temperature, pressure, specific volume, and specific energy (specify any three and the fourth is determined as well, with some complications we won’t go into, mostly having to do with quantities of substance existing in multiple states simultaneously in various conditions of saturation). We specify a mass or particular volume of water to instantiate a particular object, as opposed to describing something abstract.

From these pieces of information we know what state (or sometimes states) the water is in, and then we know how it will behave. In object oriented programming, an attribute is a variable defined within an object that can take on different values. Most of the attributes in this example of a particular volume of water are independent. The state of the water is dependent on other attributes of the water. Other actions, then, are allowed, disallowed, or otherwise governed by the derived state.

State for Systems and Business Analysts

Entities can move through, move within, or remain stationary within a process, and they can change states in any of those situations. A state is a variable condition an entity can be in that affects its behavior. An entity (or object) can have multiple attributes that each reflect different possible states. Let’s look at some examples.

Here I use the term “entity” in a more general sense than what is labeled in the diagram above. For this discussion, an entity can enter, move through, and exit from a system. Think of documents being handled in a business process. An entity can also move within the system but never enter or leave, as in the case of an airport shuttle car. Stations, facilities, queues (and bags) all remain stationary, but can be entities in this sense as well. All can have different characteristics that can define states.

States of entities are defined to determine what actions can and should be performed and what decisions can be made. States of entities in systems can be defined in as many ways as your imagination can dream up — as long as everyone understands them. Here are some examples of states and how they can be defined:

  • A customer’s monthly payment status can be defined by whether payment has been received by a due date or, if payment is late, how long overdue the payment is. Different actions will be taken depending on the timing of full payment or partial payment, the passing of defined lateness thresholds, and so on.
  • Multiple pieces of information might be required to make certain decisions or take certain actions. States can be defined to describe the ability to make the decision or take the action based on the disposition of all of the required data items.
  • Different actions may only be supported when operations are open. Operating hours can determine open/closed states, thermostat settings, availability of certain resources, and so on.
  • States can be defined by quantities. Stocked items may need to be reordered or replenished when quantities fall to or below defined levels. A vending machine must know when it has received enough cash, coinage, or electronic payment to complete the sale and dispense product, and also give change or cancel the interaction and reassume a waiting state if appropriate.
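
The payment-status example in the first bullet can be sketched as a state derived from other attributes. The state names and the 30- and 60-day thresholds are illustrative assumptions.

```python
from datetime import date

def payment_state(due, paid_on, today):
    """Derive a customer's monthly payment state from its other
    attributes (due date, payment date, current date). The thresholds
    are illustrative, not from any specific business rule."""
    if paid_on is not None:
        return "current" if paid_on <= due else "paid_late"
    days_late = (today - due).days
    if days_late <= 0:
        return "pending"          # not yet due, not yet paid
    if days_late <= 30:
        return "overdue"
    if days_late <= 60:
        return "delinquent"
    return "collections"

state = payment_state(date(2024, 1, 15), None, date(2024, 3, 1))
```

Note that the state is not stored anywhere; it is computed on demand from the underlying data, which is one common way to keep derived states from drifting out of sync.
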

State Models

A state model defines:

  • The possible states an entity can be in: Any attribute can be a state that can have multiple values, all of which must be defined. An entity can have many different attributes that each define multiple states.
  • The sequence of states the entity can be in: An entity (or attribute of an entity) in a given state may not be able to arbitrarily transition into every other state. Rules for defining what is possible must be defined. This can be done using tables, diagrams (see below), or other means.
  • How the entity changes states: All actions associated with transitioning into and out of each state must be defined.
  • The events and conditions that cause the entity to change states: The mechanism(s) causing states to change must be defined. Sometimes an event will change a state directly, and at other times an event must scan other information to determine whether a state change has occurred.
  • The actions permitted or required by an entity in each state: Some actions may not be allowed if an entity is in certain states. Other actions must be performed if an entity is in certain states. The latter may happen when the state transition happens, while the entity is in the state, or before the entity is allowed to transition out of that state.
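
A state model along the lines above can be sketched as a transition table that encodes both the possible states and the permitted sequences. The document states and events here are hypothetical.

```python
# Allowed transitions for a hypothetical document entity:
# (current state, event) -> next state. Anything not listed is forbidden.
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "draft",
    ("approved", "publish"): "published",
}

def transition(state, event):
    """Return the next state, enforcing the sequence rules; raise if
    the event is not permitted in the current state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# Walk one legal sequence, including a rejection loop back to draft.
s = "draft"
for e in ("submit", "reject", "submit", "approve", "publish"):
    s = transition(s, e)
```

Entry and exit actions can be attached to the same table (e.g., mapping each transition to a list of callbacks) without changing this basic structure.
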

The figure below is a kind of state transition diagram that shows which states can be transitioned into from which prior states, and also the relative probability of each transition’s occurrence. Many types of state transition diagrams, tables, and other descriptions are possible.

It’s Tricky (h/t Run DMC)

A system as complicated as an aircraft can have many different attributes which each define a state. States can be defined in relation to other variables, and even based on other states. The diagram above shows just a small subset of attributes which may describe an aircraft and the states it may assume. What questions does the figure bring up in your mind?

These states and actions can become very, very complicated, and great care may need to be taken with language and definitions. The analysts, customers, implementers, testers, and reviewers may need to work very closely together. Analysts should be keenly aware of this complexity and ensure that the right conversations are had and the right questions are asked and answered.

Posted in Tools and methods | Tagged , , | Leave a comment


Reviews

Reviews are about examining some output or artifact for quality and agreement. I primarily think of this process in terms of my framework, as shown in the figure below, but many contexts are valid. In my framework, review means building up the relevant work products in each phase and having them reviewed by the customer and reworked by the solution team until they are accepted and approved by all participants. I often talk about how the framework involves continuous iteration within and between phases, and this is always based on different kinds of reviews.

Here’s how review tends to work within each phase. For clarity the activities are described from the point of view of an external solution team developing something for a customer, but it should be understood that the line between the solution team and the customer can vary from sharply defined (I spent most of my career functioning as an external vendor or consultant) to nonexistent (internal groups can analyze and develop solutions for their own problems).

  • Intended Use (Problem Definition): The team helps the customer ensure they’ve identified the problem and purpose correctly.
  • Conceptual Model: The team documents the results of its discovery and data collection processes, and then the customer verifies that the team has achieved accurate and complete understanding of the customer’s processes and vocabulary.
  • Requirements: The team elicits requirements from the customer and collaborates to ensure that the most complete and accurate possible list of both functional and non-functional requirements is generated.
  • Design: The team proposes a design and the customer decides whether to approve it or not.
  • Implementation: The team implements the designed solution with input and continuous review by the customer.
  • Test (and Deployment and Acceptance): The team and customer complete different kinds of verifications and validations.

The customer ultimately provides the final approval or non-approval for each phase. Part of the iteration between phases involves updating the Requirements Traceability Matrix to ensure consistency horizontally between phases and logical consistency vertically within the Design/Implementation phases. Multiple individual iterative create-and-review processes may take place, serially and in parallel, within each engagement phase (that is, the figure above is a stylized, simplified representation of the overall process). Imagine everything that must be going on in an engagement involving multiple teams in a SAFe environment, as shown below.

The Scrum framework incorporates two explicit review activities. In the Sprint Review, the team describes and demonstrates the latest work increment for the customer. In the Sprint Retrospective, the team (without the customer) reviews its own working methods during the just-ended sprint. It should be understood, however, that all of the other kinds of review are still taking place.

Conducting a review involves defining the objective of the review, choosing the techniques to be used during the review, and selecting the participants in the review.

The following review techniques (named and described in the BABOK) can be used (among others).

  • Inspection: This is a formal process of reviewing some or all work products to determine whether they meet defined criteria.
  • Formal Walkthrough (Team Review): These are typically performed to review the methods and behavior of internal teams.
  • Single Issue Review (Technical Review): This is a formal review, often involving a specific technical aspect, of one outstanding area of concern.
  • Informal Walkthrough: This is a fairly informal process of reviewing an item and soliciting feedback from a small number of participants.
  • Desk Check: This informal process drafts an outside participant to perform the review.
  • Pass Around: In this process, many people review the item or items and offer feedback, usually one after the other.
  • Ad hoc: This can be any type of informal review by any type of participant.

Can you think of any additional review techniques?

Finally, the BABOK identifies the following roles. An author creates a work product or artifact. A reviewer is anyone who looks at the work product or artifact and offers feedback or approval (or non-approval). A facilitator is a neutral participant who guides a review process for others. A scribe documents the actions and results of the review.

Posted in Tools and methods | Tagged , , | Leave a comment

Business Model Canvas

Like the balanced scorecard technique I described yesterday, I view the business model canvas technique as a way to get analysts to look at organizations in a different way. It is used fairly commonly (though I had never seen nor heard of it prior to encountering the material in the BABOK). It seems to me to be a tool more appropriate for higher-level officers in an organization to gain a readily accessible and somewhat standardized view of its activities, purpose, and effectiveness. That said, it’s not that business analysts cannot or should not be involved with it.

While the technique seems to be primarily intended to review organizations from within, to determine the best ways to deliver value, it occurs to me that the technique could also be used to evaluate multiple organizations in a consistent way. This would provide an efficient way to compare and contrast their most salient characteristics.

This technique is well known enough that many templates and tools exist for creating them. Here is an example from the Wikipedia entry on the technique:

The canvas contains nine sections (some versions apparently list seven), labeled per the following list, which can be filled with any kind of information that communicates in a way that all participants understand.

  • Key Partnerships: What external organizations and resources, if any, enhance the organization’s ability to meet its goals?
  • Key Activities: Primary activities that deliver value to customers in terms of value-add (customer willing to pay), non-value-add (customer not willing to pay), and business non-value-add (customer not willing to pay but required for other reasons).
  • Key Resources: Physical, financial, intellectual, and human.
  • Value Proposition: What the customer pays you for (also, their cost vs. their benefit).
  • Customer Relationships: Acquisition and retention, personal vs. impersonal, in-person vs. automated, etc.
  • Channels: (All) Modes of interaction with the customer.
  • Customer Segments: Identifying and addressing through different needs, profitability, channels, and relationship modes.
  • Cost Structure: For understanding of how to manage and improve.
  • Revenue Streams: All the ways to generate income and fees.
Posted in Tools and methods | Tagged , , | Leave a comment