How To Address A Weakness

In this evening’s discussion in our weekly Tampa IIBA certification study group we touched on the subject of dealing with weaknesses. This initially grew out of a discussion about SWOT analyses. Based on things I’ve read and my proclivity to try to look at problems from as many angles as possible, I am aware of two main approaches.

The obvious one is to make the weakness less weak. There are numerous ways to do this, depending on the nature of the weakness. One is to learn more or otherwise develop or add to the skill or capability that is lacking. This can involve bringing in new people (from within and from outside your organization), obtaining information (in books and papers and online), training in new tools or technologies (via courses, videos, and friends), and many other methods.

The other, and less intuitive, approach is to enhance your strengths so that the weakness matters less. If you or your organization is so good at something that it provides a significant competitive advantage, then it may be wisest to concentrate on maintaining or improving that strength.

In general, it’s best to take the approach — or combination of approaches — that provides the highest marginal benefit. That is, go with the solution that gives the greatest bang for the buck.

Think of a football team. (We’re talking American football here, not what we crazy Americans call soccer and the rest of the world calls football!) Every team gets to put eleven players on the field on defense at one time. Assuming the level of overall talent among teams is roughly equal, we might observe that some teams, because they have better players at certain positions or better coaching or better or different schemes, will be stronger in some facets of defense and weaker in others, while other teams are stronger or weaker in different areas.

For example, one team may have a very strong defensive line that is reasonably able to stop opposing teams’ rushing attacks and can really put a lot of pressure on the quarterback. A different team may have a less proficient defensive line but a much stronger secondary. If the first team’s defensive line can pressure the opposing quarterback enough, it may not matter that its secondary cannot cover the receivers as well, because the quarterback won’t be able to get the ball to them anyway. If the second team’s secondary is very strong and is able to blanket the opposing receivers, it may not matter if its defensive line is weaker, because even if the quarterback has more time to throw, the receivers will never be open to throw to.

There are other ways to cover for a weak aspect of a defense. One is to improve the offense so the defense is on the field less or otherwise does not have to be as effective. Another is to modify the stadium and motivate the crowd so opposing offenses have to play in a louder environment, which should reduce their effectiveness. The number of factors that can be considered in this kind of analysis is nearly infinite, so in order to keep things simple, we’re only going to consider the problem from the two dimensions of a defense’s line vs. its secondary.

Looking at the first case of a defense with a strong line and a weaker secondary (the weakness we intend to discuss), we can see that we can either improve the secondary (the obvious approach), or (less obviously) we can maintain or improve the defensive line even more. Remember that resources are limited (the number of players on a team and the number on the field at any one time are fixed, there is a salary cap, there are recruiting restrictions, and so on) and the solution must be optimized within defined constraints. Not all problems are constrained in this exact way, but every problem is constrained in some way. It’s always a question of making the best use of the resources you have. The approach you take should be based on what works best for the current situation.

Basic Framework Presentation

I found it necessary to put together a shorter introduction to my business analysis framework than my normal, full-length presentation(s). The link is here.

Decision Modeling

Many business processes require decisions to be made. Decision modeling is about the making and automating of repeatable decisions. Some decisions require unique human judgment. They may arise from unusual or entrepreneurial situations and involve factors, needs, and emotions that cannot reasonably be quantified. Other decisions should be made the same way every time, according to definable rules and processes.

Decisions are made using methods of widely varying complexity. Many of the simulation tools I created and used were directed toward decision support. The most deterministic and automatable decisions tend to use the techniques toward the lower-left end of the complexity-and-abstraction progression of data analysis techniques shown above, although the ability to automate decisions is slowly creeping up and to the right. I discussed some aspects of this in a recent write-up on data mining.

Decision processes embody three elements. Knowledge is the procedures, comparisons, and industry context of the decision. Information is the raw material and comparative parameters against which a decision is made. The decision itself is the result of correctly combining the first two. Business rules can involve methods, parameters, or both.

Let’s see how some of these methods may work in practice.

Decision tables, an example of which is shown above, list the relevant rules in terms of operations and parameters. The rules shown above involve simple comparisons, but more complex definitional and behavioral rules can apply. The optimization routines Amazon uses to determine how to deliver multiple items ordered at the same time in one or many shipments on one or many days and from one or more fulfillment centers involved up to 60,000 variables in the late 1990s and are likely to be even larger now. A definitional rule may describe the way a calculation should be performed, while a behavioral rule might require a payment address to match a shipping address. The procedures and comparisons can be as complex as the situation demands.

The same set of rules can be drawn in the form of a decision tree as shown below.

These rules can be described during the discovery and data collection activities of the conceptual model phase, and also during the requirements and design phases. It is fascinating how many different ways such rules can be brought to life in the implementation phase.

The most direct and brute force way is shown below, in the C language.
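Here is a minimal sketch of what such a hard-coded implementation might look like in C. The rule set itself (shipping-method tiers and their breakpoints) is invented purely for illustration; the point is that every comparison parameter is a literal embedded directly in the logic.

    #include <stdio.h>

    /* Hypothetical example: pick a shipping method based on order weight
       and order total. All breakpoints and thresholds are hard-coded
       "magic numbers" buried in the comparisons. */
    const char *select_shipping(double weight_lbs, double order_total)
    {
        if (order_total >= 100.0) {          /* free shipping threshold */
            return "FREE_GROUND";
        } else if (weight_lbs <= 1.0) {      /* light items go first class */
            return "FIRST_CLASS";
        } else if (weight_lbs <= 20.0) {     /* mid-weight items go ground */
            return "GROUND";
        } else {
            return "FREIGHT";                /* everything heavier */
        }
    }

    int main(void)
    {
        printf("%s\n", select_shipping(0.5, 25.0));   /* FIRST_CLASS */
        printf("%s\n", select_shipping(15.0, 120.0)); /* FREE_GROUND */
        printf("%s\n", select_shipping(35.0, 80.0));  /* FREIGHT */
        return 0;
    }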

This way also works, but looks totally different.
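Again as a sketch using the same invented rules, the comparison parameters can instead be held in a structure whose values could be set in code, read from a file, or loaded from a database at startup, so they can change without touching the logic.

    #include <stdio.h>

    /* Same hypothetical rules, but the breakpoints are data rather than
       literals. In a real system these would be initialized from a file
       or a database table rather than assigned in source code. */
    typedef struct {
        double free_ship_total;   /* order total that earns free shipping */
        double first_class_max;   /* max weight for first-class mail      */
        double ground_max;        /* max weight for ground shipping       */
    } ShippingParams;

    const char *select_shipping(const ShippingParams *p,
                                double weight_lbs, double order_total)
    {
        if (order_total >= p->free_ship_total) return "FREE_GROUND";
        if (weight_lbs  <= p->first_class_max) return "FIRST_CLASS";
        if (weight_lbs  <= p->ground_max)      return "GROUND";
        return "FREIGHT";
    }

    int main(void)
    {
        /* These values could just as easily come from configuration. */
        ShippingParams params = { 100.0, 1.0, 20.0 };
        printf("%s\n", select_shipping(&params, 0.5, 25.0));   /* FIRST_CLASS */
        printf("%s\n", select_shipping(&params, 35.0, 80.0));  /* FREIGHT */
        return 0;
    }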

The number of ways this can be done is endless. The first method “hard codes” all of the definitional parameters needed for comparison. This can be somewhat opaque and hard to maintain and update. The second method defines all the parameters as variables that can be redefined on the fly, or initialized from a file, a database, or some other source. The latter is easier to maintain and is generally preferred. It is extremely important to maintain good documentation, including in the code itself. I’ve omitted most comments for clarity, but I would definitely include a lot in production code. I would also include references to the governing documents, RTM item index values, and so on to maintain tight connections between all of the sources, trackers, documents, and implementations.

In order to understand these, you’d have to know a reasonable amount about programming, and failing that you should know how to define tests that exercise every relevant case. For example, you would want to define tests that not only supply inputs in the middle of each range, but also supply inputs on the boundaries of each range, so you can fully ensure the greater-than-or-equal-to versus strictly-greater-than logic works exactly the way you and your customers intend. Setting the requirements for these situations may require understanding of organizational procedures and preferences, industry practices, competitive considerations, risk histories and profiles, and governing regulations and statutes. None of these considerations are trivial.
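To make the boundary-testing point concrete, here is a small self-contained sketch (again with invented tiers) that checks values in the middle of each range and exactly on each breakpoint, so that a greater-than-or-equal-to versus strictly-greater-than mistake is caught immediately.

    #include <assert.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical tiered classification, repeated here so the test stands alone. */
    static const char *ship_class(double weight_lbs)
    {
        if (weight_lbs <= 1.0)  return "FIRST_CLASS";
        if (weight_lbs <= 20.0) return "GROUND";
        return "FREIGHT";
    }

    int main(void)
    {
        /* Mid-range values. */
        assert(strcmp(ship_class(0.5),  "FIRST_CLASS") == 0);
        assert(strcmp(ship_class(10.0), "GROUND")      == 0);
        assert(strcmp(ship_class(50.0), "FREIGHT")     == 0);

        /* Boundary values: 1.0 and 20.0 must fall in the lower tier because
           the rule is "less than or equal to", not "strictly less than". */
        assert(strcmp(ship_class(1.0),  "FIRST_CLASS") == 0);
        assert(strcmp(ship_class(20.0), "GROUND")      == 0);

        /* Just past each boundary. */
        assert(strcmp(ship_class(1.01),  "GROUND")  == 0);
        assert(strcmp(ship_class(20.01), "FREIGHT") == 0);

        printf("All boundary tests passed.\n");
        return 0;
    }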

You will also want to work with your implementers and testers to ensure they test for invalid, nonsensical, inconsistent, or missing inputs. It’s up to the analysts and managers to be aware of what it takes to make systems as robust, flexible, understandable, and maintainable as possible. Some programmers may not want to do these things, but the good and conscientious ones will clamor to inject the widest variety of their concerns and experiences into the process. As such, it’s important to foster good relationships between all participants in the working process and have them contribute to as many engagement phases as possible.

Finally, data comes in many forms and is used and integrated into organizations’ working processes in many ways. I discuss some of them here and here, and visually suggest some in the figure below. Some of this data is used for other purposes and doesn’t directly drive decisions, and I would not assert that one kind is more important than another. In the end, it all drives decisions and it is all important to get right.

Decision Analysis

Some decisions are fairly straightforward to make. This is true when the number of factors is limited and when their contributions are well defined. Other decisions, by contrast, involve more factors whose contributions are less clear. It is also possible that the desired outcomes or actions leading to or resulting from a decision are not well defined, or that there is disagreement among the stakeholders.

The process of making decisions is similar to the entire life cycle of a business analysis engagement, writ small. It involves some of the same steps, including defining the problem, identifying alternatives, evaluating alternatives, choosing the alternative to implement, and implementing the chosen alternative. The decision-makers and decision criteria should also be identified.

Let’s look at a few methods of making multi-factor decisions at increasing levels of complexity. It is generally best to apply the simplest possible method that can yield a reasonably effective decision, because more time and effort is required as the complexity of analysis increases. I have worked on long and expensive programs to build and apply simulations to support decisions of various kinds. Simulations and other algorithms themselves vary in complexity, and using or making more approachable and streamlined tools makes them more accessible, but one should still be sure to apply the most appropriate tool for a job.

  • Pros vs. Cons Analysis: This simple approach involves identifying points for and against each alternative, and choosing the one with the most pros, fewest cons, or some combination. This is a very flat approach.
  • Force Field Analysis: This is essentially a weighted form of the pro/con analysis. In this case each alternative is given a score within an agreed-upon scale for the pro or con side, and the scores are added for each option. This method is called a force field analysis because it is sometimes drawn as a (horizontal or vertical) wall or barrier with arrows of different lengths or widths pushing against it perpendicularly from either side, with larger arrows indicating considerations with more weight. The side with the greatest total weight of arrows “wins” the decision.
  • Decision Matrices: A simple form of the decision matrix assigns scores to multiple criteria for each option and adds them up to select the preferred alternative (presumably the one with the highest score). A weighted decision matrix does the same thing, but multiplies the individual criteria scores by factor weightings; a small sketch of this appears after this list. A combination of these techniques was used to compile the ratings for the comparative livability of American cities in the 1985 Places Rated Almanac. See further discussion of this below.
  • Decision Tables: This technique involves defining groups of values and the decisions to be made given different combinations of them. The input values are laid out in tables and are very amenable to being automated through a series of simple operations in computer code.
  • Decision Trees: Directed, tree structures are constructed where internal nodes represent mathematically definable sub-decisions and terminal nodes represent end results for the overall decision. The process incorporates a number of values that serve as parameters for the comparisons, and another set of working values that are compared in each step of the process.
  • Comparison Analysis: This is mentioned in the BABOK but not described. A little poking around on the web should give some insights, but I didn’t locate a single clear and consistent description for guidance.
  • Analytic Hierarchy Process (AHP): Numerous comparisons are made by multiple participants of options that are hierarchically ranked by priority across potentially multiple considerations.
  • Totally-Partially-Not: This identifies which actions or responsibilities are within a working entity’s control. An activity is totally within, say, a department’s control, partially within its control, or not at all in its control. This helps pinpoint the true responsibilities and capabilities related to the activity, which in turn can guide how to address it.
  • Multi-Criteria Decision Analysis (MCDA): An entire field of study has grown up around the study of complex, multiple-criteria problems, mostly beginning in the 1970s. Such problems are characterized by conflicting preferences and other tradeoffs, and ambiguities in the decision and criteria spaces (i.e., input and output spaces).
  • Algorithms and Simulations: Much of the material on this website discusses applications of mathematical modeling and computer simulation. There are many, many subdisciplines within this field, of which the discrete-event, stochastic simulations using Monte Carlo techniques I have worked on are just one example.
  • Tradespace Analysis: Most of the above methods of analysis involve evaluating trade-offs between conflicting criteria, so there is a need to balance multiple considerations. It is often true, especially for complex decisions, that there isn’t a single optimal solution to a problem. And in any case there may not be time and resources to make the best available decision, so these methods provide a way to at least bring some consideration and rationality to the process. Decision-making is ultimately an entrepreneurial function (making decisions under conditions of uncertainty).
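To make the weighted decision matrix concrete, here is a minimal sketch; the criteria, weights, and scores are invented for the example and are not taken from any real evaluation (including the Places Rated Almanac discussed next).

    #include <stdio.h>

    #define NUM_CRITERIA 4
    #define NUM_OPTIONS  3

    int main(void)
    {
        /* Invented criteria weights (importance) and per-option scores (1-10). */
        const char  *criteria[NUM_CRITERIA] = { "Cost", "Climate", "Jobs", "Schools" };
        const double weights[NUM_CRITERIA]  = { 0.4, 0.2, 0.3, 0.1 };

        const char  *options[NUM_OPTIONS] = { "City A", "City B", "City C" };
        const double scores[NUM_OPTIONS][NUM_CRITERIA] = {
            { 7, 4, 8, 6 },
            { 5, 9, 6, 7 },
            { 8, 6, 5, 5 },
        };

        printf("Criteria (weight): ");
        for (int j = 0; j < NUM_CRITERIA; j++)
            printf("%s (%.1f)  ", criteria[j], weights[j]);
        printf("\n");

        int best = 0;
        double best_total = -1.0;

        for (int i = 0; i < NUM_OPTIONS; i++) {
            double total = 0.0;
            for (int j = 0; j < NUM_CRITERIA; j++)
                total += weights[j] * scores[i][j];   /* weighted sum per option */
            printf("%-8s weighted score: %.2f\n", options[i], total);
            if (total > best_total) { best_total = total; best = i; }
        }
        printf("Preferred option: %s\n", options[best]);
        return 0;
    }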

The Places Rated Almanac

I’ve lived in a lot of places in my life, but I consider Pittsburgh to be my “spiritual” hometown. I spent many formative and working years there and I have a great love for the city, even with my understanding of its problems. So, I and other Pittsburghers were shocked and delighted when the initial, 1985 edition of Rand McNally’s Places Rated Almanac (see also here) rated our city as the most livable in America. Not that we didn’t love it, and not that it doesn’t have its charms, but the ranking pointed out a few potential issues with rating things like this.

The initial work ranked the 329 largest metropolitan areas in the United States on nine different categories: ambience, housing, jobs, crime, transportation, education, health care, recreation, and climate. Pittsburgh scores well on health care because it has a ton of hospitals and a decent amount of important research happens there (much of it driven by the University of Pittsburgh). It similarly gets good scores for education, probably driven by Pitt and Carnegie Mellon, among many other alternatives. I can’t remember what scores it got for transportation, but I can tell you that the topography of the place makes navigation a nightmare. Getting from place to place involves as much art as science, and often a whoooole lot of patience.

It also gets high marks for climate, even though its winters can be long, leaden, gray, mucky, and dreary. So why is that? It turns out that the authors assigned scores that favored mean temperatures closest to 65 degrees, and probably favored middling amounts of precipitation as well. Pittsburgh happens to have a mean temperature of about 65 degrees, alright, but it can be much hotter in the summer and much colder in the winter. San Francisco, which ranked second or third overall in that first edition, also has a mean temperature of about 65 degrees, but the temperature is very consistent throughout the year. So which environment would you prefer, and how do you capture it in a single metric? Alternatively, how might you create multiple metrics representing different and more nuanced evaluation criteria? How might you perform the analyses in all nine areas differently than the authors did?

If I recall correctly, the authors also weighted the nine factors equally, but provided a worksheet in an appendix that allowed readers to assign different weights to the criteria they felt might be more important. I don’t know if it supported re-scoring individual areas for different preferences. I can tell you, for example, that the weather where I currently live in central Florida is a lot warmer and a lot sunnier and a lot less snowy than in Pittsburgh, and I much prefer the weather here.

Many editions of the book were printed, in which the criteria were continually reevaluated, and that resulted in modest reshufflings of the rankings over time. Pittsburgh continued to place highly in subsequent editions, but I’m not sure it was ever judged to be number one again. More cities were added to the list over the years as different towns grew beyond the lower threshold of population required for inclusion in the survey. Interestingly, the last-place finisher in the initial ranking was Yuba City, California, prompting its officials to observe, “Yuba City isn’t evil, it just isn’t much.”

One thing you can do with methods used to make decisions is to grind through your chosen process to generate an analytic result, and then step back and see how you “feel” about it. This may be appropriate for personal decisions like choosing a place to live, but might lead to bad outcomes for public competitions with announced criteria and scoring systems that should be adhered to.

Business Analysis Embedded Within Project Management

I often describe the process of business analysis as being embedded in a process of project management. I’ve also described business analysis as an activity that takes place within a project management wrapper. Engagements (projects) may be run in many different styles, as shown below, but the mechanics of project management remain more or less the same.

What changes in the different regimes is the way the actual work gets done. And here I find it necessary to make yet another bifurcation. While I talk about business analysis as taking place across the entire engagement life cycle, through all of its phases, the function of the BA is different in each phase, and the level of effort is also different in each phase.

I think of essentially three groups as being involved in each engagement: the managers, the analysts, and the implementers (and testers). Let’s look at their duties and relative levels of participation across each of the phases. The descriptions are given as if every role were filled by different people with siloed skill sets, but individuals can clearly function in multiple roles simultaneously. I’ve done this in some instances, and it will almost inevitably be the case in smaller teams and organizations.

  1. Intended Use (Problem Definition): This is where the project’s ultimate goals are defined, resources are procured, and governance structures are established. This work is primarily done by project and program managers, product owners and managers, and the sponsors and champions. Analysts, as learning and translating machines, can serve in this phase by understanding the full life cycle of an effort and how the initial definition and goals may be modified over time. It may be that only senior analysts participate in this phase. Implementers can contribute their knowledge of their methods and solution requirements and how they need to interact with customers.
  2. Conceptual Model: This is where the analysts shine and drive the work. The managers may need to facilitate the mechanics of the discovery and data collection processes, but the analysts will be the ones to carry it out, document their findings, and review them with the customers, making changes and corrections until all parties have reached agreement. The implementers will generally be informed about events, and may participate lightly in discovery activities or do brief site visits to get a feel for who they are serving and the overall context of the work.
  3. Requirements: This works very much like the conceptual model phase, where the analysts find out what the customers need through elicitation and review and feedback. The implementers will be a little more involved to the degree that their solutions inject their own requirements into the process. Managers facilitate the time, resources, introductions, and permissions for the other participants.
  4. Design: There are two aspects to the design. The abstract design may be developed primarily by the analysts, while the more concrete aspects of the design are likely to be developed by the implementers. I often describe the requirements phase as developing the abstract To-Be state and the design as developing the concrete To-Be state, but even the “concreteness” of the design has different levels. The abstract (but concrete!) part of the design describes the procedures, equations, data items, and outputs for the solution, while the concrete (really, really concrete!) part of the design specifies how the foregoing is implemented. I know from painful experience that you can have a really good idea what you need a system to do, but being able to implement your desires correctly and effectively can be difficult, indeed. See here, here, and here for further discussion. The latter item is especially germane.
  5. Implementation: The implementers clearly do most of the work here. The analysts serve as liaisons between the implementers and customers by facilitating ongoing communication, understanding, and correction. The managers support the process and the environment in which the work is conducted.
  6. Test (and Acceptance): The implementers (and testers) also expend most of the effort in this phase. The managers facilitate and protect the environment and verify final acceptance of all items. The analysts facilitate communication between all participants and the customer, and also continually attempt to improve the flow of the entire working process.

I tend to express the phases of my analysis framework in a streamlined form of a more involved process. I start with everything that gets done:

  • Project Planning
  • Intended Use
  • Assumptions, Capabilities, and Risks and Impacts
  • Conceptual Model (As-Is State)
  • Data Sources, Collection, and Conditioning
  • Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  • Design (To-Be State: Detailed)
  • Implementation
  • Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  • Acceptance (Accreditation)
  • Project Close
  • Operation and Maintenance
  • End-of-Life and Replacement

Then I drop the management wrapping at the beginning and end (with the understanding that it not only remains but is an active participant through all phases of an engagement or project/product/system life cycle) simply because it’s not explicitly part of the business analysis oeuvre.

  •   Intended Use
  •   Conceptual Model (As-Is State)
  •   Data Sources, Collection, and Conditioning
  •   Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  •   Design (To-Be State: Detailed)
  •   Implementation
  •   Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  •   Operation and Maintenance
  •   End-of-Life and Replacement

Then we simplify even further, since the data melts into the other phases and we don’t always worry about the full life cycle.

Now let’s consider the practice of project management in its own language. The Project Management Body of Knowledge (PMBOK) is the Project Management Institute’s (PMI) analog to the IIBA’s BABOK. It defines five phases of a project as follows.

The figure above shows the five phases proceeding from left to right through the course of the project. The practice embodies management of ten different areas of concern, each of which comes into play during some or all of the project’s phases. (This was true through the sixth edition of the PMBOK. The recently released seventh edition replaces the ten knowledge areas with twelve principles, including extensive coverage of Agile practices. I will update this article accordingly at some point in the future.)

The project is defined and kicked off during the initiating phase, during which the requisite stakeholders are identified and involved. The project charter is developed in this phase, shown in the integration management area in the upper left. BAs can definitely be part of the process of creating and negotiating the charter and helping to shape the project and its environment. The project charter defines the key goals of the project, the major players, and something about the process and environment.

The planning phase is where the bulk of the preparation gets done in terms of establishing the necessary aspects of the working environment and methodologies for the project. The actual work gets done in the executing phase, with the monitoring and controlling phase proceeding concurrently; the latter is devoted to monitoring, feedback, and correction separate from the actual work. The closing phase ends the project, records lessons learned, archives relevant materials, celebrates its successes (hopefully…), and releases resources for other uses. The methods and concerns in each of the ten management areas all overlap with the practice of business analysis, and BAs should absolutely be involved with that work.

In the figure below I show that, once the engagement (or effort or project or venture or whatever) is set up, most of the work of the business analysis (as well as the implementation and testing) oeuvre is accomplished during the executing phase and the monitoring and controlling phase. This includes the intended use phase (which also includes the activities in the project charter), because it may change as the result of developments, discovery, and feedback over the course of the engagement.

Don’t take the location of the phases too literally. I’m not saying the first three BA phases occur during executing and the remaining three during monitoring and controlling. Rather, I’m saying that all phases of BA work are conducted during the concurrent executing and monitoring and controlling phases. Seen in this light, the initiating, planning, and closing phases from the project management oeuvre are the “wrapper” within which the bulk of an engagement’s actual work is done.

I’ll end by emphasizing a few things again. These general concepts apply no matter what project approach may be taken (e.g., Waterfall, Agile, Scrum, Kanban, SAFe, or hybrid). Individuals may wear multiple hats depending on the organization and situation. All parties should work together, bringing their strengths and unique abilities to bear. Few participants are likely to participate through all phases of an engagement, but they should be made aware of the full context of their work. Greater understanding of the roles of all participants and job functions will greatly aid cooperation and understanding. And finally, and most importantly, that greater understanding will lead to greater success!

Estimation

Estimation is used to try to predict future outcomes related to the iron triangle elements of time, money, and, to a lesser degree, quality (or features or performance). The BABOK essentially only discusses the first two. Estimates can be made of both costs and benefits. While all aspects of this process are in a sense entrepreneurial, the biggest component of entrepreneurial judgment is predicting future benefits, particularly potential sales.

Any aspect of an effort or solution may be estimated for any part of its full life cycle. Examples include the time, cost, and effort (in terms of staff and materials) of any activity; capital, project, and fixed and variable costs of delivered solutions; potential benefits (e.g., sales, savings, reduced losses); and net performance (projected benefits minus projected costs).

The most important thing to know about estimation is that it tends to be more accurate when more information is available. This is especially true when making estimates about outcomes of situations very similar to ones encountered in the past.

There are many methods of estimation including:

  • Top-down and Bottom-up: Estimates can be performed from both ends depending on what is known about the engagement and the solution (the project and the product). Breakdowns can be made from the highest levels down to more detailed levels, or aggregations can be made from detailed low-level information which is then grouped and summed.
  • Parametric Estimation: This method has a lot in common with bottom-up estimation. It attempts to multiply lesser-known input information (how many of A, B, and C) by better-known parametric information (e.g., the known prices for each individual example of A, B, and C). Levels of skill and experience can figure in to such calculations as well.
  • Rough Order of Magnitude (ROM): This is basically an educated guess, based on experience, impressions, and entrepreneurial judgment. There are a few pithier names for this method!
  • Rolling Wave: This involves making continuous estimates of elements throughout an engagement, which ideally become more accurate over time as more is known and less is unknown.
  • Delphi: This technique seeks estimates from a wide variety of participants, potentially over multiple iterations, until a consensus is reached. This allows certain kinds of knowledge to be shared across the participants. As an example, think of a group of coders bidding on tasks during sprint planning. Most participants might make similar judgments of the complexity of a task, but if one or two team members make very different estimates they could share that they’re aware of a simple or existing solution to the problem that will reduce the effort required, or know about hidden requirements and other stumbling blocks that will increase the effort required. As another example, the first issue of Omni Magazine included a Delphic poll of its readership asking about when certain developments, discoveries, and accomplishments might take place. The results were published in a subsequent issue.
  • PERT: This technique asks participants to estimate best-case, expected, and worst-case outcomes, which are then averaged, with the expected outcome given a weighting of four times, i.e., result = (best + 4*expected + worst) / 6.
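As a minimal sketch of the PERT arithmetic, here is the weighted average applied to an invented task with three invented estimates.

    #include <stdio.h>

    /* PERT (three-point) estimate: the expected case is weighted four times. */
    double pert_estimate(double best, double expected, double worst)
    {
        return (best + 4.0 * expected + worst) / 6.0;
    }

    int main(void)
    {
        /* Hypothetical task: best case 3 days, expected 5 days, worst 13 days. */
        double e = pert_estimate(3.0, 5.0, 13.0);
        printf("PERT estimate: %.2f days\n", e);   /* (3 + 20 + 13) / 6 = 6.00 */
        return 0;
    }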

As mentioned above, the accuracy of estimates is likely to improve when more information is available. This information can come from similar or analogous situations, historical information, expert judgment, or a combination of any or all of these.

Estimates can be given as point values or as a range, the latter of which also indicates the degree of uncertainty. A measure called the confidence interval describes the expected range of outcomes, and it is generally expressed as (1 – expected maximum error), where the expected maximum error is a percentage of the central value. For example, an estimate of 100 plus or minus 10 would indicate a confidence interval of 90%. In the case of 100 plus or minus five, the confidence would be 95%. Certain statistical and Monte Carlo techniques generate confidence intervals. In these two examples, the maximum absolute error in one direction is sometimes called the half-width, because it is half of the full range of possible outcomes (though the upper and lower bounds do not have to be the same distance from the expected value). This information can come into play when determining needed sample sizes.
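As a minimal sketch of how a half-width might be computed from simulation output, here ten replication values are invented and the large-sample normal approximation (z = 1.96 for a 95% interval) is used; a t value would be more appropriate for a sample this small.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Hypothetical outputs from ten independent simulation replications. */
        const double x[] = { 98.2, 103.5, 101.1, 97.4, 100.8,
                             102.3, 99.0, 104.1, 96.7, 101.9 };
        const int n = sizeof(x) / sizeof(x[0]);

        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < n; i++) { sum += x[i]; sumsq += x[i] * x[i]; }

        double mean = sum / n;
        double var  = (sumsq - n * mean * mean) / (n - 1);   /* sample variance */
        double half_width = 1.96 * sqrt(var / n);            /* ~95% half-width */

        printf("mean = %.2f, 95%% interval = [%.2f, %.2f], half-width = %.2f\n",
               mean, mean - half_width, mean + half_width, half_width);
        return 0;
    }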

Estimates should generally be made by those responsible for the outcome of the effort for which the estimate was performed. These can, however, be checked against estimates from additional parties.

Item Tracking

Item tracking is how participants in an effort monitor what concerns, issues, and tasks are valid and need to be addressed, and who has responsibility. Items can arise in any phase of an engagement and be tracked through any other phase, including during the extended operation and maintenance phase.

Items may incorporate the following attributes, according to the BABOK. I think some of these are redundant, but tracking systems like Jira and Rally embody them by default, and can be customized to include the others. More importantly, if you look back to your own experience, you can see that most of these are implicitly present even if not formally acknowledged.

  • Item Identifier: A unique identifier that serves as a key so the item can be found.
  • Summary: A description of the issue that includes text and possibly images, diagrams, and media.
  • Category: A key that can be used to group the item with similar items.
  • Type: The kind of item. (Similar to category?) (Create as needed for your unique situation.)
  • Date Identified: Date and time the issue was raised (and introduced into the system).
  • Identified By: Name (and contact information) of individual(s) who identified or raised the issue.
  • Impact: What happens if the item is not resolved. May include specified times for milestones and completion.
  • Priority: An indication of the item’s importance and especially time requirements.
  • Resolution Date: Times by which milestones must be reached or by which the item must be resolved.
  • Owner: Who is responsible for marshaling the item through to completion.
  • Resolver: Who is responsible for resolving the item.
  • Agreed Strategy: Method for approaching or resolving the item. The BABOK presents options akin to those used in risk analysis (e.g., accept, pursue, ignore, mitigate, avoid), but others are possible.
  • Status: The current state of the item. Items may have their own life cycles (e.g., opened, assigned, in work, in testing, resolved, canceled, rejected). See below for further discussion.
  • Resolution Updates: A log of activities and status updates detailing the item’s disposition.
  • Escalation Matrix: What to do and who should do it if the item is not resolved in the allotted time.
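As an illustrative sketch only (the field names, sizes, and status values are invented rather than taken from Jira, Rally, or any other tool), the attributes above map naturally onto a simple record type.

    #include <time.h>

    /* Hypothetical record for a tracked item; a real tracker would store
       the same kinds of fields, plus a log of resolution updates. */
    typedef enum {
        ITEM_OPENED, ITEM_ASSIGNED, ITEM_IN_WORK,
        ITEM_IN_TESTING, ITEM_RESOLVED, ITEM_CANCELED, ITEM_REJECTED
    } ItemStatus;

    typedef struct {
        char        id[16];            /* unique identifier / key                 */
        char        summary[256];      /* short description of the issue          */
        char        category[32];      /* grouping key                            */
        char        type[32];          /* kind of item                            */
        time_t      date_identified;   /* when the issue was raised               */
        char        identified_by[64]; /* who raised it                           */
        char        impact[256];       /* what happens if it is not resolved      */
        int         priority;          /* importance and urgency                  */
        time_t      resolution_date;   /* when it must be resolved                */
        char        owner[64];         /* responsible for marshaling the item     */
        char        resolver[64];      /* responsible for resolving the item      */
        char        agreed_strategy[64];
        ItemStatus  status;            /* current state in the item life cycle    */
        char        escalation[128];   /* what to do if it misses its deadline    */
    } TrackedItem;

    int main(void)
    {
        TrackedItem item = { .id = "BA-042", .summary = "Example item",
                             .priority = 2, .status = ITEM_OPENED };
        (void)item;   /* placeholder usage */
        return 0;
    }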

Each organization, and even each engagement, may have its own standard procedures and vocabulary for handling items through their life cycle. When I wrote models for nuclear power plant simulators at Westinghouse we usually had three or four projects going at once, and all of them named some of their work items differently. We had DRs, TRs, and PRs, for deficiency reports, trouble reports, and problem reports at the very least, depending I think on the customer’s preferred language.

I’ve written about using systems like Jira for tracking items through the entire engagement life cycle (here), but a few years later I can see that the idea should be expanded to include an independent life cycle for items within each phase of my framework, and that may be different for different phases. For example, the life cycle for implementation items might be something like assigned, in work, in bench testing, completed (and forwarded to testing). The cycle for conceptual model items might be very different, since it involves performing discovery operations through tours, interviews, research, calculations, and data collection, and then documenting the many identified elements and circulating them for review and correction. I should do a specific write-up on this.

Statistics can be compiled on the processing and disposition of items, so the engagement teams and customers can understand and improve their working methods. Care should be taken to be aware of potential variances in the complexity and requirements of each item, so any resultant findings can be interpreted accurately and fairly.

As mentioned above, items can arise and be tracked and resolved in and through all phases in an engagement’s or product’s full life cycle. In my career I’ve seen individually tracked items mostly come from testing, customer concerns, and to do lists generated by the solution teams themselves. We often called them punch lists as projects were advancing toward completion and the number of outstanding items became small enough to be listed and attacked individually. But, depending on the maturity and complexity of your organization and your effort, you’ll want to carefully consider what system you impose on a working project. You want it to be complex enough to be powerful and clarifying for all participants, but not so overwhelming that interacting with it is almost a larger burden than the actual work. That is, it should enhance the working environment, and not impede it.

What systems have you seen for tracking items?

Benchmarking and Market Analysis

Benchmarking involves learning about activities and characteristics across industries, organizations, products, methodologies, and technologies to identify best practices, product options, and competitive requirements. Benchmarking may be performed by comparing the presence or absence of features (which video editing programs can burn to Blu-ray discs?) and also by comparing the magnitudes of various features (0-to-60 time).

The BABOK lists the following elements related to benchmarking.

  • Identifying what to study
  • Identifying market leaders
  • Learning what others are doing
  • Requesting information to learn about capabilities
  • Learning during plant visits
  • Performing gap analysis vs. market leaders
  • Developing proposals to implement best practices

Here are some examples of benchmarking.

  • When Ford released its Taurus model in 1986, winning Motor Trend’s Car of the Year Award, the design team had examined one hundred different aspects of other vehicles in its class to identify features to include and improve upon. I owned models from 1986 and 1990 and was always impressed that they included a covered storage compartment in the middle of the rear deck behind the back seat, an area which is almost always empty and neglected. I stored the car’s maintenance manuals there.
  • When a company I worked for was contracted to develop a building evacuation model, I conducted an extensive online literature search to learn what work had been performed previously along those lines. I turned up numerous methodologies, research papers, case studies, modeling techniques, and more. I later listed the parameters needed to specify and control the evacuation environment and moving entities, and the user interfaces needed to define and modify them.
  • The first engineering company I worked for introduced me to a really neat way of sharing information. The pulp and paper industry embodied a huge amount of empirical knowledge about the behavior and processing of wood fibers and the related equipment. My director would gather up a huge folder of reading material every two to four weeks and circulate it around to everyone in the department, complete with a checksheet to indicate that each engineer had taken the time to read through the materials. The magazines discussed some information that would more properly be considered market research, but that was a bit over my head at the time.
  • Government and other (usually large) entities will sometimes issue Requests for Information (RFIs) to learn about capabilities of potential vendors, suppliers, and consultants that may be able to help them solve certain problems.

Market Analysis involves studying customers, competitors, and the market environment to determine what opportunities exist and how to address them.

The BABOK lists the following elements related to market analysis.

  • Identifying customers and preferences
  • Identifying opportunities to increase value
  • Studying competitors and their operations
  • Examining market trends in capabilities and sales
  • Defining strategies
  • Gathering market data
  • Reviewing existing information
  • Reviewing data to reach conclusions

Here are some examples of market analysis.

  • The Kano Model of quality seeks to understand the voice of the customer (VOC). It provides a framework for measuring customer satisfaction and determining when improvement is needed. It plots features as shown below, categorizing aspects of a product or service as dissatisfiers, satisfiers, and delighters. Items should be prioritized by addressing dissatisfiers first, then satisfiers, and finally delighters. Think of a hotel room. Customers may expect it to be clean and have a desk and an ironing board and a blow drier, and if any of those things are missing or otherwise not right the customer will be unhappy. A hotel room is generally something where you cannot be surprised to the upside, but only to the downside. That said, free cookies, an exceptionally friendly staff, or unusually good WiFi may constitute a delighter under the right circumstances.

  • The companies I worked for usually did custom consulting and product development, but I observed that we might get more financial leverage by building a standalone product we could sell many times. The company then developed such a product. Never mind that they sold all of one unit.
  • The company I worked for that made HVAC controls sponsored an in-person conference with many of our customers to ask them what they most needed from us. Aside from occasional inside sales support, I usually wasn’t involved in general market research.
  • Costco identifies certain markets where it seeks to place new stores. As of a few years ago, they were targeting populations of a certain size, with household incomes of at least $90,000/year, with enough space to easily store large purchases of goods. They may limit their locations to be within reasonable range of their existing logistics network. The company has probably built stores in most areas that already meet their criteria, and seeks growing areas for new locations. Similarly, I’ve watched the growth of the CVS, Walgreens, and Starbucks chains over the last twenty years, and they definitely seem to follow similar site-selection patterns.
  • Professional and college sports teams scout potential players from lower leagues and occasionally other sports and activities. At one time, many NFL kickers started out playing soccer (what everyone outside the US and Canada calls football).
  • Students and families regularly consult many resources when selecting colleges and universities to attend, majors to pursue, the costs of doing so, and the availability of financial aid and scholarships. Of late (too late, in my opinion), more emphasis has been placed on analyzing the economic value of various degrees, to see if the value proposition makes sense for some fields.

The Requirements Life Cycle: Management and Reuse

Just as systems, products, and engagements have life cycles, requirements do as well. It’s easy to look at a requirements traceability matrix and imagine that all requirements spring magically anew from the ether during each engagement.

Let’s look at some considerations that drive requirements creation and reuse.

  • Situation-specific requirements are the unique requirements that are identified and managed for the specific, local conditions and needs encountered in each engagement. Even if two different customers end up asking for and needing the exact same thing, the process of eliciting each of their expressed requirements is unique for that engagement. Most other types of requirements can be reused from engagement to engagement and from project to project and from release to release.
  • Internal solution requirements are those related to the solution offered by the engagement team. Vendors, consultants, and even internal solution providers tend not to develop solutions from a completely blank slate for every engagement. They tend to apply variations on a limited range of solution components from their areas of specialization. For example, I spent most of my career working for vendors and consultants offering particular kinds of solutions, e.g., turnkey thermo-mechanical pulping systems, nuclear power plant simulators, Filenet document imaging solutions, furnaces and control systems for the metals industry, operations research simulations, and so on. Other solution teams will apply different components and solutions for different areas of endeavor. Each of those solution offerings has its own implicit requirements that the customer must understand. My company may include a series of 22,000-horsepower double-disc refiners in its solution, but it’s also understood that the customer has to provide a certain kind of support flooring, drainage, access clearance, electrical power, water for cooling and sealing and lubrication, and so on. So actually, requirements can go in both directions (customer-to-solution team, and solution team-to-customer). Each standard component specified for a solution may carry its own standard (reused) and situation-specific (unique) requirements.
  • Implementation tool (programming language, database system, communications protocols) requirements may be specified by customers for compatibility with other systems they operate. The furnace company I worked for provided fairly consistent solutions using similar logic and calculations, but we had to implement our Level One systems using a low-level industrial UI package specified by the customer (e.g., Wonderware or Allen-Bradley), and our Level Two supervisory control systems (my specialty) had to be written in a specified programming language (usually FORTRAN, C, or C++ at that time, and often from a specified vendor, e.g., Microsoft, DEC, or Borland, though I did at least one in Delphi/Pascal when I had the choice). Similarly, our systems had to interface with other systems using customer-specified communications protocols, and also had to interface with the customer’s plantwide DBMS system (e.g., Oracle, though many others were possible).
  • Units requirements come into play when systems have to deal with different currencies or systems of measurement. When I used to write simulation-based, supervisory control systems for metals furnaces, the customers would request that some systems use English (e.g., American!) units while the remainder of systems had to use SI (metric) units.
  • User Interface and Look and Feel requirements define consistent colors, logos, controls, layouts, and components that ensure an organization’s offerings provide a consistent user experience. This helps build messaging and branding among external users and customers and helps all users by reducing training costs and times.
  • Financial requirements relate to Generally Accepted Accounting Principles (GAAP), methods of payment, currencies handled, taxes, payment terms and windows, withholding and escrow, regulations and reporting rules, guidelines for calculating fringe benefits and G&A and overhead, definitions for parameters used in modular/definable business rules, security for account and PII information and communication, storage and logging and backup of transactional data, access control for different personnel and users, and more.
  • Methodological requirements may govern the way different phases of an engagement are carried out. This is especially germane to the work of external vendors and consultants. Particularly in cases where I did discovery and data collection at medical offices, airports, and land border ports of entry, our contracts included language describing how we needed to take pictures, record video, obtain drawings and ground plans, and conduct interviews with key personnel. Numerous requirements may be specified about how testing will be conducted and standards of acceptance. One Navy program I supported required that we follow a detailed MILSPEC for performing a full-scale independent VV&A exercise. Methodological requirements are depicted on the RTM figure above as the items and lines at the bottom.
  • Ongoing system requirements come into play when existing systems are maintained and modified. Many requirements for the originally installed system are likely to apply to post-deployment modifications.
  • Non-functional requirements for system performance, levels of service, reliability, maintainability, and so on may apply across multiple efforts.

Requirements can come from a lot of places. While my framework addresses the active identification of requirements during an engagement, many of the requirements come implicitly from the knowledge and experience of the participants, and many others come explicitly from contracts governing engagements (at least for external solution teams). Many standard contracts are continuously accreting collections of various kinds of requirements.

What additional classes and sources of requirements can you think of?

Data Mining

Data mining is the processing of large quantities of data to glean useful insights and support decision-making. Descriptive techniques like generating graphical depictions or applying other methods allow users to identify patterns, trends, or clusters. Diagnostic techniques like decision trees or segmentation can show why patterns exist. Predictive techniques like regression or neural networks can guide predictions about future outcomes. The latter are the general purview of machine learning and (still-nascent-and-will-remain-so-for-a-long-time) AI techniques, along with simulation and other algorithms.

Data mining exercises can be described as top-down if the goal is to develop and tune an operational algorithm, or bottom-up if the goal is to discover patterns. They are said to be unsupervised if algorithms are applied blindly, where investigators don’t know what they’re looking for, to see if any obvious patterns emerge. They are said to be supervised when techniques are applied to see if they turn up or confirm something specific.

This figure from my friend Richard Frederick shows these techniques in a range of increasing power and maturity. Different organizations and processes fall all along this progression.

Data comes from many sources. I describe processes of discovery and data collection in the conceptual modeling phase of my analytic framework, but data collection occurs in many other contexts as well, most notably in the operation of deployed systems. Forensic, one-off, and special investigations will tend to run as standalone efforts (possibly using my framework). Mature, deployed systems, by contrast, will collect, collate, and archive data that are processed on an ongoing basis. Development and tuning of a data mining process will be conducted on a project basis, and it will thereafter be used on an operational basis.

Development and deployment of a data mining process needs to follow these steps (per the BABOK).

  1. Requirements Elicitation: This is where the problem to be solved (or the decision to be made) and the approach to be taken are identified.
  2. Data Preparation: Analytical Dataset: This involves collecting, collating, and conditioning the data. If the goal is to develop and tune a specific operational algorithm, then the data has to be divided into three independent segments. One is used for the initial analysis, another is used for testing, and the other for final confirmation. (A minimal splitting sketch appears after this list.)
  3. Data Analysis: This is where most of the creative analytical work is performed. Analyses can be performed to identify the optimal values for every governing parameter, both individually and in combination with others.
  4. Modeling Techniques: A wide variety of algorithms and techniques may be applied. Many may be tried in a single analysis in order to identify the best model for deployment. Such techniques range from simple (e.g., linear regression) to very complex (e.g., neural networks), and care should be taken to ensure that the algorithms and underlying mathematics are well understood by a sufficient number of participants and stakeholders.
  5. Deployment: The developed and tuned algorithms must be integrated into the deployed system so that they absorb and process operational data and produce actionable output. They can be implemented in any language or tool appropriate to the task. Some languages are preferred for this work, but anything can be used for compatibility with the rest of the system if desired.
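Referring back to the data preparation step, here is a minimal sketch of dividing a dataset into three independent segments; the 60/20/20 proportions, the dataset size, and the fixed seed are arbitrary choices for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000   /* hypothetical number of records */

    int main(void)
    {
        /* Indices into the dataset; in practice these would reference real rows. */
        int idx[N];
        for (int i = 0; i < N; i++) idx[i] = i;

        /* Fisher-Yates shuffle with a fixed seed so the split is reproducible. */
        srand(42);
        for (int i = N - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
        }

        /* Arbitrary 60/20/20 split: initial analysis, testing, final confirmation. */
        int n_train = (int)(0.6 * N);
        int n_test  = (int)(0.2 * N);
        int n_conf  = N - n_train - n_test;

        printf("analysis: %d, testing: %d, confirmation: %d\n",
               n_train, n_test, n_conf);
        /* idx[0 .. n_train-1]                -> analysis set
           idx[n_train .. n_train+n_test-1]   -> testing set
           idx[n_train+n_test .. N-1]         -> confirmation set */
        return 0;
    }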

The figure below shows how a data mining exercise could lead to development and tuning of an analytical capability meant to support some kind of decision, based on operational data from an existing system. It further suggests how the developed and tuned capability could be deployed to the operational system as an integrated part of its ongoing function.

There are many ways data can be mined. Let’s look at some in order of increasing complexity.

  • Regression and Curve-Fitting: These techniques allow analysts to interpolate and extrapolate based on fairly straightforward, essentially one-dimensional or two-dimensional data. For example, the number of customers served at a location may be predicted using a linear extrapolation derived from the number served over some number of prior time periods (a small sketch appears after this list).
  • Correlations and Associations: These allow analysts to understand whether a cause-and-effect relationship exists (with the proviso that correlation is not necessarily causation) or whether potential affinities exist (if customers like A they might like B and C), based on potentially many parallel streams of data.
  • Neural Nets and Deep Learning: These techniques allow systems to learn to sense, separate, and recognize objects and concepts based on dense but coherent streams of data. Examples include classifying sounds by frequency (different from simple high- and low-pass filters) and identifying objects in an image.
  • Semantic Processing: This involves associating data from many disparate sources based on commonalities like location, group membership, behaviors, and so on.
  • Operations Research Simulations: These potentially complex systems can help analysts design and size systems to provide a set level of service in a specified percentage of situations. For example, it may be enough to design a system that will result in serving customers with no more than a twenty-minute wait eighty percent of the time, on the theory that building a system with enough extra capacity to ensure waits are less than twenty minutes in all cases would be both expensive and wasteful.
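As a minimal sketch of the regression bullet above, here is an ordinary least-squares line fit to a short series of invented monthly customer counts, extrapolated one period ahead.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical customers served in each of six prior months. */
        const double y[] = { 410, 432, 445, 470, 488, 503 };
        const int n = sizeof(y) / sizeof(y[0]);

        /* Ordinary least squares for y = a + b*x, with x = 0 .. n-1. */
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += i; sy += y[i]; sxx += (double)i * i; sxy += i * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* slope     */
        double a = (sy - b * sx) / n;                          /* intercept */

        /* Extrapolate to the next month (x = n). */
        printf("fit: y = %.2f + %.2f*x, forecast for month %d: %.1f\n",
               a, b, n + 1, a + b * n);
        return 0;
    }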

Considering this from a different angle, let us look at a maintenance process. We might examine data to determine which parts fail most often so we can improve them or keep more replacements on hand. We might see whether we can extend the time between scheduled maintenance events without incurring more failures. Data from a series of real-time sensor systems installed on a machine, in conjunction with historical data, might be able to warn of impending failure so operations can be halted and repairs effected before a major failure occurs. Numerous sources of data can be scanned to identify issues not seen via other means (social media discussions of certain vehicles, sales of consumables by season, rescue calls concentrated in certain regions, locations, or environments).

Numerous resources provide additional information, including this Wikipedia entry.
