Benchmarking and Market Analysis

Benchmarking involves learning about activities and characteristics across industries, organizations, products, methodologies, and technologies to identify best practices, product options, and competitive requirements. Benchmarking may be performed by comparing the presence or absence of features (which video editing programs can burn to Blu-ray discs?) and also by comparing the magnitudes of various features (which car has the quickest 0-to-60 time?).
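
Both kinds of comparison can be captured in a simple comparison matrix. The sketch below is illustrative only: the product names, features, and numbers are all invented, not drawn from any real benchmark.

```python
# Hypothetical benchmarking matrix: boolean feature presence and measured
# magnitudes for competing products. All names and values are invented.
products = {
    "Editor A": {"burns_bluray": True,  "export_formats": 12, "render_secs": 95},
    "Editor B": {"burns_bluray": False, "export_formats": 18, "render_secs": 70},
    "Editor C": {"burns_bluray": True,  "export_formats": 9,  "render_secs": 110},
}

def feature_gap(products, feature):
    """Which products lack a boolean feature?"""
    return [name for name, feats in products.items() if not feats[feature]]

def best_in_class(products, feature, lower_is_better=False):
    """Which product leads on a measured (magnitude) feature?"""
    pick = min if lower_is_better else max
    return pick(products, key=lambda name: products[name][feature])

print(feature_gap(products, "burns_bluray"))
print(best_in_class(products, "render_secs", lower_is_better=True))
```

Gap analysis against a market leader then amounts to diffing your own row against the `best_in_class` results.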

The BABOK lists the following elements related to benchmarking.

  • Identifying what to study
  • Identifying market leaders
  • Learning what others are doing
  • Requesting information to learn about capabilities
  • Learning during plant visits
  • Performing gap analysis vs. market leaders
  • Developing proposals to implement best practices

Here are some examples of benchmarking.

  • When Ford released its Taurus model in 1986, winning Motor Trend’s Car of the Year Award, the design team had examined one hundred different aspects of other vehicles in its class to identify features to include and improve upon. I owned models from 1986 and 1990 and was always impressed that they included a covered storage compartment in the middle of the rear deck behind the back seat, an area which is almost always empty and neglected. I stored the car’s maintenance manuals there.
  • When a company I worked for was contracted to develop a building evacuation model, I conducted an extensive online literature search to learn what work had been performed previously along those lines. I turned up numerous methodologies, research papers, case studies, modeling techniques, and more. I later listed the parameters needed to specify and control the evacuation environment and moving entities, and the user interfaces needed to define and modify them.
  • The first engineering company I worked for introduced me to a really neat way of sharing information. The pulp and paper industry embodied a huge amount of empirical knowledge about the behavior and processing of wood fibers and the related equipment. My director would gather up a huge folder of reading material every two to four weeks and circulate it around to everyone in the department, complete with a checksheet to indicate that each engineer had taken the time to read through the materials. The magazines discussed some information that would more properly be considered market research, but that was a bit over my head at the time.
  • Government and other (usually large) entities will sometimes issue Requests for Information (RFIs) to learn about capabilities of potential vendors, suppliers, and consultants that may be able to help them solve certain problems.

Market Analysis involves studying customers, competitors, and the market environment to determine what opportunities exist and how to address them.

The BABOK lists the following elements related to market analysis.

  • Identifying customers and preferences
  • Identifying opportunities to increase value
  • Studying competitors and their operations
  • Examining market trends in capabilities and sales
  • Defining strategies
  • Gathering market data
  • Reviewing existing information
  • Reviewing data to reach conclusions

Here are some examples of market analysis.

  • The Kano Model of quality seeks to understand the voice of the customer (VOC). It provides a framework for measuring customer satisfaction and determining when improvement is needed. It plots features as shown below. It categorizes aspects of a product or service as dissatisfiers, satisfiers, and delighters. Items should be prioritized by addressing dissatisfiers first, then satisfiers, and finally delighters. Think of a hotel room. Customers may expect it to be clean and have a desk, an ironing board, and a blow dryer, and if any of those things are missing or otherwise not right the customer will be unhappy. A hotel room is generally something where you cannot be surprised to the upside, but only to the downside. That said, free cookies, an exceptionally friendly staff, or unusually good WiFi may constitute a delighter under the right circumstances.
  • The companies I worked for usually did custom consulting and product development, but I observed that we might get more financial leverage by building a standalone product we could sell many times. The company then developed such a product. Never mind that they sold all of one unit.
  • The company I worked for that made HVAC controls sponsored an in-person conference with many of our customers to ask them what they most needed from us. Aside from occasional inside sales support, I usually wasn’t involved in general market research.
  • Costco identifies certain markets where it seeks to place new stores. As of a few years ago, they were targeting populations of a certain size, with household incomes of at least $90,000/year, with enough space to easily store large purchases of goods. They may limit their locations to be within reasonable range of their existing logistics network. The company has probably built stores in most areas that already meet their criteria, and seeks growing areas for new locations. Similarly, I’ve watched the growth of the CVS, Walgreens, and Starbucks chains over the last twenty years, and they definitely have well-established criteria for growth and expansion.
  • Professional and college sports teams scout potential players from lower leagues and occasionally other sports and activities. At one time, many NFL kickers started out playing soccer (what everyone outside the US and Canada calls football).
  • Students and families regularly consult many resources when selecting colleges and universities to attend, majors to pursue, the costs of doing so, and the availability of financial aid and scholarships. Of late (too late, in my opinion), more emphasis has been placed on analyzing the economic value of various degrees, to see if the value proposition makes sense for some fields.
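
The Kano prioritization described in the first bullet above (dissatisfiers first, then satisfiers, then delighters) can be sketched as a categorized sort. The hotel-room features and their category assignments here are illustrative assumptions.

```python
# Kano-style prioritization sketch: hypothetical hotel-room features tagged
# with a Kano category. Dissatisfiers are addressed first, delighters last.
KANO_PRIORITY = {"dissatisfier": 0, "satisfier": 1, "delighter": 2}

features = [
    ("free cookies", "delighter"),
    ("clean room",   "dissatisfier"),
    ("fast WiFi",    "satisfier"),
    ("working iron", "dissatisfier"),
]

def kano_order(features):
    """Return feature names sorted dissatisfiers -> satisfiers -> delighters.
    Python's sort is stable, so ties keep their original relative order."""
    return [name for name, cat in
            sorted(features, key=lambda f: KANO_PRIORITY[f[1]])]

print(kano_order(features))
```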
Posted in Tools and methods | Tagged , | Leave a comment

The Requirements Life Cycle: Management and Reuse

Just as systems, products, and engagements have life cycles, requirements do as well. It’s easy to look at a requirements traceability matrix and imagine that all requirements spring magically anew from the ether during each engagement.

Let’s look at some considerations that drive requirements creation and reuse.

  • Situation-specific requirements are the unique requirements that are identified and managed for the specific, local conditions and needs encountered in each engagement. Even if two different customers end up asking for and needing the exact same thing, the process of eliciting each of their expressed requirements is unique for that engagement. Most other types of requirements can be reused from engagement to engagement and from project to project and from release to release.
  • Internal solution requirements are those related to the solution offered by the engagement team. Vendors, consultants, and even internal solution providers tend not to develop solutions from a completely blank slate for every engagement. They tend to apply variations on a limited range of solution components from their areas of specialization. For example, I spent most of my career working for vendors and consultants offering particular kinds of solutions, e.g., turnkey thermo-mechanical pulping systems, nuclear power plant simulators, Filenet document imaging solutions, furnaces and control systems for the metals industry, operations research simulations, and so on. Other solution teams will apply different components and solutions for different areas of endeavor. Each of those solution offerings has its own implicit requirements that the customer must understand. My company may include a series of 22,000 horsepower double-disc refiners in its solution, but it’s also understood that the customer has to provide a certain kind of support flooring, drainage, access clearance, electrical power, water for cooling and sealing and lubrication, and so on. So actually, requirements can go in both directions (customer-to-solution team, and solution team-to-customer). Each standard component specified for a solution may carry its own standard (reused) and situation-specific (unique) requirements.
  • Implementation tool (programming language, database system, communications protocols) requirements may be specified by customers for compatibility with other systems they operate. The furnace company I worked for provided fairly consistent solutions using similar logic and calculations, but we had to implement our Level One systems using a low-level industrial UI package specified by the customer (e.g., Wonderware or Allen-Bradley), and our Level Two supervisory control systems (my specialty) had to be written in a specified programming language (usually FORTRAN, C, or C++ at that time, and often from a specified vendor, e.g., Microsoft, DEC, or Borland, though I did at least one in Delphi/Pascal when I had the choice). Similarly, our systems had to interface with other systems using customer-specified communications protocols, and also had to interface with the customer’s plantwide DBMS system (e.g., Oracle, though many others were possible).
  • Units requirements come into play when systems have to deal with different currencies or systems of measurement. When I used to write simulation-based, supervisory control systems for metals furnaces, the customers would request that some systems use English (i.e., American!) units while the remainder of systems had to use SI (metric) units.
  • User Interface and Look and Feel requirements define consistent colors, logos, controls, layouts, and components that ensure an organization’s offerings provide a consistent user experience. This helps build messaging and branding among external users and customers and helps all users by reducing training costs and times.
  • Financial requirements relate to Generally Accepted Accounting Principles (GAAP), methods of payment, currencies handled, taxes, payment terms and windows, withholding and escrow, regulations and reporting rules, guidelines for calculating fringe benefits and G&A and overhead, definitions for parameters used in modular/definable business rules, security for account and PII information and communication, storage and logging and backup of transactional data, access control for different personnel and users, and more.
  • Methodological requirements may govern the way different phases of an engagement are carried out. This is especially germane to the work of external vendors and consultants. Particularly in cases where I did discovery and data collection at medical offices, airports, and land border ports of entry, our contracts included language describing how we needed to take pictures, record video, obtain drawings and ground plans, and conduct interviews with key personnel. Numerous requirements may be specified about how testing will be conducted and standards of acceptance. One Navy program I supported required that we follow a detailed MILSPEC for performing a full-scale independent VV&A exercise. Methodological requirements are depicted in the RTM figure above as the items and lines at the bottom.
  • Ongoing system requirements come into play when existing systems are maintained and modified. Many requirements for the originally installed system are likely to apply to post-deployment modifications.
  • Non-functional requirements for system performance, levels of service, reliability, maintainability, and so on may apply across multiple efforts.
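
One common way to satisfy units requirements like those in the list above is to compute and store everything internally in SI and convert only at the display boundary. This is a general convention I am illustrating, not necessarily how the systems described above were built, and the furnace-zone temperature is an invented value.

```python
# Units-requirement sketch: internal values are SI (Celsius here); conversion
# happens only when formatting for a given customer's unit system.
TO_DISPLAY = {
    "SI":      {"temp": lambda c: c,              "temp_unit": "degC"},
    "English": {"temp": lambda c: c * 9 / 5 + 32, "temp_unit": "degF"},
}

def format_temperature(celsius, unit_system="SI"):
    """Format an internal (SI) temperature for the requested unit system."""
    conv = TO_DISPLAY[unit_system]
    return f"{conv['temp'](celsius):.1f} {conv['temp_unit']}"

print(format_temperature(850.0, "English"))   # invented furnace-zone value
print(format_temperature(850.0, "SI"))
```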

Requirements can come from a lot of places. While my framework addresses the active identification of requirements during an engagement, many of the requirements come implicitly from the knowledge and experience of the participants, and many others come explicitly from contracts governing engagements (at least for external solution teams). Many standard contracts are continuously accreting collections of various kinds of requirements.

What additional classes and sources of requirements can you think of?


Data Mining

Data mining is the processing of large quantities of data to glean useful insights and support decision-making. Descriptive techniques like graphical depictions and other summary methods allow users to identify patterns, trends, or clusters. Diagnostic techniques like decision trees or segmentation can show why patterns exist. Predictive techniques like regression or neural networks can guide predictions about future outcomes. The latter are the general purview of machine learning and (still-nascent-and-will-remain-so-for-a-long-time) AI techniques, along with simulation and other algorithms.

Data mining exercises can be described as top-down if the goal is to develop and tune an operational algorithm, or bottom-up if the goal is to discover patterns. They are said to be unsupervised if algorithms are applied blindly where investigators don’t know what they’re looking for, to see if any obvious patterns emerge. They are said to be supervised when techniques are applied to see if they turn up or confirm something specific.

This figure from my friend Richard Frederick shows these techniques in a range of increasing power and maturity. Different organizations and processes fall all along this progression.

Data comes from many sources. I describe processes of discovery and data collection in the conceptual modeling phase of my analytic framework, but data collection occurs in many other contexts as well, most notably in the operation of deployed systems. Forensic, one-off, and special investigations will tend to run as standalone efforts (possibly using my framework). Mature, deployed systems, by contrast, will collect, collate, and archive data that are processed on an ongoing basis. Development and tuning of a data mining process will be conducted on a project basis, and it will thereafter be used on an operational basis.

Development and deployment of a data mining process should follow these steps (per the BABOK).

  1. Requirements Elicitation: This is where the problem to be solved (or the decision to be made) and the approach to be taken are identified.
  2. Data Preparation: Analytical Dataset: This involves collecting, collating, and conditioning the data. If the goal is to develop and tune a specific operational algorithm, then the data has to be divided into three independent segments: one for the initial analysis and tuning, one for testing candidate models, and one held back for final confirmation.
  3. Data Analysis: This is where most of the creative analytical work is performed. Analyses can be performed to identify the optimal values for every governing parameter, both individually and in combination with others.
  4. Modeling Techniques: A wide variety of algorithms and techniques may be applied. Many may be tried in a single analysis in order to identify the best model for deployment. Such techniques range from simple (e.g., linear regression) to very complex (e.g., neural networks), and care should be taken to ensure that the algorithms and underlying mathematics are well understood by a sufficient number of participants and stakeholders.
  5. Deployment: The developed and tuned algorithms must be integrated into the deployed system so that they absorb and process operational data and produce actionable output. They can be implemented in any language or tool appropriate to the task. Some languages are preferred for this work, but anything can be used for compatibility with the rest of the system if desired.
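
The three independent data segments mentioned in step 2 are conventionally called the training, validation, and (holdout) test sets. A minimal sketch of such a split, with a seeded shuffle for reproducibility:

```python
import random

def split_dataset(records, val_frac=0.2, test_frac=0.2, seed=42):
    """Divide data into three independent segments: one for initial analysis
    and tuning (train), one for testing candidate models (validation), and
    one held back for final confirmation (test)."""
    rows = list(records)
    random.Random(seed).shuffle(rows)   # seeded so the split is reproducible
    n = len(rows)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = rows[:n_test]
    val = rows[n_test:n_test + n_val]
    train = rows[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))   # 60 20 20
```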

The figure below shows how a data mining exercise could lead to development and tuning of an analytical capability meant to support some kind of decision, based on operational data from an existing system. It further suggests how the developed and tuned capability could be deployed to the operational system as an integrated part of its ongoing function.

There are many ways data can be mined. Let’s look at some in order of increasing complexity.

  • Regression and Curve-Fitting: These techniques allow analysts to interpolate and extrapolate based on fairly straightforward, essentially one-dimensional or two-dimensional data. For example, the number of customers served at a location may be predicted using a linear extrapolation derived from the number served from some number of prior time periods.
  • Correlations and Associations: These allow analysts to understand whether a cause-and-effect relationship exists (with the proviso that correlation is not necessarily causation) or whether potential affinities (if customers like A they might like B and C), based on potentially many parallel streams of data.
  • Neural Nets and Deep Learning: These techniques allow systems to learn to sense, separate, and recognize objects and concepts based on dense but coherent streams of data. Examples include classifying sounds by frequency (different from simple high- and low-pass filters) and identifying objects in an image.
  • Semantic Processing: This involves associating data from many disparate sources based on commonalities like location, group membership, behaviors, and so on.
  • Operations Research Simulations: These potentially complex systems can help analysts design and size systems to provide a set level of service in a specified percentage of situations. For example, it may be enough to design a system that will result in serving customers with no more than a twenty-minute wait eighty percent of the time, on the theory that building a system with enough extra capacity to ensure waits are less than twenty minutes in all cases would be both expensive and wasteful.
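
The service-level design question in the last bullet can be explored with even a very small simulation. The sketch below models a single-server FIFO queue with exponential arrival and service times; all parameters are invented, and a real operations research study would model the system in far more detail.

```python
import random

def fraction_within(wait_limit=20.0, n_customers=10_000,
                    mean_interarrival=5.0, mean_service=4.0, seed=1):
    """Single-server FIFO queue: fraction of customers whose time waiting in
    line is no more than wait_limit minutes. All parameters are invented."""
    rng = random.Random(seed)
    clock = 0.0     # arrival time of the current customer
    free_at = 0.0   # time at which the server next becomes free
    within = 0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0 / mean_interarrival)
        wait = max(0.0, free_at - clock)
        if wait <= wait_limit:
            within += 1
        # service starts when both the customer and the server are available
        free_at = max(free_at, clock) + rng.expovariate(1.0 / mean_service)
    return within / n_customers

print(f"{fraction_within():.1%} of customers wait {20} minutes or less")
```

With these invented parameters the server is busy about 80% of the time, so the simulated fraction lands well below 100%, illustrating the design trade-off described above.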

Considering this from a different angle, let us look at a maintenance process. We might examine data to determine which parts fail most often so we can improve them or keep more replacements on hand. We might see whether we can extend the time between scheduled maintenance events without incurring more failures. Data from a series of real-time sensor systems installed on a machine, in conjunction with historical data, might be able to warn of impending failure so operations can be halted and repairs effected before a major failure occurs. Numerous sources of data can be scanned to identify issues not seen via other means (social media discussions of certain vehicles, sales of consumables by season, rescue calls concentrated in certain regions, locations, or environments).

Numerous resources provide additional information, including this Wikipedia entry.


State Modeling

What do we mean by state?

PvT state diagram copied from: http://www.mhtlab.uwaterloo.ca/courses/ece309/lectures/notes/S16_chapall_web.pdf

We commonly think of the states of matter as solid, liquid, and gaseous, and we know that these are dependent on temperature. For water at atmospheric pressure, we refer to these states as ice, liquid water, and steam, but every substance can assume the same states, even though they may do so at very different temperatures (and pressures). If we know the temperature, pressure, and specific volume of a substance, we also know (something about) its specific energy, and that uniquely defines the state of the substance.
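
At atmospheric pressure, the state-from-temperature idea can be sketched as a trivial lookup. This deliberately ignores mixed-phase (saturation) conditions right at the boundaries.

```python
def water_state_at_1atm(temp_celsius):
    """Grossly simplified: phase of water at standard atmospheric pressure,
    ignoring mixed/saturated states at the 0 and 100 degC boundaries."""
    if temp_celsius < 0:
        return "ice"
    if temp_celsius < 100:
        return "liquid water"
    return "steam"

print(water_state_at_1atm(-10), water_state_at_1atm(25), water_state_at_1atm(120))
```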

Other states of matter are possible, with plasma being probably the best known. You can think of plasma as the next hottest state above gaseous, but where the electrons have too much energy to remain in orbit around their nuclei. You can think of the Bose-Einstein condensate as a “fifth state,” the next coldest below solid, where none of the particles have enough energy to hold together as atoms. These states and many others are described in this Wikipedia article.

The behavior of the substance depends on its state, and the state of the substance depends on other characteristics, which are not themselves states.

States in Object Models

If we think of water as an object, it can have many characteristics. It can have the properties of temperature, pressure, specific volume, and specific energy (define any three and the fourth is determined as well, with some complications we won’t go into, mostly having to do with quantities of substance existing in multiple states simultaneously in various conditions of saturation). We specify a mass or particular volume of water to instantiate a particular object, as opposed to describing something abstract.

From these pieces of information we know what state (or sometimes states) the water is in, and then we know how it will behave. In object oriented programming, an attribute is a variable defined within an object that can take on different values. Most of the attributes in this example of a particular volume of water are independent. The state of the water is dependent on other attributes of the water. Other actions, then, are allowed, disallowed, or otherwise governed by the derived state.

State for Systems and Business Analysts

Entities can move through, move within, or remain stationary within a process, and they can change states in any of those situations. A state is a variable condition an entity can be in that affects its behavior. An entity (or object) can have multiple attributes that each reflect different possible states. Let’s look at some examples.

Here I use the term “entity” in a more general sense than what is labeled in the diagram above. For this discussion, an entity can enter, move through, and exit from a system. Think of documents being handled in a business process. An entity can also move within the system but never enter or leave, as in the case of an airport shuttle car. Stations, facilities, queues (and bags) all remain stationary, but can be entities in this sense as well. All can have different characteristics that can define states.

States of entities are defined to determine what actions can and should be performed and what decisions can be made. States of entities in systems can be defined in as many ways as your imagination can dream up — as long as everyone understands them. Here are some examples of states and how they can be defined:

  • A customer’s monthly payment status can be defined by whether payment has been received by a due date or, if payment is late, how long overdue the payment is. Different actions will be taken depending on the time of full payment, partial payment, passing defined thresholds of lateness, and so on.
  • Multiple pieces of information might be required to make certain decisions or take certain actions. States can be defined to describe the ability to make the decision or take the action based on the disposition of all of the required data items.
  • Different actions may only be supported when operations are open. Operating hours can determine open/closed states, thermostat settings, availability of certain resources, and so on.
  • States can be defined by quantities. Stocked items may need to be reordered or replenished when quantities fall to or below defined levels. A vending machine must know when it has received enough cash, coinage, or electronic payment to complete the sale and dispense product, and also give change or cancel the interaction and reassume a waiting state if appropriate.
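
The first bullet’s payment-status example can be sketched as a derived state computed from other attributes. The thresholds and state names here are invented for illustration.

```python
def payment_state(days_overdue, amount_due, amount_paid):
    """Derived state for a monthly account. The 30/90-day thresholds and
    the state names are invented, not from any real billing system."""
    if amount_paid >= amount_due:
        return "paid"
    if days_overdue <= 0:
        return "current"
    if days_overdue <= 30:
        return "late"
    if days_overdue <= 90:
        return "delinquent"
    return "collections"

print(payment_state(0, 100.0, 100.0))
print(payment_state(45, 100.0, 20.0))
```

Different collection actions would then be keyed off the returned state rather than off the raw attributes.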

State Models

A state model defines:

  • The possible states an entity can be in: Any attribute can be a state that can have multiple values, all of which must be defined. An entity can have many different attributes that each define multiple states.
  • The sequence of states the entity can be in: An entity (or attribute of an entity) in a given state may not be able to arbitrarily transition into every other state. Rules for defining what is possible must be defined. This can be done using tables, diagrams (see below), or other means.
  • How the entity changes states: All actions associated with transitioning into and out of each state must be defined.
  • The events and conditions that cause the entity to change states: The mechanism(s) causing states to change must be defined. Sometimes an event will change a state directly, and at other times an event must scan other information to determine whether a state change has occurred.
  • The actions permitted or required by an entity in each state: Some actions may not be allowed if an entity is in certain states. Other actions must be performed if an entity is in certain states. The latter may happen when the state transition happens, while the entity is in the state, or before the entity is allowed to transition out of that state.
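
These elements (possible states, allowed transitions, and actions taken on transition) can be sketched as a tiny state model class. The vending-machine states echo the earlier example; the transition set is an invented simplification.

```python
class StateModel:
    """Minimal state model: a table of allowed transitions plus an action
    (here, logging) performed on each transition. States and transitions
    are an invented vending-machine example."""

    TRANSITIONS = {
        "waiting":           {"accepting_payment"},
        "accepting_payment": {"dispensing", "waiting"},  # cancel -> waiting
        "dispensing":        {"waiting"},
    }

    def __init__(self):
        self.state = "waiting"
        self.log = []

    def transition(self, new_state):
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.log.append((self.state, new_state))  # action on transition
        self.state = new_state

m = StateModel()
m.transition("accepting_payment")
m.transition("dispensing")
m.transition("waiting")
print(m.log)
```

Attempting an undefined transition (say, dispensing before payment) raises an error, which is exactly the kind of rule the model is meant to enforce.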

The figure below is a kind of state transition diagram that shows which states can be transitioned into from which prior states, and also the relative probability of each transition’s occurrence. Many types of state transition diagrams, tables, and other descriptions are possible.
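
The relative probabilities in such a diagram can be captured as a weighted transition table and sampled, Markov-chain style. The states and weights below are invented.

```python
import random

# Relative transition weights, as in a probabilistic state transition
# diagram. State names and weights are invented for illustration.
WEIGHTED_TRANSITIONS = {
    "idle":    [("busy", 3), ("idle", 1)],
    "busy":    [("idle", 1), ("busy", 2), ("blocked", 1)],
    "blocked": [("busy", 1)],
}

def next_state(state, rng):
    """Sample the next state in proportion to the outgoing weights."""
    targets, weights = zip(*WEIGHTED_TRANSITIONS[state])
    return rng.choices(targets, weights=weights)[0]

rng = random.Random(7)
state = "idle"
path = [state]
for _ in range(5):
    state = next_state(state, rng)
    path.append(state)
print(path)
```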

It’s Tricky (h/t Run DMC)

A system as complicated as an aircraft can have many different attributes which each define a state. States can be defined in relation to other variables, and even based on other states. The diagram above shows just a small subset of attributes which may describe an aircraft and the states it may assume. What questions does the figure bring up in your mind?

These states and actions can become very, very complicated, and great care may need to be taken with language and definitions. The analysts, customers, implementers, testers, and reviewers may need to work very closely together. Analysts should be keenly aware of this complexity and ensure that the right conversations are had and the right questions are asked and answered.


Reviews

Reviews are about examining some output or artifact for quality and agreement. I primarily think of this process in terms of my framework, as shown in the figure below, but many contexts are valid. In my framework, review means building up the relevant work products in each phase and having them reviewed by the customer and reworked by the solution team until they are accepted and approved by all participants. I often talk about how the framework involves continuous iteration within and between phases, and this is always based on different kinds of reviews.

Here’s how review tends to work within each phase. For clarity the activities are described from the point of view of an external solution team developing something for a customer, but it should be understood that the line between the solution team and the customer can vary from sharply defined (I spent most of my career functioning as an external vendor or consultant) to nonexistent (internal groups can analyze and develop solutions for their own problems).

  • Intended Use (Problem Definition): The team helps the customer ensure they’ve identified the problem and purpose correctly.
  • Conceptual Model: The team documents the results of its discovery and data collection processes, and then the customer verifies that the team has achieved accurate and complete understanding of the customer’s processes and vocabulary.
  • Requirements: The team elicits requirements from the customer and collaborates to ensure that the most complete and accurate possible list of both functional and non-functional requirements is generated.
  • Design: The team proposes a design and the customer decides whether to approve it or not.
  • Implementation: The team implements the designed solution with input and continuous review by the customer.
  • Test (and Deployment and Acceptance): The team and customer complete different kinds of verifications and validations.

The customer ultimately provides the final approval or non-approval for each phase. Part of the iteration between phases involves updating the Requirements Traceability Matrix to ensure consistency horizontally between phases and logical consistency vertically within the Design/Implementation phases. Multiple individual iterative create-and-review processes may take place, serially and in parallel, within each engagement phase (that is, the figure above is a stylized, simplified representation of the overall process). Imagine everything that must be going on in an engagement involving multiple teams in a SAFe environment, as shown below.

The Scrum framework incorporates two explicit review activities. In the Sprint Review, the team describes and demonstrates the latest work increment for the customer. In the Sprint Retrospective, the team (without the customer) reviews its own working methods during the just-ended sprint. It should be understood, however, that all of the other kinds of review are still taking place.

Conducting a review involves defining the objective of the review, choosing the techniques to be used during the review, and selecting the participants in the review.

The following review techniques (named and described in the BABOK) can be used (among others).

  • Inspection: This is a formal process of reviewing some or all work products to determine whether they meet defined criteria.
  • Formal Walkthrough (Team Review): These are typically performed to review the methods and behavior of internal teams.
  • Single Issue Review (Technical Review): This is a formal review, often involving a specific technical aspect, of one outstanding area of concern.
  • Informal Walkthrough: This is a fairly informal process of reviewing an item and soliciting feedback from a small number of participants.
  • Desk Check: This informal process drafts an outside participant to perform the review.
  • Pass Around: In this process, many people review the item or items and offer feedback, usually one after the other.
  • Ad hoc: This can be any type of informal review by any type of participant.

Can you think of any additional review techniques?

Finally, the BABOK identifies the following roles. An author creates a work product or artifact. A reviewer is anyone who looks at the work product or artifact and offers feedback or approval (or non-approval). A facilitator is a neutral participant who guides a review process for others. A scribe documents the actions and results of the review.


Business Model Canvas

Like the balanced scorecard technique I described yesterday, I view the business model canvas technique as a way to get analysts to look at organizations in a different way. It is used fairly commonly (though I had never seen nor heard of it prior to encountering the material in the BABOK). It seems to me to be a tool more appropriate for higher-level officers in an organization to gain a readily accessible and somewhat standardized view of its activities, purpose, and effectiveness. That said, it’s not that business analysts cannot or should not be involved with it.

While the technique seems to be primarily intended to review organizations from within, to determine the best ways to deliver value, it occurs to me that the technique could also be used to evaluate multiple organizations in a consistent way. This would provide an efficient way to compare and contrast their most salient characteristics.

This technique is well known enough that many templates and tools exist for creating them. Here is an example from the Wikipedia entry on the technique:



The canvas contains nine sections (some versions apparently list seven), labeled per the following list, which can be filled with any kind of information that communicates in a way that all participants understand.

  • Key Partnerships: What external organizations and resources, if any, enhance the organization’s ability to meet its goals?
  • Key Activities: Primary activities that deliver value to customers in terms of value-add (customer willing to pay), non-value-add (customer not willing to pay), and business non-value-add (customer not willing to pay but required for other reason).
  • Key Resources: Physical, financial, intellectual, and human.
  • Value Proposition: What the customer pays you for (also, their cost vs. their benefit).
  • Customer Relationships: Acquisition and retention, personal vs. impersonal, in-person vs. automated, etc.
  • Channels: All modes of interaction with the customer.
  • Customer Segments: Identifying and addressing through different needs, profitability, channels, and relationship modes.
  • Cost Structure: For understanding of how to manage and improve.
  • Revenue Streams: All the ways to generate income and fees.
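Because the canvas is just nine named sections, it is easy to capture in a lightweight data structure. Here is a minimal sketch in Python (the helper function and the sample entries are my own invention, not part of the technique itself):

```python
# Illustrative representation of a business model canvas.
# Section names follow the nine-section version described above.
CANVAS_SECTIONS = [
    "Key Partnerships", "Key Activities", "Key Resources",
    "Value Proposition", "Customer Relationships", "Channels",
    "Customer Segments", "Cost Structure", "Revenue Streams",
]

def make_canvas(**entries):
    """Build a canvas dict keyed by section name, noting any sections left blank.

    Keyword arguments use lowercase underscore names, e.g. value_proposition=[...].
    """
    canvas = {section: entries.get(section.lower().replace(" ", "_"), [])
              for section in CANVAS_SECTIONS}
    missing = [section for section, items in canvas.items() if not items]
    return canvas, missing

canvas, missing = make_canvas(
    value_proposition=["Same-day delivery for local retailers"],
    customer_segments=["Small brick-and-mortar shops"],
)
print(f"{len(missing)} of {len(CANVAS_SECTIONS)} sections still to fill in")
```

A structure like this would also support the multi-organization comparison described above, since each organization's canvas has the same nine keys.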

Balanced Scorecard

I have never explicitly used this technique, but I have certainly examined problems, systems, and opportunities from all the perspectives contained within it. This is true of many techniques described in the BABOK. I often note that every formalized technique or analytical framework is just an organized way of getting you to do what you should be doing anyway, if you were particularly experienced, creative, or thorough. I further frequently note that all described techniques can be and are criticized. While these techniques can inform and clarify the thinking of an experienced practitioner, less experienced practitioners have to start somewhere, and this is an interesting technique for directing analysis to many important considerations.

As someone who has studied economics for much of my life, and especially in the last two decades, I appreciate that economics is the study of choices made under conditions of scarcity and that, contrary to many people’s impressions, those choices do not always involve money. It may be similarly tempting to evaluate all business (or, more generally, organizational) activities in terms of money, but the balanced scorecard technique forces the analyst to look at other areas. A monetary value can be placed on every activity, sure, but you have to drill down through any layers of cause and effect to see what it might be.

The balanced scorecard explores a business or system across four dimensions:

  • Financial: Inspiring analysts to look at considerations other than finance does not mean that finance doesn’t need to be considered. After all, if an organization or a process continually loses money, it won’t be around for long. (Accounting, by the way, along with the recognition that one of the three functions of money is to serve as a unit of account, is one of the great discoveries of humankind.)
  • Learning and Growth: This encompasses employee training, corporate culture, and all manner of innovation and improvement. It can even involve training the customers. Improvements in financial performance can be traced back through all activities to see how they affect the end outcomes.
  • Business Process: This involves measuring the performance of people and processes internally, and customer satisfaction externally.
  • Customer: This dimension focuses on the customers’ needs and satisfaction.

There is some overlap between those areas, but the point is to get analysts to focus on each area specifically.

Analysis of each of those dimensions involves:

  • Objectives: What is the organization trying to accomplish?
  • Measures: How can the organization determine whether it is succeeding?
  • Targets: These must be expressed in terms of things that can be measured (in theory). TQM defines “quality” as adherence to requirements (i.e., a result either meets defined standards or it does not). This attempts to turn continuum problems into discrete (yes/no, pass/fail) problems.
  • Initiatives: What activities will the organization take to improve or maximize performance within each dimension?

It may be helpful to construct a table, but since multiple objectives, measures, targets, and initiatives can be defined for each dimension, that’s probably overkill.
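If you do sketch a table, here is one hedged way it might look in code (the four dimensions follow the list above; every objective, measure, target, and initiative shown is an invented example):

```python
from collections import defaultdict

# Illustrative balanced-scorecard rows:
# (dimension, objective, measure, target, initiative)
rows = [
    ("Financial", "Reduce unit cost", "Cost per unit",
     "< $4.20", "Renegotiate supplier contracts"),
    ("Customer", "Improve satisfaction", "Net Promoter Score",
     ">= 40", "Quarterly customer interviews"),
    ("Business Process", "Cut rework", "Defect rate",
     "< 2%", "Add an inspection step"),
    ("Learning and Growth", "Broaden skills", "Share of cross-trained staff",
     ">= 60%", "Start a rotation program"),
]

# Group rows by dimension so each dimension can hold multiple entries.
by_dimension = defaultdict(list)
for dimension, *entry in rows:
    by_dimension[dimension].append(entry)

for dimension, entries in by_dimension.items():
    print(dimension)
    for objective, measure, target, initiative in entries:
        print(f"  {objective}: measure={measure}, target={target}, "
              f"initiative={initiative}")
```

Grouping by dimension rather than flattening everything into one grid is what keeps this from becoming the overkill table mentioned above: each dimension simply accumulates as many rows as it needs.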

Again, this isn’t a method I’ve ever explicitly used. As the BABOK notes, it does not take the place of other absolutely necessary forms of analysis. I believe its main utility is to inspire analysts to consider things from different points of view.

I once had the insight that most of the stuff in the training materials for my Scrum certifications was kind of beside the point, but that flipping through the three slim notebooks of course notes one morning every couple of weeks might help keep some ideas fresh in my mind. Reviewing the list of fifty BABOK techniques may serve a similar purpose.


Concept Modeling

A concept model, or conceptual model, is an abstract representation of an organization, process, system, or product. It relates the nouns and verbs and other categorizations within and between elements. It can take any arbitrary form, in contrast with the prescribed rules for formatting a mind map. It can include labels on the nodes and on the connections between them. It uses vocabulary germane to the industry, project, and engagement team.

I think of two possible definitions of what makes a concept model. In my framework, any abstract representation created during the conceptual model phase is a conceptual model. This is usually generated after the discovery operation. The other context, per the BABOK, is more general, and has to do with the makeup and contents of the model (which is usually a drawing or figure but should include all descriptive materials that communicate similar information).

I have often tended to construct these in the form of process diagrams but they can also take the form of architecture diagrams, hierarchy diagrams, representational diagrams, and hybrids of these and others. Certain drawings tend to be produced during specific phases I define in the framework. Concept models, it should come as no surprise, are produced to reflect the As-Is state in the conceptual model phase. They can also be produced to represent the abstract To-Be state in the requirements phase. Model diagrams produced in the design phase tend to be more concrete in their more detailed description of the To-Be state. Diagrams from the Implementation and Test phases will be even more concrete as they will reflect the As-Built state (which itself becomes the new As-Is state).

The foregoing discussion of abstractness and concreteness is a little different from the question of what a concept model is meant to represent. Concept models are necessarily abstract in that they are not intended to show exact details of physical objects, even if they are otherwise drawn to scale. The figure below shows a simulation model built for a land border crossing. It is laid out on a scale drawing of the ground layout of the facility, so the correct distances and movement times are modeled. This particular diagram represents the As-Is state, but a To-Be diagram prepared for the design or implementation phase would look exactly the same. What makes it a concept model is that it shows the nouns and verbs of the customer’s process using its preferred vocabulary.

A diagram of this layout, by itself, would not be considered to be a concept model.

This model might be considered a little more conceptual as it shows less detail.

The next series of diagrams shows different conceptual representations of different systems.


Extended Engagement Life Cycle

When creating a new project, engagement, or product from scratch, we should think in terms of its full life cycle. For this reason I have drawn the diagram of my business analysis engagement framework with two extra cycles. One shows an extended period of operation and maintenance, where the originally delivered capability is exercised by the customer for its intended use. During this time the system may be maintained, modified, and updated. When the delivered capability as a whole is no longer useful, it is retired and replaced. This is shown as a separate and final phase of a long-term engagement.

Here are all the possible phases in the context of a full life cycle engagement, with the two additional phases added. The new phases occur after the initial engagement has been closed out, including final delivery, acceptance, and handover to the customer. Note that different internal and external vendors or consultants may serve on different engagement teams during the initial engagement phases and the extended support and retirement phases.

The Framework:
  • Project Planning
  • Intended Use
  • Assumptions, Capabilities, and Risks and Impacts
  • Conceptual Model (As-Is State)
  • Data Sources, Collection, and Conditioning
  • Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  • Design (To-Be State: Detailed)
  • Implementation
  • Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  • Acceptance (Accreditation)
  • Project Close
  • Operation and Maintenance
  • End-of-Life and Replacement

Here is the above list in the more stylized and streamlined form I show in the main figure(s).

The Framework: Simplified
  •   Intended Use
  •   Conceptual Model (As-Is State)
  •   Data Sources, Collection, and Conditioning
  •   Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  •   Design (To-Be State: Detailed)
  •   Implementation
  •   Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  •   Operation and Maintenance
  •   End-of-Life and Replacement

Discussion of these topics is always a bit vague in the BABOK, because the business analysis oeuvre explicitly differentiates itself from the project management oeuvre (see here for a discussion of the overlaps), and also because the BABOK tries not to be overly prescriptive. It gives you a whole bunch of techniques and contexts and ways of thinking about things, but it never says, “Do A, then do B, then do C.” My framework may appear to be at least a little bit prescriptive, but it isn’t really. It just codifies language that is already in the BABOK, and it gives practitioners a better feel for what they’re doing and when and why. I discuss how the main phases all occur in different contexts and management approaches here. I touch on the same things in my standard presentation on my overall framework.

In keeping with this amorphousness and flexibility, I’ll point out that individual modifications to an existing, deployed capability are themselves independent engagements with all six of the standard phases. This is true even if there are many such modifications over a long period of time, and even if individual ones are small and informal enough not to require large-scale efforts in every phase. I include the figure below for illustration of the idea.

Indeed, the End-of-Life and Replacement phase is likely to be a full engagement on its own.


Survey or Questionnaire

Surveys and questionnaires are excellent tools for gathering data, opinions, needs, and observations from large groups of respondents in a relatively short time. They come in many forms, with individual questions being either open-ended, where respondents can provide any type of answer they want, or closed-ended, where respondents must choose from a fixed group of possible responses. The former may require significantly more effort and interpretation to process, while the latter are more amenable to automation.

I’ve incorporated surveys in my own work here and studied specific types of surveys while preparing for my Lean Six Sigma Black Belt exam. I discuss a particularly memorable decision that used survey techniques, at least in part, here. The treatment of this subject in the BABOK is outstanding, so I’m mostly going to paraphrase its descriptions.

The process for conducting surveys is outlined below.

  1. Prepare the survey or questionnaire and plan for how the data is going to be collected and processed.
    • Define the objective: Determine what information you hope to gather to support the decision(s) you are trying to make.
    • Define the target survey group: Identify the relevant audience to be queried. This may involve anything from a broad customer base to a narrow job function.
    • Choose the correct format: Determine the type of survey questions to ask to get the types of responses you need. You should consider the level of engagement of the audience you are polling. Highly engaged customers and employees may be willing to spend a lot of time and effort providing detailed and voluminous responses, while you may design a lighter and more streamlined process in hopes of gaining sufficient responses from less engaged participants.
    • Select a sample group if appropriate: You may want to or have to choose a subset of a larger group to survey. This may require statistical structuring across many demographics to keep the results from being skewed.
    • Select distribution and collection methods: Determine how the surveys will be sent and how the answers will be returned and processed.
    • Set the target response rate and response end time: Determine the minimum number of responses you need for the effort to be considered valid, and the time window within which responses will be accepted.
    • Determine if additional activities are needed to support the effort: Additional work may need to be done to design the questions or interpret the responses. This work may involve interviews and other techniques.
    • Write the questions: These will be based on the information you need and the decision processes you hope to support. The size, format, and complexity of the survey will be determined by the audience you plan to query and the information you hope to obtain.
    • Test the survey or questionnaire: This involves testing of the mechanics of distribution of questions and collection and processing of responses (verification), and also the methods of assessing the responses for correctness and applicability (validation).
  2. Distribute the survey or questionnaire while considering:
    • the urgency of obtaining the responses,
    • the level of security required, and
    • the location of the respondents.
  3. Gather the responses and document the results.
    • Collate the responses.
    • Summarize the results.
    • Review the details and identify any emerging themes.
    • Formulate categories for breaking down the data.
    • Arrange the results into actionable segments.
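For closed-ended questions, the collation and summarization in step 3, together with the response-rate check planned in step 1, are straightforward to automate. A minimal sketch (the rating scale, the responses, the population size, and the target rate are all hypothetical):

```python
from collections import Counter

def summarize(responses, population, target_rate):
    """Tally closed-ended answers and check the response rate against the target.

    responses   -- list of answers (here, 1-5 satisfaction ratings)
    population  -- number of people the survey was distributed to
    target_rate -- minimum response rate for the effort to be considered valid
    """
    counts = Counter(responses)
    rate = len(responses) / population
    return counts, rate, rate >= target_rate

# Hypothetical 1-5 satisfaction ratings from a population of 200 invitees,
# with a 10% minimum response rate set in the planning step.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
counts, rate, meets_target = summarize(responses, population=200, target_rate=0.10)
print(f"response rate {rate:.0%}, meets target: {meets_target}")
print(counts.most_common())
```

In this invented case only 10 of 200 invitees responded (5%), so the effort falls short of its 10% target and the results should not yet be treated as valid, regardless of how favorable the tallies look.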

When writing about BABOK techniques on this blog I don’t generally go into their strengths and weaknesses. I figure if you understand the techniques well enough you should be able to reason through most working situations or potential questions on certification exams. However, there are some well known issues with this process that aren’t mentioned in the BABOK.

The first issue is selection bias in the respondents, which can take many forms. One example is that people may be more likely to offer responses (or reviews) when they are angry with a product or service. That might yield useful information about complaints, but it may not give an accurate reading of the overall level of satisfaction. Another example is that the nature of the survey may tend to elicit responses from people in excess of their proportion in the overall population. Many requests for responses published in magazines yielded notoriously skewed figures for the prevalence of certain behaviors. In addition, respondents may be inclined to stretch the truth by telling a tall tale or two. The method of polling may skew the results as well. Several studies over recent decades have identified potential issues with telephone canvassing, especially in advance of political elections. If certain demographics are less likely to be at home at certain hours, do not have landlines, or tend to screen calls or hang up on pollsters, the validity of such polls can be severely compromised.
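A toy simulation makes this nonresponse problem concrete (every number here is invented purely for illustration): if dissatisfied customers respond at a much higher rate than satisfied ones, the survey mean lands well below the true population mean.

```python
import random

random.seed(1)

# Hypothetical population: 80% satisfied (ratings 4-5), 20% dissatisfied (1-2).
population = ([random.choice([4, 5]) for _ in range(800)]
              + [random.choice([1, 2]) for _ in range(200)])

def survey_mean(pop, p_respond_satisfied, p_respond_dissatisfied):
    """Simulate a survey where response probability depends on satisfaction."""
    sample = [score for score in pop
              if random.random() < (p_respond_dissatisfied if score <= 2
                                    else p_respond_satisfied)]
    return sum(sample) / len(sample)

true_mean = sum(population) / len(population)
# Angry customers respond at 40%; satisfied customers at only 5%.
biased_mean = survey_mean(population, p_respond_satisfied=0.05,
                          p_respond_dissatisfied=0.40)
print(f"true mean {true_mean:.2f}, survey mean {biased_mean:.2f}")
```

With these made-up response rates the sample is dominated by the dissatisfied minority, so the survey estimate sits far below the true average, exactly the distortion described above.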

Another issue is that people may simply lie. This can happen when the results aren’t sufficiently confidential or when respondents don’t wish to seem mean, prejudiced, or otherwise unpleasant. It’s also possible when describing affinities for things people have no talent for. Respondents may enjoy the idea of singing popular music in a band, but it’s not going to matter much if they can’t carry a tune in a bucket. (I experienced a couple of these in a career assessment survey in high school. I don’t know if my responses skewed any wider results since the exercise was intended to illuminate our own interests and abilities, but if they were trying to do anything else it couldn’t have been good.)

A major problem with polling, especially in certain subjects, is that the questions may tend to lead subjects toward certain responses. This may not be purposeful, in which case testing, review, and revision should be applied to correct any deficiencies, but we are probably all aware of polls that are structured to drive public opinion rather than accurately reflect it.
