User Stories

The definition given in the BABOK is, “A user story represents a small, concise statement of functionality or quality needed to deliver value to a specific stakeholder.” This is a very general definition and is potentially different from a task assignment. These are usually employed during the requirements phase, whereas tasks may be assigned in any phase. I discuss a way to think about these activities and how to track them here.

A classic user story is often written in the following form:

As a role , I would like to take some action so I can achieve some result .

This format can be thought of in terms of who, what, and why. “Given… when… then…” is another possible format. I opine that requirements especially, and tasks more generally, should be written in any format that yields the correct results. As long as the person or persons assigned the requirement or task accomplish the desired thing, the expression is “correct.” I share this because some people may demand that (especially) requirements must be written in a particular way, and I think such demands miss the point.
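To make the template concrete, here is a minimal sketch (the class, field names, and example story are my own illustration, not anything prescribed by the BABOK) that captures the who/what/why structure as a small data type:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One story in the classic who / what / why template."""
    role: str      # who
    action: str    # what
    benefit: str   # why

    def render(self) -> str:
        # Produce the classic "As a ... I would like ... so I can ..." form.
        return f"As a {self.role}, I would like to {self.action} so I can {self.benefit}."

story = UserStory("warehouse clerk", "scan items by barcode",
                  "record inventory without typing errors")
print(story.render())
```

The point of the structure is not the exact wording but that all three elements are present; a renderer for the "Given… when… then…" format could hang off the same fields.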

So what are we trying to communicate in these stories?

  • Stakeholder Needs: The stories attempt to provide chunks of future capability that serve stakeholder needs.
  • Solution Needs: Some stakeholder needs cannot be fulfilled without supporting capabilities that may not have been identified by the stakeholders. These often take the form of technical and other indirect prerequisites.
  • Prioritization: This describes the order in which things must be done. This can be expressed as a directly scored parameter (e.g., high, medium, low) or indirectly through processes like backlog grooming.
  • Estimation: This can involve costs in time and various resources and also benefits. Being mindful of both supports cost-benefit analyses.
  • Solution Delivery Plans: Defining what is needed begins to define how it can and should be delivered.
  • Definition of Ready: This is a measure of how clear and complete a story has to be before it may be assigned to be worked on.
  • Definition of Done (Testing Requirements): An almost limitless variety of tests may be performed on units of delivered value. This definition says something about the tests the delivered value must pass.
  • Value Delivered: This defines the value of the capability delivered. It supports a wide variety of analyses.
  • Traceability (within the logical design hierarchy and across phases): Each story should relate to other ones in ways that make logical sense.
  • Subsequent Analysis: Developing and prioritizing stories may help clarify existing stories and illuminate the need for new ones.
  • Units of Project Management and Reporting: Organizations need to be able to manage resources (time, money, materials, labor, and environmental factors) to understand how much they can accomplish relative to what is desired.
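The prioritization and estimation items above can be combined into a simple benefit-versus-cost score. The sketch below is illustrative only; the formula, weights, and backlog items are my own invention, not a standard method:

```python
def priority_score(benefit: float, cost: float, risk: float = 1.0) -> float:
    """Higher score = do sooner. An illustrative benefit-over-cost ratio."""
    return benefit / (cost * risk)

# A toy backlog; in practice these numbers come from team estimation.
backlog = {
    "export report": {"benefit": 8, "cost": 2,   "risk": 1.0},
    "rewrite login": {"benefit": 5, "cost": 8,   "risk": 1.5},
    "fix typo":      {"benefit": 1, "cost": 0.5, "risk": 1.0},
}

# Rank the backlog by descending score.
ranked = sorted(backlog, key=lambda s: priority_score(**backlog[s]), reverse=True)
print(ranked)  # ['export report', 'fix typo', 'rewrite login']
```

Whether you use a direct high/medium/low rating or a computed score like this, the goal is the same: make the ordering of work explicit and defensible.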

User stories should ideally have a number of desirable attributes, as often enumerated in the acronym, “INVEST.” They should be Independent, meaning that the descriptions should lead to creating the right outcomes no matter what else is happening; Negotiable, in that a team can examine related trade-offs from many points of view; Valuable, so they are seen to provide provable benefits; Estimable, so the team can identify the relevant costs across all resource types; Small, so they can be worked in manageable increments; and Testable, so they can be demonstrated to meet standards defined for them by the relevant parties.
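The INVEST attributes lend themselves to a simple readiness checklist. This sketch is my own, assuming yes/no answers gathered during refinement; the questions paraphrase the attributes above:

```python
# The six INVEST criteria, each phrased as a yes/no refinement question.
INVEST = {
    "independent": "Can it be delivered without waiting on other stories?",
    "negotiable":  "Can the team still examine trade-offs on how it is met?",
    "valuable":    "Does it provide a provable benefit to a stakeholder?",
    "estimable":   "Can the team identify its costs across resource types?",
    "small":       "Can it be finished within one increment?",
    "testable":    "Can it be demonstrated to meet agreed standards?",
}

def invest_gaps(answers: dict) -> list:
    """Return the INVEST criteria a story fails; an empty list means ready."""
    return [c for c in INVEST if not answers.get(c, False)]

answers = {"independent": True, "negotiable": True, "valuable": True,
           "estimable": True, "small": False, "testable": True}
print(invest_gaps(answers))  # ['small'] -> break the story down further
```

A story failing "small" is exactly the case discussed next: it should be broken down before being worked.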

Larger stories should ultimately be broken down into smaller ones until they are of proper granularity. Some of the criteria for breaking down user stories are summarized here. They can and should be examined and broken down (or rightsized) when enough is known to conduct a proper analysis. Ideas can enter the effort at any scope and scale. Larger-scale ideas are sometimes referred to as epics. Those can be broken down into individual user stories, and those can be broken down into potentially multiple tasks.

On the one hand, calling these items user stories might be a little goofy. It implies that the main way to describe needs for an effort is in terms of what benefits users. On the other hand, what other standard of value should there be? I think some of the confusion stems from the fact that user stories and the tasks needed to implement them, as well as the epics that groups of stories may comprise, are often discussed somewhat interchangeably. As long as we are careful to define terms and otherwise be clear, this should not pose too much of a problem.

Posted in Tools and methods, Uncategorized | Tagged , , | Leave a comment

Collaborative Games

This technique involves engaging a group of people in a structured activity that allows them to develop shared understanding and insight into a problem or solution. The activity is usually conducted in three steps: introduction, rules, and beginning; exploration via communication and iteration; and closeout, assessment, and ranking. Once the most promising idea or ideas are identified, the team can move forward based on that knowledge.

I bolded the idea of communication and iteration because that is the biggest idea, and I revisit it over and over. Everything in my framework is based on this, as are many if not most of the techniques listed. The purpose is to get the largest possible volume of ideas generated by the greatest number of — and widest variety of — participants, each of whom brings potentially different expertise, needs, and insights. The more ideas generated, in an environment of enhanced and shared understanding, the better the solutions that will be developed. This kind of shared participation can also enhance buy-in from all parties.

It is often useful to have a neutral facilitator run the activity. This can keep any individual or small group with an agenda from exerting undue influence on the direction the activity follows or the time it continues.

The BABOK describes three of the most common collaborative games, although many more are possible. Interestingly, the activities described in the BABOK tend to drive to solutions to specific problems, while many of those listed elsewhere (I love the Spaghetti Tower exercise from the second link, above) are geared more to team-building and are often used as icebreakers in workshops I’ve attended.

  • Product Box: The team creates text and graphics for a notional package that the product would be delivered in. This gets the group to think about what is most important to communicate in terms of features and, ideally, benefits. The information is often drawn directly on a cardboard box, in either constructed or unfolded flat form, but the team might also consider alternative materials, dimensions, and proportions.
  • Affinity Map: This is one of my favorite activities of all time, and typically involves writing ideas on Post-It notes and then arranging them into groups based on similarities of different kinds. There are many possible characteristics by which items may be grouped, and the exercise helps the participants better comprehend the range of possibilities and which of those are most important. By way of illustration, I will refer you to this post, which relates the story of how I tried to classify all my Lego pieces when I was a young teenager.
  • Fishbowl: In this activity, half of the participants discuss the topic at hand while the others watch, as if the observers are watching people discuss something in a fishbowl. The purpose is that the observers may see biases, things that are left unsaid, other possibilities, and other potential errors of omission and commission. It may be useful to exchange roles and repeat the exercise a second time.

Again, all the exercises described, whether to gain insight into a specific problem or allow people to practice communication and get to know each other, are about fostering communication and iteration.


Vendor Assessment

This technique, as described in the BABOK, is about evaluating the stability, suitability, and capability of individual vendors. This information can also be used to compare vendors, using any number of qualification/disqualification criteria and scoring systems (e.g., the first three analyses discussed here), but this article concentrates on the major criteria used for evaluation and comparison.

Vendors, of course, are organizations (or individuals) that provide goods and services that your organization does not, or that supplement your offerings when you have limited capacity to deliver. Vendors can be engaged to provide those goods and services to you or your organization, or to a customer you or your organization are serving.

Both the goods and services provided by the vendor, and the vendor itself, may be evaluated. Different criteria may be evaluated based on the situation. The BABOK concentrates mostly on the quality and health of the vendor. I would guess that evaluation of their offerings falls under solution evaluation, in general, which is why it isn’t addressed much in this section of the Techniques chapter.

The criteria for judging the vendor are listed as:

  • Knowledge and Expertise: This measures what the vendor knows and how well it can apply that knowledge.
  • Licensing and Pricing Models: This must be judged based on the anticipated usage scenarios, especially when the technical capabilities are not strongly differentiated. There can be many combinations and permutations of deployment and access configurations and all need to be compared using a similar cost-benefit analysis.
  • Vendor Market Position: This can reflect the relative popularity and success of a vendor, and can also affect how important you and your organization may be to the vendor. There may be varied benefits from being part of different customer communities.
  • Terms and Conditions: These involve the continuity and integrity of the provided products and services. This can affect future decisions to change vendors, how sharing and confidentiality of both organizations’ data is handled, how disputes and modifications are to be handled, and how updates and features and delivery are to be managed.
  • Vendor Experience, Reputation, and Stability: This involves the impressions of other customers and competitors, the attitude of the vendor’s representatives, and the financial stability of the vendors. Your organization should also plan for contingencies to act upon if any of these characteristics change over time (especially if such changes are sudden and for the worse).
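One of the scoring systems mentioned above is a simple weighted sum over the criteria just listed. The weights, vendor names, and scores below are purely illustrative assumptions; in a real comparison they would come from your own analysis:

```python
# Weights over the BABOK-style criteria above; must sum to 1.0. Illustrative only.
WEIGHTS = {"expertise": 0.30, "pricing": 0.25, "market_position": 0.15,
           "terms": 0.15, "stability": 0.15}

# Hypothetical 0-10 ratings for two candidate vendors.
vendors = {
    "Vendor A": {"expertise": 9, "pricing": 6, "market_position": 8, "terms": 7, "stability": 8},
    "Vendor B": {"expertise": 7, "pricing": 9, "market_position": 6, "terms": 8, "stability": 6},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion ratings into a single comparable number."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(vendors.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

A disqualification pass (e.g., rejecting any vendor scoring below a threshold on a must-have criterion) would typically run before the weighted comparison.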

All Solutions Are Iterative

I’ve spilled a lot of wind and electrons talking and writing about iteration as a path to solving problems, in a business analysis context and elsewhere. My framework incorporates multiple levels of iteration, of course, but through earning and maintaining my certifications, and a lifetime of working and learning, I’ve encountered many formal and historical approaches to solving problems in this way.

  • DMAIC: This Six Sigma technique for process improvement is an acronym for a series of steps which are always performed in the same order. The steps are Define, Measure, Analyze, Improve, and Control. (Wikipedia article)
  • DMADV: This Six Sigma technique for process design (it is also known as DFSS or Design For Six Sigma) is an acronym for a series of steps which are always performed in the same order. The steps are Define, Measure, Analyze, Design (the improvement), and Verify the design. (Wikipedia article and also here)
  • PDCA: This technique for process improvement is an acronym for a series of steps which are always performed iteratively in the same order. The steps are Plan, Do, Check, and Act. (Wikipedia article)
  • OODA loop: This iterative process, originally developed by Air Force Colonel John Boyd, is used in operational situations. My impression is that it was originally conceived as a guideline for how to think in a real-time combat situation, but the approach has been expanded to apply to longer-term activities involving people, especially when in opposition (e.g., in legal proceedings). The acronym stands for Observe-Orient-Decide-Act. (Wikipedia article)
  • The Five Whys: This approach to root cause analysis asks investigators to continually ask “Why” something is happening, continuously digging deeper (or sideways to explore other possibilities if the current one appears to be a dead end) in an iterative fashion until the problem is correctly identified and a correction potentially applied.
  • Scrum, Kanban, etc.: As the briefest of overviews, Scrum (Wikipedia article) is an approach to developing processes and products that iterates (relatively quickly and often) toward a reasonably well-known goal while applying as many resources as it takes to reach the goal in the allotted time. Kanban (Wikipedia article), by contrast, works in a similar fashion but proceeds at whatever pace is supported by the resources available. Other variations and combinations are practiced (e.g., Scrumban), but after a while it all gets to be arguing about how many angels can dance on the head of a pin.
  • “Agile Is Dead!”: This talk (on YouTube), hosted by one of the authors of the original Agile Manifesto (who ought to understand what problems the manifesto was actually trying to address), suggests that we should spend less time trying to adhere to the formal rules of various management frameworks, and more time communicating with each other and making continuous, small course corrections based on what we learn. In other words, we should iterate often and together.
  • The Scientific Method: This approach (Wikipedia article) to learning about the workings of nature (and systems), asks us to rigorously ask questions, develop and test hypotheses, and compare the results of reality against our conceptions, in a continuous and iterative way until more thorough and accurate understanding is achieved.
    • Understanding Thermodynamics: This book describes the history of the attempts to understand thermodynamics, a process which took over 150 years!
    • String Theory or Whatever: … is basically a mess, and may well be a time- and resource-consuming dead end in the search for understanding the deeper physics of our universe. Given how long it took to understand thermodynamics, we see that deeper and more correct understanding may take a lot of time, consume a lot of resources, and in general require a lot of iteration. This may not be fun, but it is necessary and inevitable.
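All of the loops above share the same shape: act, measure, check against a goal, and adjust until the check passes. As a stand-in illustration of that shape (my own example, not drawn from any of the frameworks listed), a bisection root search iterates toward a solution no single clever step could reach:

```python
def iterate_to_solution(f, lo, hi, tol=1e-6):
    """Plan-Do-Check-Act in miniature: bisect until the 'check' passes.
    Assumes f(lo) and f(hi) have opposite signs (a root lies between)."""
    while hi - lo > tol:               # Check: are we close enough yet?
        mid = (lo + hi) / 2            # Plan: pick the next trial point
        if f(lo) * f(mid) <= 0:        # Do/Check: which half holds the root?
            hi = mid                   # Act: narrow from the right
        else:
            lo = mid                   # Act: narrow from the left
    return (lo + hi) / 2

# Find sqrt(2) by repeated small corrections rather than one big leap.
root = iterate_to_solution(lambda x: x * x - 2, 0.0, 2.0)
print(round(root, 4))  # 1.4142
```

The method never "knows" the answer in advance; it only knows how to tell whether it is close enough yet, which is exactly the posture DMAIC, PDCA, and the scientific method ask of us.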

In short, almost nothing worthwhile is accomplished on the first try. If it looks like some expert breezed in and made something look easy, you can probably be assured that they iterated the skills needed to achieve the result an awful lot of times. Just because you don’t see the iterations, doesn’t mean they didn’t happen.


Stakeholder List, Map, or Personas

Many different tools can be used to describe different kinds of stakeholders. In general, a stakeholder is anyone involved in or affected by any organizational activity, across the entire potential life cycle of that activity. This encompasses the design and implementation of an organizational process, along with its execution and maintenance.

A stakeholder list is no more or less than an enumeration of the different participants, usually in terms of their roles. Important distinctions of classes of stakeholders are internal vs. external, creator vs. user vs. manager vs. maintainer, type of implementation specialist (programmer, graphic artist, database specialist, content and story analyst, etc.), provider vs. customer vs. experiencer of externalities, and so on. (Which other divisions can you think of?)

Stakeholder maps come in two forms, a stakeholder matrix and an onion diagram. A matrix essentially contrasts two continuous scales, the level of influence of the stakeholder and the impact on the stakeholder (i.e., how the stakeholder affects others vs. how they are affected by others).

Stakeholder onion diagrams show which participants have the greatest effect on, or are most affected by, different systems or solutions. While the roles may vary in their specifics, they don’t really change in character. It’s also true that customers, who are presumably represented in the outer ring and thus least affected (or drive the process the least), can be internal personnel and may also be the primary beneficiaries of the new or modified system, whether directly or indirectly.

Stakeholders may further be classified using a RACI matrix (or a responsibility matrix), which describes whether people in different roles are (or should be):

  • Responsible: These individuals are charged with performing the requisite analysis, implementation, testing, and deployment required to effect the desired system or modification.
  • Accountable: This individual (of whom “There can be only one!”) is held accountable for accomplishing what is desired.
  • Consulted: These individuals provide input into the work to be performed, but neither perform it nor are accountable for it.
  • Informed: These individuals are notified about what is happening and what has happened, but otherwise have no input into or responsibility for the work. They may, however, be the ultimate users or beneficiaries.
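The "There can be only one!" rule for Accountable is easy to check mechanically once the matrix is captured as data. The sketch below is my own; the task and role names are hypothetical:

```python
# A RACI matrix as task -> {role: letter}, where letters are R, A, C, I.
raci = {
    "gather requirements": {"analyst": "R", "sponsor": "A", "users": "C", "dev team": "I"},
    "implement changes":   {"dev team": "R", "tech lead": "A", "analyst": "C", "sponsor": "I"},
}

def raci_problems(matrix: dict) -> list:
    """Flag tasks that violate the 'exactly one Accountable' rule."""
    problems = []
    for task, assignments in matrix.items():
        accountable = [r for r, code in assignments.items() if code == "A"]
        if len(accountable) != 1:
            problems.append(task)
    return problems

print(raci_problems(raci))  # [] -> every task has exactly one 'A'
```

Similar checks (e.g., every task has at least one R) can be layered on as an organization's conventions require.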

Finally, stakeholders of different kinds, most typically users, may be defined in terms of personas. Many different kinds of people may interact with systems and processes in different capacities, and their duties, privileges, and interactions need to be defined in appropriate detail. All envisioned personas must be identified during the analysis so their needs may be properly addressed in the new or modified system or process. All of the tools of business analysis can be applied to ensure this analysis is thorough and accurate.


Roles and Permissions Matrix

This technique involves the specification of actions that are allowed to be taken by actors in different, defined roles in an organization. While these can and should be defined in general use, as a practical matter this most often comes up in the design of software systems, where roles and permissions can be rigorously defined and enforced.

I analyzed operations in dental offices on two separate occasions. Five roles were defined: dentist, hygienist, assistant, helper, and administrator. The industry is governed by formal regulations that define what dental operations dentists, hygienists, and assistants can (with increasing limitations) perform. Helpers cannot perform any type of work on patients, but they can support and prepare spaces, supplies, and equipment for the other workers. Administrators handle paperwork, billing, communication, ordering, and scheduling for the office. The activities and permissions for each role could be expressed in the form of the matrix shown below, though the example in the figure is for a more generalized engineering operation.

The things that must be defined are:

  • Roles: These are categories for which packages of permissions are defined. Individuals or users are assigned roles.
  • Activities: These define the specific activities that may be performed by personnel in each role.
  • Authorities (Permissions): These show who can do what.

The roles and activities define a two-dimensional matrix, and each cell of the matrix indicates whether the activity is permitted for that role. In many computer systems, the permissions are enforced based on the roles assigned to each user when they log in.
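A minimal sketch of such a matrix and its lookup follows, loosely based on the dental-office roles described above. The specific activity names are my own assumptions for illustration:

```python
# Role -> set of permitted activities: one row of cells per role.
PERMISSIONS = {
    "dentist":   {"diagnose", "operate", "prescribe", "clean"},
    "hygienist": {"clean", "x-ray"},
    "assistant": {"x-ray", "prep_room"},
    "helper":    {"prep_room", "stock_supplies"},
    "admin":     {"billing", "scheduling", "ordering"},
}

def is_allowed(role: str, activity: str) -> bool:
    """Look up one cell of the roles-and-permissions matrix."""
    return activity in PERMISSIONS.get(role, set())

print(is_allowed("hygienist", "clean"))  # True
print(is_allowed("helper", "operate"))   # False
```

In a real system this lookup would sit behind the login layer, consulting the roles attached to the authenticated user rather than a hard-coded table.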

These permissions can take many forms. File systems often define roles governing what kinds of users can manipulate files in different ways. More involved IT systems may define an almost infinite variety of roles and operations. Considering the figure above, we see a number of roles (and families of roles), and operations. This example is notional. Many operations can be defined for many different classes of users. If you can think of it, you can define and implement it. The key, as always, is to ensure the designers, maintainers, and users of a system clearly understand how the system is to be defined and used.


Scaling Solution Approaches

The same process and component phases are applied to analysis/build/modify/improve efforts of every scope and scale. The two main things to vary are the breadth of the process analyzed and the number of phases worked through.

The breadth of an effort can range from a single sub-process, for example a single operation highlighted by the red circle in the figure below, to an entire business process. An effort can examine any portion of a larger process between those extremes.

Single sub-processes should be examined from a SIPOC point of view, as shown in the figure below.

Beginning Scope: Single Sub-Process

Questions that can be asked (and short-run solutions and opportunities potentially identified) for each element of that sub-process are as follows.

  • 1. Source
    • What is the source? A person? A supply of physical items? A supply of informational items? An automated system?
    • Is the source always reachable when needed?
    • When is the source available?
    • What is the means of contact or communication with the source?
    • Is the current source the best way to get the required input?
    • Does the source know enough to consistently provide the correct input?
    • Can the quality of the source be improved?
    • When does the source acquire the desired input?
    • What are the chances that the source does not have the input?
    • What are the chances that the source has an incorrect input?
  • 2. Input
    • What is the input? (person, physical item, informational item)
    • Have all qualities that may affect processing been identified?
    • When is it received?
    • How can its acceptability and correctness be evaluated?
    • Is it properly formatted?
    • Are the relevant values in range?
    • Are the inputs well-defined?
    • How are errors handled?
    • How much can inputs vary?
    • Is there any way to handle or correct inputs with errors or omissions?
    • Can work proceed without the input? Are there assumptions that can be made by default?
    • Are there any additional inputs that might be useful? Are they available? What would it take to incorporate them?
    • Are inputs queued, and if so, how?
    • Can multiple items be processed at the same time (in parallel)? Are there any differences in the items processed?
  • 3. Process
    • Are all required inputs available?
    • How are multiple inputs synchronized so they are correctly handled together?
    • Is there any way to proceed when inputs are wrong or missing?
    • When and how are errors handled or reworked?
    • What is the nature of the process? Does it involve a transformation of one or more properties? Combination of multiple inputs (assembly)? Breaking apart into multiple outputs (division)?
    • Are there variations in the process? Are they based on characteristics of the inputs?
    • Are decisions and operations well defined (by written procedures, logic tables, routing instructions)?
    • Are outputs deterministic or varying (stochastic)?
    • If multiple output destinations (customers) are possible, how is routing determined?
    • What percentage of outputs are routed to each destination and why? Should this change?
    • Do outputs depend on human judgment?
    • Can any operations be automated?
    • Can any operations be streamlined, simplified, or reduced?
    • Are calculations well defined?
    • How much variation is acceptable? How can it be reduced?
    • Can conditions that may lead to future errors be identified? (Condition-based Maintenance)
    • What operating data best serve as KPIs? Are these captured, and if so, how? Is there a better way?
    • How has the process changed over time?
    • What factors cause variations in the outputs, time taken, or resources consumed?
    • Do different operators get different results? Why?
    • Do operators require any particular background, education, training, licenses, and so on?
    • Who has the capability to effect any identified changes?
    • What permissions are needed?
    • How is the process initiated (triggered)?
    • Does the process run continually or only occasionally (in batches, on rare conditions, etc.)?
    • Is this sub-process on the main flow (all items processed) or a side or conditional flow (only some items processed)?
  • 4. Output
    • Are the criteria for acceptable outputs well defined?
    • How are outputs judged against those criteria?
    • Are all outputs tested, only some, or none? Why?
    • Are any of the tests automated, or can they be?
    • Are the causes of unacceptability identified and understood?
    • What percentage of outputs are not acceptable?
    • Can unacceptable outputs be reworked or repaired?
    • Are any changes to the outputs desired?
  • 5. Customer
    • Who or what is the customer? (Person, physical, informational)
    • Do different customers react to outputs in different ways?
    • What alternatives do customers consider?
    • What records are kept about customers?
    • Are the outputs appropriate for the customer(s)?
    • How are rejected outputs handled? Can they be reworked or corrected?
    • When are customers available?
    • Do customers ever change?

Some general questions can also be asked:

  • What are the installed costs for the process?
  • What are the fixed and variable operating costs for the process?
  • Are we examining the process in terms of quality, throughput, or both?
  • Can the process be broken down even further? For example, a given type of worker or workstation may perform multiple actions sequentially or conditionally based upon multiple criteria. The operation can be analyzed strictly in terms of resources consumed (including time), or in terms of specific transformations carried out.
  • Is there any information certain analysts or other participants are not allowed to see?

The important thing to understand about the above is that the proper questions will be asked based on what is actually encountered. There is no fixed set of questions that can be put into a flowchart that will always lead to the right questions being asked, the right solutions being identified, and the right problems being “solved.” The goal is rather to be aware of the vast number of questions that can be asked, the vast number of potential solutions or modifications that can be applied, and how the fitness of identified alternatives can be compared.
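One way to keep a question bank like the one above available without freezing it into a flowchart is to store it as plain data, keyed by SIPOC element, and pull out whichever slices the current engagement needs. This sketch is mine; it carries only a sliver of the full list above:

```python
# A small sample of the SIPOC question bank, keyed by element.
SIPOC_QUESTIONS = {
    "source":   ["Is the source always reachable when needed?",
                 "Can the quality of the source be improved?"],
    "input":    ["How are errors handled?",
                 "Are inputs queued, and if so, how?"],
    "process":  ["Can any operations be automated?",
                 "What operating data best serve as KPIs?"],
    "output":   ["Are the criteria for acceptable outputs well defined?"],
    "customer": ["Who or what is the customer?"],
}

def checklist(elements=None):
    """Yield (element, question) pairs for the chosen SIPOC elements."""
    for element in (elements or SIPOC_QUESTIONS):
        for question in SIPOC_QUESTIONS[element]:
            yield element, question

# Pull only the output- and customer-facing questions for a quick review.
for element, question in checklist(["output", "customer"]):
    print(f"[{element}] {question}")
```

Because it is data rather than logic, the bank can grow as new situations teach the analyst new questions to ask, which is the real point of the paragraph above.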

Scaling Up in Scope: Multiple Sub-Processes

The next step up in scale is to examine a larger portion of a process, or an entire business process. This is something composed of many sub-processes as described above. In the topmost figure, consider not just a single sub-process as shown in the red circle, but the entire process as shown in the complete figure.

In this situation we differentiate between process nodes that are connected with the external world, especially involving inputs, and those that are contained entirely within the organization. The organization should be able to answer all of the above questions for each phase of internal processes if it is sufficiently mature. At the very least it should be far easier to find the answers and effect any necessary changes. Finding answers and making modifications in processes connected to the outside world can be much more complicated and time-consuming.

Another thing to consider is that different KPIs may exist or need to be identified to assess the performance of the whole system, or at least larger swaths of it. Analysis must also include queuing effects between the different process nodes, and throughout the system as a whole.
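Those queuing effects can be estimated with textbook formulas before committing to detailed simulation. For a single-server node with random (Poisson) arrivals and exponential service times, the standard M/M/1 results follow directly from the arrival rate and service rate; the sketch below applies them to a hypothetical node of mine:

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state M/M/1 queue results; requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate    # utilization of the node
    return {
        "utilization": rho,
        "in_system": rho / (1 - rho),                  # avg items at the node
        "time_in_system": 1 / (service_rate - arrival_rate),
        "waiting": rho ** 2 / (1 - rho),               # avg items queued
    }

# A node that can serve 10 items/hour, receiving 8 items/hour:
m = mm1_metrics(8, 10)
print(round(m["utilization"], 2), round(m["in_system"], 2))  # 0.8 4.0
```

Even this crude model shows why queues blow up as utilization approaches 1.0, which is often the first system-level insight an analysis of connected sub-processes produces.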

Scaling Up in Effort: Partial vs. Full Life Cycle, and Part-Time vs. Full-Time

When considering the scale of a potential engagement, the number of phases worked through can also vary. Initial engagements will typically help define the problem (Intended Use phase) and understand what is going on now (Conceptual Model / As-Is phase). If requirements or designs or fixes can be identified that will yield meaningful short term benefits, then those will be offered and/or implemented in the initial engagement. In general, however, initial engagements are only intended as the beginning of a longer process of identifying what needs to be done, what analysis and implementation methods are likely to be suitable, and to introduce customers to the analysis structure.

An analysis team can perform any phase of effort, from defining a problem, to figuring out what’s going on now, to identifying requirements, to helping identify and select different solutions and vendors, to helping conduct implementation, test, and acceptance operations. A team can perform the work independently, can work with the customer through various phases, or can train the customer to perform all this work on their own. The team can also be available continually or only intermittently to answer questions and provide reviews and incremental training and guidance, and can be on site or remote.


Data Flow Diagrams

There are a couple of ways to represent the flow of data in any existing or proposed system. One is by creating a flow diagram using any notation you prefer that communicates the necessary understanding to all participants. I have done this a number of ways, as I will describe below. The other way, and the one described in the BABOK (and thus what will need to be understood to pass any of the IIBA’s relevant certification exams), is to use a formal methodology like Yourdon DeMarco, Gane & Sarson, SSADM, or Yourdon and Coad.

Those formal methods are described in many places, with this webpage giving a particularly good overview. The formal methods all include the same four components: external entities, processes, data stores, and data flows. One major limitation, which the BABOK mentions only indirectly if at all, is that the formal methods don’t show specific operations, timing, loops, or other logic. They just provide a static overview of what things can flow from where to where.
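The four components also come with connection rules; a commonly cited one is that a data flow must have a process on at least one end (data cannot flow directly between two stores, or between two external entities). Captured as data, such rules can be checked mechanically. The node names below are my own toy example:

```python
# Node kinds from the formal notations: external entity, process, data store.
KIND = {
    "Customer": "external", "Validate Order": "process",
    "Orders DB": "store", "Ship Order": "process",
}

def flow_is_legal(src: str, dst: str) -> bool:
    """A data flow is legal only if at least one end is a process."""
    return "process" in (KIND[src], KIND[dst])

flows = [("Customer", "Validate Order"), ("Validate Order", "Orders DB"),
         ("Orders DB", "Ship Order")]
print(all(flow_is_legal(s, d) for s, d in flows))  # True
print(flow_is_legal("Customer", "Orders DB"))      # False: needs a process between
```

Tooling built on this idea can lint a diagram before anyone wastes a review meeting on it.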

As an industry, we had to start somewhere, but I find these notations to be almost wholly unsatisfying. There is sooo much more information that could be shown, which could greatly enhance the understanding of all participants.

The first thing to understand is that both formal and ad hoc methods call upon the analyst to consider everything from a S-I-P-O-C (or C-O-P-I-S) point of view. This helps the analyst identify the source of input data into and the destination of output data from every process.

The formal methods give no indication of the contents of the data items transferred hither and thither, but those can be documented in endless ways. For example, some information can be included on the drawings (as shown in the drawing directly below), and detailed descriptions of the structure and meaning of data items (or collections thereof) can be included in other kinds of written documentation.

Ad hoc drawings can include data flows in conjunction with movements of other, typically physical, entities. In this case the analyst might show the movement of physical entities with solid lines and data items with dashed lines. Additionally, indications of business functions and departments, IT infrastructure items, and human participants can all be included on drawings, as long as the meaning and context are clear and well understood.

A lot of other information about control flow, logic, and the nature of specific calculations can be included as well, as in the following handful of examples. The first includes logic and decisions, the second the same plus human users interacting with the system in many places (which itself could bear a great deal of additional description), the third includes a lot of architectural elements, and the fourth shows how some calculations are carried out.

Diagrams can describe different kinds of entities and messages, logic, decisions, storages, and other elements in any way analysts and customers can understand.

Diagrams can show data associated with different processes in many ways, and can also show where people interact with the system.

Diagrams can show how data are logically organized and processed in many different ways and in many different contexts.

This diagram provides a different and more detailed view of the same system described in the diagram directly above. Use your creativity!

I discuss a wide variety of inter-process communication methods here. Each one involves sending a package of data from one process to another. Sometimes the processes are within the same program, sometimes they are in different programs running on the same machine, sometimes they are on different machines on the same network, and sometimes they are on different machines on different networks. What they all have in common is that they all require both physical and data protocols to ensure the communication is completed successfully. I discuss one aspect of this here.
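
As a minimal illustration of the common pattern, here is the simplest case, two concurrent activities within one program exchanging packages of data through queues. The dict layout plays the role of the agreed data protocol; the names are my own choices, not from any particular IPC framework.

```python
import queue
import threading

# Two concurrent activities in one program exchange packages of data.
# The queue is the channel; the dict layout is the agreed data protocol.
def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    msg = inbox.get()                          # receive the request package
    outbox.put({"ack": True, "echo": msg["payload"]})

inbox, outbox = queue.Queue(), queue.Queue()
threading.Thread(target=worker, args=(inbox, outbox)).start()
inbox.put({"payload": "status request"})       # send the request
reply = outbox.get()                           # block until the reply arrives
print(reply)  # {'ack': True, 'echo': 'status request'}
```

The same request/acknowledge shape carries over to sockets, pipes, and network messaging between separate programs and machines; only the channel and the physical protocol change.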

Again, be sure you understand the formal methods defined in the BABOK when it is appropriate to do so, but feel free to represent storages and flows of data as you see fit in other situations. The best way to judge whether a given representation is appropriate or allowed is whether or not it builds clear, shared understanding, and results in the creation of an effective solution.

Posted in Tools and methods

Business Cases

A business case is essentially a cost-benefit analysis of taking a given course of action. Actions may include embarking on new projects, acquiring resources or other organizations, making non-routine purchases, adopting new methods, entering partnerships, seeking new lines of business, and so on.

A business case may be constructed just like any other project, except the implementation and test phases aren’t part of this named effort. That is, you do everything you would normally: identify the problem, figure out what’s going on, identify needs, assess risks and constraints and assumptions, and design potential solutions. If the benefits of the solution outweigh the costs, either at all or by some specified margin, then the course of action is pursued.
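
The final comparison can be as simple as totaling both sides and checking the margin. A sketch, with all figures invented for illustration:

```python
# Minimal cost-benefit check with a required margin; all figures invented.
costs = {"development": 120_000, "training": 15_000, "support": 25_000}
benefits = {"labor savings": 90_000, "error reduction": 110_000}

total_cost = sum(costs.values())        # 160,000
total_benefit = sum(benefits.values())  # 200,000
required_margin = 1.2                   # proceed only if benefits beat costs by 20%

proceed = total_benefit >= required_margin * total_cost
print(total_benefit / total_cost, proceed)  # 1.25 True
```
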

The BABOK describes the process this way:

  1. Need Assessment: This defines some capability or improvement that may drive a benefit to the organization.
  2. Desired Outcomes: This is a description of the benefits to be realized, which can be expressed in terms of time, cost, or quality (or features).
  3. Analyze Alternatives: This is where you assess the costs and benefits of one or more potential solution approaches. This corresponds to the design phase of any project (possibly in conjunction with the conceptual modeling phase).
    • Scope: This is where the size and boundaries of the proposed effort are determined. These define what will and will not be included in the analysis. It also helps analysts choose where to break larger problems down into smaller and more manageable ones.
    • Feasibility: This analysis determines whether something can (practically) be done at all. This can be done from a technical or logistics point of view, but the major overlap is with the various financial analyses.
    • Assumptions, Risks, and Constraints: I include this in my larger list of project activities, and near the beginning, but I don’t include it in the streamlined, stylized diagram of my framework. That’s because it isn’t really a standalone activity, but takes place in potentially every phase throughout the entire engagement.
    • Financial Analysis and Value Assessment: This often involves a full cost-benefit analysis expressed in financial terms, but many judging and scoring systems can be used. For example, a weighted multiplicative system can allow alternatives to be compared both directly by feature and personally by perceived importance.
  4. Recommend Solution: Choose the best option, or recommend not proceeding at all.
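
A weighted multiplicative comparison of the kind mentioned under Financial Analysis can be sketched in a few lines. This uses the weighted product model; the features, ratings, and weights are all invented for illustration.

```python
# Weighted product model: each alternative is rated per feature (1-10),
# and features are weighted by perceived importance (weights sum to 1).
# All feature names and numbers are invented for illustration.
weights = {"cost": 0.5, "speed": 0.3, "usability": 0.2}

alternatives = {
    "Option A": {"cost": 7, "speed": 9, "usability": 6},
    "Option B": {"cost": 9, "speed": 5, "usability": 8},
}

def weighted_product(ratings: dict, weights: dict) -> float:
    # Multiply each rating raised to the power of its weight, so
    # strongly weighted features dominate the combined score.
    score = 1.0
    for feature, w in weights.items():
        score *= ratings[feature] ** w
    return score

scores = {name: weighted_product(r, weights) for name, r in alternatives.items()}
best = max(scores, key=scores.get)
```

This lets stakeholders argue about the weights (perceived importance) separately from the ratings (direct comparison by feature), which is often where the real disagreements live.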

Prototyping

Prototyping is the act of mocking up some aspect of an envisaged product or capability in order to assess its suitability, acceptability, or performance. It is usually applied to product design, but can be used in many different contexts for both internal and external customers. Successive iterations of a prototype, either as throw-away one-offs or through evolutionary modifications, help determine whether requirements have been properly expressed and understood, whether the item can be manufactured efficiently or at all, whether it contains all desired functionality, whether it is comfortably usable (ergonomics), whether it is understandable and intuitive, and so on. Prototypes may be physical objects, systems, arrangements (or layouts), environments, or procedures.

Prototypes can be used in proof-of-concept exercises, to determine whether the desired outcomes or effects can be achieved. A form study prototype can be used to evaluate fit and finish, manufacturability, ergonomics, material requirements, and other things. A usability prototype may be used to examine user interactions and comprehension. A visual prototype can be used to assess how an item looks in terms of its visibility (potato peelers with potato-colored handles are more likely to be thrown away with the peeled skins, which may or may not be a good thing depending on your point of view) or appeal. A functional prototype is used to see how things work.

Methods of prototyping include the following:

  • Storyboarding: This allows investigation of the order, location, appearance, and arrangement of things and events. Movies are often storyboarded before production begins, but the technique can be applied to any series of events.
  • Paper Prototyping: This involves the creation of one or more drawings, which may be done to any degree of accuracy or scale, to see how things fit together and build shared understanding.
  • Workflow Modeling: This describes the flow of operations, decisions made and the criteria required to make them, execution branches and diversions, and so on. These are often drawn as flowcharts.
  • Simulation: Objects and systems can be simulated when it is too expensive, time-consuming, or dangerous to build and exercise them in the real world. A wide variety of simulation techniques may be employed for this research (e.g., different kinds of analog, continuous, discrete-event, and hybrid simulations).
  • Physical Models: When in doubt, build something real and see how it goes.
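
To make the simulation item concrete, here is a tiny discrete-event sketch of a single-server queue (customers arriving at a counter, say). The arrival spacing, service time, and customer count are all invented; the point is the event-queue mechanic of processing events in time order.

```python
import heapq

# Tiny discrete-event simulation: customers arrive every 4 minutes at a
# single server with a fixed 6-minute service time. All figures invented.
events = []  # heap of (time, action, label), popped in time order
for i in range(3):
    heapq.heappush(events, (i * 4, "arrive", f"customer {i}"))

server_free_at = 0
log = []
while events:
    t, action, who = heapq.heappop(events)
    if action == "arrive":
        start = max(t, server_free_at)    # wait if the server is busy
        server_free_at = start + 6        # fixed service time
        heapq.heappush(events, (server_free_at, "depart", who))
    else:
        log.append((t, who))              # record each departure time

print(log)  # [(6, 'customer 0'), (12, 'customer 1'), (18, 'customer 2')]
```

Even this toy version shows queueing behavior emerge: arrivals outpace service, so each successive customer waits longer before departing.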

Examples of prototyping:

  • Wind tunnel testing: This is applied to air and ground vehicles and also structures both singly and in groups. Water-borne vehicles are similarly tested in large tanks.
  • Flight simulators: These can be used to test physical performance characteristics of aircraft and also operational procedures.
  • Full-scale functional prototypes: Entire cable channels and museums are devoted to all the crazy flying aircraft that have been produced to explore various ideas and capabilities.
  • Ergonomic studies: Hand-held objects are often studied and designed with a keen eye toward comfort, usability, and safety.
  • Simulations of border crossings are used to understand ongoing issues and the effects of proposed changes, and also in the process of designing and building new ones.
  • Architects often build building models and landscapes out of foamcore and cardboard so customers can viscerally understand the look and feel of a proposed structure and its environment.
  • Building Information Modeling (BIM) systems like Revit are used to specify the design of a building and its fixtures and infrastructure in great detail. This supports defining bills of material, construction order and requirements, skill needs for all the different components and systems, site preparation and staging, and so on.
  • User interfaces for computer software can be mocked up using tools like Balsamiq, Visio, or UI-builders included in different programming products. They may involve greater or lesser degrees of working functionality.
  • The command module simulator was famously used to identify a procedure that allowed the Apollo 13 spacecraft to be safely shut down and restarted, which allowed the crew to make it back home safely against long odds.

It is important to note that prototypes may turn up unexpected causes and effects, but also might not. Many lessons have had to be learned the hard way. You can't test for effects you don't even know exist. Remember the (first) Tacoma Narrows Bridge, the Hyatt Regency walkway collapse in Kansas City, and the failure of the railroad bridge that led to the formation of the Order of the Engineer.
