A Few Interesting Talks on Architecture and Microservices

These talks were recommended by Chase O’Neill, a very talented colleague from my time at Universal, and proved to be highly interesting. I’ve thought about things like this throughout my career, and they’re obviously more germane given where things are currently heading, so I’m sharing them here.

https://www.youtube.com/watch?v=KPtLbSEFe6c

https://www.youtube.com/watch?v=STKCRSUsyP0

https://www.youtube.com/watch?v=MrV0DqTqpFU

Let me know if you found them as interesting as I did.

Posted in Uncategorized

Using Data In My Framework and In Simulations

I recently wrote about how data is collected and used in the different phases of my business analysis framework. After giving the most recent version of my presentation on the subject I was asked for clarification about how the data is used, so I wanted to write about that today.

I want to start by pointing out that data comes in many forms. It can be highly numeric, which is more the case for simulations and physical systems, and it can be highly descriptive, which is often more the case for business systems. Make no mistake, though, it’s all data. I’ll describe how data came into play in several different jobs I did to illustrate this point.

Sprout-Bauer (now Andritz) (details)

My first engineering job was as a process engineer for an engineering and services firm serving the pulp and paper industry. Our company (which was part of Combustion Engineering at the time, before being acquired by Andritz) sold turnkey mechanical pulping lines based on thermomechanical refiners. I did heat and material balances and drawings for sales proposals, and pulp quality and mill audits to show that we were meeting our quality and quantity guarantees and to serve as a basis for making process improvement recommendations. Data came in three major forms:

  • Pulp Characteristics: Quite a few of these are defined in categories like freeness (Canadian Standard Freeness is approximately the coolest empirical measure of anything, ever!), fiber properties, strength properties, chemical composition, optical properties, and cleanliness. We’d assess the effectiveness of the process by analyzing the progression of about twenty of these properties at various points in the production line(s). I spent my first month in the company’s research facility in Springfield, Ohio learning about the properties they typically used. It seems that a lot of these measures have been automated, which must be really helpful for analysts at the plants. It used to be that I’d go to a plant to draw samples from various points in the process (you’d have to incrementally fill buckets at sample ports about hourly over the course of a day), then dewater them, seal them in plastic bags, label them, and ship them off to Springfield, where the lab techs would analyze everything and report the results. Different species of trees and hard and soft woods required different processing as well, and that was all part of the art and science of designing and analyzing pulping processes. One time we had to send wood chips back in 55-gallon drums. Somehow this involved huge bags and a pulley hanging over the side of a 100-foot-high building. My partners held me by the back of my belt as I leaned out to move the pulley in closer so we could feed a rope through it. So yeah, data.
  • Process volumes and contents: Pulp-making is a continuous process so everything is expressed on a rate basis. For example, if the plant was supposed to produce 280 air dried metric tons per day it might have a process flow somewhere with 30,000 gallons per minute at 27% consistency (the percentage of the mass flow composed of wood fiber with the remainder being steam, a few noncondensable gases, chemicals like liquors and bleaches, and some dirt or other junk). Don’t do the math, I’m just giving examples here. The flow conditions also included temperatures (based on total energy or enthalpy) and pressures, which allowed calculation of the volume flows and thus the equipment and pipe sizing needed to support the desired flow rates. The thermodynamic properties of water (liquid and gaseous) are a further class of data needed to perform these calculations. They’ve been compiled by researchers over the years. The behavior of flow through valves, fittings, and pipe is another form of data that has been compiled over time. (A small calculation sketch follows this list.)
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. Many pieces of equipment came in standard sizes because it was too difficult to make custom-sized versions. This was especially true of pulp refiners, which came in several different configurations. Other items were custom made for each application. Examples of these included conveyors, screw de-watering presses, and liquid phase separators. Some items, like screens and cleaners, were made in standard sizes and various numbers of them were used in parallel to support the desired flow rates. Moreover, the screens and cleaners would often be arranged in multiple stages. For the most part I didn’t calculate flows based on equipment sizes; I calculated them based on the need to produce a certain amount of pulp. The equipment and piping would later be sized to support the specified flows.
  • The fourth item in this three-item list is money. In the end, every designed process had to be analyzed in terms of installed, fixed, and operating costs vs. benefits from sales. I didn’t do those calculations as a young engineer but they were definitely done as we and our customers worked out the business cases. I certainly saw how our proposals were put together and had a decent idea of what things cost. I’d learn a lot more about how those things are done in later jobs.
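
As a minimal sketch of how a consistency figure ties a volumetric flow to a fiber tonnage, here’s a small C++ example. All of the numbers are made up for illustration, and the stock is assumed to have roughly the density of water.

```cpp
#include <iostream>

int main() {
    // Hypothetical numbers for illustration only (not from a real balance).
    double flowGpm      = 1500.0;   // volumetric flow, US gallons per minute
    double consistency  = 0.04;     // fraction of the mass flow that is wood fiber
    double stockDensity = 1.0;      // kg per liter, assumed close to water

    const double litersPerGallon = 3.785;
    const double minutesPerDay   = 1440.0;

    // Total mass flow in kg/min, then the fiber portion of it.
    double massKgPerMin  = flowGpm * litersPerGallon * stockDensity;
    double fiberKgPerMin = massKgPerMin * consistency;

    // Convert to metric tons of fiber per day (ignores the moisture allowance
    // built into "air dried" tons).
    double fiberTonsPerDay = fiberKgPerMin * minutesPerDay / 1000.0;

    std::cout << "Fiber throughput: " << fiberTonsPerDay << " metric tons/day\n";
    return 0;
}
```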

All these data elements have to be obtained through observation, testing, research, and elicitation (and sometimes negotiation), and all must be understood to analyze the process.

Westinghouse Nuclear Simulator Division (details)

Here I leveraged a lot of the experience I gained analyzing fluid flows and doing thermodynamic analyses in the paper industry. Examples of how I/we incorporated thermodynamic properties are here, here, and here. In this case the discovery was done for the modellers in that the elements to be simulated were already identified. This meant that we started with data collection, which we performed in two phases. We began by visiting the plant control room and recording the readings on every dial and indicator and the position of every switch, button, and dial. This gave us an indication of a few of the flows, pressures, and temperatures at different points in the system. The remainder of those values had to be calculated based on the equipment layouts and the properties of the fluids.

  • Flow characteristics: These were mostly based on the physical properties of water and steam but we sometimes had to consider noncondensables, especially when they made up the bulk of the flow, as they did in the building spaces and the offgas system I worked on. We also had to consider concentrations of particulates like boron and radioactive elements. The radiation was tracked as an abstract emittance level that decayed over time. We didn’t worry about the different kinds of radiation and the particles associated with them. (As much as I’ve thought about this work in the years since I did it I find it fascinating that I never really “got” this detail until just now as I’m writing this.) As mentioned above, the thermodynamic properties of the relevant fluids have all been discovered and compiled over the years.
  • Process volumes and contents: The flow rates were crucial; they were driven by pressure differentials and pump characteristics and affected by the equipment the fluid flowed through.
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. We needed to do detailed research through a library of ten thousand documents to learn the dimensions and behavior of all the pipes, equipment items, and even rooms in the containment structure.

Beyond the variables describing process states and equipment characteristics, the simulation equations required a huge number of coefficients. These all had to be calculated from the steady-state values of the different process conditions. I spent so much time doing calculations and updating documents that I found it necessary to create a tool to manage and automate the process.

Another important usage of data was in the interfaces between models. These had to be negotiated and documented, and the different models had to be scheduled to run in a way that would minimize race conditions as each model updated its calculations in real-time.

CIScorp (details)

In this position I did the kind of business process analysis and name-address-phone number-account number programming I’d been trying to avoid, since I was a hard core mechanical engineer, and all. Who knew I’d learn so much and end up loving it? This position’s contrast to most of the others I held in the first half of my career taught me more about performing purposeful business analysis than any other single thing I did. I’m not sure I understood the oeuvre as a whole at the time, but it certainly gave me a solid grounding and a lot of confidence for things I did later. Here I write about how I integrated various insights and experiences over time to become the analyst and designer that I am today.

The FileNet document imaging system is used to scan documents so their images can be moved around almost for free while the original hardcopies are warehoused. We’d do a discovery process to map an organization’s activities, say, the disability underwriting section of an insurance company, to find out what was going on. We interviewed representatives of the groups of workers in each process area to identify each of the possible actions they could take in response to receipt of any type of document or group of documents. This gave us the nouns and verbs of the process. Once we knew that, we’d gather up the adjectives of the process: the data that describe the activities, the entities processed (the documents), and the results generated. We gathered the necessary data mostly through interviews and reviews of historical records.

The first phase of the effort involved a cost-benefit analysis that only assessed the volumes and process times associated with the section’s activities. Since this was an estimate, we collected process times through a minimum of direct observation and through descriptions from SMEs. As a cross-check we reviewed whether our findings made sense in light of what we knew about how many documents were processed daily by each group of workers and the amount of time taken per action. Since the total amount of time spent per day per worker usually came to right around eight hours we assumed our estimates were on target.

The next step was to identify which actions no longer needed to be carried out by workers, since all operations involving the physical documents were automated. We also estimated the time needed for the new actions of scanning and indexing the documents as they arrived. Finally, given assumptions for average pay rates for each class of worker, we were able to calculate the cost of running the As-Is process and the To-Be automated process and estimate the total savings that would be realized. We ended up having a long discussion about whether we’d save one minute per review or two minutes per review of collated customer files by the actual underwriting analysts, which was the most important single activity in the entire process. We ultimately determined that we could make a sufficient economic case for the FileNet solution by assuming a time savings of only one minute per review. The customer engaged two competitors, each of whom performed similar analyses, and our solution was judged to realize the greater net savings, about thirty percent per year on labor costs.
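
As a minimal sketch of the arithmetic behind that kind of economic case, here’s a small C++ example. The volumes and rates are hypothetical stand-ins, not the client’s actual figures.

```cpp
#include <iostream>

int main() {
    // Hypothetical figures; the real volumes and pay rates were client-specific.
    double reviewsPerDay    = 400.0;  // collated files reviewed by underwriters daily
    double minutesSavedEach = 1.0;    // the conservative one-minute assumption
    double loadedHourlyRate = 45.0;   // fully loaded cost per analyst hour, dollars
    double workDaysPerYear  = 250.0;

    double hoursSavedPerYear   = reviewsPerDay * minutesSavedEach * workDaysPerYear / 60.0;
    double dollarsSavedPerYear = hoursSavedPerYear * loadedHourlyRate;

    std::cout << "Hours saved per year:   " << hoursSavedPerYear   << "\n";
    std::cout << "Dollars saved per year: " << dollarsSavedPerYear << "\n";
    return 0;
}
```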

The data items identified and analyzed were similar to those I worked with in my previous positions. They were:

  • Document characteristics: The type of document determined how it needed to be processed. The documents had to be collated into patient files, mostly containing medical records, that would be reviewed and scored. This would determine the overall risk for a pool of employees a potential customer company wanted to provide disability coverage for. The insurer’s analysis would determine whether it would agree to provide coverage and what rate would be charged for that pool.
  • Process volumes and contents: These flows were defined in terms of documents per day per worker and per operation, with the total number arriving for processing each day being known.
  • The number and kind of workers in each department is analogous to the equipment described in the systems above. The groups of workers determined the volume of documents that could be processed and the types of transformations and calculations that could be carried out.

Once the initial phase was completed we examined the documents more closely to determine exactly what information had to be captured from them in order to collate them into files for each employee and how those could be grouped with the correct company pool. This information was to be captured as part of the scanning and indexing operation. The documents would be scanned and automatically assigned a master index number. Then, an index operation would follow which involved reading information identifying an employee so it could be entered into a form on screen. Other information was entered on different screens about the applying company and its employee roster. The scores for each employee file, as assigned by the underwriters, also had to be included. The data items needed to design the user and control screens all had to be identified and characterized.

Bricmont (details)

The work I did at Bricmont was mapped a little bit differently than the work at my previous jobs. It still falls into the three main classifications in a sense but I’m going to describe things differently in this case. For additional background, I describe some of the detailed thermodynamic calculations I perform here.

  • Material properties of metals being heated or even melted: As in previous descriptions, the properties of materials are obtained from published research. Examples of properties determined as a function of temperature are thermal conductivity and specific heat capacity. Examples of properties that remained constant were density, the emissivity of (steel) workpieces, and the Stefan-Boltzmann constant.
  • Geometry of furnaces and metal workpieces being heated: The geometry of each workpiece determines the layout of the nodal network within it. The geometry of the furnace determines how heat, and therefore temperature, is distributed at different locations. The location of workpieces relative to each other determines the amount of heat radiation that can be transferred to different sections of the surface of the workpieces (this obviously doesn’t apply for heating by electrical induction). This determines viewing angles and shadows cast.
  • Temperatures and energy inputs: Energy is transferred from furnaces to workpieces usually by radiative heat transfer, except in the cases where electric induction heating is used. Heat transfer is a function of temperature differential (technically the difference between the fourth power of the absolute temperature of the furnace and the fourth power of the absolute temperature of the workpiece) for radiative heating methods and a function of the electrical inputs minus losses for inductive methods. (A minimal calculation sketch follows this list.)
  • Contents of messages received from and sent to external systems: Information received from external systems included messages about the nature of workpieces or materials loaded into a furnace, the values of instrument readings (especially thermocouples that measure temperature), other physical indicators that result in the movement of workpieces through the furnace, and permissions to discharge heated workpieces to the rolling mill, if applicable. Information forwarded to other systems included messages to casters or slab storage to send new workpieces, messages to the rolling mill about the workpiece being discharged, messages to the low-level control system defining what the new temperature or movement setpoints should be, and messages to higher-level administrative and analytic systems about all activities.
  • Data logged and retrieved for historical analysis: Furnace control systems stored a wide range of operating, event, and status data that could be retrieved for further analysis.
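
As a minimal sketch of the radiative transfer described above, here’s a small C++ example. The temperatures, area, and emissivity are hypothetical, and it uses the simplest small-object-in-a-large-enclosure form, so view factors and shadowing are ignored.

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double sigma = 5.670374419e-8;  // Stefan-Boltzmann constant, W/(m^2 K^4)

    // Hypothetical values for a small steel surface element inside a much larger
    // furnace enclosure, so the exchange reduces to e * sigma * A * (Tf^4 - Tw^4).
    double emissivity   = 0.80;    // steel workpiece surface
    double areaM2       = 0.25;    // exposed area of the surface element, m^2
    double furnaceTempK = 1250.0;  // furnace (surroundings) temperature, K
    double pieceTempK   = 600.0;   // current workpiece surface temperature, K

    double heatFlowW = emissivity * sigma * areaM2
                     * (std::pow(furnaceTempK, 4) - std::pow(pieceTempK, 4));

    std::cout << "Net radiative heat flow to the surface: " << heatFlowW << " W\n";
    return 0;
}
```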

The messages relayed between systems employed a wide variety of inter-process communication methods.

American Auto-Matrix (details)

In this position I worked with low-level controllers that exchanged messages with each other to control HVAC devices, manage other devices and setpoints, and record and analyze historical data. The platforms I worked on were mostly different from those at previous jobs but in principle they did the same things. This was mostly interesting because of the low-level granularity of the controllers used and the variety of communication protocols employed.

Regal Decision Systems and RTR Technologies (details here and here)

The major difference between the work I did with these two companies and all the previous work I’d done is that I switched from doing continuous simulation to discrete-event simulation. I discuss some of the differences here, though I could always go into more detail. At a high level the projects I worked on incorporated the same three major classes of data as what I’ve described above (characteristics of the items being processed, the flow and content of items being processed, and the characteristics of the subsystems where the processing occurs). However, while discrete-event simulation can be deterministic, its real power comes from being able to analyze stochastic processes. This is accomplished by employing Monte Carlo methods. I described how those work in great detail yesterday.

To review, here are the major classes of data that describe a process and are generated by a process:

  • Properties of items or materials being processed
    • physical properties of materials that affect processing
    • information content of items that affect processing
    • states of items or materials that affect processing
    • contents of messages that affect processing
  • Volumes of items or materials being processed
  • Characteristics of equipment or activities doing the processing
  • Financial costs and benefits
  • Output data, items, materials, behaviors, or decisions generated by the process
  • Historical data recorded for later analysis
Posted in Engineering, Tools and methods

Discrete-Event Simulations and Monte Carlo Techniques

“It was smooth sailing” vs. “I hit every stinkin’ red light today!”

Think about all the factors that might affect how long it takes you to drive in to work every day. What are the factors that might affect your commute, bearing in mind that models can include both scheduled and unscheduled events?

  • start time of your trip
  • whether cars pull out in front of you
  • presence of children (waiting for a school bus or playing street hockey)
  • presence of animals (pets, deer, alligators)
  • timing of traffic lights
  • weather
  • road conditions (rain, slush, snow, ice, hail, sand, potholes, stuff that fell out of a truck, shredded tires, collapsed berms)
  • light level and visibility
  • presence of road construction
  • occurrence of accidents
  • condition of roadways
  • condition of alternate routes
  • mechanical condition of car
  • your health
  • your emotional state (did you have a fight with your significant other? do you have a big meeting?)
  • weekend or holiday (you may need to work on a bankers’ holiday that others get off)
  • presence of school buses during the school year, or crossing guards for children walking to school
  • availability of bus or rail alternatives (“The Red Line’s busted? Again?”)
  • distance of trip (you might work at multiple offices or with different clients)
  • timeliness of car pool companions
  • need to stop for gas/coffee/breakfast/cleaning/groceries/children
  • special events or parades (“What? The Indians finally won the Series?”)
  • garbage trucks operating in residential areas

So how would you build such a simulation? Would you try to represent only the route of interest and apply all the data to that fixed route, or would you build a road network of an entire region to drive (possibly) more accurate conditions on the route of interest? (SimCity simulates an entire route network based on trips taken by the inhabitants in different buildings, and then animates moving items proportional to the level of traffic in each segment.)

Now let’s try to classify the above occurrences in a few ways.

  • Randomly generated outcomes may include:
    • event durations
    • process outcomes
    • routing choices
    • event occurrences (e.g., failures, arrivals; often modeled with a Poisson process)
    • arrival characteristics (anything that affects outcomes)
    • resource availability
    • environmental conditions
  • Random values may be obtained by applying methods singly and in combination, which can result in symmetrical or asymmetrical results:
    • data collected from observations
    • continuous vs. discrete function outputs
    • single- and multi-dice combinations
    • range-defined buckets
    • piecewise linear curve fits
    • statistical and empirical functions (the SLX programming language includes native functions for around 40 different statistical distributions)
    • rule-based conditioning of results

Monte Carlo is a famous gambling destination in the Principality of Monaco. Gambling, of course, is all about random events in a known context. Knowing that context — and setting ultimate limits on the size of bets — is how the house maintains its advantage, but that’s a discussion for another day! When applied to simulation, the random context comes into play in two ways. First, the results of individual (discrete) events are randomized, so a range of results are generated as the simulation runs over multiple iterations. The random results are generated based on known distributions of possible outcomes. Sometimes these are made by guesstimating, but more often they are based on data collected from actual observations. (I’ll describe how that’s done, below.) The second way the random context comes into play is when multiple random events are incorporated into a single simulation so their results interact. If you think about it, it wouldn’t be very interesting to analyze a system based on a single random distribution, because the output distribution would be essentially the same as the input distribution. It’s really the interaction of numerous random events that make such analyses interesting.

First, let’s describe how we might collect the data from which we’ll build a random distribution. We’ll start by collecting a number of sample values using some form of the observation technique from the BABOK, ensuring that we capture a sufficient number of values.

What we do next depends on the type of data we’re working with. In the two classic cases we start by arranging the data items in order and figuring out how many occurrences of each kind of value there are. If we’re dealing with a limited number of specific values, examples of which could be the citizenship category of a person presenting themselves for inspection at a border crossing or the number of separate deliveries it will take to fulfill an order, then we just count up the number of occurrences of each measured value. If we’re dealing with a continuous range of values that has a known upper and lower limit, with examples being process times or the total value of an order, then we break the ordered data into “buckets” across intervals chosen to capture the shape of the data accurately. Sometimes data is collected in raw form and then analyzed to see how it should be broken down, while in other cases a judgment is made about how the data should be categorized ahead of time.

The data collection form below shows an example of how continuous data was pre-divided into buckets ahead of time. See the three rightmost columns for processing time. Note further that items that took less than two minutes to process could be indicated by not checking any of the boxes.

In the case where the decision of how to analyze data is made after it’s collected we’ll use the following procedures. If we identify a limited number of fixed categories or values we’ll simply count the number of occurrences of each. Then we’ll arrange them in some kind of order (if that makes sense) and determine the cumulative count and the proportion of each result’s occurrence. A random number (from zero to one) can then be generated against the cumulative distribution, which determines the bucket and the related result.

Given data collected and counted in a spreadsheet, the code could be initialized and run something like the sketch below.
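
This is a minimal C++ version; the outcomes and counts are hypothetical stand-ins for the observed data.

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// One bucket of the empirical distribution: the cumulative proportion up to and
// including this outcome, and the value to return when the draw falls in it.
struct Bucket {
    double cumulative;  // upper edge of this bucket on the 0.0-1.0 scale
    int    value;       // the outcome associated with this bucket
};

// Walk the buckets until the draw falls at or below a cumulative boundary.
int drawFromDistribution(const std::vector<Bucket>& dist) {
    double r = static_cast<double>(std::rand()) / RAND_MAX;
    for (const Bucket& b : dist) {
        if (r <= b.cumulative) return b.value;
    }
    return dist.back().value;  // guard against rounding at the top end
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    // Hypothetical counts, e.g., the number of deliveries needed to fill an order.
    // Counts of 50, 30, 15, and 5 out of 100 observations become the cumulative
    // proportions 0.50, 0.80, 0.95, and 1.00.
    std::vector<Bucket> deliveries = {
        {0.50, 1}, {0.80, 2}, {0.95, 3}, {1.00, 4}
    };

    for (int i = 0; i < 10; ++i)
        std::cout << drawFromDistribution(deliveries) << " ";
    std::cout << "\n";
    return 0;
}
```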

There are always things that could be changed or improved. For example, the data could be sorted in order from most likely to least likely to occur, which would minimize the execution time as the function would generally loop through fewer possibilities. Alternatively, the code could be changed so some kind of binary search is used to find the correct bucket. This would make the function run times highly consistent at the cost of making the code more complex and difficult to read.
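
As a sketch of the binary-search variant, using the same bucket layout as above, std::lower_bound can locate the correct bucket directly:

```cpp
#include <algorithm>
#include <vector>

struct Bucket { double cumulative; int value; };  // same layout as the sketch above

// Binary search: find the first bucket whose cumulative proportion is >= the draw.
int drawWithBinarySearch(const std::vector<Bucket>& dist, double r) {
    auto it = std::lower_bound(dist.begin(), dist.end(), r,
        [](const Bucket& b, double x) { return b.cumulative < x; });
    return (it != dist.end()) ? it->value : dist.back().value;
}
```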

This is pretty straightforward and the same methods could be used in almost any programming language. That said, some languages have specialized features to handle these sorts of operations. Here’s how the same random function declaration would look in GPSS/H, which is something of an odd beast of a language that looks a little bit like assembly in its formatting (though not for its function declarations). GPSS/H is so unusual, in fact, that the syntax highlighter here doesn’t recognize that language and color it consistently.

In this case the 12345 is an example of the function index, the handle by which the function is called. RN1 indicates that the function draws a random value from 0.0 to 0.999999 and compares it against the first value in each pair; when the correct bucket is found, it returns the second value. D0010 indicates that the array has ten elements. The defining values are given as a series of x,y value pairs separated by slashes. The first number of each pair is the independent value while the second is the dependent value.

The language defines five different types of functions. Some function types limit the output values to the minimum or maximum if the input is outside the defined range. I’m not doing that in my example functions because I’ve limited the possible input values.

So that’s the simple case. Continuous functions do mostly the same thing but also perform a linear interpolation between the endpoints of each bucket’s dependent value. Let’s look at the following data set and work through how we might condition the information for use.

In this case we just divide the range of observed results (18-127 seconds) into 20 buckets and see how many results fall into each range. We then compute the cumulative counts and proportions as before, though we need 21 values so we have upper and lower bounds for all 20 buckets for the interpolation operation. If longer sections of the results curve appear to be linear then we can omit some of the values in the actual code implementation. If we do this then we want to fit a piecewise linear curve to the data. The red line graph superimposed on the bar chart shows that the fit including only the items highlighted in orange in the table is fairly accurate across items 5 through 12, 12 through 18, and 18 through 20.

The working part of the C++ code would look like this:
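
A minimal sketch of that working part, with hypothetical breakpoints standing in for the fitted values in the table:

```cpp
#include <cstddef>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// A point on the cumulative curve: cumulative proportion (x) and process time (y).
struct CdfPoint {
    double cumulative;  // runs from 0.0 up to 1.0
    double seconds;     // observed value at that cumulative proportion
};

// Draw a uniform random number and interpolate linearly between the two
// cumulative points that bracket it, returning a continuous process time.
double drawContinuous(const std::vector<CdfPoint>& cdf) {
    double r = static_cast<double>(std::rand()) / RAND_MAX;
    for (std::size_t i = 1; i < cdf.size(); ++i) {
        if (r <= cdf[i].cumulative) {
            double span = cdf[i].cumulative - cdf[i - 1].cumulative;
            double frac = (span > 0.0) ? (r - cdf[i - 1].cumulative) / span : 0.0;
            return cdf[i - 1].seconds + frac * (cdf[i].seconds - cdf[i - 1].seconds);
        }
    }
    return cdf.back().seconds;
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    // Hypothetical piecewise linear fit over the 18-127 second range described
    // above; only a handful of points are kept where the curve is nearly linear.
    std::vector<CdfPoint> processTime = {
        {0.00, 18.0}, {0.25, 42.0}, {0.60, 65.0}, {0.90, 98.0}, {1.00, 127.0}
    };

    for (int i = 0; i < 5; ++i)
        std::cout << drawContinuous(processTime) << " sec\n";
    return 0;
}
```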

Looking at the results of a system simulated in this way you can get a range of answers. Sometimes you get a smooth, roughly bell-shaped curve with a long tail toward longer durations. This would indicate that much of the time your commute fell in a typical window but was occasionally a little shorter and sometimes longer, sometimes by a lot. Sometimes the results can be discontinuous, meaning you occasionally get a cluster of really good or really bad overall results if things stack up just right. Some types of models are so granular that the variations largely cancel each other out, so we didn’t need to run many iterations. This seemed to be the case with the detailed traffic simulations we built and ran for border crossings. In other cases, like the more complicated aircraft support logistics scenarios we analyzed, the results could be a lot more variable. This meant we had to run more iterations to be sure we were generating a representative output set.

Interestingly, exceptionally good and exceptionally poor systemic results can be driven by the exact same data for the individual events. It’s just how things stack up from run to run that makes a given iteration good or bad. If you are collecting data during a day when an exceptionally good or bad result was obtained in the real world, the granular information obtained should still give you the more common results in the majority of cases. This is a very subtle thing that took me a while to understand. And, as I’ve explained elsewhere, there are a lot of complicated things a practitioner needs to understand to be able to do this work well. The teams I worked on had a lot of highly experienced programmers and analysts and we had some royal battles over what things meant and how things should have been done. In the end I think we ended up with a better understanding and some really solid tools and results.

Posted in Simulation

Combined Survey Results (late March 2019)

The additional survey results from yesterday are included in the combined results here.

List at least five steps you take during a typical business analysis effort.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance
  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability
  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings
  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance
  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders ensersy requirements
  8. other notes
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps
    • no standard
  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope
  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing
  1. planning
  2. elicitation
  3. requirements
  4. specification writing
  5. QA
  6. UAT
  1. identify the problem
  1. studying subject matter
  2. planning
  3. elicitation
  4. functional specification writing
  5. documentation
  1. identify stakeholders
  2. assess working approach (Waterfall, Agile, Hybrid)
  3. determine current state of requirements and maturity of project vision
  4. interview stakeholders
  5. write and validate requirements
  1. problem definition
  2. value definition
  3. decomposition
  4. dependency analysis
  5. solution assessment
  1. process mapping
  2. stakeholder interviews
  3. write use cases
  4. document requirements
  5. research
  1. listen – to stakeholders and customers
  2. analyze – documents, data, etc. to understand things further
  3. repeat back what I’m hearing to make sure I’m understanding correctly
  4. synthesize – the details
  5. document – as needed (e.g., Visio diagrams, PowerPoint decks, Word, tools, etc.)
  6. solution
  7. help with implementing
  8. assess and improve – if/as needed
  1. understand the problem
  2. understand the environment
  3. gather the requirements
  4. align with IT on design
  5. test
  6. train
  7. deploy
  8. follow-up
  1. watch how it is currently done
  2. listen to clients’ pain points
  3. define goals of project
  1. critical path tasks
  2. pros/cons of tasks
  3. impacts
  4. risks
  5. goals
  1. discovery – high level
  2. analysis / evaluation
  3. presentation of options
  4. requirements gathering
  5. epic / feature / story definition
  6. prioritization
  1. who is driving the requirements?
  2. focus on what is needed for project
  3. who is going to use the product?
  1. elicit requirements
  2. hold focus groups
  3. create mock-ups
  4. test
  5. write user stories
  1. analyze
  2. document process
  3. identify waste (Lean)
  4. communicate
  5. document plan / changes
  1. meeting
  2. documentation
  3. strategy
  4. execution plan
  5. reporting plan
  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream) – how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • A description of “weird” usually goes along with a particular person I am working with rather than a project. Some people like things done a certain way or they need things handed to them or their ego stroked. I accommodate all kinds of idiosyncrasies so that I can get the project done on time.
  • adjustments in project resources
  • after initial interview, began prototyping and iterated through until agreed upon design
  • built a filter
  • create mock-ups and gather requirements
  • create strategy to hit KPIs
  • data migration
  • data dictionary standardization
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • design sprint
  • design thinking
  • developers and I create requirements as desired
  • did my own user experience testing
  • document requirements after development work has begun
  • documented non-value steps in a process new to me
  • explained project structure to stakeholders
  • For a client who was unable to clearly explain their business processes and where several SMEs had to be consulted to form the whole picture, I drew workflows to identify inputs/outputs, figure out where the gaps in our understanding existed, and identify the common paths and edge cases.
  • got to step 6 (building) and started back at step 1 multiple times
  • guided solutioning
  • identified handoffs between different contractors
  • identify end results
  • interview individuals rather than host meetings
  • investigate vendor-provided code for business process flows
  • iterative development and delivery
  • made timeline promises to customers without stakeholder buy-in/signoff
  • make executive decisions without stakeholder back-and-forth
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved heavy equipment
  • moved servers from one office to another
  • observe people doing un-automated process
  • personally evaluate how committed mgt was to what they said they wanted
  • phased delivery / subject areas
  • physically simulate each step of an operational process
  • process development
  • regular status reports to CEO
  • resources and deliverables
  • reverse code engineering
  • review production incident logs
  • showed customer a competitor’s website to get buy-in for change
  • simulation
  • start with techniques from junior team members
  • starting a project without getting agreed funding from various units
  • statistical modeling
  • surveys
  • team up with PM to develop a plan to steer the sponsor in the right direction
  • town halls
  • track progress in PowerPoint because the sponsor insisted on it
  • train the team how to read use case diagrams
  • translating training documents into Portuguese
  • travel to affiliate sites to understand their processes
  • understanding cultural and legal requirements in a foreign country
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings
  • worked with a mechanic
  • write requirements for what had been developed

Name three software tools you use most.

  • Excel (27)
  • Visio (18)
  • Jira (17)
  • Word (15)
  • Confluence (8)
  • Outlook (7)
  • PowerPoint (7)
  • SharePoint (6)
  • Azure DevOps (5)
  • Google Docs (4)
  • MS Team Foundation Server (4)
  • email (3)
  • MS Teams (3)
  • Draw.io (2)
  • MS Dynamics (2)
  • MS Office (2)
  • MS Visual Studio (2)
  • Notepad (2)
  • OneNote (2)
  • Siebel (2)
  • Slack (2)
  • SQL Server (2)
  • Version One (2)
  • Adobe Reader (1)
  • all MS products (1)
  • ARC / Knowledge Center(?) (Client Internal Tests) (1)
  • Balsamiq (1)
  • Basecamp (1)
  • Blueprint (1)
  • Bullhorn (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Enbevu(?) (Mainframe) (1)
  • Enterprise Architect (1)
  • Gephi (dependency graphing) (1)
  • Google Calendar (1)
  • Google Drawings (1)
  • illustration / design program for diagrams (1)
  • iRise (1)
  • Kingsway Soft (1)
  • Lucid Chart (1)
  • LucidChart (1)
  • Miro Real-Time Board (1)
  • MS Office tools (1)
  • MS Project (1)
  • MS Word developer tools (1)
  • NUnit (1)
  • Pendo (1)
  • Power BI (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • RoboHelp (1)
  • Scribe (1)
  • Scrumhow (?) (1)
  • Skype (1)
  • SnagIt (1)
  • SQL (1)
  • Tableau (1)
  • Visible Analyst (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • interviews (4)
  • communication (3)
  • brainstorming (2)
  • meetings (2)
  • process mapping (2)
  • prototyping (2)
  • relationship building (2)
  • surveys (2)
  • wireframing (2)
  • “play package” (1)
  • 1-on-1 meetings to elicit requirements (1)
  • active listening (1)
  • analysis (1)
  • analyze audience (1)
  • apply knowledge of psychology to figure out how to approach the various personalities (1)
  • business process analysis (1)
  • business process modeling (1)
  • calculator (1)
  • change management (1)
  • charting on whiteboard (1)
  • coffees with customers (1)
  • coffees with teams (1)
  • collaboration (1)
  • conference calls (1)
  • conflict resolution and team building (1)
  • costing out the requests (1)
  • critical questioning (1)
  • critical questioning (ask why five times), funnel questioning (1)
  • data analysis (1)
  • data modeling (1)
  • decomposition (1)
  • design thinking (1)
  • develop scenarios (1)
  • development efforts (1)
  • diagramming/modeling (1)
  • document analysis (1)
  • documentation (1)
  • documenting notes/decisions (1)
  • drinking (1)
  • elicitation (1)
  • expectation level setting (1)
  • face-to-face technique (1)
  • facilitation (1)
  • fishbone diagram (1)
  • Five Whys (1)
  • focus groups (1)
  • handwritten note-taking (1)
  • hermeneutics / interpretation of text (1)
  • impact analysis (1)
  • individual meetings (1)
  • informal planning poker (1)
  • initial mockups / sketches (1)
  • interview (1)
  • interview end user (1)
  • interview stakeholders (1)
  • interview users (1)
  • interviewing (1)
  • JAD sessions (Joint Application Development Sessions) (1)
  • job shadowing (1)
  • listening (1)
  • lists (1)
  • meeting facilitation (prepare an agenda, define goals, manage time wisely, ensure notes are taken and action items documented) (1)
  • mind mapping (1)
  • notes (1)
  • note-taking (1)
  • observation (1)
  • organize (1)
  • paper (1)
  • paper easels (1)
  • pen and paper (1)
  • phone calls and face-to-face meetings (1)
  • Post-It notes (Any time of planning or breaking down of a subject, I use different colored Post-Its, writing with a Sharpie, on the wall. This allows me to physically see an idea from any distance. I can also move and categorize at will. When done, take a picture.) (1)
  • prioritization (MoSCoW) (1)
  • process decomposition (1)
  • process design (1)
  • process flow diagrams (1)
  • process modeling (1)
  • product vision canvas (1)
  • prototyping (can be on paper) (1)
  • recognize what are objects (nouns) and actions (verbs) (1)
  • requirements elicitation (1)
  • requirements meetings (1)
  • requirements verification and validation (1)
  • requirements workshop (1)
  • responsibility x collaboration using index cards (1)
  • rewards (food, certificates) (1)
  • Scrum Ceremonies (1)
  • Scrums (1)
  • shadowing (1)
  • SIPOC (1)
  • sketching (1)
  • spreadsheets (1)
  • stakeholder analysis (1)
  • stakeholder engagement (1)
  • stakeholder engagement – visioning to execution and post-assessment (1)
  • stakeholder interviews (1)
  • swim lanes (1)
  • taking / getting feedback (1)
  • taking notes (1)
  • test application (1)
  • training needs analysis (1)
  • use paper models / process mapping (1)
  • user group sessions (1)
  • user stories (1)
  • visual modeling (1)
  • walking through client process (1)
  • whiteboard diagrams (1)
  • whiteboard workflows (1)
  • whiteboarding (1)
  • whiteboards (1)
  • workflows (1)
  • working out (1)
  • workshops (1)

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • add enhancements to work flow app
  • adding feature toggles for beta testing
  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual form with a workflow
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate and ease reporting (new tool)
  • automate highly administrative, easily repeatable processes which have wide reach
  • automate manual process
  • automate new process
  • automate risk and issue requirements
  • automate the contract management process
  • automate the process of return goods authorizations
  • automate workflow
  • automate workflows
  • automation
  • block or restore delivery service to areas affected by disasters
  • bring foreign locations into a global system
  • build out end user-owned applications into IT managed services
  • business process architecture
  • clear bottlenecks
  • consolidate master data
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • data change/update
  • data migration
  • design processes
  • develop a new process to audit projects in flight
  • develop an interface between two systems
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • develop new software
  • document current inquiry management process
  • enhance current screens
  • enhance system performance
  • establish standards for DevOps
  • establish vision for various automation
  • I work for teams implementing Dynamics CRM worldwide. I specialize in data migration and integration.
  • implement data interface with two systems
  • implement new software solution
  • implement software for a new client
  • implement vendor software with customizations
  • improve a business process
  • improve system usability
  • improve the usage of internal and external data
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • integrate a new application with current systems/vendors
  • maintain the MD Product Evaluation List (online)
  • map geographical data
  • merge multiple applications
  • migrate to a new system
  • move manual Excel reports online
  • new functionality
  • process data faster
  • process HR data and store records
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide business recommendations
  • provide new functionality
  • recover fuel-related cost fluctuations
  • redesign
  • redesign a system process to match current business needs
  • reduce technical debt
  • re-engineer per actual user requirements
  • reimplement solution using newer technology
  • replace current analysis tool with new one
  • replace legacy system
  • replace manual tools with applications
  • replatform legacy system
  • rewrite / redesign screens
  • simplify / redesign process
  • simplify returns for retailer and customer
  • standardize / simplify a process or interface
  • system integration
  • system integration / database syncing
  • system performance improvements
  • system-to-system integration
  • technical strategy for product
  • transform the customer experience (inside and outside)
  • UI optimization
  • update a feature on mobile app
  • update the e-commerce portion of a website to accept credit and debit cards
Posted in Tools and methods

A Simulationist’s Framework for Business Analysis: Round Five

Today I gave this talk at the Project Summit – Business Analyst World conference in Orlando. The slides for the presentation are at:

http://rpchurchill.com/presentations/zSimFrameForBA_PS-BAW_Orlando/SimFrameForBA.html

In this version I furthered the development of my concept of the Unified Theory of Business Analysis. I also collected more survey responses, the results of which are reported below.

List at least five steps you take during a typical business analysis effort.

  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream) – how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • data migration
  • did my own user experience testing
  • got to step 6 (building) and started back at step 1 multiple times
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved servers from one office to another
  • process development
  • showed customer a competitor’s website to get buy-in for change

Name three software tools you use most.

  • Visio (4)
  • Excel (3)
  • Jira (3)
  • MS Teams (3)
  • PowerPoint (3)
  • Draw.io (2)
  • Word (2)
  • Balsamiq (1)
  • Google Docs (1)
  • iRise (1)
  • Lucid Chart (1)
  • Miro Real-Time Board (1)
  • MS Office (1)
  • Pendo (1)
  • Siebel (1)
  • Slack (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • brainstorming
  • brainstorming
  • business process modeling
  • charting on whiteboard
  • document analysis
  • interview
  • interviews
  • job shadowing
  • mind mapping
  • note-taking
  • paper easels
  • product vision canvas
  • requirements elicitation
  • requirements workshop
  • SIPOC
  • surveys
  • taking / getting feedback
  • walking through client process
  • whiteboards
  • workshops

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • adding feature toggles for beta testing
  • automate manual process
  • automate risk and issue requirements
  • enhance current screens
  • new functionality
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide new functionality
  • replace legacy system
  • rewrite / redesign screens
  • system performance improvements
  • system-to-system integration
  • UI optimization
Posted in Tools and methods

What Do I Mean By “Solve The Problem Abstractly?”

Requirements and Design Phases

Looking at the phases I’ve described in my business analysis framework, I wanted to describe the major difference between the requirements and design steps. The artifacts created as part of these phases, which usually include documentation but can also include prototypes, can vary greatly. I’ve seen these steps take a lot of different forms and I’ve certainly written and contributed to a lot of documents and prototypes that looked very, very different.

As I’ve described in my presentations, the phases in my framework are:

  • Project Planning: Planning the engagement, what will be done, who will be involved, resource usage, communication plan, and so on.
  • Intended Use: The business problem or problems to be solved and what the solution will try to accomplish.
  • Assumptions, Capabilities, Limitations, and Risks & Impacts: Identifying what will be included and omitted, what assumptions might be made in place of hard information that might not be available, and the consequences of those choices.
  • Conceptual Model: The As-Is state. Mapping of the existing system as a starting point, or mapping a proposed system as part of requirements and design for new projects.
  • Data Sources: Identifying sources of data that are available, accurate, and authoritative.
  • Requirements: The Abstract To-Be state. Description of the elements and activities to be represented, the data needed to represent them, and any meta-information or procedures needed for support.
  • Design: The Concrete To-Be state. Description of how the identified requirements are to be instantiated in a working solution (either as a new system or as a modification to an existing system) and the plan for implementation and transition.
  • Implementation: The phase where the solution is actually implemented. In a sense, this is where the actual work gets done.
  • Test Operation and Usability: Verification. Determining whether the mechanics of the solution work as designed.
  • Test Outputs and Suitability for Purpose: Validation. Determining whether the properly functioning system solves the actual problem it was intended to solve.
  • Acceptance: Accreditation. Having the customer (possibly through an independent agent) decide whether they will accept the solution provided.

The requirements and design phases are sometimes combined by practitioners, but I want to discuss how they can and should be differentiated. Also, given the potentially amorphous and iterative nature of this process, I’ll discuss how the requirements phase interacts with and overlaps the conceptual model and data sources phases.

As you can see from the description of the phases above, I refer to the requirements phase as coming up with an abstract solution to the problem under analysis. People have a lot of different conceptions of what this can be. For example, it can contain written descriptions of actions that need to be supported. It can be written solely in the form of user stories. My preference is to begin with some form of system diagram or process map to provide a base to work from and to leverage the idea that a picture is worth a thousand words. That is to be followed by descriptions of all events, activities, messages, entities, and data items that are involved.

Data

Even more specifically, the nature of all the data items should be defined. Six Sigma methodologies define nine different characteristics that can be associated with each data item. They can be looked up easily enough. I prefer to think about things this way (a small sketch of how these characteristics might be recorded follows the list):

  • Measure: A label for the value describing what it is or represents
  • Type of Data:
    • numeric: intensive value (temperature, velocity, rate, density – characteristic of material that doesn’t depend on the amount present) vs. extensive value (quantity of energy, mass, count – characteristic of material that depends on amount present)
    • text or string value (names, addresses, descriptions, memos, IDs)
    • enumerated types (color, classification, type)
    • logical (yes/no, true/false)
  • Continuous or Discrete: most numeric values are continuous but counting values, along with all non-numeric values, are discrete
  • Deterministic vs. Stochastic: values intended to represent specific states (possibly as a function of other values) vs. groups or ranges of values that represent possible random outcomes
  • Possible Range of Values: numeric ranges or defined enumerated values, along with format limitations (e.g., credit card numbers, phone numbers, postal addresses)
  • Goal Values: higher is better, lower is better, defined/nominal is better
  • Samples Required: the number of observations that should be made to obtain an accurate characterization of possible values or distributions
  • Source and Availability: where and whether the data can be obtained and whether assumptions may have to be made in its absence
  • Verification and Authority: how the data can be verified (for example, data items provided by approved individuals or organizations may be considered authoritative)
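
Here is a minimal sketch, in Python, of how one data-item definition covering the characteristics above might be recorded. The field and example names are my own, hypothetical choices, not a prescribed format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataItemDefinition:
    """Hypothetical record for one data item, mirroring the characteristics above."""
    measure: str                            # label for what the value represents
    data_type: str                          # "numeric", "text", "enumerated", or "logical"
    continuous: bool                        # False for counts and all non-numeric values
    stochastic: bool                        # True if the value represents a random outcome
    allowed_range: Optional[tuple] = None   # e.g., (0.0, 240.0) for a time in minutes
    allowed_values: Optional[list] = None   # e.g., ["red", "yellow", "green"]
    goal: Optional[str] = None              # "higher", "lower", or "nominal"
    samples_required: Optional[int] = None  # observations needed to characterize it
    source: str = ""                        # where and whether the data can be obtained
    authority: str = ""                     # who or what verifies the value

# Example: the time an item spends being processed at one station
processing_time = DataItemDefinition(
    measure="item processing time (minutes)",
    data_type="numeric",
    continuous=True,
    stochastic=True,
    allowed_range=(0.0, 240.0),
    goal="lower",
    samples_required=30,
    source="time studies or system logs",
    authority="operations manager sign-off",
)
```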

These identified characteristics provide guidance to the participants in the design and implementation phases. Many of the data items may be identified as part of the conceptual model phase. These are the items associated with the As-Is state (for cases where there is an existing system or process to analyze). New data items will usually be identified as part of the requirements phase, both for modified parts of the original system and for the elements needed to control them. Data items may also be removed during the transition from the conceptual model to the requirements if an existing system is being pruned or otherwise rearranged or streamlined.

I wrote here about one of my college professor’s admonitions that you need to solve the problem before you plug the numbers in. He described how students would start banging away on their hand-held calculators when they got lost (yeah, I’m that old!), as if that was going to help them in some way. He said that any results obtained without truly understanding what was going on were only going to lead to further confusion. The students needed to fully develop the equations needed to describe the system they were analyzing (he was my professor for Physics 3 in the fall of my sophomore year), and simplify and rearrange them until the desired answer stood alone on one side of the equals sign. Then and only then should the given values be plugged into the variables on the other side of the equals sign, so the numeric answer could be generated. Problems can obviously involve more than single values for answers but the example is pretty clear.

So, the requirements should identify the data items that need to be included to describe and control the solution. The design might then define the specific forms those representations will take. Those definitions can include the specific form of the variables (in terms of type and size or width) appropriate for the computer language or tool (e.g., database implementation) in which the solution (or the computer-based part of it) will be implemented. Those details definitely must be determined for the implementation.

The requirements descriptions should also include how individual data items should naturally be grouped. This includes groupings for messages, records for storage and retrieval, and associations with specific activities, entities, inputs, and outputs.

Contexts

An important part of what must be captured and represented is the information needed for the user to interact with and control the system. The conceptual model, particularly when a simulation is involved, mostly contains information about the process itself, but can clearly simulate a limited scope of user interactions. The requirements phase is where it’s most important to incorporate all contexts of user behavior and interactions.

The two major contexts of user interaction involve manipulations of data items (typically CRUD operations) and initiation of events (typically non-CRUD operations). Another contextual differentiation is the operating mode of a system. Modes can involve creation and editing, running, maintenance, analysis, and deployment, among others.
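
As a small, hypothetical illustration (the names are invented, not drawn from any particular system), the two interaction contexts and the operating modes could be enumerated separately so every screen or command can be classified against them:

```python
from enum import Enum

class CrudOperation(Enum):
    """Manipulations of data items."""
    CREATE = "create"
    READ = "read"
    UPDATE = "update"
    DELETE = "delete"

class SystemEvent(Enum):
    """Initiations of events (non-CRUD operations)."""
    START_RUN = "start run"
    PAUSE_RUN = "pause run"
    GENERATE_REPORT = "generate report"

class OperatingMode(Enum):
    """Modes a system can be operated in."""
    EDIT = "creation and editing"
    RUN = "running"
    MAINTENANCE = "maintenance"
    ANALYSIS = "analysis"
    DEPLOYMENT = "deployment"

# A requirement can then state which operations and events must be
# available to which users in which modes.
```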

Skills Required for Different Phases

Identification of activities and values can be performed by anyone with good analysis skills, though it always helps to also have architecture and implementation skills. Business analysts should have the requisite skills to carry out all actions of the requirements phase.

The design phase is where practitioners with specific skills start to be needed. The skills needed are driven by the environment in which the solution will be implemented. Solutions come in many shapes and sizes. Processes for serving fancy coffees, making reservations at hospitality and entertainment venues, and manufacturing hand tools can be very different, and the skills needed to design solutions can vary widely.

A standard, BABOK-based business analyst should be able to analyze and design a process for serving customers directly in a store. Designing a computer-based solution will require different skills based on the scope, scale, and components of the solution envisioned. A solution based on a single desktop might require basic programming experience. A solution based on peer-to-peer networked computers might require experience in programming, inter-process communication, and real-time considerations. A simple, web-based system might require knowledge of basic web programming, different levels of understanding of tools like WordPress, or other automated web-builder tools. An enterprise-level, web-based system might require knowledge of DevOps, server virtualization, cloud computing, and clustering. People mostly seem to refer to this skillset when they use the term solutions architect, though I interpret this term more widely. Designing a manufacturing system might require knowledge of engineering, specific physical processes, and non-trivial knowledge of a wide range of computing subjects. A knowledge of simulation might be helpful for a lot of different solutions.

No matter what skills are required or thought to be required by the design practitioners, the business analyst needs to be able to work closely with them to serve as an effective bridge to business customers. I’m certainly capable of creating designs in most if not all of these contexts, even if I don’t have some of the specific implementation skills. No one knows everything; what’s important is to know what’s needed and how to work with and organize the requisite practitioners.

The skills required for the implementation phase are definitely more specific based on the specific nature of the solution. A business analyst needs to be able to communicate with all kinds of implementation practitioners (in both directions!) in order to serve as an effective liaison between those implementors and the business customers they serve.

Summary

Solving the problem abstractly, to me, means writing a comprehensive set of requirements that facilitates the creation of an effective design. The requirements are the abstract representation of the To-Be state while the design is the concrete representation of the To-Be state. It describes what’s actually going to be built. The implementation results in the actual To-Be state.

Posted in Tools and methods | Tagged , , , | Leave a comment

Unified Theory of Business Analysis: Part Three

How The Most Commonly Used Software Tools Apply to the Solution Effort and the Engagement Effort

Continuing last week’s discussions I wanted to analyze how business analysts tend to apply their favorite software tools.

  • Excel (24)

    Microsoft Excel is such a general purpose tool that it can be used to support almost any activity. It is highly useful for manipulating data and supporting calculations for solutions but is equally useful for managing schedule, cost, status, and other information as part of any engagement. I’ve been using spreadsheets since the 80s and I know I’ve used them extensively for both kinds of efforts.

  • Jira (14)

    While Jira and related systems (e.g., Rally, which surprisingly never came up in my survey even once) are used to manage information about solutions, it’s less about the solutions themselves than about keeping it all straight and aiding in communication and coordination. As such, I consider it to be almost entirely geared to support engagement efforts.

  • Visio (14)

    Visio could be used for diagramming work breakdown structures, critical paths, organizational arrangements, and so on, but it seems far more directed to representing systems and their operations as well as solutions and their designs. Therefore I classify Visio primarily as an aid to solution efforts.

  • Word (13)

    Like Excel, Microsoft Word is another general purpose tool that can be used to support both solution efforts and engagement efforts.

  • Confluence (8)

    Confluence is a bit of an odd duck. It’s really good for sharing details about rules, discovered operations and data, and design along with information about people and project statuses. It suffers from the weakness that the information entered can be difficult to track down unless some form of external order is imposed. The tool can readily be used to support both kinds of efforts.

  • Outlook (7)

    Microsoft Outlook is a general communication tool that manages email, meetings, facilities, and personnel availability. It is most often leveraged to support the engagement effort.

  • SharePoint (6)

    SharePoint is another Microsoft product that facilitates sharing of information among teams, this time in the form of files and notifications. This makes it most geared towards supporting engagement efforts.

  • Azure DevOps (5)

    I’m not very familiar with this tool, having only been exposed to it during a recent presentation at the Tampa IIBA Meetup. It seems to me that this product is about instantiating solutions, while its sister product, Visual Studio Team Services (VSTS), is meant to serve the engagement effort. I would, of course, be happy to hear different opinions on this subject.

  • Team Foundation Server (4)

    I’m mostly classifying this as supporting the engagement effort, since it seems to be more about communication and coordination, but to the degree that it supports source code directly it might have to be considered as supporting solution efforts as well. Like I’ve said, feel free to pipe in with your own opinion.

  • PowerPoint (4)

    I’ve used PowerPoint for its effective diagramming capabilities, which sometimes allow you to do things that are more difficult to do in other graphics packages like Visio. That said, as primarily a presentation and communications tool, I think it should primarily be classified as supporting engagements.

  • Email (3)

    Email is a general form of communication and coordination and is definitely most geared toward engagement efforts.

  • Google Docs (3)

    These tools provide analogs to the more general Microsoft office suite of tools and should support both solution and engagement efforts. However, I’m thinking these are more generally used to support the engagement side of things.

  • MS Dynamics (2)

    These tools seem to be mostly about supporting engagements, although the need to customize them for specific uses may indicate that it’s also something of a solution in itself.

  • Visual Studio (2)

    Any tools meant to directly manipulate source code must primarily be used to support solution efforts.

  • Notepad (2)

    This general purpose tool can be used for almost anything, and is thus appropriate for supporting both solution and engagement efforts.

  • OneNote (2)

    This is another very generalized Microsoft tool that facilitates sharing of information that can be used to support solution and engagement efforts.

  • SQL Server (2)

    SQL Server is almost always part of a solution.

The survey respondents had identified 38 other software tools at the time of this writing, none of which were mentioned more than once. They were mostly variations on the tools discussed in detail here, and included tools for diagramming, communication and coordination, and analysis. A small number of explicit programming tools were listed (e.g., Python, R) along with some automated testing tools that are usually the province of implementation and test practitioners. It’s nice to see BAs that have access to a wider range of skills and wear multiple hats.

Here’s the summary of how I broke things down. Please feel free to offer suggestions for how you might classify any of these differently.

Software Tool             Survey Count   Effort Type
Excel                     24             Both
Jira                      14             Engagement
Visio                     14             Solution
Word                      13             Both
Confluence                 8             Both
Outlook                    7             Engagement
SharePoint                 6             Engagement
Azure DevOps               5             Solution
Team Foundation Server     4             Engagement
PowerPoint                 4             Engagement
Email                      3             Engagement
Google Docs                3             Engagement
MS Dynamics                2             Engagement
Visual Studio              2             Solution
Notepad                    2             Both
OneNote                    2             Both
SQL Server                 2             Solution
Posted in Management | Tagged , , | Leave a comment

Unified Theory of Business Analysis: Part Two

How The Fifty Business Analysis Techniques Apply to the Solution Effort and the Engagement Effort

Yesterday I kicked off this discussion by clarifying the difference between the solution effort and the engagement effort. Again, the solution effort involves the work to analyze, identify, and implement the solution. Note that the solution should include not just the delivered operational capability but also the means of operating and governing it going forward. The engagement effort involves the meta-work needed to manage the process of working through the different phases. An engagement can be a project of fixed scope or duration or it can be an ongoing program (consisting of serial or parallel projects) or maintenance effort (like fixing bugs using a Kanban methodology). It’s a subtle difference, as I described, but the discussion that follows should make the difference more clear.

I want to do so by describing how the fifty business analysis techniques defined in the BABOK (chapter ten in the third edition) relate to the solution and engagement efforts. I want to make it clear that as a mechanical and software engineer, I’m used to doing the analysis to identify the needs, doing the discovery, collecting the data, and defining the requirements; managing the process from beginning to end; and then doing the implementation, test, and acceptance. A lot of BAs don’t have the experience of doing the actual implementations. To me it’s all part of the same process.

The definitions for each business analysis technique are taken verbatim from the BABOK. The descriptions of the uses to which they can be put within the context of my analytical framework are my own. I also refer to different kinds of model components as I described here.

  1. Acceptance and Evaluation Criteria: Acceptance criteria are used to define the requirements, outcomes, or conditions that must be met in order for a solution to be considered acceptable to key stakeholders. Evaluation criteria are the measures used to assess a set of requirements in order to choose between multiple solutions.

    An acceptance criterion could be the maximum processing time of an item going through a process. An evaluation criterion might be that the item must exit a process within 22 minutes at least 85 percent of the time. Similarly, the processing cost per item might need to be less than US $8.50. These criteria can apply to individual stations or a system as a whole (e.g., a document scanning operation could be represented by a single station while a group insurance underwriting evaluation process would be made up of a whole system of stations, queues, entries, and exits).
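
    To make the 22-minute, 85-percent example concrete, here is a minimal check of observed processing times against that evaluation criterion (the sample times are invented for illustration):

```python
def meets_criterion(times_min, limit_min=22.0, required_fraction=0.85):
    """True if at least the required fraction of items finished within the time limit."""
    within = sum(1 for t in times_min if t <= limit_min)
    return within / len(times_min) >= required_fraction

# Hypothetical observed processing times, in minutes
times = [12.5, 18.0, 21.0, 25.5, 19.2, 30.1, 16.7, 20.9, 14.3, 22.0]
print(meets_criterion(times))   # False: only 8 of 10 items (80%) met the 22-minute limit
```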

    Identifying the acceptance criteria is part of the solution effort. Knowing this has to be done is part of the engagement effort. Note that this will be the case for a lot of the techniques.

  2. Backlog Management: The backlog is used to record, track, and prioritize remaining work items.

    The term backlog is usually used in a Scrum context today, but the idea of lists of To-Do items, either as contract requirements or deficiency punch lists, has been around approximately forever. I’d bet my left arm they had written punch lists when they were building the pyramids in Egypt.

    Managing the backlog is primarily part of the engagement effort. Doing the work of the individual backlog items is part of the solution effort.

  3. Balanced Scorecard: The balanced scorecard is used to manage performance in any business model, organizational structure, or business process.

    This is a technique only a consultant could love. In my opinion it’s just a made up piece of marketing twaddle. Like so many other techniques, it’s just a formalized way of making sure you do what you should be doing anyway.

    OK, taking off my cranky and cynical hat I’ll say that techniques like these exist because somebody found them useful. While it’s clear that managers and practitioners should be evaluating and improving their capabilities from the point of view of customers, internal processes, learning and growth (internal and external), and financial performance, it never hurts to be reminded to do so explicitly. And heck, my own framework isn’t much more than an organized reminder to make sure you do everything you need to do in an organized way.

    I would place this activity primarily in the context of the engagement effort, since it strikes me as a meta-evaluation of an organization that defines the context that determines the kinds of solutions that might be needed.

  4. Benchmarking and Market Analysis: Benchmarking and market analysis are conducted to improve organizational operations, increase customer satisfaction, and increase value to stakeholders.

    This technique is about comparing the performance of your individual stations or activities and overall system (composed of collections of individual stations, queues, entries, and exits) to those of other providers in the market, by criteria you identify.

    This seems to lean more in the direction of an engagement effort, with the details of changes made based on the findings being part of solution efforts.

  5. Brainstorming: Brainstorming is an excellent way to foster creative thinking about a problem. The aim of brainstorming is to produce numerous new ideas, and to derive from them themes for further analysis.

    This technique involves getting (generally) a group of people together to think about a process to see what options exist for adding, subtracting, rearranging, or modifying its components to improve the value it provides. This can be done when improving an existing product or process or when creating a new one from scratch.

    A “process” in this context could be an internal organizational operation, something a product is or does, or something customers might need to improve their lives. It’s a very general and flexible concept.

    Brainstorming is mostly about the solution effort, though engagement issues can sometimes be addressed in this way.

  6. Business Capability Analysis: Business capability analysis provides a framework for scoping and planning by generating a shared understanding of outcomes, identifying alignment with strategy, and providing a scope and prioritization filter.

    Processes made up of stations, queues, entries, and exits represent the capabilities an organization has. They can be analyzed singly and in combination to improve the value an organization can provide.

    The write-up in the BABOK approaches this activity a bit differently and in an interesting way. It’s worth a read. That said, looking at things the way I propose, in combination with the other ideas described in this write-up, will provide the same results.

    Since this technique is about the organization and its technical capabilities, it might be considered a mix of engagement and solution work, but overall it tends toward the engagement side of things.

  7. Business Cases: A business case provides a justification for a course of action based on the benefits to be realized by using the proposed solution, as compared to the cost, effort, and other considerations to acquire and live with that solution.

    A business case is an evaluation of the costs and benefits of doing things a different way, or of doing a new thing or not doing it at all. Business cases are evaluated in terms of financial performance, capability, and risk.

    Business cases can be a mix of engagement and solution effort.

  8. Business Model Canvas: A business model canvas describes how an enterprise creates, delivers, and captures value for and from its customers. It is comprised of nine building blocks that describe how an organization intends to deliver value: Key Partnerships, Key Activities, Key Resources, Value Proposition, Customer Relationships, Channels, Customer Segments, Cost Structure, and Revenue Streams.

    Again, this is a fairly specific format for doing the kinds of analyses we’re discussing over and over again. It is performed at the level of an organization on the one hand, but has to be described at the level of individual processes and capabilities on the other. Once you’re describing individual processes and capabilities, you’re talking about operations in terms of process maps and components, where the maps include the participants, information flows, financial flows, resource flows, and everything else.

    This technique also applies to both engagement and solution efforts.

  9. Business Rules Analysis: Business rules analysis is used to identify, express, validate, refine, and organize the rules that shape day-to-day business behavior and guide operational business decision making.

    Business rules are incorporated in stations and process maps in the form of decision criteria. Physical or informational items are not allowed to move forward in a process until specified criteria are met. Business rules can be applied singly or in combination in whatever way works for the organization and the process. If mechanisms for applying business rules can be standardized and automated, then so much the better.
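
    Many such rules reduce to a simple gate on a work item. A small sketch, with the criteria invented purely for illustration:

```python
def may_move_forward(work_item):
    """Hypothetical gate: the item advances only when every criterion is met."""
    criteria = [
        work_item.get("documents_complete", False),
        work_item.get("payment_cleared", False),
        work_item.get("inspection_passed", False),
    ]
    return all(criteria)

item = {"documents_complete": True, "payment_cleared": True, "inspection_passed": False}
print(may_move_forward(item))   # False: the item waits until the inspection passes
```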

    Business rules are almost always created and evaluated at the level of the solution.

  10. Collaborative Games: Collaborative games encourage participants in an elicitation activity to collaborate in building a joint understanding of a problem or a solution.

    This has never been one of my go-to techniques, but I admire the people who can think up things that are applicable to a given real-world problem. I’ve seen these used pretty much exclusively as teaching and team-building exercises, and it’s difficult to imagine how this activity could be mapped directly onto a process map or its components.

    That said, there are ways to gamify almost anything, even if it’s just awarding points for doing certain things with prizes awarded at intervals to individuals and teams who rack up the most. When I was in the Army a colleague came up with a way to have us remember a sequence of actions a weapon system crew had to call out to track and fire a missile, reinforced by calling out items as we passed a volleyball among ourselves.

    If anything this technique belongs more in the realm of engagement efforts.

  11. Concept Modeling: A concept model is used to organize the business vocabulary needed to consistently and thoroughly communicate the knowledge of a domain.

    This is an integral part of domain knowledge acquisition, which I consider to be a core function of business analysts.

    The description in the BABOK talks mostly about defining the relevant nouns and verbs in a domain, free of “data or implementation biases,” which is good advice based on my experience. The nouns and verbs of a process or domain should be identified (and understood) during the discovery phase of an engagement and then, only when that is done, the data should be captured during an explicit data collection phase. It’s possible to do both things at once, but only if you have a very clear understanding of what you’re doing.

    This detailed work is almost always performed in the context of solution efforts.

  12. Data Dictionary: A data dictionary is used to standardize a definition of a data element and enable a common interpretation of data elements.

    This technique goes hand in hand with the concept modeling technique described directly above. Instead of describing domain concepts, though, it is intended to catalog detailed descriptions of data items and usages. Where domain nouns and verbs identify things (including ideas) and actions, data items are intended to characterize those things and actions. For example, a domain concept would be an isolated inspection process, which is a verb. The location or facility at which the inspection is performed would be a noun. The time it takes to perform the inspection is a datum (or collection of data, if a distribution of times is to be used in place of a single, fixed value) that characterizes the verb, while the number of items that can be inspected simultaneously is a datum (or collection of data if, say, that number changes throughout the day according to a schedule or in response to demand) that characterizes the noun.

    The general type of data (e.g., continuous vs. discrete, integer vs. real, logical vs. numeric vs. enumerated type) may also be recorded in the dictionary. Specific information may also be recorded based on how the data is used in terms of the implementation. These may include numeric ranges (minimums and maximums), valid enumerated values (e.g., colors, months, descriptive types or classes of items or events), and detailed storage types (short and long integers, floating point variables of different lengths, string lengths, data structures descriptions, and so on).
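
    As a hypothetical illustration of the kinds of entries described above (the names, ranges, and storage types are invented), a few dictionary entries might look like this:

```python
# A toy data dictionary: general type plus implementation-level storage details
data_dictionary = {
    "inspection_time_min": {
        "description": "time to perform the inspection, in minutes",
        "general_type": "numeric, continuous, stochastic",
        "storage_type": "FLOAT",          # or a reference to a fitted distribution
        "valid_range": (0.5, 120.0),
    },
    "inspection_lanes_open": {
        "description": "number of items that can be inspected simultaneously",
        "general_type": "numeric, discrete, deterministic (may follow a schedule)",
        "storage_type": "SMALLINT",
        "valid_range": (0, 12),
    },
    "item_class": {
        "description": "descriptive class of the item being processed",
        "general_type": "enumerated",
        "storage_type": "VARCHAR(16)",
        "valid_values": ["standard", "expedited", "flagged"],
    },
}
```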

    A data dictionary is definitely associated with the solution effort.

  13. Data Flow Diagrams: Data flow diagrams show where data comes from, which activities process the data, and if the output results are stored or utilized by another activity or external entity.

    Data flow diagrams can be process maps of their own or can be embedded in process maps that also show the movement and processing of physical entities. Different representations might be used to make it clear which is which if both physical and informational entities are described in the same map, to improve clarity and understanding.

    This technique is used in the context of the solution effort.

  14. Data Mining: Data mining is used to improve decision making by finding useful patterns and insights from data.

    People typically think of “Big Data” or “AI” or “machine learning” when they talk of data mining, but the concept applies to data sets of any size. Intelligent perusal and analysis of data can yield a lot of insights, even if performed manually by a skilled practitioner.

    Graphical methods are very useful in this regard. Patterns sometimes leap right out of different kinds of plots and graphs. The trick is to be creative in how those plots are designed and laid out, and also to have a strong understanding of the workings and interrelationships of a system and what the data mean. The BABOK mentions descriptive, diagnostic, and predictive methods, which can all use different analytical sub-techniques.

    Although this technique can be used in support of the engagement effort if it’s complex enough, this kind of detailed, high-volume methodology is almost always applied to the solution.

  15. Data Modeling: A data model describes the entities, classes or data objects relevant to a domain, the attributes that are used to describe them, and the relationships among them to provide a common set of semantics for analysis and implementation.

    I see this as having a lot of overlap with the data dictionary, described above. It aims to describe data conceptually (as I describe in the Conceptual Model phase of my analytic framework), logically (in terms of normalization, integrity, authority, and so on), and physically (as instantiated or represented in working system implementations).

    This technique is definitely used in solution efforts.

  16. Decision Analysis: Decision analysis formally assesses a problem and possible decisions in order to determine the value of alternate outcomes under conditions of uncertainty.

    The key point here is uncertainty. Deterministic decisions made on the basis of known values and rules are far easier to analyze and implement. Those can be captured as business rules as described above in a comparatively straightforward manner.

    Uncertainty may arise when aspects of the problem are not clearly defined or understood and when there is disagreement among the stakeholders. The write-up in the BABOK is worth reviewing, if you have the chance. There is obviously a wealth of additional material to be found elsewhere on this subject.

    Interestingly, this kind of uncertainty is largely distinguished from the type of uncertainty that arises from the systemic variability stemming from concatenations of individually variable events, as found in processes described by Monte Carlo models. That kind of uncertainty is manageable within certain bounds and the risks are identifiable.
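
    A small sketch of that kind of systemic variability, with step distributions invented purely for illustration, is to concatenate a few individually variable steps and look at the spread of the totals:

```python
import random

def simulate_total_time(n_runs=10_000):
    """Total process time = three variable steps in sequence (hypothetical distributions)."""
    totals = []
    for _ in range(n_runs):
        intake = random.triangular(2, 10, 4)    # minutes: low, high, mode
        review = random.expovariate(1 / 8)      # mean of 8 minutes
        approval = random.uniform(1, 5)
        totals.append(intake + review + approval)
    return totals

totals = sorted(simulate_total_time())
print("median:", round(totals[len(totals) // 2], 1), "minutes")
print("95th percentile:", round(totals[int(0.95 * len(totals))], 1), "minutes")
```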

    This technique is always part of work on the solution.

  17. Decision Modeling: Decision modeling shows how repeatable business decisions are made.

    This technique contrasts with Decision Analysis in that it is generally concerned with decisions that are made with greater certainty, where the factors and governing values are well understood, and where the implementation of the required calculations is straightforward. Note that the processes and calculations can be of any complexity; the differentiation is about the level of certainty or decidability.

    This technique is also always part of the solution effort.

  18. Document Analysis: Document analysis is used to elicit business analysis information, including contextual understanding and requirements, by examining available materials that describe either the business environment or existing organizational assets.

    Document analysis can aid in domain knowledge acquisition, discovery, and data collection as described and linked previously. Documents can help identify the elements and behaviors of a system (its nouns and verbs), the characteristics of a system (adjectives and adverbs modifying the nouns and verbs), and the entities processed by the system. If the entities to be processed by the system are themselves documents, then they drive the data that the system must process, transport, and act upon.

    In all cases this work is performed during the solution effort.

  19. Estimation: Estimation is used by business analysts and other stakeholders to forecast the cost and effort involved in pursuing a course of action.

    Estimation is a mix of science and art, and probably a black art at that. Any aspect of a process or engagement can be estimated, and the nature and accuracy of the estimate will be based on prior experience, entrepreneurial judgment, gathered information, and gut feel.

    The BABOK describes a number of sub-techniques for performing estimations. They are top-down, bottom-up, parametric estimation, rough order of magnitude (ROM), Delphi, and PERT. Other methods exist but these cover the majority of those actually used.

    As described in the BABOK this technique is intended to be used as part of the engagement effort.

  20. Financial Analysis: Financial analysis is used to understand the financial aspects of an investment, a solution, or a solution approach.

    Fixed and operating costs must be balanced against revenue generated for every process or group of processes. Costs may be calculated by working bottom-up from individual components and entities while revenues come from completed (through the sale) work items. The time value of money is often considered in such analyses; I took a course on this as a freshman in engineering school. I’m glad I gained this understanding so early. Risk is often calculated in terms of money, as well, and it should be remembered that uncertainty is always a form of risk.
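
    Since the time value of money comes up here, a minimal net-present-value sketch (the cash flows and discount rate are invented for illustration) might look like this:

```python
def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs now (year 0)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical project: $100k spent up front, $30k returned each year for five years
flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
print(round(npv(0.08, flows)))   # positive NPV at an 8% discount rate
```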

    I’ve always enjoyed analyzing systems from a financial point of view, because it provides the most honest insight into what factors need to be optimized. For example, you can make something faster at the expense of taking longer to make it, requiring more expensive materials, making it more expensive to build and maintain, and making it more difficult to understand. You can optimize along any axis beyond the point at which it makes sense. Systems can be optimized along many different axes, but the ultimate optimization is generally financial. This helps you figure out the best way of making trade-offs between speed, resource usage, time, and quality, or among the Iron Triangle concepts of scope, schedule, and cost.

    Detailed cost data is sometimes hidden from some analysts, for a number of reasons. A customer or department might not want other employees or consultants knowing how much individual workers get paid. That’s a good reason in a lot of cases. It is also reasonable for companies to hide or blind data from outside consultants for reasons of competitive secrecy. A bad reason is when departments within an organization refuse to share data in order to protect their “rice bowls.” In some cases you’ll build the capability of doing financial calculations into a system and test it with dummy data, and then release it to organizations authorized to work with live data in a secure way.

    This is always an interesting area to me because four generations of my family have worked in finance in one capacity or another. As an engineer and process analyst I’m actually kind of a black sheep!

    Financial analysis is a key part of both the solution and engagement efforts.

  21. Focus Groups: A focus group is a means to elicit ideas and opinions about a specific product, service, or opportunity in an interactive group environment. The participants, guided by a moderator, share their impressions, preferences, and needs.

    Focus groups can be used to generate ideas or evaluate reactions to certain ideas, products, opportunities, and risks. In terms of mapped processes the group can think about individual components, the overall effect of the process or system, or the state of an entity — especially if it’s a person and even more so if it’s a customer (see this article on service design) — as it moves through a process or system.

    This technique is primarily used in solution efforts.

  22. Functional Decomposition: Functional decomposition helps manage complexity and reduce uncertainty by breaking down processes, systems, functional areas, or deliverables into their simpler constituent parts and allowing each part to be analyzed independently.

    Representing a system as a process map is a really obvious form of functional decomposition, is it not? A few more things might be said, though.

    I like to start by breaking a process down into the smallest bits possible. From there I do two things.

    One is that I figure out whether I need all the detail I’ve identified, and if I find that I don’t, I know I can aggregate activities at a higher level of abstraction or omit them entirely if appropriate. For example, a primary inspection at a land border crossing can involve a wide range of possible sub-activities, and some of those might get analyzed in more detailed studies. From the point of view of modeling the facility, however, only the time taken to perform the combined activities may be important.

    The other thing I do is group similar activities and see if there are commonalities I can identify that will allow me to represent them with a single modeling or analytical component or technique. It might even make sense to represent a wide variety of similar activities with a standard component that can be flexibly configured. This is part of the basis for object-oriented design.

    A more general benefit of functional decomposition is simply breaking complex operations down until they are manageable, understandable, quantifiable, and controllable. The discussion in the BABOK covers a myriad of ways that things can be broken down and analyzed. The section is definitely worth a read and there are, of course, many more materials available on the subject.

    Functional decomposition is absolutely part of the solution effort.

  23. Glossary: A glossary defines key terms relevant to a business domain.

    Compiling a glossary of terms is often a helpful aid to communication. It ensures that all parties are using an agreed-upon terminology. It helps subject matter experts know that the analysts serving them have sufficient understanding of their work. Note that this goes in both directions. SMEs are assured that the analysts serving them are getting it right, but the analysts are (or should be) bringing their own skills and knowledge to the working partnership, so their terms of art may also be added to the glossary so the SMEs being served understand how the analysts are doing it.

    The glossary is chiefly part of the solution effort.

  24. Interface Analysis: Interface analysis is used to identify where, what, why, when, how, and for whom information is exchanged between solution components or across solution boundaries.

    This is a particular area of interest for me, so I’m going to break it down across all the concepts described in the BABOK.

    • User interfaces, including human users directly interacting with the solution within the organization: Users and user interfaces can be represented as stations or process blocks in any system. They can be represented as processes, inputs, outputs, and possibly even queues in a process map. In other cases they can be represented as entities which move through and are processed by the system. (Alternatively, they can be described as being served by the system or process.) The data items transmitted between these components and entities are themselves entities that move through and are potentially transformed by the system.
    • People external to the solution such as stakeholders or regulators: Information or physical materials exchanged with external actors (like customers) are typically represented as entities, while the actors themselves can be represented as entities or fixed process components.
    • Business processes: Process maps can represent activities at any level of aggregation and abstraction. A high-level business operation might be represented by a limited number of fixed components (stations, queues, entries, and exits) while each component (especially stations) could be broken down into its own process map (with its own station, queue, entry, and exit blocks). An example of this would be the routing of documents and groups of documents (we can call those files or folders) through an insurance underwriting process, where the high-level process map shows how things flow through various departments and where the detailed process maps for each department show how individual work items are routed to and processed by the many individuals in that department.
    • Data interfaces between systems: The word “system” here can refer to any of the other items on this list. In general a system can be defined as any logical grouping of functionality. In a nuclear power plant simulator, for example, the different CPU cores, shared memory system, hard disks, control panels and the controls and displays and indicators mounted on them, thermo-hydraulic and electrical system model subroutines, standardized equipment handler subroutines, tape backup systems, and instructor control panels are all examples of different systems that have to communicate with each other. As a thermo-hydraulic modeler, a lot of the interface work I did involved negotiating and implementing interfaces between my fluid models and those written by other engineers. Each full-scope simulator would include on the order of fifty different fluid systems along with a handful of electrical and equipment systems. Similarly, the furnace control systems I wrote had to communicate with several external systems that ran on other types of computers with different operating systems and controlled other machines in the production line. Data interfaces had to be negotiated and implemented between all of them, too. The same is true of the Node.js microservices that provided the back-end functionality accessed by users through web browsers and Android and iOS apps.
    • Application Programming Interfaces (APIs): APIs can be prepackaged collections of code in the form of libraries or frameworks, while microservices provide similarly centralized functionality. The point is that the interfaces are published so programs interacting with them can do so in known ways without having to worry (in theory) about the details of what’s going on in the background.
    • Hardware devices: Communication with hardware devices isn’t much different than the communications described above. In fact, the interfaces described above often involve hardware communications. (See a brief rundown of the 7-layer OSI model here.)

    I’ve written a lot about inter-process communication in this series of blog posts.

    Data interfaces can be thought of at an abstract level and at a concrete level. Business analysts will often identify the data items that need to be collected, communicated, processed, stored, and so on, while software engineers and database analysts will design and implement the systems that actually carry out the specified actions. Both levels are part of the solution effort.
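
    As a hypothetical example of specifying a data interface at the abstract level (the message and field names are invented), the content crossing an interface can be described first, leaving the transport and storage details to the design:

```python
# Abstract message definition: what crosses the interface, not how it is sent or stored
WORK_ITEM_STATUS_MESSAGE = {
    "message": "work_item_status",
    "version": 1,
    "fields": {
        "item_id": "string, unique identifier assigned at entry",
        "station": "enumerated, the station reporting the status",
        "status": "enumerated: queued | in_process | complete | rejected",
        "timestamp": "ISO 8601 datetime, time zone required",
        "elapsed_minutes": "numeric, continuous, >= 0",
    },
}
```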

  25. Interviews: An interview is a systematic approach designed to elicit business analysis information from a person or group of people by talking to the interviewee(s), asking relevant questions, and documenting the responses. The interview can also be used for establishing relationships and building trust between business analysts and stakeholders in order to increase stakeholder involvement or build support for a proposed solution.

    Interviews involve talking to people. In this context the discussions can be about anything related to a process as it is mapped, in terms of its functional and non-functional characteristics. The discussions can be about individual components or entities, subsets of items, or the behavior of a system in its entirety.

    I’ve interviewed SMEs, executives, line workers, engineers, customers, and other stakeholders to learn what they do, what they need, and how their systems work. I’ve done this in many different industries in locations all over the world. The main thing I’ve learned is to be pleasant, patient, respectful, interested, and thorough. Being thorough also means writing down your findings for review by the interviewees, so they can confirm that you captured the information they provided correctly.

    This work is definitely part of the solution effort.

  26. Item Tracking: Item tracking is used to capture and assign responsibility for issues and stakeholder concerns that pose an impact to the solution.

    In the context described by the BABOK, the items discussed aren’t directly about the mapped components and entities of a system of interest. They are instead about the work items that need to be accomplished in order to meet the needs of the customers and stakeholders being served by the analyst. In that sense we’re talking about tracking all the meta-activities that have to get done to create or improve a mapped process.

    I really enjoy using a Requirements Traceability Matrix (RTM) to manage these meta-elements. If done the way I describe in my blog post (just linked, and here generally, and also throughout the BABOK), the work items and the mapped components and entities will all be accounted for and linked forward and backward through all phases of the engagement. Indeed, the entire purpose of my framework is to ensure the needs of the stakeholders are met by ensuring that all necessary components of a system or process are identified and instantiated or modified. This all has to happen in a way that provides the desired value.

    In the context discussed in the BABOK, this technique is used as part of the engagement effort.

  27. Lessons Learned: The purpose of the lessons learned process is to compile and document successes, opportunities for improvement, failures, and recommendations for improving the performance of future projects or project phases.

    Like the item tracking technique described above, this is also kind of a meta-technique that deals with the engagement process and not the operational system under analysis. The point here is to review the team’s activities to capture what was done well and what could be done better in the future. This step is often overlooked by organizations, and teams are created and disbanded without a lot of conscious review. It’s one of those things where you never have time to do it right but you always have time to do it over.

    It’s supposed to be a formal part of the Scrum process on a sprint and project basis, but it really should be done periodically in every engagement, and especially at the end (if there is an identifiable end). Seriously, take the time to do this.

    As stated, this technique is also used during the engagement process.

  28. Metrics and KPIs: Metrics and key performance indicators measure the performance of solutions, solution components, and other matters of interest to stakeholders.

    These are quantifiable measures of the performance of individual components, groups of components, and systems or processes as a whole. They’re also used to gauge the meta-progress of work being performed through an engagement.

    The BABOK describes an indicator as a numerical measure of something, while a metric is a comparison of an indicator to a desired value (or range of values). For example, an indicator is the number of widgets produced during a week, while a metric is whether or not that number is above a specific target, say, 15,000.
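
    Using the widget example above (the weekly count is an invented figure), the distinction fits in a few lines:

```python
widgets_this_week = 15_750                 # the indicator: a raw measurement
target = 15_000
met_target = widgets_this_week >= target   # the metric: indicator compared to a target
print(met_target)                          # True
```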

    Care should be taken to ensure that the indicators chosen are readily collectible, can be determined at a reasonable cost, are clear, are agreed-upon, and are reliable and believed.

    Metrics and KPIs are identified as part of the solution effort. Note that the ongoing operation and governance of the solution by the customer, possibly after it has been delivered, is all part of the solution the BA and his or her colleagues should be thinking about.

  29. Mind Mapping: Mind mapping is used to articulate and capture thoughts, ideas, and information.

    I’ve never used an explicit mind-mapping tool, and I’ve only encountered one person who actively used one (as far as I know). That said, there are a lot of ways to accomplish similar ends without using an explicit software tool.

    This technique actually has a lot in common with the part of the functional decomposition technique that involves organizing identified concepts into logical groups. Brainstorming with Post-It notes is a manual form of mind-mapping, and the whole process of discovery and domain knowledge acquisition (as I describe them) is supposed to inspire this activity.

    For most of my life I’ve taken notes on unlined paper so I have the freedom to write and draw wherever I like to show relationships and conceptual groupings. Another way this happens is through being engaged with the material over time. I sometimes refer to this as wallowing in the material. Insights are gained and patterns emerge in the course of your normal work as long as you are mentally and emotionally present. These approaches might not work for everyone, but they’ve worked for me.

    This technique is primarily applied as part of the solution effort.

  30. Non-Functional Requirements Analysis: Non-functional requirements analysis examines the requirements for a solution that define how well the functional requirements must perform. It specifies criteria that can be used to judge the operation of a system rather than specific behaviors (which are referred to as the functional requirements).

    These define what a solution is rather than what it does. The mapped components of a process or system (the stations, queues, entries, exits, and paths, along with the entities processed) define the functional behavior of the process or system. The non-functional characteristics of a process or system describe the qualities it must have to provide the desired value for the customer.

    Non-functional requirements are usually expressed in the form of -ilities. Examples are reliability, modularity, flexibility, robustness, maintainability, scalability, usability, and so on. Non-functional requirements can also describe how a system or process is to be maintained and governed.

    The figure at the bottom of this article graphically describes my concept of a requirements traceability matrix. Most of the lines connecting items in each phase are continuous from the Intended Use phase through to the Final Acceptance phase. These cover the functional behaviors of the system or process. The non-functional requirements, by contrast, are shown as beginning in the Requirements phase. I show them this way because the qualities the process or system needs to have are often a function of what it needs to do, and thus they may not be known until other things about the solution are known. That said, uptime and performance guarantees are often parts of contracts and identified project goals from the outset. When I was working in the paper industry, for example, the reliability (see there? a non-functional requirement!) of the equipment was improving to the point that turnkey system suppliers were starting to guarantee 95% uptime where the previous standard had been 90%. I know that plant utilization levels in the nuclear industry have improved markedly over the years, as competing firms have continued to learn and improve.

    I include a few other lines at the bottom of the figure to show that the engagement meta-process itself might be managed according to requirements of its own. However, I consider non-functional requirements analysis to be part of the solution effort.
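
    As a quick worked example of what such an uptime guarantee implies (just the arithmetic, no vendor figures), the allowed downtime per year can be computed directly:

```python
HOURS_PER_YEAR = 365 * 24            # 8,760

def allowed_downtime_hours(uptime_guarantee):
    """Hours of downtime per year permitted under an uptime guarantee."""
    return (1 - uptime_guarantee) * HOURS_PER_YEAR

print(round(allowed_downtime_hours(0.90)))   # 876 hours under the old 90% standard
print(round(allowed_downtime_hours(0.95)))   # 438 hours under a 95% guarantee
```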

  31. Observation: Observation is used to elicit information by viewing and understanding activities and their context. It is used as a basis for identifying needs and opportunities, understanding a business process, setting performance standards, evaluating solution performance, or supporting training and development.

    Observation is basically the process of watching something to see how it works. It can be used during discovery and data collection activities.

    Observations are made as part of the solution effort.

  32. Organizational Modeling: Organizational modeling is used to describe the roles, responsibilities, and reporting structures that exist within an organization and to align those structures with the organization’s goals.

    Organizational models are often represented in the form of classic org charts. However, the different parts and functions within an organization can be mapped and modeled as stations (and other model components) or entities. Furthermore, entities are sometimes resources that are called from a service pool to a requesting process station to perform various actions. They may be called in varying numbers and for varying time spans based on the activities and defined business rules.

    This activity is performed during the solution effort.

  33. Prioritization: Prioritization provides a framework for business analysts to facilitate stakeholder decisions and to understand the relative importance of business analysis information.

    This technique refers to prioritizing work items through time and different phases of an engagement. It does not refer to the order in which operations must be carried out by the system under analysis, according to business and other identified rules.

    The BABOK describes four different approaches to prioritization. They are grouping, ranking, time boxing/budgeting, and negotiation. These are just specific approaches to doing a single, general thing.

    Prioritization in this context is absolutely part of the engagement effort.

  34. Process Analysis: Process analysis assesses a process for its efficiency and effectiveness, as well as its ability to identify opportunities for change.

    In a sense this is the general term for everything business analysts do when looking at how things are supposed to get done. That is, everything having to do with the functional parts of a process involves process analysis. Examination of the non-functional aspects of an operation, what things are as opposed to what they do, is still undertaken in support of a process, somewhere.

    In terms of a mapped process this technique involves identifying the components in or needed for a process, how to add or remove or modify components to change how it operates, how to move from a current state to a future state, and the impacts of proposed or real changes made.

    Interestingly, simulation is considered to be a method of process analysis and not process modeling. This is because the model is considered to be the graphical representation itself, according to the BABOK, while simulation is a means of analyzing the process. I suppose this distinction might matter on one question on a certification exam, but in the end these methods all flow together.

    Although the engagement effort is itself a process, this technique is intended to apply to the solution effort.

  35. Process Modeling: Process modeling is a standardized graphical model used to show how work is carried out and is a foundation for process analysis.

    Process models are essentially the graphical maps I’ve been describing all along, as they are composed of stations, queues, entries, exits, connections or paths, and entities. Information, physical objects, and activities of all kinds can be represented in a process map or model.

    When they say a picture is worth a thousand words, this is the kind of thing they’re talking about. In my experience, nothing enhances understanding and agreement more than a visual process model.

    The BABOK goes into some detail about the various methods that exist for creating process maps, but in my mind they’re more alike than different.

    This technique is clearly to be applied as part of the solution effort.

  36. Prototyping: Prototyping is used to elicit and validate stakeholder needs through an iterative process that creates a model or design of requirements. It is also used to optimize user experience, to evaluate design options, and as a basis for development of the final business solution.

    A prototype is a temporary or provisional instantiation of part or all of a proposed solution. Prototypes are created so they can be understood and interacted with. The idea of a prototype is extremely flexible: they can range from temporary models made with throwaway materials to working bits of software to full-scale, flyable aircraft.

    A simulation of something can also serve as a prototype, especially simulations of systems with major physical components.

    Prototyping is absolutely part of the solution effort.

  37. Reviews: Reviews are used to evaluate the content of a work product.

    Work products in my framework are created in every phase of an engagement, and thus reviews should be conducted during every phase of the engagement. This is done to ensure that all parties are in agreement and that all work items have been addressed before moving on to the next phase. The review in each phase is performed iteratively until the work for that phase is agreed to be complete. If items are found to have been missed in a phase, the participants return to the previous phase and iteratively complete work and review cycles until it is agreed that attention can be returned to the current or future phases.

    Note that multiple items can be in different phases through any engagement. While each individual item is addressed through one or more phases, it is always possible for multiple work items to be in process in different phases simultaneously.

    Reviews are meta-processes that are part of the engagement effort.

  38. Risk Analysis and Management: Risk analysis and management identifies areas of uncertainty that could negatively affect value, analyzes and evaluates those uncertainties, and develops and manages ways of dealing with the risks.

    The subject of risk can be addressed at the level of individual components in mapped systems, especially when provisions for failures are built into models, but risks are more often handled at the level of the system or the engagement. The BABOK gives a good overview of risk analysis and management and many other materials are available.

    Risks are evaluated based on their likelihood of occurrence and their expected impact. There are a few straightforward techniques for calculating and tracking those, mostly involving risk registers and risk impact scales. The classic approaches to risk are generally categorized as avoid (take steps to prevent negative events from happening), transfer (share the risk with third parties, with the classic example being through insurance), mitigate (act to reduce the chances a negative event will occur or reduce the impact if it does occur), accept (deal with negative events as they occur and find workarounds), and increase (find ways to take advantage of new opportunities).
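
    The arithmetic behind a simple risk register can be sketched in a few lines (the risks, the 1-to-5 scales, and the escalation threshold below are purely illustrative): each risk gets an exposure score of likelihood times impact, and the register is reviewed highest-exposure first:

        # Hypothetical risk register with likelihood and impact on 1-to-5 scales.
        risks = [
            {"risk": "key vendor misses delivery",     "likelihood": 3, "impact": 4, "response": "mitigate"},
            {"risk": "regulation changes mid-project", "likelihood": 2, "impact": 5, "response": "accept"},
            {"risk": "pump seal failure",              "likelihood": 4, "impact": 2, "response": "transfer"},
        ]

        for r in risks:
            r["score"] = r["likelihood"] * r["impact"]   # simple exposure score

        # Review highest exposure first; flag anything at or above a chosen threshold.
        for r in sorted(risks, key=lambda r: r["score"], reverse=True):
            flag = "ESCALATE" if r["score"] >= 12 else "monitor"
            print(f"{r['score']:>2}  {flag:8}  {r['response']:8}  {r['risk']}")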

    Risk has been an important part of two classes of simulation work I’ve done. The first involved dealing with adverse occurrences in nuclear power plants. A variety of system failures (e.g., leaks and equipment malfunctions) were built into the fluid, electrical, and hardware models in the operator training simulators we built. Because the likelihood of any particular adverse event was low, if such a failure happened in a real plant (see Three Mile Island, where a better outcome would have resulted if the operators had just walked away and let the plant shut down on its own), there was a real possibility that the operators would not know how to diagnose the problem in real time and take the proper corrective action. Building full scope training simulators reduced the risk of operating actual plants by giving operators the experience of reacting to a wide variety of failures they could learn how to diagnose and mitigate in a safe environment. In terms of mapped processes, the failure mechanisms were built into the model components themselves (stations, paths or connections, entries, exits, and so on) and controlled by training supervisors.

    The second way I worked with risk in the context of simulation was with Monte Carlo models. These involve running a simulation through many iterations in which many of the outcomes are determined randomly. The combinations and permutations of how those random events concatenate and interact can result in a wide variety of outcomes at the whole-system level. The purpose of these simulations was to learn how often negative system-level outcomes would occur (based on potentially thousands of possible failures occurring at known rates), and to work on ways to mitigate their effects.
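
    The core mechanism is easy to sketch (the components, failure rates, system-failure rule, and run count below are all made up for illustration): each iteration randomly draws which components fail, and the distribution of system-level outcomes over many iterations shows how often the bad ones occur:

        import random

        # Hypothetical components and their per-run probabilities of failure.
        failure_rates = {"pump A": 0.02, "pump B": 0.02, "backup generator": 0.05, "control valve": 0.01}

        def run_once():
            """One iteration: draw random failures and judge the system-level outcome."""
            failed = {c for c, p in failure_rates.items() if random.random() < p}
            # Illustrative rule: the system fails if both pumps fail, or if a
            # pump fails while the backup generator is also down.
            if {"pump A", "pump B"} <= failed:
                return "system failure"
            if failed & {"pump A", "pump B"} and "backup generator" in failed:
                return "system failure"
            return "degraded" if failed else "nominal"

        runs = 100_000
        outcomes = [run_once() for _ in range(runs)]
        print("estimated system failure rate:", outcomes.count("system failure") / runs)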

    Risks are managed at both the solution and the engagement level. It is one of the most important considerations for any analyst or manager.

  39. Roles and Permissions Matrix: A roles and permissions matrix is used to ensure coverage of activities by denoting responsibility, to identify roles, to discover missing roles, and to communicate results of a planned change.

    This technique is used to determine which actors are empowered to take which actions. In terms of system maps the actors will be represented as process blocks, or possibly as resources able to interact with and trigger other system events.
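
    In code, a roles and permissions matrix is little more than a lookup table; this sketch (roles and actions are invented) checks whether an actor may take an action and flags actions that no role covers, which would suggest a missing role:

        # Hypothetical roles-and-permissions matrix: the actions each role may take.
        permissions = {
            "clerk":      {"create order", "edit order"},
            "supervisor": {"create order", "edit order", "approve refund"},
            "auditor":    {"view order"},
        }

        def can(role, action):
            """True if the given role is permitted to perform the action."""
            return action in permissions.get(role, set())

        print(can("clerk", "approve refund"))       # False
        print(can("supervisor", "approve refund"))  # True

        # Coverage check: actions no role can perform point to a missing role.
        all_actions = {"create order", "edit order", "approve refund", "view order", "delete order"}
        uncovered = all_actions - set().union(*permissions.values())
        print(uncovered)                            # {'delete order'}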

    In the context of the BABOK this analysis is part of the solution effort. It deals with how the customers will interact with the provided solution.

  40. Root Cause Analysis: Root cause analysis is used to identify and evaluate the underlying causes of a problem.

    Root cause analysis is always served by knowing more about the system being investigated. The more you know about the components and interactions of a system the easier it will be to determine the root cause of any outcome, whether desired or undesired.

    Process or system maps greatly aid in the understanding of a system and its behavior and the behavior of its components. Fishbone diagrams and The Five Whys are important sub-techniques used in this kind of analysis.

    This technique is used as part of the solution effort.

  41. Scope Modeling: Scope models define the nature of one or more limits or boundaries and place elements inside or outside those boundaries.

    Process or system maps make very clear what is within the scope and outside the scope of any system. Note that systems can be defined in many ways and that many subsystems can be aggregated to form a larger system. An example of this from my own experience is the creation of fifty or more separate fluid, electrical, and equipment handler models to construct a full scope operator training simulator.

    Scope boundaries are usually determined by grouping components together in a logical way, so that their interactions have their greatest effect within a specified range of space, time, function, and actors (people). The scope of different systems also defines responsibility for outcomes related to them.

    An example of scope determined by space would be hydrological models where the flow of water is governed by the topology and composition of the landscape. An example of scope determined by time is the decision to include hardware and components that affect system behavior when a plant is operating to generate power, and not to include equipment like sample and cleanout ports that are only used when the plant is in a maintenance outage. An example of scope determined by function is the isolation of the logical components that filter reactor water to remove elements like boron and other particulates. An example of scope determined by actors is the assignment of permissions (or even requirements) to give approval for certain actions based on holding a certain credential, as in requiring sign-off by a civil engineer with a PE license or requiring a brain surgeon to have an M.D. and malpractice insurance.

    Analyzing the system’s components is an aspect of the solution effort, while determining what components and responsibilities are in and out of scope might be part of the engagement effort, since this may involve negotiation of what work does and does not get done.

  42. Sequence Diagrams: Sequence diagrams are used to model the logic of usage scenarios by showing the information passed between objects in the system through the execution of the scenario.

    Sequence diagrams are a specific kind of process map that organizes events in time. The specific methods for creating them are defined in the UML standard (I bought this book back in the day). The diagrams don’t necessarily show why things happen, but they do give insight into when they happen and in what order. They also show the messages (or objects) that are passed between the different packages of functionality.

    This detailed activity takes place in the context of the solution effort.

  43. Stakeholder List, Map, or Personas: Stakeholder lists, maps, and personas assist the business analyst in analyzing stakeholders and their characteristics. This analysis is important in ensuring that the business analyst identifies all possible sources of requirements and that the stakeholder is fully understood so decisions made regarding stakeholder engagement, collaboration, and communication are the best choices for the stakeholder and for the success of the initiative.

    Stakeholders are affected by work done to build or modify the process or system under investigation and by the work done through the phases of the engagement itself. They may be included as components or resources in process models. The degree to which they are affected by the current scope of work or are able to exert influence over the capabilities under analysis can be represented by onion diagrams or stakeholder matrices (of which the RACI Matrix is a classic example).

    This technique is definitely part of the engagement effort.

  44. State Modeling: State modeling is used to describe and analyze the different possible states of an entity within a system, how that entity changes from one state to another, and what can happen to the entity when it is in each state.

    A state model describes the conditions or configurations an object or component can be in, the triggers or events that cause the state to change, and the allowable states that can be transitioned to from any other state. Diagrams can show these transitions independent of any other process map or timing diagram or the concepts can be embedded or otherwise combined. In terms of the elements I’ve defined, stations, paths or connections, entities, resources, and even entries and exits can all change state if it makes sense in a given context.
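
    A state model reduces to a set of states and the transitions allowed between them. A minimal sketch (the work-order entity and its states are hypothetical) might look like this:

        # Hypothetical state model for a work order: state -> allowed next states.
        transitions = {
            "draft":       {"submitted"},
            "submitted":   {"approved", "rejected"},
            "approved":    {"in progress"},
            "in progress": {"complete", "on hold"},
            "on hold":     {"in progress"},
            "rejected":    set(),
            "complete":    set(),
        }

        class WorkOrder:
            def __init__(self):
                self.state = "draft"

            def transition(self, new_state):
                """Change state only if the state model allows the move."""
                if new_state not in transitions[self.state]:
                    raise ValueError(f"cannot go from {self.state!r} to {new_state!r}")
                self.state = new_state

        wo = WorkOrder()
        wo.transition("submitted")
        wo.transition("approved")      # allowed
        # wo.transition("complete")    # would raise: not allowed directly from 'approved'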

    This work is always part of the solution effort.

  45. Survey or Questionnaire: A survey or questionnaire is used to elicit business analysis information—including information about customers, products, work practices, and attitudes—from a group of people in a structured way and in a relatively short period of time.

    The information gathered can relate to any aspect of work related to the solution or to the engagement.

  46. SWOT Analysis: SWOT analysis is a simple yet effective tool used to evaluate an organization’s strengths, weaknesses, opportunities, and threats to both internal and external conditions.

    These efforts are generally undertaken at the level of the behavior or capability of entire systems, though they can also be about technologies that can improve all or part of an existing system or even the work of one or more phases of an engagement.

    This technique is therefore applicable to both the solution effort and the engagement effort.

  47. Use Cases and Scenarios: Use cases and scenarios describe how a person or system interacts with the solution being modeled to achieve a goal.

    These are used in the design, requirements, and implementation phases of an engagement to describe the actions users take to achieve certain ends. There are many ways to describe the qualities and capabilities a solution needs to have, and use cases are only one.

    While use cases are similar in many ways to user stories, they aren’t always exactly the same thing.

    These techniques are applied to describe detailed aspects of the solution.

  48. User Stories: A user story represents a small, concise statement of functionality or quality needed to deliver value to a specific stakeholder.

    Like use cases, user stories are an effective means for capturing and communicating information about individual features of a solution. Note that neither should be confused with formal requirements.

    The technique is associated with the Agile paradigm in general and the Scrum methodology in particular. User stories are typically written in the following form, or something close to it: “As a user role, I would like to take some action so I can achieve some effect.”

    While this is a useful technique, I have the (possibly mistaken) impression that it is relied upon and otherwise required far in excess of its actual usefulness. For example, I would disagree with anyone who asserts that every backlog item must be written in the form of a user story.

    Like the use case technique from directly above, this technique is also applied as a detailed part of the solution effort.

  49. Vendor Assessment: A vendor assessment assesses the ability of a vendor to meet commitments regarding the delivery and the consistent provision of a product or service.

    Vendors may be evaluated for the ability to perform any work or deliver any value related to the solution or to the engagement.

  50. Workshops: Workshops bring stakeholders together in order to collaborate on achieving a predefined goal.

    These collaborations with groups of stakeholders can be used to gather information about or review any aspect of work on the solution effort or any phase of the engagement.

Here is a summary listing of the fifty BA techniques and the type of effort they most apply to.

BA Technique Effort Type
Acceptance and Evaluation Criteria Solution
Backlog Management Engagement
Balanced Scorecard Engagement
Benchmarking and Market Analysis Engagement
Brainstorming Solution
Business Capability Analysis Engagement
Business Cases Both
Business Model Canvas Both
Business Rules Analysis Solution
Collaborative Games Engagement
Concept Modelling Solution
Data Dictionary Solution
Data Flow Diagrams Solution
Data Mining Solution
Data Modelling Solution
Decision Analysis Solution
Decision Modelling Solution
Document Analysis Solution
Estimation Engagement
Financial Analysis Both
Focus Groups Solution
Functional Decomposition Solution
Glossary Solution
Interface Analysis Solution
Interviews Solution
Item Tracking Engagement
Lessons Learned Engagement
Metrics and KPIs Solution
Mind Mapping Solution
Non-Functional Requirements Analysis Solution
Observation Solution
Organizational Modelling Solution
Prioritization Engagement
Process Analysis Solution
Process Modelling Solution
Prototyping Solution
Reviews Engagement
Risk Analysis and Management Both
Roles and Permissions Matrix Solution
Root Cause Analysis Solution
Scope Modelling Both
Sequence Diagrams Solution
Stakeholder List, Map, or Personas Engagement
State Modelling Solution
Survey or Questionnaire Both
SWOT Analysis Both
Use Cases and Scenarios Solution
User Stories Solution
Vendor Assessment Both
Workshops Both

Here are the counts for each type of effort:

Effort Type Number of Occurrences
Solution 30
Engagement 11
Both 9
Posted in Management | Tagged , | Leave a comment

Unified Theory of Business Analysis: Part One

Defining and Distinguishing Between the Solution Effort and the Engagement Effort

There seems to be a great deal of confusion in describing what business analysts do, since they often seem to be doing such different things. I’ve confirmed this after having taken a number of surveys of business analysts. I’ve also seen close to fifty presentations on the subject, taken a training course, earned a certification, followed a whole bunch of discussion on LinkedIn and elsewhere, and reviewed job ads until my eyes bled. I’ve thought about this subject a lot over the years and the answer has finally become rather clear to me.

I’ve worked through a number of engagements of various kinds. They’ve ranged from small to large, quick to long-lasting, and very ad hoc to very formal. What I noticed is that the smaller, shorter, less formal ones all involved the same activities as the larger, longer, formal ones, it’s just that they didn’t include every possible activity. And therein lies the discovery. I call it the Unified Theory of Business Analysis.

The reason so many BA efforts feel so different is that many efforts do not include every activity, or that any one BA may only be involved in a limited part of a larger effort. This is true with respect to the multiple phases of an engagement through time, and with respect to which of the many components of a solution’s scope are addressed.

I’m convinced that all of the phases of a full scale engagement are actually conducted during even the most slap-dash, ad hoc analysis effort. However, in many cases the context is so well understood that many of the steps I define in my business analysis framework are addressed implicitly, to the point where no special effort is taken to do everything you can do in every possible phase. You could have a big, formal effort with kickoff and close-out meetings for every phase, or you could hear that Dwayne in shipping hates the new data entry screen, so why don’t Diane and Vinay shoot down, find out what the confusion is, and put together a quick fix for him? In the latter case the Intended Use and Conceptual Model phases are obviated and the remaining phases can be accomplished with minimal oversight, time, and fanfare. If an effort is simple enough it doesn’t require the overhead of requirements traceability matrices, change control boards, and a bunch of documentation (though the right people should be involved and the relevant documents and systems should be updated).

This is why things look different to BA practitioners and their managers and colleagues across the phases of an engagement, but it’s also possible that BAs will only be involved in a limited part of the solution effort, as I discussed and defined yesterday.

In short, I wanted to introduce this discussion by distinguishing between the solution effort, which is the process of defining and implementing the solution, and the engagement effort, which is the meta-process of managing the phases that support the solution. It may seem like a subtle distinction, but it seems important as I prepare for the next part of this discussion.

Posted in Management | Tagged , | Leave a comment

Components of a Solution

I wanted to define some terms and provide some context in preparation for the next few posts. Specifically, I’m describing the components of a solution. A process model is a model of a system that either exists now or will exist, whether it is newly implemented or modified from an existing system. I’m going to be referring to the work needed to effect the solution itself as the solution or the solution effort over the next few days.

As called out in the diagram directly below, a model is made up of many standard components. I’ll refer to these repeatedly in the descriptions that follow, so I want to start by describing the components here.

  • System (or Model): A system is the entire system or process being investigated. A model is a representation of that system or process.
  • Entity: Entities are items that move around in the system. They can represent information or physical objects, including people. Entities will sometimes pass through a system (like cars going through a car wash) and sometimes move only within a system (like aircraft in a maintenance simulation or employees serving customers in a barber shop). Entities may split or spawn new entities at various points within the system, and can also be merged or joined together.
  • Entry: Entities go into a system at entry points. Note that the entry can represent an exit point from an external system, and thus a connection with that system.
  • Station (or Process Block or Activity): These blocks are where the important activities and transformations happen. The important concepts associated with stations are processing time, diversion percentage (percentage of time entities being processed go to different destinations when they leave the station), and the nature of the transformations carried out on the entities being processed (if any). A wide variety of other characteristics and behaviors can be defined for these components, and those characteristics (like being open or closed, for example) can change as defined by a schedule.
  • Group (or Facility): A group of stations performing related operations is referred to as a facility. A toll plaza includes multiple stations that all do the same or very similar things. A loading dock that serves multiple trucks works the same way, where each space is a separate station. In a more information-driven setting, a roomful of customer service workers could be represented as individual stations, and collectively would be referred to as a group or facility.
  • Queue: A queue is where entities wait to be processed. FIFO, LIFO, and other types of queues can be defined. A queue can have a defined, finite capacity or an infinite capacity. If the queue has a physical component it might be given physical dimensions for length, area, or volume. A minimum traversal time may also be defined in such a case.
  • Path: A path defines how entities move from one non-path component to another. In logical or informational systems the time needed to traverse a path may be zero (or effectively zero), while length and speed may have to be defined for physical paths. The direction of movement along each path must also be specified. Paths are often one-way only, but can allow flow in both directions (although it’s often easier to include separate paths for items traveling in each direction).
  • Exit: Entities leave a system through exit components. Note that the exit can represent an entry point of an external system, and thus a connection with that system.
  • Bag (or Parking Lot): A bag is a special kind of queue where entities may be stored but may be processed at arbitrary times according to arbitrary criteria. They are used when their operations are not FIFO, LIFO, or some other obvious behavior. You can think of them like parking lots, where the residence time is determined by the amount of time it takes passengers to get out of the car, walk somewhere, conduct their business in some establishment, walk back to the car, get back in and buckled up, get their phone mounted and plugged in, take a swig of their coffee, set the GPS for their next destination, get their audiobook playing, readjust their mirrors, and finally start pulling out. In such a case the car will be represented by an entity that continues to take up space in the parking lot (or bag), while the passenger is a new entity that is spawned or split from the car entity, and then later merged or rejoined back with it.
  • Combined (Queue and Process): It is possible to define components that embody the characteristics of both a station and a queue. This is typically done for representational simplicity.
  • Resource (not shown): A resource is something that must be available to carry out an operation. Resources can be logical (like electricity or water, though they can be turned off or otherwise fail to be available) or they can be discrete, like mechanics in a repair shop. When a car pulls into a station or process block (representing a space in a garage), a mechanic has to go perform the service. Sometimes multiple mechanics are needed to carry out an operation. Sometimes different specialists are needed to perform certain actions. Discrete resources can be represented by entities in a system. If no, or not enough, resources are available, the process waits for them to become available, and only starts its clock when they do (see the code sketch at the end of this post).
  • Resource Pool (not shown): Resources can be drawn from a collection, which itself is referred to as a pool. There can be one or multiple pools of resources for different purposes and the resources can have different characteristics. There may be different numbers and types of resources available at different times, and this can be defined according to a schedule.
  • Component: All of the items listed above — except the system itself — are referred to as system (or model) components.

That just describes some of the major items I tend to include in process models. Here are some of the others that are possible.

  • Event: Events are occurrences or changes of state that trigger a change to some other component in a system.
  • Decision Point: These control elements govern the movement of entities through the system, either logically or physically. The diversion percentage characteristic described for stations, above, is an example of embedding a decision point into another component. However, decision points can be included in models as standalone components. In my experience they are usually connected by paths and not other components, but that is an implementation detail.
  • Role: This represents a person or group involved in the process. Roles can be modeled as stations or resources, as described above, but they can also be included as other kinds of components, depending on the implementation.

If the process model is itself a simulation, there is yet another layer of components that can be added. These are meta-elements that control the simulation and its components. Simulations are often called models, although the BABOK reserves the word model for the graphical representation of a system and associates simulation with the concept of process analysis.

  • Editing interface for the system: This allows a user to add, remove or reconfigure components within the system.
  • Editing interface for components: This allows a user to define or modify the operational characteristics of the components. This can include schedules, events, arrivals, and other items, as well as traditional elements like durations, dimensions, and capacities.
  • Operational interface: This allows a user to start, pause, and stop the simulation.
  • Data analysis capability: Simulations are generally implemented to generate a lot of output data. They can also require a lot of complex input data. Integrated analysis capabilities are sometimes included in simulation tools.
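
To make the vocabulary above concrete, here is a small sketch of how a few of the components from the first list might fit together in code. It uses the third-party SimPy discrete-event library purely as an example (the tool choice, the service-desk scenario, and all of the timings are my own assumptions, not anything prescribed by the framework): entities arrive at an entry, wait in a queue, are processed at a station that needs a resource drawn from a pool, and then leave through an exit.

    import random
    import simpy   # third-party discrete-event simulation library, used here only as an example

    def customer(env, name, desk):
        arrive = env.now                        # the entity enters the system at an entry point
        with desk.request() as req:             # join the queue in front of the station
            yield req                           # wait until a resource (a clerk) is available
            yield env.timeout(random.uniform(2, 6))   # processing time at the station
        print(f"{name} exits after {env.now - arrive:.1f} minutes")   # entity leaves via the exit

    def arrivals(env, desk):
        n = 0
        while True:
            yield env.timeout(random.expovariate(1 / 3))   # a new entity arrives every ~3 minutes
            n += 1
            env.process(customer(env, f"customer {n}", desk))

    env = simpy.Environment()
    desk = simpy.Resource(env, capacity=2)      # a pool of two clerks serving the desk
    env.process(arrivals(env, desk))
    env.run(until=60)                           # simulate one hour of operation
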
Posted in Tools and methods | Tagged , , , | Leave a comment