Jira and Confluence for RTMs and My Framework

I finally finished the Udemy course on Jira and Confluence. As I watched each section of the course I thought about how the capabilities of the two products can be used to represent Requirements Traceability Matrices (RTMs), and in particular how they could do so for my project framework.

My framework includes the idea of linking one-to-many and many-to-one between columns going from left to right (beginning of project to end) and, as I’ve thought about it in detail over the last week-plus, I see that it might also include the idea of linking items in non-adjacent (non-successive) columns. Might. I’ll be clarifying that going forward. Anyway, the framework is best represented using a graph database, which is a bit out of the ordinary, as I’ll explain.
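
To make the graph idea concrete, here’s a minimal sketch in plain Python (just dictionaries and lists, not any particular graph database product) of how items and their left-to-right links might be held. The item IDs follow the A- through G- labeling scheme introduced below, and the specific items and links are invented for illustration.

```python
# Minimal sketch of RTM items and links held as a directed graph.
# The IDs follow the A- (intended use) through G- (acceptance) scheme
# described later in this post; the items themselves are invented.

nodes = {
    "A-1": "Intended use: evaluate staffing levels",
    "B-1": "Conceptual model: arrival process",
    "C-1": "Requirement: represent arrivals by hour of day",
    "D-1": "Design: configurable arrival-generator object",
}

# Edges point "left to right" (earlier phase to later phase). A link between
# non-adjacent columns, if the framework ends up needing one, is just another edge.
edges = [
    ("A-1", "B-1"),
    ("B-1", "C-1"),
    ("C-1", "D-1"),
    # ("A-1", "D-1"),   # possible non-adjacent (non-successive) link
]

def downstream(item, edge_list):
    """Return every item directly linked from `item`."""
    return [dst for src, dst in edge_list if src == item]

print(downstream("A-1", edges))   # ['B-1']
```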

Since the ultimate representation I have in mind is complicated, I’ll describe a simple representation and slowly add complications until the end goal is reached.

Start Off Doing It By Hand

Let’s start with a fairly simple project and describe how I would manage it manually. Since I’ve usually done this in combination with building simulations of some kind we’ll build on that concept as well.

The intended uses describe how the solution will be used to solve identified business problems. Imagine there’s only a single item to keep it simple. Give it a code of A-1.

The items in the conceptual model are identified through a process of discovery and data collection. These define the As-Is state of a system. Label these items B-1 through B-n. Each is linked to intended use item A-1. These can be stored in a document, a spreadsheet, or a database, but picture the items written in a document.

The items in the requirements address how to represent and how to control each conceptual model item. These define a path to the To-Be state in an abstract way. Label these items C-1 through C-n. Each item is linked to one of the conceptual model (B-) items, such that every B- item is linked to a requirement. Those describe the functional requirements of the solution. Non-functional requirements are listed with labels C-n+1 through C-x. They do not have to link back to conceptual model items. Imagine these items are written in a document.

The items in the design describe the proposed ways in which the solution will be implemented. These represent the To-Be state in a concrete way. The items must include the details of the implementation process. Label them D-1 through D-x. Each item is linked to one of the requirements (C-) items, such that every C- item is linked to a design item. Imagine these items in a document as well.

The implementation items are tracked as they are assigned and worked on. They should be labeled E-1 through E-x and each should be linked to a design (D-) item such that all of the design items link to implementation items. These items can be listed in a document.

The test items are labeled F-1 through F-x and are linked to implementation (E-) items such that all E- items are linked to test items. These items can be written in a document.

The acceptance items are labeled G-1 through G-x and are linked to test (F-) items such that all F- items are linked to acceptance items. These items can also be listed in a document.

That description may be tedious and repetitive but it gives us a place to start.
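
It also gives us something we can check mechanically. Here’s a minimal sketch of an audit of hand-maintained links, assuming the labeling and linking rules just described; all of the item IDs and links are invented for illustration.

```python
# Minimal audit of hand-maintained RTM links, assuming the A- through G-
# labeling scheme above. Keys are items; values are the item(s) each one
# links back to in the previous column. All IDs here are invented.

links_back = {
    "B-1": ["A-1"], "B-2": ["A-1"],
    "C-1": ["B-1"], "C-2": ["B-2"], "C-3": [],   # C-3: non-functional, no link needed
    "D-1": ["C-1"], "D-2": ["C-2"],
}

non_functional = {"C-3"}

def unlinked_items(links, exempt=()):
    """Return items with no link back to the previous column (excluding exemptions)."""
    return [item for item, backs in links.items()
            if not backs and item not in exempt]

def uncovered_targets(targets, links):
    """Return items in the previous column that nothing links back to."""
    covered = {t for backs in links.values() for t in backs}
    return [t for t in targets if t not in covered]

print(unlinked_items(links_back, exempt=non_functional))   # []
print(uncovered_targets(["B-1", "B-2"], links_back))       # []
```

The same checks apply at every column boundary; only the exemption list (the non-functional requirements) changes.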

The first wrinkle we might introduce is to include all of the items in a single document in an outline or tabular form. It gets unwieldy as the number of items grows, so another idea would be to include the items in a spreadsheet, where we can combine the features of an outline and columns, and where the rows can be viewed easily even as they grow wider.

Test items are interesting because there are so many different forms of them. There are many types of automated and manual tests and there are a number of software systems that track the items and generate reports. Such systems can be used to supplement the written forms of the previous items. Such systems usually support labels that fit any scheme you like. Specialized tests that aren’t tracked by automated systems can be tracked in written form. The more subjective tests that rely on expert judgment are most likely to be tracked in writing.

The virtue of keeping everything in writing like this is that intended uses, conceptual model items, requirements, and designs can be maintained as separate documents and possibly be managed by separate teams. The labeling and cross-linking have to be performed manually by one or more people who have a solid understanding of the entire work flow.

Enter Jira (and Confluence)

Here’s where we get to Jira and Confluence. The simplest way I can think of to integrate Jira into the RTM process is to use it to store the status of implementation items only. This means that the items in columns A through D, or the intended use, conceptual model, requirement, and design items, can stay in there separate documents. The design items can be written in such a way that they define the work to be done by the implementation agents. They can be written as user stories, epics, tasks, or sub-tasks.

These items can be given custom workflows that move items through different kinds of evaluations. They follow all the rules for writing effective descriptions, including definitions of done, and so on. This works well for different layers of testing. For example, a software item’s definition of done may include requirements for integrated tests. There might be a code review step followed by a manual test step. A custom workflow could handle many situations where multiple test (F-) items are mapped to each implementation (E-) item. They could handle the parallel definitions of tests in a written document of test items by walking the implementation items through a serial test process. Subjective tests could be run as a separate step for some items.

Why not use Jira for tracking items from phases before implementation? I can think of a few reasons.

  • The items walk through all phases and cannot therefore be conveniently viewed in the context of each phase. That is, it’s not super easy to see all of the items together in their context of, say, the conceptual model. This objection can be overcome by including a custom field with the relevant text, and then using JQL (Jira Query Language) queries to list and sort the information in those fields as desired, as sketched just after this list.
  • It’s difficult to capture all of the one-to-many relationships that are possible advancing through the phases left to right. This can be overcome somewhat for testing as described above but it breaks down trying to span more than two of the other phases (e.g., concept-to-requirement-to-design). Capturing many-to-one relationships wouldn’t be a treat, either.
  • Each item in each phase may require a lot of documentation in terms of text, data, history, discussion, graphics, and so on, so trying to maintain all of that through multiple phases seems like it could get out of control in a big hurry.
  • Most of the material I’ve seen on Jira is in the context of handling action items. Implementation and test items are actions to take, but it’s harder to think of earlier phase items in that way. Up until the implementation phase the items serve more as background for the work to be done. I suppose you could create implementation items like, “do discovery” and “collect data” and “write requirements,” but those items don’t yield a traceable list of individual items unless the context is understood.
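
Regarding the JQL point above, here’s a rough sketch of what pulling phase-specific context out of Jira might look like, using Jira’s REST search endpoint with a JQL query. The project key, the “Conceptual Model Ref” custom field, and the credentials are all invented for illustration.

```python
# Sketch: pull implementation items grouped by a (hypothetical) custom field
# that records the conceptual model reference, using Jira's REST search API
# and a JQL query. Project key, field name, and credentials are invented.

import requests

JIRA_URL = "https://example.atlassian.net"
JQL = 'project = SIM AND "Conceptual Model Ref" ~ "B-12" ORDER BY created ASC'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,status"},
    auth=("user@example.com", "api-token"),   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], fields["summary"])
```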

The latter concern leads to a major issue that needs to be discussed.

Waterfall vs. Agile

It might sound like my framework, with its need to do a lot of upfront elicitation, conceptualizing, and requirements gathering, forces you into more of a Waterfall mode. Agile, by contrast, promises to let you begin doing meaningful work almost right away, starting from a small core of obvious functionality and progressively elaborating to a complete, thorough, nuanced solution.

Waterfall, at its worst, involves a huge amount of planning with the idea that most of the analysis is performed at the beginning, so that the implementers can go off and do their own thing and all of the pieces will automagically come together at the end. We know from experience that that way lies madness. Plenty of projects have failed because the pieces do not come together at the end and even if they do, the lack of interaction with and feedback from the customers means that the developed system might not meet the customers’ needs very well — or at all. This can happen even if the initial customer engagement is very good, because customers might not know what they even want to ask for in the beginning, and lack of ongoing interaction denies them the opportunity to leverage their specialized experience to identify opportunities as the work and refinement progresses.

Agile techniques in general, and Scrum techniques in particular, are intended to maintain customer engagement on the one hand, and to ensure on the other that the implementation team always has something running and is continually revisiting and working out integration and release issues. Agile and Scrum at their worst, however, assume that you don’t need to do much upfront planning at all and you can just start building willy-nilly and hammer things into shape over time. We used to call that cowboy coding and it was seen as a bad thing back in the day. Putting a shiny gloss of Scrum on it doesn’t fundamentally change the fact that such an approach is broken. (It also means that querying would-be hires about Scrum trivia items to gauge their knowledge entirely misses the point of how work actually gets done and how problems get solved. Good thing I’m not bitter about this…)

I could go on and on about how Agile and Scrum have been misused and misunderstood but I’ll keep this brief. The truth is that the approaches are more reasonably viewed on a continuum. It isn’t all or nothing one way or the other. You’re crazy if you don’t do some upfront work and planning, and you’re also crazy if you don’t start building something before too much time has gone by. What are some factors that change how you skew your approach in one direction or the other?

  • The scope and scale of the envisioned solution: Smaller efforts will generally require less planning and setup.
  • How well the problem is understood: If the analysis and implementation teams already understand the situation well they will generally require less planning.
  • The novelty of the problem: If the problem is recognizable then the amount of planning will be reduced. If it takes a long time to identify the true problem then much or even most of the work will involve upfront analysis.
  • The novelty of the solution: If a team is adapting an existing solution to a new problem then the amount of upfront work will be reduced. The team can start modifying and applying parts of the solution as discovery and data collection proceed. If a novel solution is called for it’s better to wait for more information.
  • The planned future uses of the solution: If the solution is going to be used for a long time, either as a one-off or as the basis of a modular, flexible framework (i.e., a tool) that can be applied over and over, it’s a good idea to devote more effort to analysis, planning, and setup. If the effort is to keep an existing system afloat for a while longer, a quick implementation will do, if it’s appropriate. Note that it’s possible to develop a flexible tool or framework over numerous engagements, slowly developing and automating parts of it as consistent, repeating elements are identified. This may be necessitated by funding limitations. Building and using tools imposes an overhead.
  • The quality of the desired solution: Every effort is constrained by the iron triangle of cost, quality, and time, where quality may be expressed in terms of features, performance, robustness, maintainability, and so on. Put a different way, you can have it fast, cheap, or good: pick two. At the extreme you could have it really fast, really cheap, or really good: pick one! The space shuttle had to be really good (and they still lost two of them), but it was really not fast and really not cheap.

There’s some overlap here but I’m sure you can recall time when you faced the same decisions.

In simulation in particular you pretty much have to understand what you’re going to simulate before you do it, so there might be a lot of upfront work to do the discovery, data collection, and planning to understand a new system and adapt a solution. However, if you already have a solution you merely need to adapt to a new situation, you can jump in and start customizing to the novelties almost as soon as they’re identified. I’ve worked at both ends of this spectrum and everywhere in between.

Roughly speaking, if you have to do a lot of upfront work the effort will tend toward Waterfall, though within that you can still use Agile and Scrum techniques to do the implementation, guided by the customer’s impression of what provides the most business value first and the implementer’s impression of what is technically needed first.

If you have reasonably well-understood problems and solutions, as in handling ongoing bug fixing and feature requests for an existing system, and where you are operating mostly in a reactive mode, the effort will tend toward a Kanban style.

For efforts that are somewhere in the middle, where there is a balance between planning and execution or discovery and implementation, the preferred m.o. would tend toward Scrum. Interestingly, teams sometimes use Kanban approaches in this situation, not because they are being strictly reactive, but because they are using Kanban as an effort-metering approach for a team of a fixed size. In those cases I’m thinking that the cost or scope or time would have to be flexible, and this approach might not work for a firm, fixed-price contract with an external client.

People’s impressions of management techniques, specifically regarding Waterfall and Agile/Scrum/Kanban, can vary strongly based on their own experience. They may tend to think that the way they’ve done it is the only or best way it should be done. They may also regard the body of knowledge surrounding Scrum as being very rigid and important to follow closely, although I view the whole thing more as a guideline or a collection of techniques than as holy writ. I pretty much figure that if you’re treating Scrum procedures as holy writ you’re doing it wrong. That said, an appreciation of the context and application of the generalized ideas is still helpful and often necessary.

I’ll also observe that the context of Scrum and Kanban is pretty limited. If I look through the training materials from the certification classes I took a while back I can argue that a significant amount of material in those booklets is general management, professional, and computer science stuff that doesn’t have much if anything to do with Scrum or Agile. In that regard the whole movement is a bit of a fraud, or at the least an overblown solution looking for a problem. This is why the initial champions of the technique have more or less moved on.

As for me, I’m fond of noting that I’ve been doing Agile since before Agile had a name, even if I hadn’t been doing explicit Scrum. I always started with discovery and data collection to build a conceptual model of the system to be simulated or improved. I always identified a flexible architecture that would solve the problems in a general way. I always built a small functional core that got something up and running quickly that was then expanded until the full solution was realized. I always worked with people at all levels of customer organizations to get ongoing feedback on my understanding and their satisfaction of the solutions’ operation, look and feel, accuracy, functionality, and fitness for purpose.

I finally note that, especially for the product owner role, the abilities to perform relevant subject matter analysis, to apply industry expertise and business judgment, to communicate with all manner of specialists from users to managers to line operators to technical teams, and to create and understand solutions are all far more important than any specific knowledge of Agile, Scrum, Jira, or any related technique or tool. Relative to understanding and being able to work with people and technology to solve problems, Agile and Scrum are trivial. This is especially true if other strong Scrum practitioners are present.

For example, the product owner role will sometimes be filled by a senior manager of a customer organization or internal competency, while a ScrumMaster will be a member of a vendor organization or a different internal competency. In these cases the product owner might be an expert in the business needs and be able to work with knowledgeable members of the solution team to groom the backlog in terms of business value, but that individual might not know a thing about the details of the proposed technical solution, how to do DevOps, how to write code, how to run a Scrum ceremony, or whatever. Such an individual would be guided by the ScrumMaster to get up to speed on the Scrum process and by specialists on the technical team(s) to work on the solution elements themselves. Like I said, every organization will treat these functions differently. Ha ha, I guess I should do a survey of the expected duties and capabilities of the various Scrum roles to illustrate the wide range of different approaches that are taken. It was highly illuminating when I did this for business analysts!


Pittsburgh IIBA Meetup: Do You Think You Are a Good Listener? Let Us Surprise You!

This evening’s talk at the Pittsburgh IIBA Meetup was about communication skills. Although there was some discussion about conveying messages the emphasis was on how to receive them. Both are important to business analysts. BAs must provide context for what they’re trying to elicit so they get what they want, but they must be patient enough to receive the information effectively as it’s offered.

There have been times when I’ve done this really well in my career (and in my life), and times when I’ve done this less well. Some reflection on my own has made me aware of my worst behaviors so I won’t ever repeat them but, as in many things, if I want to keep getting better in an area it helps to be thinking about it. In order to get better at something one has to be continuously (and purposefully) engaged with the material, so this talk was good for that engagement.

I’ll post a link to the presentation materials if and when they’re made available. In the meantime I’ll share a link to an online listening skills evaluation that was included in the slides.

The speakers shared a lot of insights into things to do and not to do, many of which I’d seen before and a couple of which I hadn’t. I’ve gotten to know one of the speakers a little bit and told her about something I learned in a leadership school in the Army, though it’s a common concept. When I saw it, it was called, simply, a “communication model.” That’s not super descriptive, but the idea is easy enough to grasp:

  1. What the sender thinks was sent
  2. What the sender actually sent
  3. What the receiver actually received
  4. What the receiver thinks was received

It seemed to me that many of the concepts described in the talk reflected different aspects of this communication model. How to send clearly, how to prevent and resolve misunderstandings, how to maintain strong engagement so all parties are incentivized to keep working until mutual understanding is achieved, how to ensure that you understand what the speaker is actually saying rather than confirming what you want to hear, carefully setting ground rules and defining terms, overcoming certain barriers, and more were all discussed. The speakers and audience members shared numerous personal stories and observations relating to these ideas, and I found the entire session to be productive.

There are many technical aspects of discovery and data collection processes but they can’t be executed effectively in many cases if they aren’t conducted in an environment with good communication and strong, supportive engagement.

Finally, the speakers mentioned the classic Dale Carnegie book, How to Win Friends and Influence People, which I’ve linked before and highly recommend. To that I’ll list a handful of other books I’ve read on related subjects. This may seem a bit longhaired but interpersonal skills are as important for technicians and analysts as they are for anyone, and perhaps more so.

Feeling Good Together by David D. Burns.
The Five Love Languages by Gary Chapman.
Emotional Intelligence by Daniel Goleman, Ph.D.
Conversation Confidence by Leil Lowndes.


A Simulationist’s Framework for Business Analysis: One More Update

So I’m working my way through the Jira course and I finally came to a clear understanding of a special case I’ve been trying to clarify.

For the most part the processes of discovery and data collection result in the creation of a conceptual model. The requirements for what must be addressed (included), modified, or eliminated in the conceptual model get written up in a straightforward manner from there, and the design, implementation, and test items follow.

There was an extra step when I wrote thermo-hydraulic models for nuclear power plant simulators at Westinghouse, and that’s what I’ve been trying to classify. I think it ultimately doesn’t matter how I define it; the point isn’t to define a rigid, formal framework, but to have a flexible framework that helps you get the work done.

At Westinghouse the discovery process involved a review of drawings to identify the components of the plant that needed to be simulated to drive and be controlled by every item on the operator panels. That was performed by senior engineers in conjunction with plant personnel. The data collection process involved a trip to the plant to record all the readings and switch settings at steady state, followed by a period of researching more plant documentation to determine the dimensions and operating details of all the equipment for each system and the state of the fluids within each part of the system. This involved a lot of calculations. The requirements were for a system that generated I/O behavior for every control panel item that was within one percent of plant values at steady state and within ten percent during transients, so they were straightforward also.

So far, so good, right? Here’s where it gets trippy.

The solution involved writing first-principles simulation code based on the differential equations that described the behavior of the fluid in each system. The same procedure was followed for the electrical models. Therefore the solution involved two steps: defining all of the required variables and governing equations, and then implementing them in code.
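
As a toy illustration of that two-step split (definitely not the actual plant code), suppose the write-up gives a single lumped energy balance for a fluid temperature, dT/dt = (Q − U·A·(T − T_amb)) / (m·cp); the implementation step then turns it into a time-stepped routine. All parameter values below are invented.

```python
# Toy illustration of "equations first, code second": a single lumped
# energy balance, dT/dt = (Q - U*A*(T - T_amb)) / (m * cp), advanced with
# simple explicit Euler steps. All parameter values are invented.

def step_temperature(T, dt, Q=5.0e5, U=250.0, A=12.0, T_amb=300.0,
                     m=2.0e3, cp=4186.0):
    """Advance the fluid temperature T (K) by one time step dt (s)."""
    dT_dt = (Q - U * A * (T - T_amb)) / (m * cp)
    return T + dT_dt * dt

T = 300.0                      # start at ambient
for _ in range(600):           # simulate 600 one-second frames
    T = step_temperature(T, dt=1.0)
print(f"Temperature after 10 minutes: {T:.1f} K")
```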

The issue I’ve been having when remembering all this was how to think about the write-up of the equations. In some ways it’s a conceptual representation of the plant systems, so it can be thought of as part of the conceptual model. In other ways it’s a guide for what’s required for the implementation, so it can be thought of as a specific part of the requirements. In still other ways it’s the result of a creative solution process so it’s part of the design. The architecture of the system is definitely part of the design but also part of the implementation.

I’ve been picturing the equations as part of the conceptual model, but on review they seem more properly to be part of the design, and so my confusion is potentially resolved. You may see this differently, and you’re welcome to do things your own way as your situation and impressions guide you.

Many sources (for example see here and here) inspired and informed my idea of a conceptual model, but my definition and usage applies for my framework only.

* * * * *

As an aside I can tell you that defining the governing equations and implementing them in code are very different problems. I’ve long thought that the hole in Westinghouse’s otherwise thorough and professional management and tracking process involved not having enough review of the methods for implementing the identified equations in code. There was a really good review process for the write-up of the equations, but there was never a review of the proposed implementation methods. The modelers were mostly left to do their best and their codes would simply be tested to see if they worked. Given the experience of some of the senior staff, including this review step might have saved a lot of wasted effort.

When I observe that I’ve seen things done well and less than well, and that my interest is always in helping organizations support things that work and avoid things that don’t, I’m often thinking of this episode. The oversight was subtle and it took me a few years to identify what was missed.


A Simulationist’s Framework for Business Analysis: An Update

While continuing to work through my Jira course with an eye toward using it in conjunction with a Requirements Traceability Matrix, I’ve been thinking about the context of how the work actually gets done.

I’ve discussed the content of the RTM in quite a lot of detail, but I haven’t talked too much about how the items in it are generated and linked.

The intended uses, or business needs, are identified by the customer or through an elicitation process. It will often happen that the needs identified by a customer will be clarified or added to by the analysts through the elicitation process and follow-on activities.

The items that make up the conceptual model are identified during the discovery process and characterized through the data collection process. Note that the conceptual model and requirement phases may be combined. Alternatively, the conceptual model phase may be combined with the design phase instead, where discovery of the elements needed to address the identified requirements takes place.

The items that make up the requirements are identified based on the items in the conceptual model and additional elicitation. If the project involves building a simulation then building all of the processes and entities in the conceptual model would obviously be a requirement. I write separate requirements for each item so they can be tracked and tested more accurately. Additional requirements have to do with how the solution is to be implemented and managed (non-functional requirements), items that are to be added beyond what is specified in the original conceptual model, and specific items that will address the identified business needs. Items that are going to be removed from what is described in the original conceptual model must be listed, too.

The design items are linked to requirements in a fairly straightforward way. That said, a really good discovery and design process will decompose a system to identify many common elements. A good design will incorporate generalized elements that flexibly represent many specific items in the conceptual model. A discrete-event simulation might include a general process element that can be readily customized to represent numerous specific processes at a location. For example, one could create separate simulation objects for primary inspection booths, secondary inspection booths, mechanical safety inspection slots, customs inspection dock spaces, tollbooths, and so on at land border crossings, or one could recognize the similarities between all of those facilities and implement a single object type that can be configured to represent any of them. This approach will make the implementation more consistent, more modular and flexible, easier to document, easier to build and maintain, easier to modify, and easier to understand.
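
As a rough sketch of what such a generalized process element might look like (the class name, parameters, and numbers are invented, and this is not any particular tool’s design), a single station type configured differently can stand in for any of those facilities:

```python
# Rough sketch of a single generalized process element for a discrete-event
# model. One class, configured differently, stands in for primary booths,
# secondary booths, tollbooths, etc. Names and parameters are invented.

import random
from dataclasses import dataclass

@dataclass
class ProcessStation:
    name: str
    servers: int                 # parallel lanes/booths/dock spaces
    mean_service_min: float      # mean service time in minutes

    def sample_service_time(self):
        """Draw one service time (exponential, as a placeholder distribution)."""
        return random.expovariate(1.0 / self.mean_service_min)

# The same type, configured for very different facilities:
primary   = ProcessStation("primary inspection booth", servers=8,  mean_service_min=0.75)
secondary = ProcessStation("secondary inspection",     servers=3,  mean_service_min=12.0)
tollbooth = ProcessStation("tollbooth",                servers=10, mean_service_min=0.25)

for station in (primary, secondary, tollbooth):
    print(station.name, round(station.sample_service_time(), 2), "min")
```

A real discrete-event engine would add queues, schedules, and routing, but the point is that one configurable type replaces a family of near-duplicate objects.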

The implementation items describe what is needed to instantiate the elements of the design.

The test items are linked directly to the implementation items. All links between phases (or columns) can be one-to-many or many-to-one, but the relationships between implementation items and their tests are the most likely to be one-to-many going left to right, from each implementation item to many tests. This is because many different kinds of automated and manual tests can be performed for each element of the solution. The types of tests can be performed at different levels of abstraction as well. Automated verification tests can be created in large numbers to check individual operations in detail. Validation tests based on expert judgment might be more subjective.

The acceptance items are generally derived directly from the tests. If all the tests pass, then acceptance should be guaranteed. It is therefore very important to properly and completely identify all of the tests and their acceptance criteria when working through a project.

These descriptions may seem obvious but I wanted to describe them explicitly to make sure the understanding and classification is clear. Using Jira or other approaches is a very specific process and we have to have clear language and definitions going forward.


Requirements Traceability Matrix (RTM)

One of my favorite tools to use during any project, but particularly for software projects, is the Requirements Traceability Matrix (RTM). I’ve discussed this in presentations I’ve given on the framework I use for performing business analysis (and project management to a lesser degree), but I wanted to take some time to talk about using the RTM in detail.

The purpose of the RTM is to make sure that all requirements map to business needs and that all of the requirements get fulfilled and accepted. People have done this sort of thing for years (decades… and longer) using punch lists and other techniques, but the RTM has a specific structure.

I’m going to describe its usage in terms of the framework I use but you can use any series of steps with any names you’d like. The most basic set of steps you can probably identify is Business Need, Requirement, Implementation, Acceptance. The framework I use consists of these steps: Intended Use, Conceptual Model, Requirements, Design, Implementation, Test, Acceptance. The conceptual model part has to do with describing the As-Is state of an existing system, but if a new system is being constructed there might not be an existing system to work from, and there may or may not be a need for a conceptual model.

Here’s a graphical representation of what a Requirements Traceability Matrix might look like, based on the workflow I use:

The idea is that every item in every column (step of the workflow) maps to at least one item in the previous and subsequent columns. That is, every item in the conceptual model maps to a business need (the intended use), every requirement maps to an item in the conceptual model, every element of the design maps to a requirement, every element of the implementation maps to a design element, every test item maps to an implementation item, and every acceptance item maps to a test item. The relationships can be one-to-many and many-to-one in either direction, but the relationship is typically one-to-many going left to right.

An exception to the idea that items always have to map in both directions is that non-functional requirement elements do not necessarily have to map back to items in the conceptual model or intended use. That said, the use of specific hardware and software elements may be driven by identifiable business needs, but other aspects may not be. For example, deploying operations to Amazon Web Services may be part of an explicit organizational strategy, but ensuring that a solution is modular, flexible, and maintainable may be just good practice. You can think of these ideas however you’d like.

You may notice a few dashed lines going back to question marks in the diagram. These show that if you identify something that’s needed part way through the process that doesn’t obviously map to an existing item in the previous step of your workflow, then that may indicate that you need to add something to that previous step.

Finally, the items at the bottom represent free-floating project methodology requirements, if they exist. They’re kind of like non-functional requirements in that they don’t address what the solution does (functional requirements), or what the solution is, but how the steps should be carried out. For example, we wrote procedures for collecting data at land border crossings at one of the companies I worked for. Another example may involve writing automated tests for implementation items when using test-driven development (TDD).

Let’s talk about all of the steps one by one.

The intended uses, essentially the identified business needs, describe the high-level operations the organization needs to accomplish in order to serve its customers. Examples include making decisions on whether or not to provide insurance coverage for a given customer, allowing a customer to place an online order, and allowing customers to request slots in a schedule.

The conceptual model is an abstract representation of a system. It may include entities, stations, operations, and transformations. The representation may take almost any form. In my work these representations have included diagrams, equipment lists, and equations.

The requirements describe the elements and operations that must be addressed in the solution. They describe the concepts that must be represented and the means of controlling the relevant operations.

The design describes the proposed elements that will make up the solution.

The implementation items are the specific pieces of work, in terms of objects, repositories, and processes, that need to be made real to construct the planned solution.

The test items are intended to address each element of the implementation.

The acceptance items are approved when each of the tests is shown to have been passed.

Once we understand what goes into a requirements traceability matrix (which is sometimes shortened to requirements trace matrix), we can think about how we actually implement and use one.

The simplest way is to write it out in text, for example in Microsoft Word. It can be written in outline form, but the key to linking items is to give each item some sort of alphanumeric label so the items can refer to each other.

A large VV&A project I did involved a 200-plus page document in three sections that had a trace matrix as part of its content. We embedded tables for each of the three identified intended uses, and each table mapped all of the implementation items to the requirements. There was no step for conceptual model and all of the test information was included in a separate, standalone bug-tracking system. The acceptance criteria were addressed in several different ways.

Remembering that testing involves both verification and validation, we see that there might be many different test items linked to the various implementation elements. Verification is meant to determine whether things work operationally. Do elements of the user interface cause the expected actions to occur when they are activated? Do calculations provide the expected results given specified inputs? Can certain types of users access certain screens? These kinds of tests are easier to automate and have more objective criteria for correctness. Validation is more subtle. It is meant to determine whether the implemented solution properly and sufficiently addresses the business need (the reason the solution was implemented in the first place). The criteria for validation might be very subjective, and might have to be determined by expert judgment.
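
As a small illustration of the verification side, here’s a sketch of the kind of automated check that answers “do calculations provide the expected results given specified inputs?” The toll function and its rates are invented, and these are written as simple pytest-style assertions rather than any project’s real tests.

```python
# Sketch of automated verification tests: objective checks of one calculation
# against specified inputs. The toll function and its rates are invented.

RATES = {"car": 3.00, "truck": 7.50}

def toll(vehicle_class, axles):
    """Toll = base rate for the class plus $1.25 per axle over two."""
    return RATES[vehicle_class] + 1.25 * max(0, axles - 2)

def test_car_base_rate():
    assert toll("car", 2) == 3.00

def test_truck_extra_axles():
    assert toll("truck", 5) == 7.50 + 3 * 1.25

if __name__ == "__main__":
    test_car_base_rate()
    test_truck_extra_axles()
    print("verification checks passed")
```

Validation, by contrast, asks whether that toll structure actually serves the business need, which is a judgment call rather than an assert statement.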

It’s also easy to imagine doing an RTM in Excel, using an outline format, though it might be difficult to make such a resource available simultaneously to a large group of users. A relational database implementation with a dedicated user interface may also work, and that’s what I’m thinking about specifying.
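
For the relational flavor, here’s a minimal sketch of what the tables might look like, using SQLite from Python. The phase names and columns are just one possible layout, not a finished design.

```python
# Minimal sketch of an RTM as two relational tables: items (one row per item
# in any phase) and links (one row per left-to-right relationship).
# Phase names and columns are just one possible layout.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (
    id     TEXT PRIMARY KEY,     -- e.g. 'B-3'
    phase  TEXT NOT NULL,        -- intended_use, conceptual_model, requirement, ...
    title  TEXT NOT NULL
);
CREATE TABLE link (
    from_id TEXT NOT NULL REFERENCES item(id),   -- earlier-phase item
    to_id   TEXT NOT NULL REFERENCES item(id)    -- later-phase item
);
""")

con.executemany("INSERT INTO item VALUES (?, ?, ?)", [
    ("A-1", "intended_use",     "Decide staffing levels"),
    ("B-1", "conceptual_model", "Arrival process"),
    ("C-1", "requirement",      "Represent hourly arrivals"),
])
con.executemany("INSERT INTO link VALUES (?, ?)", [("A-1", "B-1"), ("B-1", "C-1")])

# Requirements with no link back to an earlier item (should be empty except
# for non-functional requirements):
rows = con.execute("""
    SELECT id FROM item
    WHERE phase = 'requirement'
      AND id NOT IN (SELECT to_id FROM link)
""").fetchall()
print(rows)   # []
```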

I’m also thinking specifically about how to use Jira in a way that is compatible with the idea of an RTM, since I’m working my way through a Udemy course on Jira right now. I’ll let you know how that goes in subsequent posts.


Data Collection: Characterizing a Process

Having discussed the process of discovery yesterday I wanted to go into detail about data collection today. While discovery identifies the nouns and verbs of a process, data collection identifies the adjectives and adverbs. I’ve listed a bunch of ways to do both in this post on the observation technique, but here I’ll take a moment to describe how the collected data are used.

Data may be classified in several ways:

Continuous vs. Discrete: Continuous data can be represented numerically. This can include real numbers associated with physical dimensions, spans of time, amounts of energy, and intrinsic properties like hardness or pressure. This can also include counting variables that are usually integers, but not always. Discrete data items have to do with characteristics that aren’t necessarily numeric, like on or off, good or bad, colors, and so on. Interestingly, some of these can be represented numerically as discrete or continuous values. On/off is typically represented by one and zero (or boolean or logical), but color might be represented by names (e.g., blue or gray) or combinations of frequencies or brightness (like RGBA values).

Collected as point value or distribution: A point value is something that’s measured once and represents a single characteristic. A hammer might weigh sixteen ounces and the interior volume of a mid-sized sedan might be 96.3 cubic feet. An example of distributed data would be a collection of times taken to complete an operation. Both continuous and discrete data can be collected as distributions. A distribution of continuous data could be the weights of 237 beetles, while a distribution of discrete data could be the number of times an entity goes from one location to each of several possible new locations. A sufficient number of samples have to be collected for distributed data, which I discuss here.

Used as point value or distribution: Data collected as single point and distributed values can both be used as point values. Single-to-single is straightforward but distributed-to-single is only a little more complicated. If a single point value is derived from a set of distributed values it will typically capture a single aspect of that collection. Examples are the mean (average), minimum, maximum, standard deviation, range, and a handful of others. Data used as distributed values must be collected from a distribution of data samples. There are a number of ways this can be done. Mostly this has to do with whether an interpolation will or will not be used to generate values between the sample data values.

Given a two-column array based on the table above (in code we wouldn’t bother storing the values in the middle column), we’d generate a target value against the column of independent values, identify the “bucket” index, and then return the associated dependent value. In the case of the table above the leftmost column would be the dependent value and the rightmost column would be the independent value. This technique is good for discrete data like destinations, colors, or part IDs. If the target value was 23 (or 23.376) then the return value would be 30.

It’s also possible to interpolate between rows and generate outputs that weren’t part of the input data set.
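
Here’s a minimal sketch of both lookup styles. The two columns below are invented stand-ins for the kind of table described above, arranged so that a target of 23 (or 23.376) lands in the bucket that returns 30, matching the example.

```python
# Sketch of the two lookup styles with invented values. `independent` holds
# cumulative thresholds (the rightmost column in the description above) and
# `dependent` holds the value returned for each bucket (the leftmost column).
import bisect

independent = [10, 25, 40]      # cumulative thresholds (invented)
dependent   = [20, 30, 45]      # value returned for each bucket (invented)

def bucket_lookup(target):
    """Discrete lookup: return the dependent value for the bucket containing target."""
    i = bisect.bisect_left(independent, target)
    return dependent[min(i, len(dependent) - 1)]

def interpolated_lookup(target):
    """Linear interpolation between rows, so outputs need not be in the input set."""
    i = bisect.bisect_left(independent, target)
    if i == 0:
        return dependent[0]
    if i >= len(independent):
        return dependent[-1]
    x0, x1 = independent[i - 1], independent[i]
    y0, y1 = dependent[i - 1], dependent[i]
    return y0 + (y1 - y0) * (target - x0) / (x1 - x0)

print(bucket_lookup(23.376))                   # 30, as in the example above
print(round(interpolated_lookup(23.376), 2))   # a value between 20 and 30
```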

The interesting thing about data used as a distribution is that only one resultant value is used at a time. There are two ways this could happen. One is to generate a point value for a one-time use (for example, the specific heat as a function of temperature for steel). The other is to generate a series of point values as part of a Monte Carlo analysis (for example, a continuous range of process times for some event). Monte Carlo results ultimately have to be characterized statistically.
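
And a tiny sketch of the Monte Carlo use: draw many process times from an invented distribution and then characterize the results statistically.

```python
# Tiny Monte Carlo sketch: draw many process times from an invented
# distribution and characterize the results statistically.
import random
import statistics

def sample_process_time():
    """One process time in minutes (triangular distribution, invented parameters)."""
    return random.triangular(low=2.0, high=9.0, mode=4.0)

samples = [sample_process_time() for _ in range(10_000)]
print(f"mean={statistics.mean(samples):.2f}  "
      f"stdev={statistics.stdev(samples):.2f}  "
      f"p95={sorted(samples)[int(0.95 * len(samples))]:.2f}")
```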


Discovery: Learning What’s In a Process

Discovery is observing or researching what’s in a system or process. It identifies facilities, entities, resources, information, and activities. These are the nouns and verbs that comprise the system or process under investigation.

Discovery does not try to quantify the details of the process, which can be thought of as the adjectives and adverbs of the system or process. It merely identifies what happens to what, who does it, where things go, and how transformations are effected. These processes can sometimes be performed simultaneously, but for the most part discovery is done first so you know what quantification needs to be done. I refer to that quantification work as data collection and will describe it in detail in a separate post.

Discovery and data collection are both carried out using the techniques of subject matter expert interviews and observation. Focus groups, workshops, and document reviews can be used to a lesser degree. Although discovery and data collection are carried out using some of the same techniques as elicitation (interviews, focus groups, and workshops), I think of elicitation as a way to identify needs rather than a way to learn what’s going on. Focus groups and workshops are a form of group interview and these are sometimes called JAD or Joint Application Design sessions.

Discovery takes place in two possible contexts, with one being far more important than the other. The less important context involves creating an entirely new process, where you are working backwards from the desired outcomes and discovering what steps are needed to produce them. That process is more formally, and correctly, known as design. That effort is almost always performed in tandem with a form of data collection, since the “discovered” process must be parameterized to test its effectiveness. The more important context is that of an analyst coming on to an existing system that needs to be improved or adapted to a new situation. In this case the system or process already exists, and discovery refers to the analyst learning about it.

Adapting a process or system to a new situation turns out to be quite a broad area. For example, in the paper industry I worked for a capital equipment vendor of pulping equipment. If I was visiting an existing mill to do a performance audit or improvement review I would learn about the system that was already in place through review of drawings, walking the plant, and getting guided tours from plant personnel. That’s pretty straightforward. However, when designing a new process for a potentially new mill, I wasn’t really designing a new process from scratch. I was instead using the equipment that makes up a standard pulping system and applying variations (in equipment and sub-process types, size and model and number of pieces of each type of equipment, chemical and thermal treatments, and so on) that met the customers’ specifications. Similarly, when I worked as a business process reengineering specialist I was automating existing manual systems by applying a standard set of tools and techniques, which included FileNet document imaging hardware and software.

It’s rare that analysts and designers get to approach new problems with an entirely clean slate, so perhaps it is better to think of discovery as taking place on a continuum of how well defined the initial process is that an analyst will try to improve, automate, or implement.

Here’s how I approached discovery activities at various jobs and on various projects:

Sprout-Bauer (now Andritz)
Discovery
I divided my time into auditing and improving existing systems and designing and sizing new systems. Discovery for me took place mostly when performing audits and performance reviews. In order to prepare for each analysis I would review the relevant P&IDs (Process and Instrumentation Drawings) before leaving the office and I would walk the process first thing upon arriving at the plant. Walking the process was especially important because I often had to locate the sample ports I’d need to use to collect materials to send back to the lab for characterization.

I ended up being the keeper of many of those drawings. They ended up in racks in my cube, hung from clamping bars I’d tighten with a Swingline stapler! Older drawings that couldn’t be printed directly from the CAD system had to be copied using a special machine that would duplicate C- and D-size prints, which was always kind of fun as well. Over time I looked through all the company’s brochures to compile the most complete possible list of refiner types, sizes, and capacities, and doubtless I would have done that with other types of equipment as well. (I was surprised nobody had this. I expected some items to be custom for each job but probably not as many things as appeared to be.)

When I designed and sized new systems for proposals I wasn’t really doing discovery. I was only drawing and characterizing the systems that were being worked out by the senior engineers and managers for each customer through their internal discussions and external interactions with customers.

Westinghouse Nuclear Simulator Division
In this case the discovery was done by senior engineers, in concert with senior engineers from the customers’ power plants. They would sit together and highlight the equipment, pipes, connections, and instrumentation shown on the P&IDs using wide highlighter markers. Items on the drawing would be highlighted if their function was controlled by something on one of the main control room panels or if their function affected a reading on one of those panels. The main fluid and electrical components were all highlighted but most sample ports, cleanout loops, and other support equipment that was only used during outages was not highlighted. The highlighted items were then redrawn in a simplified form and given to the system modelers to guide their work. That is, the discovery was essentially complete when the individual modelers were brought into the process; the modelers were then left to do the research needed to characterize the systems they were assigned. This involved data collection, calculation, and a certain amount of interaction with the customer. That was a huge effort and I felt like I gained a career’s worth of experience doing it.

Micro Control Systems
Discovery here meant learning enough about the software system to be able to make the required changes. I also had to learn a new operating system, though that isn’t part of what I’m calling Discovery. I ended up acquiring a lot of domain knowledge of steel heating and processing (which helped me land the position at Bricmont), but that was just the raw information that populated the data structures I was modifying.

CIScorp
A lot of the discovery activities at CIScorp were kind of ad hoc but I learned more about it on one job than I did on any other before or since. The funny thing about it is that it seemed like the easiest thing in the world to do at the time, which I’m sure I owe to the experience of my senior manager and our customer liaison, who was himself either a director or VP. The project was to automate a heavily manual process of evaluating a potential customer’s employees’ health status to determine whether or not to underwrite a disability insurance policy, and if so what premium the company would need to pay. The goal was to identify every process and information flow within the department and see what could be automated.

We implemented new systems using FileNet document imaging hardware and software and the automation was achieved by scanning all received documents (of which thousands would come in from the employees’ health care providers) and performing all the collating, review, scoring, and decision-making steps using electronic copies of the documents rather than the physical paper copies that were sent in. It turned out that physically processing and moving all that paper required a lot of labor, far more than what was required to scan and index the originals and ship them off to long-term storage without needing to be highly collated.

The customer liaison walked us through every step of the existing manual process and had us talk to a handful of people in each function. There were something like eight different major functions and each of those had up to 40 employees. This was needed to handle the thousands of documents that flowed into the insurer each day. My senior manager was with us for the first week or so and then more or less left us on our own (I was the project coordinator and site rep and I had one younger analyst with me who was fresh out of college) after having imparted some wisdom about what to expect, how to interact with certain people, and how to map things out.

We actually combined the initial discovery process with some light data collection. While we were meeting people and learning the process we got estimations of how long each operation took to complete (based on limited observations and descriptions provided by individual workers) and how often various sub-operations occurred given the total number of documents processed. This information allowed us to calculate the total number of labor hours expended in every activity, and how many could be saved through automation. This was important because the first phase of that work was a competitive feasibility study to see which vendor (and toolset) would be used for the implementation. Since our company and the FileNet tools we were using won that competition (I was part of another two-person team that did something similar for a different insurance company), our company got to do the implementation as well. That involved another round of elicitation and discovery to gather the details needed to identify the data fields, business rules, user interfaces, and calculations that would be needed.

Bricmont (now Andritz)
Most of my work at Bricmont involved discovery processes that were much like those at Westinghouse, meaning that the contract defined what the system had to do, what it was connected to, and what the inputs and outputs were. Some aspects of each project were specified formally in the contract. That said, there was a certain amount of working out the details of the interfaces with other system vendors and the customer, so that was definitely discovery as opposed to data collection and characterization.

For example, the plant might have a certain convention for naming individual slabs, and that information had to be tracked from the caster (provided by a different vendor), to and through the furnace (that we provided), and to and through the rolling mill (provided by a different vendor) and points beyond (coiling and shipping, provided by other vendors). That information was also shared with the Level 1 and Level 2 systems within the furnace system and the Level 3 (and possibly Level 4) system(s) of the plantwide system.

Here’s a documented example of an interface arranged between a thin slab caster and a tunnel furnace in Monterrey, Mexico. It defines three types of messages, each of which includes a header. The messages were specified in terms of the binary layout of the data and were sent between DEC computers using the DECMessageQ protocol. The programmers also had to know how the bytes were packed on 1-, 2-, or 4-byte boundaries for debugging and testing purposes.


Caster Data
A cut at the shear at the caster end of the furnace causes the caster level 2 computer to send telegrams to the furnace level 2 computer. These messages describe the slab data and chemistry for the piece currently entering the furnace when a tail is cut, or the slab data for the piece whose head was just created when a head is cut. The tail message is also sent when tailing out. Each message contains a header with the following data:

No. Name Type Format Length Designation
1 WHO ASCII A7 7 Sender
2 DATE ASCII A10 10 Date
3 TIME ASCII A8 8 Time
4 TYPE ASCII A1 1 Telegram number
5 LEN ASCII A4 4 Message length

Three messages (telegrams) are associated with Caster data. They are:

Telegram 01: Thin Slab Report – Caster to TF, on tail cut
Telegram 02: Analysis Data – Caster to TF, on tail cut
Telegram 03: Thin Slab Report – Caster to TF, on head cut

The details of the caster messages follow:

Telegram: 01
Sender: Caster Computer
Receiver: Tunnel Furnace Computer
Function: Final Thin Slab Data Report
Occurrence: After each valid tail cut complete
Length: 76 bytes with header, 46 bytes without header

Data Fields :

No. Data/Comment Format Data Type Min/Max Units
1 Slab Number Char C*9
2 Plan Number Char C*9
3 Slab Type Char C*1 H,T,W
4 Grade ASCII Char C*7
5 Weight Word I*4 KG
6 Width Word I*4 mm
7 Thickness Word I*4 mm
8 Estimated Length Word I*4 mm
9 Actual Length Word I*4 mm

Telegram: 02
Sender: Caster Computer
Receiver: Tunnel Furnace Computer
Function: Analysis Data
Occurrence: After each valid tail cut complete
Length: 132 bytes with header, 102 bytes without header

Data Fields:

No. Data/Comment Format Data Type Min/Max Units
1 Slab Number Char C*9
2 Plan Number Char C*9
3 C Word I*4 .001%
4 Mn Word I*4 .001%
5 Si Word I*4 .001%
6 P Word I*4 .001%
7 S Word I*4 .001%
8 Cu Word I*4 .001%
9 Pb Word I*4 .001%
10 V Word I*4 .001%
11 Ni Word I*4 .001%
12 Cr Word I*4 .001%
13 Mo Word I*4 .001%
14 Ti Word I*4 .001%
15 Ca Word I*4 ppm
16 Sn Word I*4 .001%
17 As Word I*4 .001%
18 Pb Word I*4 .001%
19 Al Word I*4 .001%
20 Al-S Word I*4 .001%
21 O Word I*4 ppm

Telegram: 03
Sender: Caster Computer
Receiver: Tunnel Furnace Computer
Function: Initial Thin Slab Data Report
Occurrence: After each valid head cut complete
Length: 76 bytes with header, 46 bytes without header

Data Fields :

No. Data/Comment Format Data Type Min/Max Units
1 Slab Number Char C*9
2 Plan Number Char C*9
3 Slab Type Char C*1 H,T,W
4 Grade ASCII Char C*7
5 Weight Word I*4 KG
6 Width Word I*4 mm
7 Thickness Word I*4 mm
8 Estimated Length Word I*4 mm
9 Actual Length Word I*4 mm

Please refer to the SMS Caster Computer System Duty Book, section 3.5, for additional details on these messages.
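
To illustrate the byte-packing point, here’s a sketch of decoding Telegram 01 with Python’s struct module, inferring the layout from the table above (a 30-byte header of A7, A10, A8, A1, A4 fields followed by a 46-byte body). The field values, and the choice of little-endian byte order, are assumptions for illustration; the real DECMessageQ handling isn’t shown.

```python
# Sketch of decoding a Telegram 01 buffer with Python's struct module,
# inferring the layout from the table above: a 30-byte header (A7, A10, A8,
# A1, A4) followed by a 46-byte body (C*9, C*9, C*1, C*7, five I*4 fields).
# "<" (little-endian, no padding) is just one possible choice; note that with
# native alignment ("@") the first I*4 would not start at offset 26, which is
# exactly the 1-/2-/4-byte boundary issue mentioned earlier.
import struct

HEADER_FMT = "<7s10s8s1s4s"          # WHO, DATE, TIME, TYPE, LEN  -> 30 bytes
BODY_FMT   = "<9s9s1s7s5i"           # slab no, plan no, type, grade, 5 ints -> 46 bytes
assert struct.calcsize(HEADER_FMT) == 30
assert struct.calcsize(BODY_FMT) == 46

def decode_telegram_01(buf):
    """Split a 76-byte Telegram 01 buffer into header and slab data fields."""
    who, date, time_, type_, length = struct.unpack_from(HEADER_FMT, buf, 0)
    (slab_no, plan_no, slab_type, grade,
     weight, width, thickness, est_len, act_len) = struct.unpack_from(BODY_FMT, buf, 30)
    return {
        "sender": who.decode().strip(),
        "slab_number": slab_no.decode().strip(),
        "weight_kg": weight,
        "width_mm": width,
        "thickness_mm": thickness,
    }

# Round-trip with invented values:
buf = struct.pack(HEADER_FMT, b"CASTER ", b"2023-01-01", b"12:00:00", b"1", b"  76") + \
      struct.pack(BODY_FMT, b"SLB000123", b"PLN000045", b"T", b"A36    ",
                  18500, 1250, 52, 42000, 41875)
print(decode_telegram_01(buf))
```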

American Auto-Matrix
Discovery here meant learning about the software system and communications protocols. Since I was working with software and drivers whose functions were defined by previously established business-driven requirements, I didn’t do much of anything in the way of discovery. Early on I took a trip with several colleagues to a customer site at a college campus in New York to learn about the problems they were having with the PC software. The distributed HVAC controllers weren’t a problem but the monitoring and configuration system had some issues.

Regal Decision Systems
I had the chance to really develop and perfect my discovery techniques during my time at Regal. The first project they made for me (they didn’t have an actual place to put me so they created an opportunity for me, for which I am most appreciative) involved building a tool to simulate operations in dental and medical offices. The discovery I performed involved touring and mapping out the office, which included nine treatment rooms serving up to four dentists and a number of hygienists and assistants, and interviewing the dentists to learn what activities took place throughout the business. I also reviewed the appointment book to determine the number, duration, and type of procedures that were typically processed, as well as the communication with patients and insurance companies and administrative actions of various kinds. I used this knowledge to guide the subsequent data collection and characterization efforts.

The next thing I worked on was documenting the security inspection procedures for passengers and luggage at the 31 largest airports in the US and its territories; I was discovery lead at seven of them. Each visit started with a meeting with airport management that included demos of the kinds of simulations the company built and employed for analysis. We then collected drawings of each of the inspection spaces and toured each area to gather further details that drawings wouldn’t depict. The drawings provided most of the physical measurements, and notes, questions, and observations were used to describe each area and its operations and rules in detail. We did further data collection at the same time by videotaping several hours of operations so we could quantify process times, diversion percentages, and arrival rates.

At the same time I also began to travel to land border crossings all over the northern and southern borders of the U.S. These trips also involved kickoff meetings and demos with senior Customs and Immigration officers (those formerly warring departments have since been merged, along with supporting functions, into U.S. Customs and Border Protection, which is part of the Department of Homeland Security) and continued with tours of the port facilities. We learned what facilities existed and what operations took place at each location, and the kinds of traffic served (private vehicles, commercial vehicles, buses, and pedestrians; some ports also process rail traffic, but I never collected data on those). The initial discovery efforts guided subsequent data collection and process characterization activities.

The company eventually built tools to model port activities on the Canadian and Mexican sides of the border as well (plus operations on Mexico’s southern border; I visited a crossing on the Guatemala border), and their processes were very different from those employed on the U.S. side. I was the lead discovery agent and wrote all the documentation that would be used to build the modeling tools and guide data collection efforts. We started by traveling to Ottawa and Mexico City to meet with representatives from the major agencies that had a presence at their respective port facilities, to learn what their concerns were and what they might want to learn from the simulation analyses we performed. We then started touring representative large ports, first to discover the specific activities that took place at each, and later to perform combined discovery and data collection efforts. I ultimately visited twenty-three different facilities in person and analyzed that many more in detail.

Another major effort undertaken during my time at Regal involved building evacuations, and we built tools to simulate a wide variety of buildings and ground areas (e.g., the National Mall during mass participation events like the 4th of July fireworks) for several different applications and four major customers. I helped elicit business needs and capabilities, did in-person tours to gather details not shown in the floor plans and other drawings we collected, and negotiated interfaces (data items, layouts, and communication protocols) with partner vendors. I did a lot of design, documentation, UI wireframing, coding, and analysis for all of these projects.

I took part in similar activities for a few additional efforts of smaller scope, around the time I also took over a program management slot for some Navy support contracts. That involved discovery of an entirely different kind! That aside, I learned a lot about first responder operations, the National Incident Management System (NIMS), and a few interesting pieces of related technology.

RTR Technologies
Since RTR had split off from Regal and served largely the same customers, I was able to leverage a lot of my existing subject matter expertise and domain knowledge. Although we did a lot of work related to border operations, I never traveled to or analyzed operations in detail at any individual port. Instead, I learned how data is collected and processed behind the scenes nationally and how headquarters uses it to determine and justify the staff (and funding) needed to support the operations CBP is charged with carrying out. These discovery efforts involved interviews with a lot of managers, engineers, and analysts in DC, as well as reverse engineering existing staffing models and learning the process by which each year’s estimates were put together.

The other major activity, which took up the majority of my time, was supporting an ongoing program that maintained and employed a large, complex simulation of aircraft maintenance and support logistics. A lot of the discovery for me involved learning the software and its operations, the relevant data sources, the rules governing maintenance operations, and the needs of the different analysis efforts. Discovery also involved learning details of many different aircraft.

The final effort I supported, and ultimately marshaled to a close, was a third-party (independent) VV&A of a software tool used by the Navy and Marine Corps to manage fleets of aircraft over their deployment and service life. Discovery in this case included learning the details of the software itself but more importantly the methodology we needed to employ to conduct the independent verification and validation effort of the two major software modules.

Conclusion

Discovery mostly involves learning about the process you are trying to create, model, automate, or improve. This is done through interviews, observation, document review, and other means. I consider learning about internal and external customer needs to be something different; I tend to call that Intended Use Definition. However, the processes often overlap and are conducted iteratively.


A Simulationist’s Framework for Business Analysis: Combined Survey Results

These results are combined from versions of this talk given in Pittsburgh, DC, and Baltimore.

List at least five steps you take during a typical business analysis effort.

I would list the steps I take in conducting an analysis project as follows.

Project Planning
Intended Use (Identify or Receive Business Needs)
Assumptions, Capabilities, and Risks and Impacts
Conceptual Model (As-Is State)
Data Sources
Requirements (To-Be State: Abstract)
      –Functional (What it Does)
      –Non-Functional (What it Is, plus Maintenance and Governance)
Design (To-Be State: Detailed)
Implementation
Test
      –Operation and Usability (Verification)
      –Outputs (Validation)
Acceptance (Accreditation)
Project Closeout

As I point out in the presentation, the knowledge areas in the BABOK and the ideas they contain map roughly to these steps, though they are necessarily a bit more generalized. The audience members whose surveys I collected reported that they follow the same rough procedures, in whole or in part; they just tend to use different language in many cases.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance
  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability
  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings
  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance
  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders ensersy(?) requirements
  8. other notes
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps
    • no standard
  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope
  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing

List some steps you took in a weird or non-standard project.

I would classify these as specific activities that fall into place in the normal framework and are only listed as non-standard because the individuals reporting them hadn’t done them often enough to see that. I’ve done almost all of these things at one time or another. There is also the possibility that people listed these things as non-standard simply because they were asked to.

As the respondents were of many ages and levels of experience, it makes me wonder how people come to be business analysts. It seems to me that most people transition to the practice from other functions, though I’ve met people who began their careers doing it. Certainly the IIBA is making allowances for training beginning professionals in the practice. I am somewhere in between. I began as an engineer and software developer who just happened to analyze systems using the same techniques. Most of the systems were fluid or thermodynamic, but there was an interval where I worked for a company that did business process reengineering using a document imaging capability.

One other item that comes up in this list is the need to iron out problems with individuals and managers, which sometimes involves working around them. If there’s anything more common than needing to iron out problems with people, I don’t know what it is. It comes up in every profession and every relationship. I can report that fulfilling my professional duties, like everything else, has gotten easier as I’ve learned how to work with people more cooperatively, smoothly, and supportively.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • Adjustments in project resources
  • after initial interview, began prototyping and iterated through until agreed upon design
  • create mock-ups and gather requirements
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • Developers and I create requirements as desired
  • documented non-value steps in a process new to me
  • explained project structure to stakeholders
  • guided solutioning
  • identified handoffs between different contractors
  • interview individuals rather than host meetings
  • iterative development and delivery
  • Made timeline promises to customers without stakeholder buy-in/signoff
  • make executive decisions without stakeholder back-and-forth
  • observe people doing un-automated process
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • physically simulate each step of an operational process
  • Regular status reports to CEO
  • simulation
  • starting a project without getting agreed funding from various units
  • statistical modeling
  • surveys
  • town halls
  • Travel to affiliate sites to understand their processes
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings
  • Write requirements for what had been developed

Name three software tools you use most.

The frequency of reported tool use matches closely with my own experience. Excel is such a generalized and approachable tool that people tend to use it for almost everything. It’s no wonder BAs explicitly report using it more often than anything else. Word and PowerPoint are almost as ubiquitous, and like SharePoint, Outlook, and similar tools they organize and enhance communication and mutual understanding. Jira and Confluence are used quite often to manage requirements and work products as a standard, relational tool. They come up more and more as shops adopt Agile methods in general and Scrum and its cousins in particular.

Specialized tools like SQL and R come up less often than I might have expected, but we’re working with a small sample size, and there may be cases where audience members use them, just not as often as the other tools they reported. Several specialized development tools were listed, showing that there is an overlap between analysis and development skills.

  • Excel (15)
  • Word (11)
  • Visio (8)
  • Jira (7)
  • Confluence (5)
  • SharePoint (4)
  • PowerPoint (3)
  • MS Outlook (2)
  • Notepad (2)
  • SQL Server (2)
  • Adobe Reader (1)
  • all MS products (1)
  • ARC / Knowledge Center(?) (Client Internal Tests) (1)
  • Azure (1)
  • Basecamp (1)
  • Blueprint (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Email (1)
  • e-mail (1)
  • Enbevu(?) (Mainframe) (1)
  • Enterprise Architect (1)
  • Google Docs (1)
  • Google Drawings (1)
  • illustration / design program for diagrams (1)
  • LucidChart (1)
  • MS Office (1)
  • MS Project (1)
  • MS Visual Studio (1)
  • MS Word developer tools (1)
  • NUnit (1)
  • OneNote (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • RoboHelp (1)
  • Scrumhow (?) (1)
  • SnagIt (1)
  • SQL (1)
  • Tableau (1)
  • Team Foundation Server (1)
  • Visible Analyst (1)
  • Visual Studio (MC) (1)

Name three non-software techniques you use most.

It’s a bit more difficult to group and count these. The ideas of interviewing and meeting come up more than anything else, though almost never in exactly the same language. Various forms of diagramming, modeling, and decomposition come up often as well, and those are tools I emphasize in my own practice and in the slides. It might make a fun group exercise to group these into categories, though I’m sure the results would not be surprising.

  • communication (3)
  • meetings (2)
  • prototyping (2)
  • Scrum Ceremonies (2)
  • “play package” (1)
  • 1-on-1 meetings to elicit requirements (1)
  • active listening (1)
  • analysis (1)
  • analyze audience (1)
  • apply knowledge of psychology to figure out how to approach the various personalities (1)
  • business process analysis (1)
  • calculator (1)
  • conflict resolution and team building (1)
  • costing out the requests (1)
  • data modeling (1)
  • decomposition (1)
  • develop scenarios (1)
  • diagramming/modeling (1)
  • documentation (1)
  • elicitation (1)
  • expectation level setting (1)
  • facilitation (1)
  • fishbone Diagram (1)
  • five Whys (1)
  • handwritten note-taking (1)
  • hermeneutics / interpretation of text (1)
  • impact analysis (1)
  • individual meetings (1)
  • initial mockups / sketches (1)
  • interview end user (1)
  • interview stakeholders (1)
  • interview users (1)
  • interviews (1)
  • JAD sessions (Joint Application Development Sessions) (1)
  • listening (1)
  • lists (1)
  • Notes (1)
  • organize (1)
  • paper (1)
  • pen and paper (1)
  • phone calls and face-to-face meetings (1)
  • process decomposition (1)
  • process flow diagrams (1)
  • process mapping (1)
  • process Modeling (1)
  • recognize what are objects (nouns) and actions (verbs) (1)
  • requirements meetings (1)
  • responsibility vs. collaboration using index cards (1)
  • rewards (food, certificates) (1)
  • shadowing (1)
  • Spreadsheets (1)
  • surveys (1)
  • swim lanes (1)
  • taking notes (1)
  • test application (1)
  • training needs analysis (1)
  • use paper models / process mapping (1)
  • user group sessions (1)
  • user stories (1)
  • whiteboard diagrams (1)
  • whiteboard workflows (1)
  • wireframing (1)
  • workflows (1)

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

We unsurprisingly find our goals to be, roughly: automate, create, change, develop, improve, process, redesign/re-engineer, replace, simplify, update. Ha ha, it’s almost like we’re talking about business analysis and process improvement!

  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate and ease reporting (new tool)
  • automate new process
  • automate the contract management process
  • automation
  • block or restore delivery service to areas affected by disasters
  • clear bottlenecks
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • data change/update
  • data migration
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • document current inquiry management process
  • enhance system performance
  • implement new software solution
  • improve a business process
  • improve system usability
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • maintain the MD Product Evaluation List (online)
  • map geographical data
  • move manual Excel reports online
  • process data faster
  • process HR data and store records
  • provide business recommendations
  • recover fuel-related cost fluctuations
  • redesign
  • reduce technical debt
  • re-engineer per actual user requirements
  • reimplement solution using newer technology
  • replace current analysis tool with new one
  • “replat form” legacy system (?)
  • simplify returns for retailer and customer
  • system integration
  • system integration / database syncing
  • update a feature on mobile app

A Simulationist’s Framework for Business Analysis: Round Three

I was finally able to give this talk to the IIBA’s Baltimore Chapter. The results of the newest survey are below, and the results from the initial talks are here and here. The link to the current version of the presentation follows directly.

https://www.rpchurchill.com/presentations/SimFrameForBA/index.html

Here are the latest survey results.

List at least five steps you take during a typical business analysis effort.

  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing

List some steps you took in a weird or non-standard project.

  • documented non-value steps in a process new to me
  • guided solutioning
  • identified handoffs between different contractors
  • iterative development and delivery
  • make executive decisions without stakeholder back-and-forth
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • starting a project without getting agreed funding from various units
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings

Name three software tools you use most.

  • Excel (4)
  • Jira (3)
  • Word (3)
  • Confluence (2)
  • PowerPoint (2)
  • e-mail (1)
  • Google Docs (1)
  • Google Drawings (1)
  • MS Word developer tools (1)
  • RoboHelp (1)
  • SharePoint (1)
  • SnagIt (1)
  • Visio (1)

Name three non-software techniques you use most.

  • analysis
  • analyze audience
  • apply knowledge of psychology to figure out how to approach the various personalities
  • communication
  • expectation level setting
  • JAD sessions (Joint Application Development Sessions)
  • meetings
  • phone calls and face-to-face meetings
  • “play package”
  • process flow diagrams
  • prototyping
  • test application
  • training needs analysis
  • user stories
  • whiteboard diagrams
  • wireframing
  • workflows

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • automate and ease reporting (new tool)
  • automate the contract management process
  • automation
  • block or restore delivery service to areas affected by disasters
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • document current inquiry management process
  • maintain the MD Product Evaluation List (online)
  • move manual Excel reports online
  • process HR data and store records
  • recover fuel-related cost fluctuations
  • redesign
  • reduce technical debt
  • re-engineer per actual user requirements
  • replace current analysis tool with new one
  • simplify returns for retailer and customer

A compilation of results from all three surveys will be posted shortly.


Structured Thought: Problem Solving

On Monday I attended a Meetup at the IIBA’s Pittsburgh Chapter for a presentation about structured problem solving given by a gentleman named Greg Acton.

The presentation included an introduction of about twenty minutes, a small group exercise for another twenty minutes, and a follow-on group discussion for a further twenty minutes, all of which were highly engaging. Mr. Acton added another twenty minutes or so sharing an interesting story about how the analysis technique he described worked on a different problem at a well-known organization.

I enjoyed the entire affair because it lines up so closely with my own experiences and body of knowledge. To begin, Mr. Acton and I have a number of shared or parallel experiences. We both spent some early years in the military (he on Aegis Missile cruisers, I in Air Defense Artillery, where I worked with several different missile systems; I also supported several Navy programs over the years and was at one point offered a job building maintenance training simulations of Aegis cruisers, which for various reasons I declined). We both trained as Six Sigma Black Belts. He plied his Six Sigma training at Amazon in the early 2000s. I made it through several rounds of programming tests and phone interviews with Amazon in the late 90s, with a goal of programming large-scale optimizations of product distribution and warehouse pre-positioning involving 60,000 variables at a pop, before deciding that 70 hours a week working on Unix systems in Seattle was less appealing than other options I had at the time (which themselves sometimes involved huge commitments of time). Finally, we both attended Carnegie Mellon University (he in computer science followed by an MBA, I in mechanical engineering with a bunch of computing courses). One of the courses I enjoyed most during my time there was called Analysis, Synthesis, and Evaluation (ASE, it’s no longer offered under that name but the ideas remain in force all through the university’s verbiage), which was all about problem solving and creative thinking. One of the difficult things for me to assimilate at that time was the idea that many of the parameters of a problem could be estimated just by thinking about them. I was stuck on the concept of having to go look and measure things. Having looked at and measured countless things over the years I have come to appreciate the idea of getting an order-of-magnitude hack at a problem without a vast amount of moving around.

The example I’ll offer is one our course instructor gave, which was estimating how much water a pendulum-driven bilge pump might have to remove from a boat during the course of a day. Figuring one wave-driven pump cycle every few seconds and a volume of water measuring twenty feet by five feet by four inches (give or take in any direction), you might end up with about 57,600 cubic inches of water. Hmmm. Given 86,400 seconds per day, we might need a bilge pump to remove on the order of two cubic inches per cycle. That seems like a lot. If we assume ten feet by two feet by three inches we get something like half a cubic inch per cycle. That could work for smaller boats and for less leakage. When the professor worked it out in class he suggested the amount of water moved per cycle was roughly equivalent to spitting one time. That seemed tractable for a simple, inexpensive, unpowered device for small boats.
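Just to show how quickly such an estimate can be checked, here is a tiny sketch of the same arithmetic; the three-second cycle time is my assumed interpretation of “every few seconds.”

SECONDS_PER_DAY = 86_400
CYCLE_SECONDS = 3  # assumed interpretation of "one cycle every few seconds"

def cubic_inches_per_cycle(length_ft, width_ft, depth_in, cycle_s=CYCLE_SECONDS):
    """Water volume each pump cycle must move to keep up over a full day."""
    volume_in3 = (length_ft * 12) * (width_ft * 12) * depth_in
    return volume_in3 / SECONDS_PER_DAY * cycle_s

print(cubic_inches_per_cycle(20, 5, 4))  # ~2.0 cubic inches per cycle
print(cubic_inches_per_cycle(10, 2, 3))  # ~0.3 cubic inches per cycle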

One thing we didn’t get to do in that course, which I very much looked forward to, was participate in the bridge building exercise. To illustrate that engineers are essentially economists who do physics, the activity shows that engineering decisions are driven by a constrained optimization process. The goal is to build a bridge that spans a specified distance (about 24 or 30 inches) and holds a flat platform/roadway. Weights are then hung from the roadway. Points are scored for the amount of weight supported, up to 35 pounds, and for the inverse weight of the bridge. That is, the lighter the bridge, the more points scored. Structures were typically made out of combinations of balsa wood, fishing line, cardboard, and glue. I saw older classmates do this before my junior year, but for whatever reason my class didn’t. (I remain somewhat cranky that I spent four years doing random differential equations without ever feeling like I was actually doing anything, but I guess that’s why I enjoyed my computing courses and the hands-on projects I got to do with my fraternity during those years.) I heard that a similar contest in earlier years involved building a contraption to help an egg survive being dropped from the nearby Panther Hollow Bridge. (We jokingly referred to this as the Panther Hollow Drop Test and applied it to many items in a humorous vein.) Points in that context were awarded increasingly with decreasing weight and decreasing fall time. A parachute-based device would gain points for being light but lose points for falling slowly. A flexible, water-containing device would fall quickly but be heavy. You get the idea…

One of the techniques described in the ASE class was to brainstorm every possible variable that could have an effect on the operation of a system or process. We tended to list things in groups which were plotted at right angles (imagine a chessboard in two dimensions or a Rubik’s Cube in three) to define combinations of variables we’d have to consider, which brings us back to the presentation. We were asked to analyze why certain classes of Lyft drivers achieved slightly different outcomes. (I’m going to be deliberately vague about the details.) The specific method Mr. Acton described involved starting with one or more root factors and then branching off through more and more sub-factors, listing possible outcomes of each (in this case each factor could reasonably be divided into only two cases). In our group I suggested listing every possible factor that could affect a trip (or fare) and a driver’s willingness to volunteer for it. The speaker described a left-to-right breakdown of the problem (one to many) but allowed that a right-to-left approach could also be valid (start with many and organize). That turns out to be a common approach, and brainstorming is, of course, a standard BABOK technique. We knew about surge pricing, where prices increase to incentivize drivers to meet demand in more difficult situations, so we concentrated on listing factors which different drivers would approach differently. We didn’t get our list diagrammed out the way we were shown but we had a good conversation. And, obviously, one can only expect to get so far in twenty minutes.
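A minimal sketch of that “list every combination of factors” idea follows; the factor names here are purely hypothetical placeholders, not the factors from the exercise or the actual findings.

from itertools import product

# Hypothetical binary factors a driver might weigh when deciding to take a fare.
factors = {
    "time_of_day": ["peak", "off-peak"],
    "trip_length": ["short", "long"],
    "surge_pricing": ["active", "inactive"],
}

# Enumerate every combination so none is overlooked during the analysis.
for combo in product(*factors.values()):
    print(dict(zip(factors.keys(), combo)))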

Next came the open discussion, where each of the six groups described what they had come up with. It was interesting that more or less every group came up with at least some unique factors to consider in analyzing the problem. When Mr. Acton described the ultimate findings they made sense. The findings also noted that the system was “working as designed,” so there wasn’t necessarily a “problem.” The kinds of analyses each group was doing could ultimately have led them to the correct answer, though I can’t remember offhand whether any of the groups had listed either of the two factors that ultimately drove the variation that inspired the analysis in the first place.

On the theory that if all you have is a hammer everything looks like a nail I was reminded all through the evening of a number of techniques I’ve used and how they could also be applied successfully.

The first was called Force Field Analysis, which I learned about from a Total Quality Management (TQM) consultant in the mid-90s. I don’t know if that could have solved the whole problem, but I was reminded of it during the small group session as we were thinking about how different factors affected different types of drivers.

As a simulationist I naturally spent a lot of time thinking both about how I might build a simulation of the Lyft ecosystem that could generate statistics to illuminate the problem, and about how I would go about the discovery and data collection needed to build it. The latter activity alone might well have allowed me (or anyone) to identify the factors that turned out to be decisive.

I was finally reminded of the idea of analyzing equations to see the effects of modifying every possible term. Back in the day I wrote a tool to calculate pressure drops through sections of piping systems of the type found in paper mills and nuclear power plants. I had to do a bit of digging in old code and documentation, and a bit of web searching, to recreate the crime 🙂 but the basic equation is shown below (click on it to go to the source page).
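In its standard form (restated here as a stand-in, since the original image isn’t reproduced in this copy), the Darcy-Weisbach relation is:

\Delta p = \lambda \cdot \frac{L}{D} \cdot \frac{\rho v^{2}}{2}

where L is the pipe length, D the inner diameter, ρ the fluid density, v the flow velocity, and λ the Darcy friction factor discussed below.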

That’s not all there is to it; the Darcy-Weisbach friction coefficient (represented by the lambda character) itself needs to be calculated (and is in my code). One method for doing so is described at the linked page. I’d give you what I had in my engineering notes from December of 1989, but that particular file is somewhere in storage. Anyway, the point is that you can see what happens if you change things. The length term is in the numerator, so it’s clear that pressure drops will be larger in longer runs of pipe. The diameter of the pipe is in the denominator, which indicates that a larger (wider) pipe will incur a lower pressure drop. The density term is in the numerator, which means that to move more mass you need more pressure. The velocity term is also in the numerator and is squared, which means that you would need four times the pressure to move the fluid at double the speed. The coefficient is not constant, and varies with factors like the roughness of the inside of the pipe (which itself may be a function of its age) and the viscosity of the fluid being pushed (molasses is more viscous than water or steam and thus requires more pressure to push).
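Here is a rough sketch of that “change one term and see what happens” exercise, holding the friction factor constant purely for illustration (in practice it varies with roughness and viscosity, as noted); the numbers are arbitrary.

def pressure_drop(friction, length_m, diameter_m, density_kg_m3, velocity_m_s):
    """Darcy-Weisbach pressure drop with a supplied (constant) friction factor."""
    return friction * (length_m / diameter_m) * (density_kg_m3 * velocity_m_s ** 2) / 2.0

base = pressure_drop(0.02, 100.0, 0.1, 1000.0, 2.0)
fast = pressure_drop(0.02, 100.0, 0.1, 1000.0, 4.0)
print(fast / base)  # 4.0 -- doubling the velocity quadruples the pressure drop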

This kind of analysis is great when the terms are well understood and verified through experiment and long practice. When you’re trying to write your own equations, however, you can run into trouble.

Consider the analysis of several factors on the price of a good in a market. The best economic analysts know you can’t write an accurate equation to describe this, but as a thought experiment we can write something like:
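In rough symbols (my reconstruction from the description that follows, not the original image):

P_{\mathrm{good}} \propto \frac{M_{\mathrm{supply}} \cdot D_{\mathrm{good}}}{M_{\mathrm{demand}} \cdot S_{\mathrm{good}}}

where the terms are the money supply, the demand for the good, the demand for money, and the supply of the good.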

This isn’t a “real” equation in any sense, but it can be a useful tool for understanding the factors that may affect prices. If the supply of money or the demand for a good increases, those terms being in the numerator, the price of the good should, in theory, increase. If the demand for money or the supply of a good increases, then the price of the good would tend to decrease. Like everything in economics it’s way more complicated, but remember: it’s a thought experiment, a tool.

There are a few other things we can do with this rough tool. We can look at all of the terms individually and see what factors affect the values of each. Many economic theorists assert that the supply of money should be defined in different ways. The Federal Reserve publishes a whole range of values for different formulations of the money supply. Other economists define even more values. When it comes to demand for money how does one even assign a numeric value to it? The same applies to the demand for a good. How do substitution effects figure in? What special situations occur? Do you measure the supply of a good by what’s available to consumers, what’s sitting in warehouses (or in DeBeers’ diamond safes being purposefully held off the market), or what’s planned in terms of production capacity or reserves in the ground?

Analysts often have trouble defining concepts like inflation, and many different formulations and definitions exist. As citizens and observers we feel in our gut that it’s a comparison between the supply and demand of money vs. the supply and demand of goods on a macro scale, and we might be able to consider the problem in terms of the above equation. The use of such an equation is problematic but again we can do some useful thought exercises. If the money supply goes down then we would usually expect that prices would go down (we’d call this negative inflation or deflation). So far, so good, right? But what if the demand for money goes down even more? This would (again, in theory) cause prices to rise. This would be counterintuitive to many economists and many other observers.

Looking at the problem from a different angle, we could ask whether trying to come up with a single index value for inflation across an entire economy even has meaning. Consider the following graph of price changes in different sectors over a recent twenty-year period.

If prices in different sectors are changing at such different rates then what does a single, aggregate index number mean? Are the sector indices even meaningful given that they are also based on a myriad of individual prices?

A really good monetary theorist would account for all of these terms, but some prominent economists appear to omit some of them, like the demand for money. How would that affect an analysis?

Returning to our work as system, process, or business analysts we see that we have to try very hard to ensure we identify every relevant factor. To do this we may have to identify every possible factor and explicitly determine whether it’s relevant. I like to invoke this cover photo from an interesting book to illustrate the point.

This image cleverly demonstrates why it’s important to look at problems from all angles. The many techniques listed in the BABOK represent a diffuse way to look at problems from many angles, while something like Unified Modeling Language (UML) is a more organized way.

Mr. Acton also referred to some prickly problems that involved large numbers of interrelated values, and he used the term cross-product to describe that interrelatedness. I don’t remember the exact details or context, but it made me remember a series of mechanisms in a large-scale maintenance logistics model I worked with for nearly five years while supporting Navy and Marine Corps aviation activities. A number of very smart people analyzed, pondered, modified, and argued over the model the entire time with a near-religious fervor (which I loved, by the way), and we ended up making some fairly substantive changes during the time I was there. One change was based on an understanding I came to about how a complicated cross-product actually generated outcomes in the heart of the model. Again, I’m not sure we were thinking about the same things, but we were clearly both thinking of complex, interrelated systems and effects.

The speaker described a single technique but also explained that he works with an organization that employs and teaches a wide range of problem-solving techniques that are all geared toward the practice of peeling problems apart so they can be properly analyzed and effectively understood and improved. I imagine that the entire body of knowledge would be worthwhile to explore.
