Understanding and Monitoring Microservices Across Five Levels

Did I know anything about microservices (or DevOps, or…) when I landed at Universal recently? No, I did not. Did that stop me from figuring it out in short order? Nope. Did that stop me from being able to reverse-engineer their code stacks in multiple languages from day one? Of course not. Did that stop me from seeing things nobody else in the entire organization saw and designing a solution that would have greatly streamlined their operations and saved them thousands of hours of effort? Not a chance. Could I see what mistakes they had made in the original management and construction of the system and would I have done it differently? You betcha.

While there are definitely specific elements of tradecraft that have to be learned to make optimal use of microservices, and there would always be more for me to learn, the basics aren’t any different from what I’ve already been doing for decades. It didn’t take long before the combination of learning how their system was laid out, seeing the effort it took to round up the information needed to understand problems as they came up in the working system, and seeing a clever monitoring utility someone had created made it obvious they needed a capability that would help them monitor and understand the status of their entire system. Like I said, it didn’t require a lot of work; it was something I could just “see.” I suggested an expanded version of the monitoring tool they’d created, one that would let anyone see how every part of the system worked at every level.

Now don’t get me wrong, there were a lot of smart and capable and dedicated people there, and they understood a lot of the architectural considerations. So what I “saw” wasn’t totally new in detail, but it was certainly new in terms of its scope, in that it would unify and leverage a lot of already existing tools and techniques. As I dug into what made the system tick I saw that the concepts and capabilities broke down into five different levels, but first some background.

Each column in the figure above represents a single microservices environment. The rightmost column shows a production environment, the live system of record that actually supports business operations. It represents the internal microservices and the external systems that are part of the same architectural ecosystem. The external items might be provided by a third party as a standard or customized capability. They can be monitored and interacted with, but the degree to which they can be controlled and modified might be limited. The external systems may not be present in every environment. Environments may share access to external systems for testing, or may not have access at all, in which case interactions have to be stubbed out or otherwise simulated or handled.

The other columns represent other environments that are likely to be present in a web-based system. (Note that microservices can use any communication protocol, not just the HTTP used by web-facing systems.) The other environments are part of a DevOps pipeline used to develop and test new and modified capabilities. Such pipelines may have many more steps than what I’ve shown here, but new code is ideally entered into the development environment and then advanced rightward from environment to environment as it passes different kinds of tests and verifications. There may be an environment dedicated to entering and testing small hotfixes that can be advanced directly to the production environment with careful governance.

The basic structure of a microservice is shown above. I’ve written about monitoring processes here, and this especially makes sense in a complicated microservices environment. I’ve also written about determining what information needs to be represented and what calculations need to be performed before working out detailed technical instantiations here. A microservice can be thought of as a set of software capabilities that perform defined and logically related functions, usually in a distributed system architecture, in a way that ideally doesn’t require chaining across to other microservices (like many design goals or rules of thumb, this rule need not be absolute, just know what you’re trying to do and why; or, you’ve got to know the rules before you can break them!).
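
Since everything below leans on each service exposing some kind of status endpoint that can be queried, here’s a minimal sketch of what that might look like in Node with Express. The route name, port, and payload fields are all hypothetical; they’re just meant to make the idea concrete.

```javascript
// Minimal sketch of a service exposing a status endpoint a monitor could poll.
// Assumes Node with Express installed; the route name and fields are illustrative only.
const express = require('express');
const app = express();

app.get('/status', (req, res) => {
  res.json({
    service: 'example-service',                          // hypothetical name
    version: process.env.SERVICE_VERSION || 'unknown',
    status: 'running',
    uptimeSeconds: Math.floor(process.uptime()),
    timestamp: new Date().toISOString(),
  });
});

app.listen(3000, () => console.log('status endpoint listening on 3000'));
```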

I’ve listed them here in the raw form in which I discovered them. I never got to work with anyone to review and implement this stuff in detail; something came up every time we scheduled a meeting, so I’ve found inconsistencies as I worked on this article. I cleaned some things up, eliminated some duplications, and made them a bit more organized and rational down below. A more detailed write-up follows.

Level 1 – Most Concrete
Hardware / Local Environment

  • number of machines (if cluster)
  • machine name(s)
  • cores / memory / disk used/avail
  • OS / version
  • application version (Node, SQL, document store)
  • running?
  • POC / responsible party
  • functions/endpoints on each machine/cluster

Level 2

Gateway Management (Network / Exposure / Permissioning)

  • IP address
  • port (different ports per service or endpoint on same machine?)
  • URL (per function/endpoint)
  • certificate
  • inside or outside of firewall
  • auth method
  • credentials
  • available?
  • POC / responsible party
  • logging level set
  • permissions by user role

Level 3

QA / Testing

  • tests being run in a given environment
  • dependencies for each path under investigation (microservices, external systems, mocks/stubs)
  • schedule of current / future activities
  • QA POC / responsible party
  • Test Incident POC / responsible party
  • release train status
  • CI – linting, automatically on checkin
  • CI – unit test
  • CI – code coverage, 20% to start (or even 1%), increase w/each build
  • CI – SonarQube Analysis
  • CI – SonarQube Quality Gate
  • CI – VeraCode scan
  • CI – compile/build
  • CI – deploy to Hockey App for Mobile Apps / Deploy Windows
  • CD – build->from CI to CD, publish to repository server (CodeStation)
  • CD – pull from CodeStation -> change variables in Udeploy file for that environment
  • CD – deploy to target server for app and environment
  • CD – deploy configured data connection, automatically pulled from github
  • CD – automatic smoke test
  • CD – automatic regression tests
  • CD – send deploy data to Kibana (deployment events only)
  • CD – post status to slack in DevOps channel
  • CD – roll back if not successful
  • performance test CD Ubuild, Udeploy (in own perf env?)

Level 4

Code / Logic / Functionality

  • code complete
  • compiled
  • passes tests (unit, others?)
  • logging logic / depth
  • Udeploy configured? (how much does this overlap with other items in other areas?)
  • test data available (also in QA area or network/environment area?)
  • messaging / interface contracts maintained
  • code decomposed so all events queue-able, storable, retryable so all eventually report individual statuses without loss
  • invocation code correct
  • version / branch in git
  • POC / responsible party
  • function / microservice architect / POC
  • endpoints / message formats / Swaggers
  • timing and timeout information
  • UI rules / 508 compliance
  • calls / is-called-by –> dependencies

Level 5 – Most Abstract

Documentation / Support / Meta-

  • management permissions / approvals
  • documentation requirements
  • code standards reviews / requirements
  • defect logged / updated / assigned
  • user story created / assigned
  • POC / responsible party
  • routing of to-do items (business process automation)
  • documentation, institutional memory
  • links to related defects
  • links to discussion / interactive / Slack pages
  • introduction and training information (Help)
  • date & description history of all mods (even config stuff)
  • business case / requirement for change

That’s my stream-of-consciousness take.

Before we go into detail let’s talk about what a monitoring utility might look like. The default screen of the standalone web app that served as my inspiration for this idea showed a basic running/not-running status for every service in every environment and looked something like this. It was straight HTML5 with some JavaScript and Angular and was responsive to changes in the screen size. There were a couple of different views you could choose to see some additional status information for every service in an environment or for a single service across all environments. It gathered most of its status information by repeatedly sending a status request to the gateway of each service in each environment. Two problems with this approach were that the original utility pinged the services repeatedly without much or any delay, and that multiple people could (and did) run the app simultaneously, which hammered the network and the services harder than was desirable. These issues could be addressed by slowing down the scan rate and hosting the monitoring utility on a single website that users could load to see the results of the scans. That would still require a certain volume of refreshes across the network (and there are ways to minimize even those), but queries of the actual service endpoints would absolutely be minimized.

It was real simple. Green text meant the service was running and red text meant it wasn’t. Additional formatting or symbology could be added for users who are colorblind.
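
To make the improved version concrete, here’s a rough sketch of a throttled scanner that polls each service’s status endpoint on a slow interval, keeps the latest results in memory, and serves them from one shared endpoint so individual browsers never hit the services directly. It assumes Node 18+ (for the built-in fetch) and Express, and every URL, interval, and field name is made up.

```javascript
// Rough sketch of a throttled status scanner, hosted once instead of run by every user.
// Assumes Node 18+ (built-in fetch) and Express; all URLs and timings are hypothetical.
const express = require('express');

const services = [
  { name: 'Service 1', env: 'Dev', url: 'http://dev.example.internal:3001/status' },
  { name: 'Service 1', env: 'Prod', url: 'http://prod.example.internal:3001/status' },
  // ...one entry per service per environment
];

const latest = {};                      // keyed by "env/name" -> last known status
const SCAN_INTERVAL_MS = 60 * 1000;     // deliberately slow so we don't hammer the gateways

async function scanOnce() {
  for (const svc of services) {
    const key = `${svc.env}/${svc.name}`;
    try {
      const res = await fetch(svc.url, { signal: AbortSignal.timeout(5000) });
      latest[key] = { running: res.ok, checkedAt: new Date().toISOString() };
    } catch {
      latest[key] = { running: false, checkedAt: new Date().toISOString() };
    }
  }
}

setInterval(scanOnce, SCAN_INTERVAL_MS);
scanOnce();

// One shared endpoint the dashboard page reads, so browsers never hit the services directly.
const app = express();
app.get('/api/statuses', (req, res) => res.json(latest));
app.listen(8080, () => console.log('status dashboard API on 8080'));
```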

Now let’s break this down. Starting from the most concrete layer of information…

Level 1 – Most Concrete
Hardware / Local Environment

This information describes the actual hardware a given piece of functionality is hosted on. Of course, this only matters if you are in control of the hardware. If you’re hosting the functionality on someone else’s cloud service then there may be other things to monitor, but it won’t be details you can see. Another consideration is whether a given collection of functionality requires more resources than a single host machine has, in which case the hosting has to be shared across multiple machines, with all the synchronization and overhead that implies.

When I was looking at this information (and finding that the information in lists posted in various places didn’t match) it jumped out at me that there were different versions of operating systems on different machines, so that’s something that should be displayable. If the web app had a bit more intelligence, it could produce reports on machines (and services and environments) that were running each OS version. There might be good reasons to support varied operating systems and versions, and you could include logic that identified differences against policy baselines for different machines for different reasons. The point is that you could set up the displays to include any information and statuses you wanted. This would provide at-a-glance insights into exactly what’s going on at all times.

Moreover, since this facility could be used by a wide variety of workers in the organization, this central repository could serve as the ground truth documentation for the organization. Rather than trying to organize and link to web pages and Confluence pages and SharePoint documents and manually update text documents and spreadsheets on individual machines, the information could all be in one place that everyone could find. And even if separate documents needed to be maintained by hand, such a unified access method could provide a single, trusted pointer to the correct version of the correct document(s). This particular layer of information might not be particularly useful for everyone, but we’ll see when we talk about the other layers that having multiple eyes on things allows anomalies to be spotted and rectified much more quickly. If all related information is readily available and organized in this way, and if people understood how to access it, then the amount of effort spent finding the relevant people and information when things went wrong would be reduced to an absolute minimum. In a large organization the amount of time and effort that could be saved would be spectacular. Different levels of permissions for viewing, maintenance, and reporting operations could also be included as part of a larger management and security policy.

I talked about pinging the different microservices, above. That’s a form of ensuring that things are running at the application level, or the top level of the seven-layer OSI model. The status of machines can be independently monitored using other utilities and protocols. In this case they might operate at a much lower level. If the unified interface I’m discussing isn’t made to perform such monitoring directly, it could at least provide a trusted link to or description of where to access and how to use the appropriate utility. I was exposed to a whole bunch of different tools at Universal, and I know I didn’t learn them all or even learn what they all were, but such a consistent, unified interface would tell people where things were and greatly streamline awareness, training, onboarding, and overall organizational effectiveness.
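
For the machine-level items listed below, a small agent running on each host could report most of them using nothing more than Node’s built-in os module. This is only a sketch, and the field names are illustrative rather than any real schema:

```javascript
// Sketch of a tiny host agent reporting the machine-level items listed below.
// Uses only Node's built-in os module; field names are illustrative, not a real schema.
const os = require('os');

function hostReport() {
  return {
    machineName: os.hostname(),
    os: `${os.platform()} ${os.release()}`,
    cores: os.cpus().length,
    memoryTotalBytes: os.totalmem(),
    memoryFreeBytes: os.freemem(),
    loadAverage: os.loadavg(),        // 1, 5, and 15 minute averages (zeros on Windows)
    uptimeSeconds: os.uptime(),
    nodeVersion: process.version,     // one piece of the "application version" picture
  };
}

console.log(hostReport());
```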

  • number of machines (if cluster)
  • machine name(s)
  • cores / memory / disk used/avail
  • OS / version
  • application version (Node, SQL, document store)
  • running (machine level)
  • POC / responsible party
  • functions/endpoints on each machine/cluster

Last but not least, the information tracked and displayed could include meta-information about who was responsible for doing certain things and who to contact at various times of day and days of week. Some information would be available and displayed on a machine-by-machine basis, some based on the environment, some based on services individually or in groups, and some on an organizational basis. The diagram below shows some information for every service for every environment, but other information only for each environment. Even more general information, such as that applying to the entire system, could be displayed in the top margin.

Level 2

Gateway Management (Network / Exposure / Permissioning)

The next layer is a little less concrete and slightly more abstract and configurable, and this involves the network configuration germane to each machine and each service. Where the information in the hardware layer wasn’t likely to change much (and might be almost completely obviated when working with third-party, virtualized hosting), this information can change more readily, and it is certainly a critical part of making this kind of system work.

The information itself is fairly understandable, but what gives it power in context is how it links downward to the hardware layer and links upward to the deployment and configuration layer. That is, every service in every environment at any given time is described by this kind of five-layer stack of hierarchical information. If multiple endpoints are run on a shared machine then the cross-links and displays should make that clear.

  • IP address
  • port (different ports per service or endpoint on same machine?)
  • URL (per function/endpoint)
  • SSL certificate (type, provider, expiration date)
  • inside or outside of firewall
  • auth method
  • credentials
  • available?
  • POC / responsible party
  • logging level set
  • permissions by user role

There are a ton of tools that can be used to access, configure, manage, report on, and document this information. Those functions could all be folded into a single tool, though that might be expensive, time-consuming, and brittle. As described above, however, links can be provided to the proper tools and various sorts of live information.
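
As one small example of folding live intelligence into the display rather than just linking out to other tools, here’s a sketch of a per-service gateway record with a check that flags SSL certificates nearing expiration. The record shape, values, and warning threshold are all assumptions:

```javascript
// Hypothetical gateway-layer record for one service in one environment,
// plus a simple check that flags SSL certificates expiring soon.
const gatewayRecord = {
  service: 'Service 2',
  environment: 'Prod',
  ip: '10.0.12.34',
  port: 8443,
  url: 'https://api.example.internal/service2',
  certExpires: '2020-11-01',
  insideFirewall: true,
  authMethod: 'OAuth2 client credentials',
  loggingLevel: 'warn',
  poc: 'gateway-team@example.com',
};

function certWarning(record, warnDays = 30) {
  const daysLeft = (new Date(record.certExpires) - Date.now()) / (1000 * 60 * 60 * 24);
  if (daysLeft < 0) return `${record.service} (${record.environment}): certificate EXPIRED`;
  if (daysLeft < warnDays) return `${record.service} (${record.environment}): certificate expires in ${Math.floor(daysLeft)} days`;
  return null;   // nothing to flag
}

console.log(certWarning(gatewayRecord));
```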

Level 3

QA / Testing

Getting still more abstract, we could also display the status of all service items as they are created, modified, tested, and moved through the pipeline toward live deployment. The figure below shows how the status of different build packages could be shown in the formatted display we’ve been discussing. Depending on what is shown and how it’s highlighted, it would be easy to see the progress of the builds through the system, when each code package started running, the status of various tests, and so on. You could highlight different builds in different colors and include extra symbols to show how they fit in sequence.

  • tests being run in a given environment
  • dependencies for each path under investigation (microservices, external systems, mocks/stubs)
  • schedule of current / future activities
  • QA POC / responsible party
  • Test Incident POC / responsible party
  • release train status
  • CI – linting, automatically on checkin
  • CI – unit test
  • CI – code coverage, 20% to start (or even 1%), increase w/each build
  • CI – SonarQube Analysis
  • CI – SonarQube Quality Gate
  • CI – VeraCode scan
  • CI – compile/build
  • CI – deploy to Hockey App for Mobile Apps / Deploy Windows
  • CD – build->from CI to CD, publish to repository server (CodeStation)
  • CD – pull from CodeStation -> change variables in Udeploy file for that environment
  • CD – deploy to target server for app and environment
  • CD – deploy configured data connection, automatically pulled from github
  • CD – automatic smoke test
  • CD – automatic regression tests
  • CD – send deploy data to Kibana (deployment events only)
  • CD – post status to slack in DevOps channel
  • CD – roll back if not successful
  • performance test CD Ubuild, Udeploy (in own perf env?)

A ton of other information could be displayed in different formats, as shown below. The DevOps pipeline view shows the status of tests passed for each build in an environment (and could include every environment; I didn’t make complete example displays, for brevity, but it would also be possible to customize each display as shown). These statuses might be obtained via webhooks to the information generated by the various automated testing tools. As much or as little information could be displayed as desired, but it’s easy to see how individual status items can be readily apparent at a glance. Naturally, the system could be set up to only show adverse conditions (tests that failed, items not running, permissions not granted, packages that haven’t moved after a specified time period, and so on).
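
Here’s a rough sketch of how those statuses might be collected: a single webhook endpoint that the CI/CD tools post to, with the latest stage results kept per service and environment for the dashboard to read. The payload shape is a simplification; each real tool would post its own format and need a small adapter:

```javascript
// Sketch of a generic webhook collector for pipeline status. The payload fields are
// assumptions; every CI/CD tool posts its own format and would need a small adapter.
const express = require('express');
const app = express();
app.use(express.json());

const pipelineStatus = {};   // keyed by "env/service" -> { stage: { passed, at } }

app.post('/webhooks/pipeline', (req, res) => {
  const { environment, service, stage, passed } = req.body;   // e.g. stage: "unit-test", "smoke-test"
  const key = `${environment}/${service}`;
  pipelineStatus[key] = pipelineStatus[key] || {};
  pipelineStatus[key][stage] = { passed: Boolean(passed), at: new Date().toISOString() };
  res.sendStatus(204);
});

// The dashboard reads the rolled-up view from here.
app.get('/api/pipeline', (req, res) => res.json(pipelineStatus));

app.listen(8081, () => console.log('pipeline status collector on 8081'));
```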

A Dependency Status display could show which services are dependent on other services (if there are any). This gives the release manager insight into what permissions can be given to advance or rebuild individual packages if it’s clear there won’t be any interactions. It also shows what functional tests can’t be supported if required services aren’t running. If unexpected problems are encountered in functional testing, it might be an indication that the messaging contracts between services need to be examined, or that something along those lines was missed. In the figure below, Service 2 (in red) requires that Services 3, 5, and 9 be running. Similarly, Service 7 (in blue) requires that Services 1, 5, and 8 be running. If any of the required services aren’t running then meaningful integration tests cannot be run. Inverting that, the test manager could also see at a glance which services could be stopped for any reason. If Services 2 and 7 both needed to be running, for example, then Services 4 and 6 could be stopped without interfering with anything else. There are doubtless more interesting, comprehensive, and efficient ways to do this, but the point is merely to illustrate the idea. This example does not show dependencies for external services, but those could be shown as well.
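
The calculation behind that display can be quite simple. This sketch reproduces the Service 2 and Service 7 example above; the dependency map itself is just made-up data:

```javascript
// Which services can be stopped without breaking the ones we need to keep testing?
const dependencies = {   // hypothetical dependency map: service -> services it calls
  'Service 2': ['Service 3', 'Service 5', 'Service 9'],
  'Service 7': ['Service 1', 'Service 5', 'Service 8'],
};
const allServices = Array.from({ length: 9 }, (_, i) => `Service ${i + 1}`);

function stoppableServices(mustKeepRunning) {
  const required = new Set(mustKeepRunning);
  // Everything a kept service depends on (directly or transitively) must also stay up.
  const queue = [...mustKeepRunning];
  while (queue.length) {
    for (const dep of dependencies[queue.pop()] || []) {
      if (!required.has(dep)) { required.add(dep); queue.push(dep); }
    }
  }
  return allServices.filter((s) => !required.has(s));
}

console.log(stoppableServices(['Service 2', 'Service 7']));
// -> [ 'Service 4', 'Service 6' ]
```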

The Gateway Information display shows how endpoints and access methods are linked to hardware items. The Complete Requirements Stack display could show everything from the hardware on up to the management communications, documentation, and permissioning.

The specific tools used by any organization are likely to be different, and in fact are very likely to vary within an organization at a given time (Linux server function written in Node or Java, iOS or Android app, website or Angular app, or almost any other possibility), and will certainly vary over time as different tools and languages and methodologies come and go. The ones shown in these examples are notional, and different operations may be performed in different orders.

Level 4

Code / Logic / Functionality

The code, logic, and style are always my main concern, and this is the area that most closely matches my experience. I interacted with people doing all these things, but I naturally paid closest attention to what was going on in this area.

  • code complete: flag showing whether code is considered complete by developer and submitted for processing.
  • compiled: flag showing whether code package is compiled. This might only apply to the dev environment if only compiled packages are handled in higher environments.
  • passes tests (unit, others?): flag(s) showing status of local (pre-submittal) tests passed
  • logging logic / depth: settings controlling level of logging of code for this environment. These settings might vary by environment. Ideally, all activity at all levels would be logged in the production environment, so complete auditing is possible to ensure no transactions ever fail to complete or leave an explanation of why they did not. Conversely, more information might be logged for test scenarios than for production. It all depends on local needs.
  • Udeploy configured: Flag for whether proper deployment chain is configured for this code package and any dependencies. (How much does this overlap with other items in other areas?)
  • test data available: Every code package needs to have a set of test data to run against. Sometimes this will be native to automated local tests and at other times it will involve data that must be provided by or available in connected capabilities through a local (database) or remote (microservice, external system, web page, or mobile app) communications interface. The difficulty can vary based on the nature of the data. Testing credit card transactions so all possible outcomes are exercised is a bit of a hassle, for example. It’s possible that certain capabilities won’t be tested in every environment (which might be redundant in any case), so what gets done and what’s required in each environment in the pipeline needs to be made clear.
  • messaging / interface contracts maintained: There should be ways to automatically test the content and structure of messages passed between different functional capabilities. This can be done structurally (as in the use of JavaScript’s ability to query an object to determine what’s in it to see if it matches what’s expected, something other languages can’t always do) or by contents (as in the case of binary messages (see story 1) that could be tested based on having unique data items in every field). Either way, if the structure of any message is changed, someone has to verify that all code and systems that use that message type are brought up to date. Different versions of messages may have to be supported over time as well, which makes things even more complicated. (A small sketch of this kind of structural check appears after this list.)
  • code decomposed so all events queue-able, storable, retryable so all eventually report individual statuses without loss and so that complete consistency of side-effects is maintained: People too often tend to write real-time systems with the assumption that everything always “just works.” Care must be taken to queue any operations that fail so they can be retried until they pass or, if they are allowed to fail legitimately, care must be taken to ensure that all events are backed out, logged, and otherwise rationalized. The nature of the operation needs to be considered. Customer-facing functions need to happen quickly or be abandoned or otherwise routed around, while in-house supply and reporting items could potentially proceed with longer delays. (A minimal retry-queue sketch also follows this list.)
  • invocation code correct: I can’t remember what I was thinking with this one, but it could have to do with the way the calling or initiating mechanism works, which is similar to the messaging / interface item, above.
  • version / branch in git: A link to the relevant code repository (it doesn’t have to be git) should be available and maintained. If multiple versions of the same code package are referenced by different environments, users, and so on, then the repository needs to be able to handle all of them and maintain separate links to them. The pipeline promotion process should also be able to trigger automatic migrations of code packages in the relevant code repositories. The point is that it should be easy to access the code in the quickest possible way for review, revision, or whatever. A user shouldn’t have to go digging for it.
  • POC / responsible party: How to get in touch with the right person depending on the situation. This could include information about who did the design, who did the actual coding (original or modification), who the relevant supervisor or liaison is, who the product owner is, or a host of other possibilities.
  • function / microservice architect / POC: This is just a continuation of the item above, pointing to the party responsible for an entire code and functionality package.
  • endpoints / message formats / Swaggers: This is related to the items about messaging and interface contracts but identifies the communication points directly. There are a lot of ways to exercise communication endpoints, so all could be referenced or linked to in some way. Swaggers are a way to automatically generate web-based interfaces that allow you to manually (or automatically, if you do it right) populate and send messages to and receive and display messages from API endpoints. A tool called Postman does something similar. There are a decent number of such tools around, along with ways to spoof endpoint behaviors internally, and I’ve hand-written test harnesses of my own. The bottom line is that more automation is better, but the testing tools used should be described, integrated, or at least pointed to. Links to documentation, written and video tutorials, and source websites are all useful.
  • timing and timeout information: The timing of retries and abandonments should be parameterized and readily visible and controllable. Policies can be established governing rules for similar operations across the entire stack, or at least in the microservices. Documentation describing the process and policies should also be pointed to.
  • UI rules / 508 compliance: This won’t apply to back-end microservices but would apply to front-end and mobile codes and interfaces. It could also apply to user interfaces for internal code.
  • calls / is-called-by –> dependencies: My idea is that this is defined at the level of the microservice, in order to generate the dependency display described above. This might not be so important if microservices are completely isolated from each other, as some designers opine that microservices should never call each other, but connections with third-party external systems would still be important.
  • running (microservice/application level): indication of whether the service or code is running on its host machine. This is determined using an API call.
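
As promised in the messaging / interface contracts item above, here’s a small sketch of the kind of structural check JavaScript makes easy: compare a received message against an expected contract and report any missing, mistyped, or unexpected fields. The contract and message here are hypothetical:

```javascript
// Hypothetical contract for one message type; real contracts would live with the service's docs/Swagger.
const orderCreatedContract = {
  orderId: 'string',
  amount: 'number',
  currency: 'string',
  createdAt: 'string',
};

// Structural check: does the message carry every expected field with the expected type?
function violations(message, contract) {
  const problems = [];
  for (const [field, type] of Object.entries(contract)) {
    if (!(field in message)) problems.push(`missing field: ${field}`);
    else if (typeof message[field] !== type) problems.push(`wrong type for ${field}: expected ${type}, got ${typeof message[field]}`);
  }
  for (const field of Object.keys(message)) {
    if (!(field in contract)) problems.push(`unexpected field: ${field}`);
  }
  return problems;
}

console.log(violations(
  { orderId: 'A-123', amount: '49.95', currency: 'USD', createdAt: '2020-01-01T00:00:00Z' },
  orderCreatedContract
));
// -> [ 'wrong type for amount: expected number, got string' ]
```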
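
And here’s the minimal retry-queue sketch mentioned in the decomposition item above. A real system would persist the queue (in a durable message queue or database) and run the drain step on a schedule; this just shows the retry-with-backoff-or-abandon shape:

```javascript
// Minimal in-memory retry queue sketch; a real system would persist this and drain it on a schedule.
const queue = [];

function enqueue(event, attempt = 0) {
  queue.push({ event, attempt });
}

async function drain(handler, { maxAttempts = 5, baseDelayMs = 1000 } = {}) {
  while (queue.length) {
    const { event, attempt } = queue.shift();
    try {
      await handler(event);                 // e.g. call the downstream service
      console.log('completed', event.id);
    } catch (err) {
      if (attempt + 1 < maxAttempts) {
        // Back off and re-enqueue rather than losing the event; it will be picked up by a later drain.
        setTimeout(() => enqueue(event, attempt + 1), baseDelayMs * 2 ** attempt);
      } else {
        // Give up "legitimately": log it and route it for manual follow-up or compensation.
        console.error('abandoned after retries', event.id, err.message);
      }
    }
  }
}

// Usage sketch: enqueue work, then drain it whenever the scheduler runs.
enqueue({ id: 'evt-1', type: 'order-created' });
drain(async (event) => { /* call downstream service here */ });
```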

Level 5 – Most Abstract

Documentation / Support / Meta-

While the lower levels are more about ongoing and real-time status, this level is about governance and design. Some of the functions here, particularly the stories, defects, tracking, and messaging, are often handled by enterprise-type systems like JIRA and Rally, but they could be handled by a custom system. What’s most important here is to manage links to documentation in whatever style the organization deems appropriate. Definition of done calculations could be performed here in such a way that builds, deployments, or advancements to higher environments can only be performed if all the required elements are updated and verified. Required items could include documentation (written or updated, and approved), management permission given, links to Requirements Traceability Matrix updated (especially to the relevant requirement), announcements made by environment or release manager or other parties, and so on.
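
A definition-of-done gate like that can be as simple as checking a list of required flags before a build is allowed to advance. The item names here are hypothetical:

```javascript
// Hypothetical "definition of done" gate: advancement is only allowed when every required item is satisfied.
const requiredItems = [
  'documentationUpdated',
  'documentationApproved',
  'managementApproval',
  'rtmLinkUpdated',
  'releaseAnnouncementSent',
];

function advancementBlockers(status) {
  return requiredItems.filter((item) => !status[item]);
}

const status = {
  documentationUpdated: true,
  documentationApproved: true,
  managementApproval: false,
  rtmLinkUpdated: true,
  releaseAnnouncementSent: false,
};

const blockers = advancementBlockers(status);
console.log(blockers.length ? `blocked: ${blockers.join(', ')}` : 'clear to advance');
// -> blocked: managementApproval, releaseAnnouncementSent
```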

Links to discussion pages are also important, since they can point directly to the relevant forums (on Slack, for example). This makes a lot more of the conversational and institutional memory available and accessible over time. Remember, if you can’t find it, it ain’t worth much.

  • management permissions / approvals
  • documentation requirements
  • code standards reviews / requirements
  • defect logged / updated / assigned
  • user story created / assigned
  • POC / responsible party
  • routing of to-do items (business process automation)
  • documentation, institutional memory
  • links to related defects
  • links to discussion / interactive / Slack pages
  • introduction and training information (Help)
  • date & description history of all mods (even config stuff)
  • business case / requirement for change: linked forward and backward as part of the Requirements Traceability Matrix.

Visually you could think of what I’ve described looking like the figure below. Some of the information is maintained in some kind of live data repository, from which the web-based, user-level-access-controlled, responsive interface is generated, while other information is accessed by following links to external systems and capabilities.

And that’s just the information for a single service in a single environment. The entire infrastructure (minus the external systems) contains a like stack for all services in all environments.

There’s a ton more I could write about one-to-many and many-to-one mapping in certain situations (e.g., when multiple services run on a single machine in a given environment or when a single service is hosted on multiple machines in a given — usually production — environment to handle high traffic volumes), and how links and displays could be filtered as packages moved through the pipeline towards live deployment (by date or build in some way), but this post is more than long enough as it is. Don’t you agree?

A Few Interesting Talks on Architecture and Microservices

These talks were recommended by Chase O’Neill, a very talented colleague from my time at Universal, and proved to be highly interesting. I’ve thought about things like this through my career, and they’re obviously more germane given where things are currently heading, so I’m sharing them here.

https://www.youtube.com/watch?v=KPtLbSEFe6c

https://www.youtube.com/watch?v=STKCRSUsyP0

https://www.youtube.com/watch?v=MrV0DqTqpFU

Let me know if you found them as interesting as I did.

Unified Theory of Business Analysis: Part Four

How Different Management Styles Work

Solutions (and let’s face it, these days those quite often involve the development and modification of software) can be realized in the context of many different types of organizations. A lot of bandwidth is expended insisting that everything be done in a specific Agile or Scrum context, and people can get fairly militant about how Scrum processes should be run. I tend to be a little more relaxed about all that, figuring that correctly understanding a problem and designing an excellent solution is more important than the details of any management technique, and here’s why.

Beginning at the beginning, my framework lists a group of major steps to take: intended use, conceptual model, requirements, design, implementation, and testing. (I discuss the framework here, among other places.) While there are a few ancillary steps involving planning, data, and acceptance, these are the major things that have to get done. The figure below shows that each step should be pursued iteratively until all stakeholders agree that everything is understood and complete before moving on. Moreover, the team can always return to an earlier step or phase to modify anything that might have been missed in an earlier phase. This understanding should be maintained even as we discuss different configurations of phases going forward.

The most basic way to develop software, in theory, is through a Waterfall process. In this style of development you march through each phase in order, one after another, and everyone goes home happy, right?

Riiiiiight…

No one ever really developed software that way. Or, more to the point, no one did so with much success on any but the most trivial projects. There was always feedback, learning, adjustment, recycling, and figuring stuff out as you go. The key is that people should always be working together and talking to each other as the work progresses, so all parties stay in sync and loose ends get tied up on the fly. There are a lot of ways for this process to fail, and most of them involve people not talking to each other.

A significant proportion of software projects fail. A significant proportion of software projects have always failed. And a significant proportion of software projects are going to go right on failing — and mostly for the same reason. People don’t communicate.

Running Scrum or Agile or Kanban or Scrumban or WaterScrum or anything else with a formal definition isn’t a magic bullet by itself. I’ve seen huge, complicated Scrum processes in use in situations where stuff wasn’t getting done very efficiently.

So let’s look at some basic organization types, which I diagram below, using the basic six phases I list above.

Waterfall is the traditional form of project organization, but as a practical matter nobody ever did it this way — if they wanted their effort to be successful. First of all, any developer will perform an edit-compile-test cycle on his or her local machine if at all possible. Next, individuals and teams will work in a kind of continuous integration and test process, building up from a minimum core of running code and slowly expanding until the entire deliverable is complete.

So what defines a waterfall process? Is it defined by when part or all of the new code runs, or when new code is delivered? Developing full-scope nuclear power plant training simulators at Westinghouse was a long-term effort. Some code was developed and run almost from the beginning, and code was integrated into the system throughout the whole process, but the end result wasn’t delivered until the work was essentially complete, shipped to the site, and installed about three years later. The model-predictive control systems I wrote and commissioned were developed and deployed piecemeal, often largely on-site.

My point isn’t to argue about details, it’s to show there’s a continuum of techniques. Pure methodologies are rarely used.

The main point of Agile is to get people talking and get working code together sooner rather than later, so you aren’t just writing a whole bunch of stuff and trying to stick it together at the very end. The Agile Manifesto was a loose confederation of ideas by seventeen practitioners in 2001 who couldn’t agree on much else. The four major statements are preferences — not absolute commandments.

People get even more wound up in the practice of Scrum. It seems to me that people who’ve only been in one or two organizations think that what they’ve seen is the only way to do things. They tend to think that asking a few trivia questions is a meaningful way to gauge someone else’s understanding. Well, I’ve seen the variety of different ways organizations do things and I know that everything written in the Scrum manuals (the training manuals I got during my certification classes are shown above and all can be reviewed in 10-15 minutes for details once you know the material) should be treated more as a guideline or a rule of thumb rather than an unbreakable commandment. In fact, I will state that if you’re treating any part of the Scrum oeuvre as an unbreakable commandment you’re doing it wrong. There are always exceptions and contexts and you’d know that if you’ve seen enough.

I showed the Scrum process, in terms of my six major engagement phases, as a stacked and offset series of Waterfall-like efforts. This is to show that a) every work item needs to proceed through all phases, and more or less in order, b) that not all the work proceeds in order within a given sprint; items are pulled from a backlog of items generated during intended use, conceptual model, and requirements phases which are conducted separately (and which could be illustrated as a separate loop of ongoing activities), and c) that work tends to overflow from sprint to sprint depending on how the specific organization and deployment pipeline is set up. A small team working on a desktop app might proceed through coding, testing, and deployment of all items in a single sprint. A larger or more distributed team, especially where coders create work items that are submitted to dedicated testers, might complete coding and local testing in one sprint, while the testers will approve and deploy those items in the next. If there is a long, multi-stage development and deployment process, supporting multiple teams working in parallel and serially, like you’d find in a DevOps environment supporting a large-scale web-based operation, then who knows how long it will take to get significant items through the entire pipeline? And how does that change if significant rework or clarification is needed?

A lot of people also seem to think their environment or their industry is uniquely complex, and that other types of experience aren’t really translatable. They don’t necessarily know about the complexities other people have to deal with. I’ve written desktop suites of applications that require every bit as much input, care, planning, real-time awareness, internal and external integration, mathematical and subject matter acumen, cyclic review, and computer science knowledge as the largest organizational DevOps processes out there. What you’re doing ain’t necessarily that special.

All projects, even in software development, are not one-size-fits all. As I stress in the talks I give about my business analysis framework, the BABOK is written in a diffuse way precisely because the work comes in so many different forms. There is no cookie cutter formula to follow. Even my own work is just restating the material in a different context. Maybe the people I’ve been interacting with know that, but whatever they think I’m not communicating to them, there’s surely been a lot they haven’t communicated to me. Maybe that’s just because communication about complex subjects is always difficult (and even simple things, all too often), or maybe a lot of people are just thinking inside their own boxes.

The figure above can also be used to represent an ongoing development or support process as it’s applied to an already working system. This can take the form of Scrum, Kanban, Scrum-Ban, or any other permutation of these ideas. Someone recently asked me what I thought the difference between Scrum and Kanban was, and I replied that I didn’t think it was particularly clear cut. I’ve seen a lot of descriptions (including talks at IIBA Meetups) about how each methodology can be used for native development and for ongoing support. The biggest difference I’d gathered was that Kanban was more about work-metering, or managing in the context of being able to do a relatively fixed amount of work in a given time with a relatively fixed number of resources, while Scrum was more flexible and scalable depending on the situation. A search of the differences between Scrum and Kanban turns up a fairly consistent list of differences — none of which I care about very much. Is there anything in any of those descriptions which experienced practitioners couldn’t discuss and work out in a relatively short conversation? Moreover, is that material anywhere near as important as being able to understand customers’ problems and developing phenomenal solutions to them in an organized, thorough, and even stylish way, which is the actual point of what I’ve been doing for the last 30-plus years, in a variety of different environments? Moreover, referring to my comment above that if you’re doing things strictly by the book you’re probably doing it wrong, I have to ask whether things don’t always end up as a hybrid anyway, or whether you don’t adapt as needed. I’ve commented previously that the details of specific kinds of Agile organizations are about one percent of doing the actual analysis and solutioning that provides value. That’s not likely to make a lot of people happy, least of all the screeners who congratulate themselves for supposedly being able to spot fraudulent Scrum practitioners, but that’s where I come down on it. While those folks are asking about things that don’t matter much, they’re failing to understand things that do.

Another problem with the Agile and Scrum ways of thinking is that some people might think that you might need to do some initial planning in a Sprint Zero but then you can be off and running and making meaningful deliveries shortly thereafter. I hope people don’t think that, but I wanted to show a hybrid Waterfall-Scrum-like process to illustrate how Scrum techniques can be used in a larger context where an overall framework is definitely needed. In such a scenario, a significant amount of intended use, conceptual modeling, requirements, and design work has to be done before any implementation can begin, and there will need to be significant integration and final acceptance testing at the end, some of which may not be able to be conducted until all features are implemented. In the middle of that, however, a Scrum-like process can be used to strategically order work items for incremental development, with some time allowed for low-level, local design and unit and integration testing. The point is never to follow a prescription from someone’s book, but to do what makes sense based on what you actually need.

If you’re doing simulation, either as the end goal of an engagement or as a supporting part of an engagement, the construction of a simulation is a separate project on its own, and that separate project requires a full progression through all the standard phases. I don’t show a specific phase where the simulation is used; that’s implied by the phase or phases in which the construction of the simulation is embedded. If the individual simulations used in each analysis have to be built or modified by a tool that builds simulations, then the building (and maintenance) of the tool is yet another level of embedding. Tools to create simulations can be used for one-offs by individual customers, but they are often maintained and modified over long periods of time to support an ongoing series of analyses of existing processes or projects.

In the diagram I show the construction of the simulation as being embedded in the design and implementation phases of a larger project, but that’s just a starting point. Let’s look at some specific uses of simulation (and I’ve done all of these but the last two, almost always with custom simulations stick-built in various high-level languages, often ones I’ve written personally in whole or in part), and see how the details work out. I describe the following applications of simulation in detail here, and I can tell you ahead of time that there’s going to be some repetition, but let’s work through the exercise to be thorough.

  • Design and Sizing: In this case the construction and use of the simulation is all within the design phase of the larger project. I used an off-the-shelf package called GEMS to simulate prospective designs for pulping lines at my first engineering job. It was the only off-the-shelf package I ever used. Everything else was custom.

  • Operations Research: This usually, but not always, involved modelling a system to see how it would behave in response to changes in various characteristics. Here are a few of the efforts we applied it to at two of my recent stops.

    • We used a suite of model-building tools called BorderWizard, CanSim, and SimFronteras to build models of almost a hundred land border ports of entry on both sides of both U.S. borders — and one crossing on Mexico’s southern border with Guatemala. The original tool was created and improved over time and used to create baseline models of all the ports in question. That way, when an analysis needed to be done for a port, the existing baseline model could be modified and run and the output data analyzed.

      The creation, maintenance, and update of the tools was a kind of standalone effort. The creation of the individual baselines all sprang from a common intended use phase but proceeded through all the other phases separately. The building of each crossing’s baseline is embedded as a separate project in the analysis project’s conceptual model phase.

      The individual analyses are what is represented in the header diagram for this section. Each standalone analysis project proceeded through its own intended use, conceptual model, and requirements phases. Updating the configuration of the model based on the needs of the particular analysis was embedded in the design phase of the analysis project. Using the results of the model runs to guide the modifications to the actual port and its operations was embedded in the implementation phase of the analysis project. The test phase of the analysis or port improvement project involved seeing whether the changes made met the intended use.

      I participated in a design project to determine the requirements for and test the performance of proposed designs for the facilities on the U.S. side of an entirely new crossing on the border between Maine and Canada. This work differed from our typical projects only in that we didn’t have a baseline model to work from.

    • The work we did on aircraft maintenance and logistics using (what we called) the Aircraft Maintenance Model (AMM) proceeded in largely the same way, except that we didn’t have a baseline model to work from. Instead, the input data to the one modeling engine that was used for all analyses were constructed in modular form, and the different modules could be mixed and matched and modified to serve the analysis at hand. We did have a few baseline input data components, but most of the input components had to be created from scratch from historical data and prevailing policy for each model run.

    • I had a PC version of the simulation I used to install on DEC machines at plants requesting that hardware. I’d develop it on the PC first to get all the parameters right, and then I’d re-code that design for the new hardware, in whatever language the customer wanted (the PC code was in Pascal, the customers wanted FORTRAN, C, or C++ at that time). One of our customers, in a plant in Monterrey, Mexico, which I visited numerous times over the course of two installations (my mother died while I was in the computer room there), saw the PC version and wondered if it could be used to do some light operations research, since it included all the movement and control mechanisms along with the heating mechanisms, and it had a terrific graphic display. We ended up selling them a version of the desktop software as an add-on, and they apparently got a lot of use out of it.

      My creation of the PC version of the software for design and experimentation was embedded in the conceptual model, requirements, and design phases of larger control system projects. Adapting it so it could be used by the customer was a standalone project in theory, though it didn’t take much effort.

    • We built simulations we used to analyze operations at airports and to determine staffing needs in response to daily flight schedule information we accessed (the latter could be thought of as a form of real-time control in a way). That simulation was embedded in a larger effort, probably across the requirements, design, and implementation phases.

  • Real-Time Control: Simulations are used in real-time control in a lot of different contexts. I’ve written systems that contained elements that ran anywhere from four times a second to once per day. If decisions are made and actions taken based on the result of a time-bound simulation, then it can be considered a form of real-time control.

    The work I did in the metals industry involved model-predictive simulation. The simulation was necessary because the control variable couldn’t be measured and thus had to be modeled. In this case the simulation wasn’t really separate from or embedded within another project, it was, from the point of view of a vendor providing a capability to a customer, the point of the deliverable. The major validation step of the process, carried out in the test phase of my framework, is ensuring that the surface temperatures of the discharged workpiece match what the control system supposedly calculated and controlled to. The parameters I was given and the methods I employed were always accurate, and the calculated values always matched the values read by the pyrometer. The pyrometer readings could only capture the surface temperature of the workpieces, so a second validation was whether the workpieces could be properly rolled so the final products had the desired properties and the workpieces didn’t break the mill stands.

  • Operator Training: Simulations built for operator training are usually best thought of as ends in themselves, and thus often should be thought of as standalone projects. However, the development and integration of simulations for this purpose can be complex and change and grow over extended periods of time, so there can be multiple levels of context.

    • The nuclear power plant simulators I helped build worked the same way as the control systems I wrote for the metals industry, except that they were more complicated, involved more people, and took longer to finish. The simulation was itself the output, along with the behavior of the control panels, thus building the simulations was part and parcel of the finished, standalone product. They weren’t something leveraged for a higher end. The same is true for flight simulators, military weapon simulators, and so on.

    • We made a plug-in for an interactive, multi-player training simulator used to examine various threat scenarios in different kinds of buildings and open spaces. The participants controlled characters in the simulated environment much like they would in a video game. Our plug-in autonomously guided the movements of large numbers of people as they evacuated the defined spaces according to defined behaviors. In terms of our customer’s long-term development and utilization of the larger system, our effort would be considered to be embedded in the design and implementation phases of an upgrade to their work. In terms of our developing the plug-in for our customer, it seemed like a standalone project.

  • Risk Analysis: Risk can be measured and analyzed in a lot of ways, but the most common I’ve encountered is in the context of Monte Carlo simulations, where results are expressed in terms of the percentage of times a certain systemic performance standard was achieved, and in operator training, where the consequences of making mistakes can be seen and trainees can learn the importance of preventing adverse scenarios from happening.

  • Economic Analysis: Costs and benefits can be assigned to every element and activity in a simulation, so any of the contexts described here can be leveraged to support economic calculations. Usually these are performed in terms of money, which is pretty straightforward (if you can get the data), but other calculations were performed as well. For example, we modified one of our BorderWizard models to total up the amount of time trucks sat at idle as they were waiting to be processed in the commercial section of a border crossing in Detroit, so an estimate could be made of the amount of pollution that was being emitted, and how much it might be reduced if the trucks could be processed more quickly.

  • Impact Analysis: Assessing the change in behavior or outputs in response to changes in inputs is essentially gauging the impact of a proposed change. This applies to any of the contexts listed here.

  • Process Improvement: These are generally the same as operations research simulations. The whole point of ops research is process improvement and optimization, though it sometimes involves studies of feasibility and cost (to which end see economic analysis, above).

  • Entertainment: Simulations performed for entertainment can be thought of as being standalone projects if the end result is used directly, as in the case of a video game or a virtual reality environment. If the simulation is used to create visuals for a movie then it might be thought of as an embedded project during the implementation phase of the movie. If the simulation is used in support of something else that will be done for a movie, for example, the spectacular car-jump-with-a-barrel-roll stunt in the James Bond movie “The Man With the Golden Gun,” then it might be thought of as being embedded in the larger movie’s design phase. The details of what phase things are in aren’t all that important, as the context is intuitively obvious when you think about it. We’re just going through the examples as something of a thought experiment.

  • Sales: Simulations are often used to support sales presentations, and as such might have to look a little spiffier than the simulations working analysts (who understand the material and are more interested in the information generated than they are in the visual pizazz) tend to work with. If an existing simulation can’t be used, then the process of creating a new version to meet the new requirement may be thought of as a standalone project.

If a project involves improving an existing process, then the work of the analyst has to begin with understanding what’s already there. This involves performing process discovery, process mapping, and data collection in order to build a conceptual model of the existing system. Work can then proceed to assess the methods of improvements and their possible impacts.

If a project involves creating a new operation from the ground up, then the conceptual modeling phase is itself kind of embedded in the requirements and design phases of the new project. The discovery, mapping, and data collection phases will be done during requirements and design. That said, if some raw material can be gleaned from existing, analogous or competitive projects elsewhere, then this initial background research could be thought of as the conceptual modeling phase.

Conclusion

This exercise might have been a bit involved, but I think it’s important to be able to draw these distinctions so you have a better feel for what you’re doing and when and why. I slowly learned about all these concepts for years as I was doing them, but I didn’t have a clear understanding of what I was doing or what my organization was doing. That said, getting my CBAP certification was almost mindlessly easy because I’d seen everything before and I had ready references and examples for every concept the practice of business analysis supposedly involves. The work I’ve done since then, here in these blog posts and in the presentations I’ve given, and all I’ve learned from attending dozens of talks by other BAs, Project Managers, and other practitioners, has continued to refine my understanding of these ideas.

Looking back based on what I know now, I see where things could have been done better. It might be a bit megalomaniacal to describe these findings as a “Unified Theory of Business Analysis,” but the more this body of knowledge can be arranged and contextualized so it can be understood and used more easily and effectively, the more useful I think it becomes.

If you think this material is useful, tell everyone you know. If you think it sucks, tell me!

Using Data In My Framework and In Simulations

I recently wrote about how data is collected and used in the different phases of my business analysis framework. After giving the most recent version of my presentation on the subject I was asked for clarification about how the data is used, so I wanted to write about that today.

I want to start by pointing out that data comes in many forms. It can be highly numeric, which is more the case for simulations and physical systems, and it can be highly descriptive, which is often more the case for business systems. Make no mistake, though, it’s all data. I’ll describe how data came into play in several different jobs I did to illustrate this point.

Sprout-Bauer (now Andritz) (details)

My first engineering job was as a process engineer for an engineering and services firm serving the pulp and paper industry. Our company (which was part of Combustion Engineering at the time, before being acquired by Andritz) sold turnkey mechanical pulping lines based on thermomechanical refiners. I did heat and material balances and drawings for sales proposals, and pulp quality and mill audits to show that we were making our quality and quantity guarantees and to serve as a basis for making process improvement recommendations. Data came in three major forms:

  • Pulp Characteristics: Quite a few of these are defined in categories like freeness (Canadian Standard Freeness is approximately the coolest empirical measure of anything, ever!), fiber properties, strength properties, chemical composition, optical properties, and cleanliness. We’d assess the effectiveness of the process by analyzing the progression of about twenty of these properties at various points in the production line(s). I spent my first month in the company’s research facility in Springfield, Ohio, learning about the properties they typically used. It seems that a lot of these measures have been automated, which must be really helpful for analysts at the plants. It used to be that I’d go to a plant to draw samples from various points in the process (you’d have to incrementally fill buckets at sample ports about hourly over the course of a day), then dewater them, seal them in plastic bags, label them, and ship them off to Springfield, where the lab techs would analyze everything and report the results. Different species of trees and hard and soft woods required different processing as well, and that was all part of the art and science of designing and analyzing pulping processes. One time we had to send wood chips back in 55-gallon drums. Somehow this involved huge bags and a pulley hanging over the side of a 100-foot-high building. My partners held me by the back of my belt as I leaned out to move the pulley in closer so we could feed a rope through it. So yeah, data.
  • Process volumes and contents: Pulp-making is a continuous process so everything is expressed on a rate basis. For example, if the plant was supposed to produce 280 air dried metric tons per day it might have a process flow somewhere with 30,000 gallons per minute at 27% consistency (the percentage of the mass flow composed of wood fiber with the remainder being steam, a few noncondensable gases, chemicals like liquors and bleaches, and some dirt or other junk). Don’t do the math, I’m just giving examples here. The flow conditions also included temperatures (based on total energy or enthalpy) and pressures, which allowed calculation of the volume flows and thus the equipment and pipe sizing needed to support the desired flow rates. The thermodynamic properties of water (liquid and gaseous) are a further class of data needed to perform these calculations. They’ve been compiled by researchers over the years. The behavior of flow through valves, fittings, and pipe is another form of data that has been compiled over time.
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. Many pieces of equipment came in standard sizes because it was too difficult to make custom-sized versions. This was especially true of pulp refiners, which came in several different configurations. Other items were custom made for each application. Examples of these included conveyors, screw de-watering presses, and liquid phase separators. Some items, like screens and cleaners, were made in standard sizes and various numbers of them were used in parallel to support the desired flow rates. Moreover, the screens and cleaners would often be arranged in multiple phases. I didn’t calculate flows based on equipment sizes for the most part, I calculated them based on the need to produce a certain amount of pulp. The equipment and piping would later be sized to support the specified flows.
  • The fourth item in this three-item list is money. In the end, every designed process had to be analyzed in terms of installed, fixed, and operating costs vs. benefits from sales. I didn’t do those calculations as a young engineer but they were definitely done as we and our customers worked out the business cases. I certainly saw how our proposals were put together and had a decent idea of what things cost. I’d learn a lot more about how those things are done in later jobs.

All these data elements have to be obtained through observation, testing, research, and elicitation (and sometimes negotiation), and all must be understood to analyze the process.

Westinghouse Nuclear Simulator Division (details)

Here I leveraged a lot of the experience I gained analyzing fluid flows and doing thermodynamic analyses in the paper industry. Examples of how I/we incorporated thermodynamic properties are here, here, and here. In this case the discovery was done for the modellers in that the elements to be simulated were already identified. This meant that we started with data collection, which we performed in two phases. We started by visiting the plant control room and recording the readings on every dial and indicator and the position of every switch, button, and dial. This gave us an indication of a few of the flows, pressures, and temperatures at different points in the system. The remainder of those values had to be calculated based on the equipment layouts and the properties of the fluids.

  • Flow characteristics: These were mostly based on the physical properties of water and steam but we sometimes had to consider noncondensables, especially when they made up the bulk of the flow, as they did in the building spaces and the offgas system I worked on. We also had to consider concentrations of particulates like boron and radioactive elements. The radiation was tracked as an abstract emittance level that decayed over time. We didn’t worry about the different kinds of radiation and the particles associated with them. (As much as I’ve thought about this work in the years since I did it I find it fascinating that I never really “got” this detail until just now as I’m writing this.) As mentioned above, the thermodynamic properties of the relevant fluids have all been discovered and compiled over the years.
  • Process volumes and contents: The flow rates were crucial and were driven by pressure differentials and pump characteristics and affected by the equipment the flow passed through.
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. We needed to do detailed research through a library of ten thousand documents to learn the dimensions and behavior of all the pipes, equipment items, and even rooms in the containment structure.

Beyond the variables describing process states and equipment characteristics, the simulation equations required a huge number of coefficients. These all had to be calculated from the steady-state values of the different process conditions. I spent so much time doing calculations and updating documents that I found it necessary to create a tool to manage and automate the process.
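As a hedged illustration of what “calculating coefficients from steady-state values” can look like, one common pattern in thermal-hydraulic models is a flow admittance k defined by W = k·sqrt(ΔP), back-solved from a known steady-state flow and pressure drop and then reused during transients. This is an assumed form for the sake of the example, not a claim about the actual Westinghouse model equations, and the numbers are invented.

```cpp
// Sketch: derive a model coefficient from steady-state data, then reuse it.
// Assumed form W = k * sqrt(dP); all values are hypothetical.
#include <cmath>
#include <iostream>

int main() {
    const double W_ss  = 350.0;   // steady-state mass flow, kg/s (hypothetical)
    const double dP_ss = 2.1e5;   // steady-state pressure drop, Pa (hypothetical)
    const double k = W_ss / std::sqrt(dP_ss);   // back-solved admittance coefficient

    // The coefficient can then drive the dynamic equation at off-design conditions.
    const double dP = 1.5e5;      // some other pressure drop during a transient
    const double W  = k * std::sqrt(dP);
    std::cout << "k = " << k << ", W at dP = 1.5e5 Pa: " << W << " kg/s\n";
}
```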

Another important usage of data was in the interfaces between models. These had to be negotiated and documented, and the different models had to be scheduled to run in a way that would minimize race conditions as each model updated its calculations in real-time.

CIScorp (details)

In this position I did the kind of business process analysis and name-address-phone number-account number programming I’d been trying to avoid, since I was a hard core mechanical engineer and all. Who knew I’d learn so much and end up loving it? The contrast between this position and most of the others I held in the first half of my career taught me more about performing purposeful business analysis than any other single thing I did. I’m not sure I understood the oeuvre as a whole at the time, but it certainly gave me a solid grounding and a lot of confidence for things I did later. Here I write about how I integrated various insights and experiences over time to become the analyst and designer that I am today.

The FileNet document imaging system is used to scan documents so their images can be moved around almost for free while the original hardcopies are warehoused. We’d do a discovery process to map an organization’s activities, say, the disability underwriting section of an insurance company, to find out what was going on. We interviewed representatives of the groups of workers in each process area to identify each of the possible actions they could take in response to receipt of any type of document or group of documents. This gave us the nouns and verbs of the process. Once we knew that, we’d gather up the adjectives of the process: the data that describe the activities, the entities processed (the documents), and the results generated. We gathered the necessary data mostly through interviews and reviews of historical records.

The first phase of the effort involved a cost-benefit analysis that only assessed the volumes and process times associated with the section’s activities. Since this was an estimate, we collected process times through a minimum of direct observation plus descriptions from SMEs. As a cross-check, we reviewed whether our findings made sense in light of what we knew about how many documents were processed daily by each group of workers and the amount of time taken per action. Since the total amount of time spent per day per worker usually totaled up to just around eight hours, we assumed our estimates were on target.

The next step was to identify which actions no longer needed to be carried out by workers, since all operations involving the physical documents were automated. We also estimated the time needed for the new actions of scanning and indexing the documents as they arrived. Finally, given assumptions for average pay rates for each class of worker, we were able to calculate the cost of running the As-Is process and the To-Be automated process and estimate the total savings that would be realized. We ended up having a long discussion about whether we’d save one minute per review or two minutes per review of collated customer files by the actual underwriting analysts, which was the most important single activity in the entire process. We ultimately determined that we could make a sufficient economic case for the FileNet solution by assuming a time savings of only one minute per review. The customer engaged two competitors, each of whom performed similar analyses, and our solution was judged to realize the greatest net savings, about thirty percent per year on labor costs.
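To make the roll-up concrete, here is a back-of-the-envelope sketch of how a per-review time saving becomes an annual labor figure. Every number in it is hypothetical; the post does not give the actual volumes or pay rates.

```cpp
// Sketch: per-review time saved x daily volume x loaded labor rate = annual savings.
// All inputs are assumed for illustration only.
#include <iostream>

int main() {
    const double reviewsPerDay     = 400.0;  // collated files reviewed daily (assumed)
    const double minutesSavedEach  = 1.0;    // the conservative one-minute assumption
    const double loadedRatePerHour = 40.0;   // underwriting analyst cost, $/hr (assumed)
    const double workDaysPerYear   = 250.0;

    const double hoursSavedPerYear = reviewsPerDay * minutesSavedEach / 60.0 * workDaysPerYear;
    const double annualSavings     = hoursSavedPerYear * loadedRatePerHour;
    std::cout << "Hours saved per year: " << hoursSavedPerYear
              << ", annual savings: $" << annualSavings << "\n";
}
```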

The data items identified and analyzed were similar to those I worked with in my previous positions. They were:

  • Document characteristics: The type of document determined how it needed to be processed. The documents had to be collated into patient files, mostly containing medical records, that would be reviewed and scored. This would determine the overall risk for a pool of employees a potential customer company wanted to provide disability coverage for. The insurer’s analysis would determine whether it would agree to provide coverage and what rate would be charged for that pool.
  • Process volumes and contents: These flows were defined in terms of documents per day per worker and per operation, with the total number arriving for processing each day being known.
  • The number and kind of workers in each department is analogous to the equipment described in the systems above. The groups of workers determined the volume of documents that could be processed and the types of transformations and calculations that could be carried out.

Once the initial phase was completed we examined the documents more closely to determine exactly what information had to be captured from them in order to collate them into files for each employee and how those could be grouped with the correct company pool. This information was to be captured as part of the scanning and indexing operation. The documents would be scanned and automatically assigned a master index number. Then, an index operation would follow which involved reading information identifying an employee so it could be entered into a form on screen. Other information was entered on different screens about the applying company and its employee roster. The scores for each employee file, as assigned by the underwriters, also had to be included. The data items needed to design the user and control screens all had to be identified and characterized.

Bricmont (details)

The work I did at Bricmont was mapped a little bit differently than the work at my previous jobs. It still falls into the three main classifications in a sense but I’m going to describe things differently in this case. For additional background, I describe some of the detailed thermodynamic calculations I perform here.

  • Material properties of metals being heated or even melted: As in previous descriptions, the properties of materials are obtained from published research. Examples of properties determined as a function of temperature are thermal conductivity and specific heat capacity. Examples of properties that remained constant were density, the emissivity of (steel) workpieces, and the Stefan-Boltzmann constant.
  • Geometry of furnaces and metal workpieces being heated: The geometry of each workpiece determines the layout of the nodal network within it. The geometry of the furnace determines how heat, and therefore temperature, is distributed at different locations. The location of workpieces relative to each other determines the amount of heat radiation that can be transferred to different sections of the surface of the workpieces (this obviously doesn’t apply for heating by electrical induction). This determines viewing angles and shadows cast.
  • Temperatures and energy inputs: Energy is transferred from furnaces to workpieces usually by radiative heat transfer, except in the cases where electric induction heating is used. Heat transfer is a function of temperature differential (technically the difference between the fourth power of the absolute temperature of the furnace and the fourth power of the absolute temperature of the workpiece) for radiative heating methods, and a function of the electrical inputs minus losses for inductive methods. (A small code sketch of the radiative relationship appears after this list.)
  • Contents of messages received from and sent to external systems: Information received from external systems included messages about the nature of workpieces or materials loaded into a furnace, the values of instrument readings (especially thermocouples that measure temperature), other physical indicators that result in the movement of workpieces through the furnace, and permissions to discharge heated workpieces to the rolling mill, if applicable. Information forwarded to other systems included messages to casters or slab storage to send new workpieces, messages to the rolling mill about the workpiece being discharged, messages to the low-level control system defining what the new temperature or movement setpoints should be, and messages to higher-level administrative and analytic systems about all activities.
  • Data logged and retrieved for historical analysis: Furnace control systems stored a wide range of operating, event, and status data that could be retrieved for further analysis.
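As a rough illustration of the radiative relationship described in the temperatures-and-energy bullet above, simplified to ignore view factors and furnace geometry (which the real models account for), a minimal sketch might look like the following. The emissivity and temperatures are illustrative values, not Bricmont data.

```cpp
// Sketch: net radiative heat flux proportional to the difference of the fourth
// powers of the absolute temperatures. Simplified; inputs are hypothetical.
#include <cmath>
#include <iostream>

double radiativeFlux(double furnaceK, double workpieceK, double emissivity) {
    const double sigma = 5.670374419e-8;  // Stefan-Boltzmann constant, W/(m^2 K^4)
    return emissivity * sigma *
           (std::pow(furnaceK, 4) - std::pow(workpieceK, 4));  // W per m^2 of surface
}

int main() {
    // e.g., 1250 C furnace, 400 C slab surface, emissivity 0.8 (all assumed)
    std::cout << radiativeFlux(1523.15, 673.15, 0.8) << " W/m^2\n";
}
```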

The messages relayed between systems employed a wide variety of inter-process communication methods.

American Auto-Matrix (details)

In this position I worked with low-level controllers that exchanged messages with each other to control HVAC devices, manage other devices and setpoints, and record and analyze historical data. The platforms I worked on were mostly different from those at previous jobs but in principle they did the same things. This was mostly interesting because of the low-level granularity of the controllers used and the variety of communication protocols employed.

Regal Decision Systems and RTR Technologies (details here and here)

The major difference between the work I did with these two companies and all the previous work I’d done is that I switched from doing continuous simulation to discrete-event simulation. I discuss some of the differences here, though I could always go into it in more detail. At a high level the projects I worked on incorporated the same three major classes of data as what I’ve described above (characteristics of the items being processed, the flow and content of items being processed, and the characteristics of the subsystems where the processing occurs). However, while discrete-event simulation can be deterministic, its real power comes from being able to analyze stochastic processes. This is accomplished by employing Monte Carlo methods. I described how those work in great detail yesterday.

To review, here are the major classes of data that describe a process and are generated by a process:

  • Properties of items or materials being processed
    • physical properties of materials that affect processing
    • information content of items that affect processing
    • states of items or materials that affect processing
    • contents of messages that affect processing
  • Volumes of items or materials being processed
  • Characteristics of equipment or activities doing the processing
  • Financial costs and benefits
  • Output data, items, materials, behaviors, or decisions generated by the process
  • Historical data recorded for later analysis

Discrete-Event Simulations and Monte Carlo Techniques

“It was smooth sailing” vs. “I hit every stinkin’ red light today!”

Think about all the factors that might affect how long it takes you to drive in to work every day. What are the factors that might affect your commute, bearing in mind that models can include both scheduled and unscheduled events?

  • start time of your trip
  • whether cars pull out in front of you
  • presence of children (waiting for a school bus or playing street hockey)
  • presence of animals (pets, deer, alligators)
  • timing of traffic lights
  • weather
  • road conditions (rain, slush, snow, ice, hail, sand, potholes, stuff that fell out of a truck, shredded tires, collapsed berms)
  • light level and visibility
  • presence of road construction
  • occurrence of accidents
  • condition of roadways
  • condition of alternate routes
  • mechanical condition of car
  • your health
  • your emotional state (did you have a fight with your significant other? do you have a big meeting?)
  • weekend or holiday (you need to work on a banker’s holiday others may get)
  • presence of school buses during the school year, or crossing guards for children walking to school
  • availability of bus or rail alternatives (“The Red Line’s busted? Again?”)
  • distance of trip (you might work at multiple offices or with different clients)
  • timeliness of car pool companions
  • need to stop for gas/coffee/breakfast/cleaning/groceries/children
  • special events or parades (“What? The Indians finally won the Series?”)
  • garbage trucks operating in residential areas

So how would you build such a simulation? Would you try to represent only the route of interest and apply all the data to that fixed route, or would you build a road network of an entire region to drive (possibly) more accurate conditions on the route of interest? (SimCity simulates an entire route network based on trips taken by the inhabitants in different buildings, and then animates moving items proportional to the level of traffic in each segment.)

Now let’s try to classify the above occurrences in a few ways.

  • Randomly generated outcomes may include:
    • event durations
    • process outcomes
    • routing choices
    • event occurrences (e.g., failure, arrival; Poisson distribution)
    • arrival characteristics (anything that affects outcomes)
    • resource availability
    • environmental conditions
  • Random values may be obtained by applying methods singly and in combination, which can result in symmetrical or asymmetrical results:
    • data collected from observations
    • continuous vs. discrete function outputs
    • single- and multi-dice combinations
    • range-defined buckets
    • piecewise linear curve fits
    • statistical and empirical functions (the SLX programming language includes native functions for around 40 different statistical distributions)
    • rule-based conditioning of results

Monte Carlo is a famous gambling destination in the Principality of Monaco. Gambling, of course, is all about random events in a known context. Knowing that context (and setting ultimate limits on the size of bets) is how the house maintains its advantage, but that’s a discussion for another day! When applied to simulation, the random context comes into play in two ways. First, the results of individual (discrete) events are randomized, so a range of results is generated as the simulation runs over multiple iterations. The random results are generated based on known distributions of possible outcomes. Sometimes these are made by guesstimating, but more often they are based on data collected from actual observations. (I’ll describe how that’s done, below.) The second way the random context comes into play is when multiple random events are incorporated into a single simulation so their results interact. If you think about it, it wouldn’t be very interesting to analyze a system based on a single random distribution, because the output distribution would be essentially the same as the input distribution. It’s really the interaction of numerous random events that makes such analyses interesting.

First, let’s describe how we might collect the data from which we’ll build a random distribution. We’ll start by collecting a number of sample values using some form of the observation technique from the BABOK, ensuring that we capture a sufficient number of values.

What we do next depends on the type of data we’re working with. In both classic cases we start by arranging the data items in order and figuring out how many occurrences of each kind of value there are. If we’re dealing with a limited number of specific values, examples of which could be the citizenship category of a person presenting themselves for inspection at a border crossing or the number of separate deliveries it will take to fulfill an order, then we just count up the number of occurrences of each measured value. If we’re dealing with a continuous range of values that has a known upper and lower limit, with examples being process times or the total value of an order, then we must break the ordered data into “buckets” across intervals chosen to capture the shape of the data accurately. Sometimes data is collected in raw form and then analyzed to see how it should be broken down, while in other cases a judgment is made about how the data should be categorized ahead of time.

The data collection form below shows an example of how continuous data was pre-divided into buckets ahead of time. See the three rightmost columns for processing time. Note further that items that took less than two minutes to process could be indicated by not checking any of the boxes.

In the case where the decision of how to analyze data is made after it’s collected, we’ll use the following procedures. If we identify a limited number of fixed categories or values, we’ll simply count the number of occurrences of each, arrange them in some kind of order (if that makes sense), compute the cumulative number of readings, and determine the proportion of each result’s occurrence. A random number (from zero to one) can then be compared against the cumulative distribution shown below, which determines the bucket and the related result.

Given data in a spreadsheet like this…

…the code could be initialized and run something like this:
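Since the original listing isn’t reproduced here, the following is only a minimal C++ sketch of the idea: the observed counts are turned into cumulative proportions, and a uniform random draw is walked through the buckets until the first one that covers it is found. The category labels and counts are invented for illustration.

```cpp
// Sketch: build a discrete empirical distribution from observed counts and
// sample it with a uniform random draw. All observed values are hypothetical.
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Bucket {
    double cumulative;   // upper bound of this bucket's cumulative proportion
    std::string result;  // value returned when the draw lands in this bucket
};

std::vector<Bucket> buildDistribution() {
    // Hypothetical observed counts, e.g., traveler categories at a border crossing
    const std::vector<std::pair<std::string, int>> observed = {
        {"Citizen", 62}, {"Visitor", 25}, {"Commercial", 9}, {"Other", 4}};
    int total = 0;
    for (const auto& o : observed) total += o.second;

    std::vector<Bucket> dist;
    double running = 0.0;
    for (const auto& o : observed) {
        running += static_cast<double>(o.second) / total;  // cumulative proportion
        dist.push_back({running, o.first});
    }
    dist.back().cumulative = 1.0;  // guard against rounding shortfall
    return dist;
}

std::string sample(const std::vector<Bucket>& dist) {
    const double r = static_cast<double>(std::rand()) / RAND_MAX;  // uniform 0..1
    for (const auto& b : dist) {
        if (r <= b.cumulative) return b.result;  // first bucket whose bound covers r
    }
    return dist.back().result;
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    const auto dist = buildDistribution();
    for (int i = 0; i < 5; ++i) std::cout << sample(dist) << "\n";
}
```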

There are always things that could be changed or improved. For example, the data could be sorted in order from most likely to least likely to occur, which would minimize the execution time as the function would generally loop through fewer possibilities. Alternatively, the code could be changed so some kind of binary search is used to find the correct bucket. This would make the function run times highly consistent at the cost of making the code more complex and difficult to read.
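For what it’s worth, here is a sketch of the binary-search alternative just mentioned, using std::upper_bound over the ascending cumulative bounds. It assumes the distribution is held as parallel arrays of cumulative proportions and result values of equal length, which is my own layout choice for the example.

```cpp
// Sketch: locate the bucket with a binary search instead of a linear scan,
// which keeps lookup times consistent for distributions with many buckets.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

std::string sampleBinary(const std::vector<double>& cumulative,   // ascending, ends at 1.0
                         const std::vector<std::string>& results, // same size as cumulative
                         double r)                                // uniform draw in [0,1)
{
    // First cumulative bound strictly greater than r identifies the bucket.
    const auto it = std::upper_bound(cumulative.begin(), cumulative.end(), r);
    const std::size_t idx = (it == cumulative.end())
                                ? cumulative.size() - 1  // guard for r == 1.0
                                : static_cast<std::size_t>(it - cumulative.begin());
    return results[idx];
}
```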

This is pretty straightforward and the same methods could be used in almost any programming language. That said, some languages have specialized features to handle these sorts of operations. Here’s how the same random function declaration would look in GPSS/H, which is something of an odd beast of a language that looks a little bit like assembly in its formatting (though not for its function declarations). GPSS/H is so unusual, in fact, that the syntax highlighter here doesn’t recognize that language and color it consistently.
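The original GPSS/H listing isn’t shown here, so what follows is only a hypothetical reconstruction based on the description in the next paragraph: a discrete function keyed on RN1 with ten cumulative-probability/result pairs separated by slashes. The numeric values are invented.

```
12345    FUNCTION    RN1,D0010
.08,1/.20,2/.35,3/.52,4/.67,5/.79,6/.88,7/.94,8/.98,9/1.0,10
```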

In this case the 12345 is an example of the function index, the handle by which the function is called. RN1 indicates that the function draws a random value from 0.0 to 0.999999, compares it against the first value in each pair, and, when the correct bucket is found, returns the second value. D0010 indicates that the array has ten elements. The defining values are given as a series of x,y value pairs separated by slashes. The first number of each pair is the independent value while the second is the dependent value.

The language defines five different types of functions. Some function types limit the output values to the minimum or maximum if the input is outside the defined range. I’m not doing that in my example functions because I’ve limited the possible input values.

So that’s the simple case. Continuous functions do mostly the same thing but also perform a linear interpolation between the endpoints of each bucket’s dependent value. Let’s look at the following data set and work through how we might condition the information for use.

In this case we just divide the range of observed results (18-127 seconds) into 20 buckets and see how many results fall into each range. We then compute the cumulative counts and proportions as before, though we need 21 values so we have upper and lower bounds for all 20 buckets for the interpolation operation. If longer sections of the results curve appear to be linear, then we can omit some of the values in the actual code implementation. If we do this, then we want to fit a piecewise linear curve to the data. The red line graph superimposed on the bar chart above shows that the fit including only the items highlighted in orange in the table is fairly accurate across items 5 through 12, 12 through 18, and 18 through 20.

The working part of the C++ code would look like this:
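Again, the original listing isn’t reproduced here, so this is a minimal sketch of the continuous case: the random draw is located in the cumulative table and the returned duration is interpolated linearly between the bucket endpoints. The five-point table in main() stands in for the full 21-point table described above; its values are invented.

```cpp
// Sketch: sample a continuous empirical distribution with linear interpolation
// between bucket endpoints. Table values below are hypothetical.
#include <cstddef>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

double sampleContinuous(const std::vector<double>& cum,   // ascending, 0.0 to 1.0
                        const std::vector<double>& value) // same size, ascending durations
{
    const double r = static_cast<double>(std::rand()) / RAND_MAX;
    for (std::size_t i = 1; i < cum.size(); ++i) {
        if (r <= cum[i]) {
            const double frac = (r - cum[i - 1]) / (cum[i] - cum[i - 1]);  // position in bucket
            return value[i - 1] + frac * (value[i] - value[i - 1]);        // interpolated result
        }
    }
    return value.back();
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    // Hypothetical 5-point piecewise linear fit (a full table would hold 21 points).
    const std::vector<double> cum   = {0.0, 0.25, 0.60, 0.90, 1.0};
    const std::vector<double> value = {18.0, 42.0, 66.0, 95.0, 127.0};  // seconds
    for (int i = 0; i < 5; ++i) std::cout << sampleContinuous(cum, value) << " s\n";
}
```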

Looking at the results of a system simulated in this way you can get a range of answers. Sometimes you get a smooth curve that might be bell-shaped and skewed to the right. This would indicate that much of the time your commute duration fell in a typical window, but was occasionally a little shorter and sometimes longer, sometimes by a lot. Sometimes the results can be discontinuous, which means you sometimes get a cluster of really good or really bad overall results if things stack up just right. Some types of models are so granular that the variations tend to cancel each other out, and thus we didn’t need to run many iterations. This seemed to be the case with the detailed traffic simulations we built and ran for border crossings. In other cases, like the more complicated aircraft support logistics scenarios we analyzed, the results could be a lot more variable. This meant we had to run more iterations to be sure we were generating a representative output set.

Interestingly, exceptionally good and exceptionally poor systemic results can be driven by the exact same data for the individual events. It’s just how things stack up from run to run that makes a given iteration good or bad. If you are collecting data during a day when an exceptionally good or bad result was obtained in the real world, the granular information obtained should still give you the more common results in the majority of cases. This is a very subtle thing that took me a while to understand. And, as I’ve explained elsewhere, there are a lot of complicated things a practitioner needs to understand to be able to do this work well. The teams I worked on had a lot of highly experienced programmers and analysts and we had some royal battles over what things meant and how things should have been done. In the end I think we ended up with a better understanding and some really solid tools and results.


Combined Survey Results (late March 2019)

The additional survey results from yesterday are included in the combined results here.

List at least five steps you take during a typical business analysis effort.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance
  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability
  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings
  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance
  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders ensersy requirements
  8. other notes
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps
    • no standard
  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope
  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing
  1. planning
  2. elicitation
  3. requirements
  4. specification writing
  5. QA
  6. UAT
  1. identify the problem
  1. studying subject matter
  2. planning
  3. elicitation
  4. functional specification writing
  5. documentation
  1. identify stakeholders
  2. assess working approach (Waterfall, Agile, Hybrid)
  3. determine current state of requirements and maturity of project vision
  4. interview stakeholders
  5. write and validate requirements
  1. problem definition
  2. value definition
  3. decomposition
  4. dependency analysis
  5. solution assessment
  1. process mapping
  2. stakeholder interviews
  3. write use cases
  4. document requirements
  5. research
  1. listen – to stakeholders and customers
  2. analyze – documents, data, etc. to understand things further
  3. repeat back what I’m hearing to make sure I’m understanding correctly
  4. synthesize – the details
  5. document – as needed (e.g., Visio diagrams, PowerPoint decks, Word, tools, etc.)
  6. solution
  7. help with implementing
  8. assess and improve – if/as needed
  1. understand the problem
  2. understand the environment
  3. gather the requirements
  4. align with IT on design
  5. test
  6. train
  7. deploy
  8. follow-up
  1. watch how it is currently done
  2. listen to clients’ pain points
  3. define goals of project
  1. critical path tasks
  2. pros/cons of tasks
  3. impacts
  4. risks
  5. goals
  1. discovery – high level
  2. analysis / evaluation
  3. presentation of options
  4. requirements gathering
  5. epic / feature / story definition
  6. prioritization
  1. who is driving the requirements?
  2. focus on what is needed for project
  3. who is going to use the product?
  1. elicit requirements
  2. hold focus groups
  3. create mock-ups
  4. test
  5. write user stories
  1. analyze
  2. document process
  3. identify waste (Lean)
  4. communicate
  5. document plan / changes
  1. meeting
  2. documentation
  3. strategy
  4. execution plan
  5. reporting plan
  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream): how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • A description of “weird” usually goes along with a particular person I am working with rather than a project. Some people like things done a certain way or they need things handed to them or their ego stroked. I accommodate all kinds of idiosyncrasies so that I can get the project done on time.
  • adjustments in project resources
  • after initial interview, began prototyping and iterated through until agreed upon design
  • built a filter
  • create mock-ups and gather requirements
  • create strategy to hit KPIs
  • data migration
  • data dictionary standardization
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • design sprint
  • design thinking
  • developers and I create requirements as desired
  • did my own user experience testing
  • document requirements after development work has begun
  • documented non-value steps in a process new to me
  • explained project structure to stakeholders
  • For a client who was unable to clearly explain their business processes and where several SMEs had to be consulted to form the whole picture, I drew workflows to identify inputs/outputs, figure out where the gaps in our understanding existed, and identify the common paths and edge cases.
  • got to step 6 (building) and started back at step 1 multiple times
  • guided solutioning
  • identified handoffs between different contractors
  • identify end results
  • interview individuals rather than host meetings
  • investigate vendor-provided code for business process flows
  • iterative development and delivery
  • made timeline promises to customers without stakeholder buy-in/signoff
  • make executive decisions without stakeholder back-and-forth
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved heavy equipment
  • moved servers from one office to another
  • observe people doing un-automated process
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • physically simulate each step of an operational process
  • process development
  • regular status reports to CEO
  • resources and deliverables
  • reverse code engineering
  • review production incident logs
  • showed customer a competitor’s website to get buy-in for change
  • simulation
  • start with techniques from junior team members
  • starting a project without getting agreed funding from various units
  • statistical modeling
  • surveys
  • team up with PM to develop a plan to steer the sponsor in the right direction
  • town halls
  • track progress in PowerPoint because the sponsor insisted on it
  • train the team how to read use case diagrams
  • translating training documents into Portuguese
  • travel to affiliate sites to understand their processes
  • understanding cultural and legal requirements in a foreign country
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings
  • worked with a mechanic
  • write requirements for what had been developed

Name three software tools you use most.

  • Excel (27)
  • Visio (18)
  • Jira (17)
  • Word (15)
  • Confluence (8)
  • Outlook (7)
  • PowerPoint (7)
  • SharePoint (6)
  • Azure DevOps (5)
  • Google Docs (4)
  • MS Team Foundation Server (4)
  • email (3)
  • MS Teams (3)
  • Draw.io (2)
  • MS Dynamics (2)
  • MS Office (2)
  • MS Visual Studio (2)
  • Notepad (2)
  • OneNote (2)
  • Siebel (2)
  • Slack (2)
  • SQL Server (2)
  • Version One (2)
  • Adobe Reader (1)
  • all MS products (1)
  • ARC / Knowledge Center(?) (Client Internal Tests) (1)
  • Balsamiq (1)
  • Basecamp (1)
  • Blueprint (1)
  • Bullhorn (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Enbevu(?) (Mainframe) (1)
  • Enterprise Architect (1)
  • Gephi (dependency graphing) (1)
  • Google Calendar (1)
  • Google Drawings (1)
  • illustration / design program for diagrams (1)
  • iRise (1)
  • Kingsway Soft (1)
  • Lucid Chart (1)
  • LucidChart (1)
  • Miro Real-Time Board (1)
  • MS Office tools (1)
  • MS Project (1)
  • MS Word developer tools (1)
  • NUnit (1)
  • Pendo (1)
  • Power BI (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • RoboHelp (1)
  • Scribe (1)
  • Scrumhow (?) (1)
  • Skype (1)
  • SnagIt (1)
  • SQL (1)
  • Tableau (1)
  • Visible Analyst (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • interviews (4)
  • communication (3)
  • brainstorming (2)
  • meetings (2)
  • process mapping (2)
  • prototyping (2)
  • relationship building (2)
  • surveys (2)
  • wireframing (2)
  • “play package” (1)
  • 1-on-1 meetings to elicit requirements (1)
  • active listening (1)
  • analysis (1)
  • analyze audience (1)
  • apply knowledge of psychology to figure out how to approach the various personalities (1)
  • business process analysis (1)
  • business process modeling (1)
  • calculator (1)
  • change management (1)
  • charting on whiteboard (1)
  • coffees with customers (1)
  • coffees with teams (1)
  • collaboration (1)
  • conference calls (1)
  • conflict resolution and team building (1)
  • costing out the requests (1)
  • critical questioning (1)
  • critical questioning (ask why five times), funnel questioning (1)
  • data analysis (1)
  • data modeling (1)
  • decomposition (1)
  • design thinking (1)
  • develop scenarios (1)
  • development efforts (1)
  • diagramming/modeling (1)
  • document analysis (1)
  • documentation (1)
  • documenting notes/decisions (1)
  • drinking (1)
  • elicitation (1)
  • expectation level setting (1)
  • face-to-face technique (1)
  • facilitation (1)
  • fishbone diagram (1)
  • Five Whys (1)
  • focus groups (1)
  • handwritten note-taking (1)
  • hermeneutics / interpretation of text (1)
  • impact analysis (1)
  • individual meetings (1)
  • informal planning poker (1)
  • initial mockups / sketches (1)
  • interview (1)
  • interview end user (1)
  • interview stakeholders (1)
  • interview users (1)
  • interviewing (1)
  • JAD sessions (Joint Application Development Sessions) (1)
  • job shadowing (1)
  • listening (1)
  • lists (1)
  • meeting facilitation (prepare an agenda, define goals, manage time wisely, ensure notes are taken and action items documented) (1)
  • mind mapping (1)
  • notes (1)
  • note-taking (1)
  • observation (1)
  • organize (1)
  • paper (1)
  • paper easels (1)
  • pen and paper (1)
  • phone calls and face-to-face meetings (1)
  • Post-It notes (Any time of planning or breaking down of a subject, I use different colored Post-Its, writing with a Sharpie, on the wall. This allows me to physically see an idea from any distance. I can also move and categorize at will. When done, take a picture.) (1)
  • prioritization (MOSCOW) (1)
  • process decomposition (1)
  • process design (1)
  • process flow diagrams (1)
  • process modeling (1)
  • product vision canvas (1)
  • prototyping (can be on paper) (1)
  • recognize what are objects (nouns) and actions (verbs) (1)
  • requirements elicitation (1)
  • requirements meetings (1)
  • requirements verification and validation (1)
  • requirements workshop (1)
  • responsibility x collaboration using index cards (1)
  • rewards (food, certificates) (1)
  • Scrum Ceremonies (1)
  • Scrums (1)
  • shadowing (1)
  • SIPOC (1)
  • sketching (1)
  • spreadsheets (1)
  • stakeholder analysis (1)
  • stakeholder engagement (1)
  • stakeholder engagement – visioning to execution and post-assessment (1)
  • stakeholder interviews (1)
  • swim lanes (1)
  • taking / getting feedback (1)
  • taking notes (1)
  • test application (1)
  • training needs analysis (1)
  • use paper models / process mapping (1)
  • user group sessions (1)
  • user stories (1)
  • visual modeling (1)
  • walking through client process (1)
  • whiteboard diagrams (1)
  • whiteboard workflows (1)
  • whiteboarding (1)
  • whiteboards (1)
  • workflows (1)
  • working out (1)
  • workshops (1)

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • add enhancements to work flow app
  • adding feature toggles for beta testing
  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual form with a workflow
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate and ease reporting (new tool)
  • automate highly administrative, easily repeatable processes which have wide reach
  • automate manual process
  • automate new process
  • automate risk and issue requirements
  • automate the contract management process
  • automate the process of return goods authorizations
  • automate workflow
  • automate workflows
  • automation
  • block or restore delivery service to areas affected by disasters
  • bring foreign locations into a global system
  • build out end user-owned applications into IT managed services
  • business process architecture
  • clear bottlenecks
  • consolidate master data
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • data change/update
  • data migration
  • design processes
  • develop a new process to audit projects in flight
  • develop and interface between two systems
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • develop new software
  • document current inquiry management process
  • enhance current screens
  • enhance system performance
  • establish standards for DevOps
  • establish vision for various automation
  • I work for teams implementing Dynamics CRM worldwide. I specialize in data migration and integration.
  • implement data interface with two systems
  • implement new software solution
  • implement software for a new client
  • implement vendor software with customizations
  • improve a business process
  • improve system usability
  • improve the usage of internal and external data
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • integrate a new application with current systems/vendors
  • maintain the MD Product Evaluation List (online)
  • map geographical data
  • merge multiple applications
  • migrate to a new system
  • move manual Excel reports online
  • new functionality
  • process data faster
  • process HR data and store records
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide business recommendations
  • provide new functionality
  • recover fuel-related cost fluctuations
  • redesign
  • redesign a system process to match current business needs
  • reduce technical debt
  • re-engineer per actual user requirements
  • reimplement solution using newer technology
  • replace current analysis tool with new one
  • replace legacy system
  • replace manual tools with applications
  • replatform legacy system
  • rewrite / redesign screens
  • simplify / redesign process
  • simplify returns for retailer and customer
  • standardize / simplify a process or interface
  • system integration
  • system integration / database syncing
  • system performance improvements
  • system-to-system integration
  • technical strategy for product
  • transform the customer experience (inside and outside)
  • UI optimization
  • update a feature on mobile app
  • update the e-commerce portion of a website to accept credit and debit cards

A Simulationist’s Framework for Business Analysis: Round Five

Today I gave this talk at the Project Summit – Business Analyst World conference in Orlando. The slides I used for this presentation are here. The most recent version of the slides, which includes links to detailed discussions of more topics, is at:

http://rpchurchill.com/presentations/SimFrameForBA/index.html

In this version I further developed my concept of the Unified Theory of Business Analysis. I also collected more survey responses, the results of which are reported below.

List at least five steps you take during a typical business analysis effort.

  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream): how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • data migration
  • did my own user experience testing
  • got to step 6 (building) and started back at step 1 multiple times
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved servers from one office to another
  • process development
  • showed customer a competitor’s website to get buy-in for change

Name three software tools you use most.

  • Visio (4)
  • Excel (3)
  • Jira (3)
  • MS Teams (3)
  • PowerPoint (3)
  • Draw.io (2)
  • Word (2)
  • Balsamiq (1)
  • Google Docs (1)
  • iRise (1)
  • Lucid Chart (1)
  • Miro Real-Time Board (1)
  • MS Office (1)
  • Pendo (1)
  • Siebel (1)
  • Slack (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • brainstorming
  • brainstorming
  • business process modeling
  • charting on whiteboard
  • document analysis
  • interview
  • interviews
  • job shadowing
  • mind mapping
  • note-taking
  • paper easels
  • product vision canvas
  • requirements elicitation
  • requirements workshop
  • SIPOC
  • surveys
  • taking / getting feedback
  • walking through client process
  • whiteboards
  • workshops

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • adding feature toggles for beta testing
  • automate manual process
  • automate risk and issue requirements
  • enhance current screens
  • new functionality
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide new functionality
  • replace legacy system
  • rewrite / redesign screens
  • system performance improvements
  • system-to-system integration
  • UI optimization

What Do I Mean By “Solve The Problem Abstractly?”

Requirements and Design Phases

Looking at the phases I’ve described in my business analysis framework, I wanted to describe the major difference between the requirements and design steps. The artifacts created as part of these phases, which usually include documentation but can also include prototypes, can vary greatly. I’ve seen these steps take a lot of different forms and I’ve certainly written and contributed to a lot of documents and prototypes that looked very, very different.

As I’ve described in my presentations, the phases in my framework are:

  • Project Planning: Planning the engagement, what will be done, who will be involved, resource usage, communication plan, and so on.
  • Intended Use: The business problem or problems to be solved and what the solution will try to accomplish.
  • Assumptions, Capabilities, Limitations, and Risks & Impacts: Identifying what will be included and omitted, what assumptions might be made in place of hard information that might not be available, and the consequences of those choices.
  • Conceptual Model: The As-Is state. Mapping of the existing system as a starting point, or mapping a proposed system as part of requirements and design for new projects.
  • Data Sources: Identifying sources of data that are available, accurate, and authoritative.
  • Requirements: The Abstract To-Be state. Description of the elements and activities to be represented, the data needed to represent them, and any meta-information or procedures needed for support.
  • Design: The Concrete To-Be state. Description of how the identified requirements are to be instantiated in a working solution (either as a new system or as a modification to an existing system) and the plan for implementation and transition.
  • Implementation: The phase where the solution is actually implemented. In a sense, this is where the actual work gets done.
  • Test Operation and Usability: Verification. Determining whether the mechanics of the solution work as designed.
  • Test Outputs and Suitability for Purpose: Validation. Determining whether the properly functioning system solves the actual problem it was intended to solve.
  • Acceptance: Accreditation. Having the customer (possibly through an independent agent) decide whether they will accept the solution provided.

The requirements and design phases are sometimes combined by practitioners but I want to discuss how they can and should be differentiated. Also, given the potentially amorphous and iterative nature of this process, I’ll also discuss how the requirements phase interacts with and overlaps the conceptual model and the data sources phases.

As you can see from the description of the phases above, I refer to the requirements phase as coming up with an abstract solution to the problem under analysis. People have a lot of different conceptions of what this can be. For example, it can contain written descriptions of actions that need to be supported. It can be written solely in the form of user stories. My preference is to begin with some form of system diagram or process map to provide a base to work from and to leverage the idea that a picture is worth a thousand words. That is to be followed by descriptions of all events, activities, messages, entities, and data items that are involved.

Data

Even more specifically, the nature of all the data items should be defined. Six Sigma methodologies define nine different characteristics that can be associated with each data item. They can be looked up easily enough. I prefer to think about things this way (a small code sketch of this characterization follows the list):

  • Measure: A label for the value describing what it is or represents
  • Type of Data:
    • numeric: intensive value (temperature, velocity, rate, density – characteristic of material that doesn’t depend on the amount present) vs. extensive value (quantity of energy, mass, count – characteristic of material that depends on amount present)
    • text or string value (names, addresses, descriptions, memos, IDs)
    • enumerated types (color, classification, type)
    • logical (yes/no, true/false)
  • Continuous or Discrete: most numeric values are continuous but counting values, along with all non-numeric values, are discrete
  • Deterministic vs. Stochastic: values intended to represent specific states (possibly as a function of other values) vs. groups or ranges of values that represent possible random outcomes
  • Possible Range of Values: numeric ranges or defined enumerated values, along with format limitations (e.g., credit card numbers, phone numbers, postal addresses)
  • Goal Values: higher is better, lower is better, defined/nominal is better
  • Samples Required: the number of observations that should be made to obtain an accurate characterization of possible values or distributions
  • Source and Availability: where and whether the data can be obtained and whether assumptions may have to be made in its absence
  • Verification and Authority: how the data can be verified (for example, data items provided by approved individuals or organizations may be considered authoritative)
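
To make this concrete, here is a minimal sketch, in Python, of how such a characterization might be recorded for a single data item. The field names and example values are my own illustrative choices, not a prescribed format.

    from dataclasses import dataclass
    from enum import Enum

    class DataKind(Enum):
        NUMERIC = "numeric"
        TEXT = "text"
        ENUMERATED = "enumerated"
        LOGICAL = "logical"

    class Goal(Enum):
        HIGHER_IS_BETTER = "higher"
        LOWER_IS_BETTER = "lower"
        NOMINAL_IS_BETTER = "nominal"

    @dataclass
    class DataItemSpec:
        measure: str            # label describing what the value represents
        kind: DataKind          # numeric, text, enumerated, or logical
        continuous: bool        # False for counts and all non-numeric values
        stochastic: bool        # True if the value represents a random outcome
        valid_range: tuple      # numeric bounds or a tuple of allowed values
        goal: Goal              # higher, lower, or nominal is better
        samples_required: int   # observations needed to characterize it
        source: str             # where the data can be obtained
        authority: str          # how the value is verified

    # Example: processing time for an item moving through one station.
    processing_time = DataItemSpec(
        measure="station processing time (minutes)",
        kind=DataKind.NUMERIC,
        continuous=True,
        stochastic=True,
        valid_range=(0.0, 120.0),
        goal=Goal.LOWER_IS_BETTER,
        samples_required=200,
        source="time-stamped station logs",
        authority="operations manager sign-off",
    )

Capturing items in a structure like this, even informally, makes it obvious when a characteristic hasn’t been thought about yet.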

These identified characteristics provide guidance to the participants in the design and implementation phases. Many of the data items may be identified as part of the conceptual model phase. These are the items associated with the As-Is state (for cases where there is an existing system or process to analyze). New data items will usually be identified as part of the requirements phase, both for modified parts of the original system and for the elements needed to control them. Data items may also be removed during the transition from the conceptual model to the requirements if an existing system is being pruned or otherwise rearranged or streamlined.

I wrote here about one of my college professor’s admonitions that you need to solve the problem before you plug the numbers in. He described how students would start banging away on their hand-held calculators when they got lost (yeah, I’m that old!), as if that was going to help them in some way. He said that any results obtained without truly understanding what was going on were only going to lead to further confusion. The students needed to fully develop the equations needed to describe the system they were analyzing (he was my professor for Physics 3 in the fall of my sophomore year), and simplify and rearrange them until the desired answer stood alone on one side of the equals sign. Then and only then should the given values be plugged into the variables on the other side of the equals sign, so the numeric answer could be generated. Problems can obviously involve more than single values for answers but the example is pretty clear.

So, the requirements should identify the data items that need to be included to describe and control the solution. The design might then define the specific forms those representations will take. Those definitions can include the specific form of the variables (in terms of type and size or width) appropriate for the computer language or tool (e.g., database implementation) in which the solution (or the computer-based part of it) will be implemented. Those details definitely must be determined for the implementation.

The requirements descriptions should also include how individual data items should naturally be grouped. This includes groupings for messages, records for storage and retrieval, and associations with specific activities, entities, inputs, and outputs.
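
As a hedged illustration of what such a grouping can look like once the design phase assigns concrete types, here is a small Python sketch of a record that bundles several abstract data items into one stored or transmitted unit. The record name and fields are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    # A requirements-level grouping ("the items that travel together when a
    # work item leaves a station") given concrete types during design.
    @dataclass
    class StationCompletionRecord:
        work_item_id: str          # text identifier (abstract: ID string)
        station_name: str          # enumerated in the requirements, stored as text
        completed_at: datetime     # event time (abstract: timestamp)
        processing_minutes: float  # continuous numeric value
        passed_inspection: bool    # logical yes/no value

    record = StationCompletionRecord(
        work_item_id="WI-00042",
        station_name="primary_inspection",
        completed_at=datetime(2019, 6, 1, 10, 30),
        processing_minutes=17.5,
        passed_inspection=True,
    )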

Contexts

An important part of what must be captured and represented is the information needed for the user to interact with and control the system. The conceptual model, particularly when a simulation is involved, mostly contains information about the process itself, and can capture only a limited scope of user interactions. The requirements phase is where it’s most important to incorporate all contexts of user behavior and interactions.

The two major contexts of user interaction involve manipulations of data items (typically CRUD operations) and initiation of events (typically non-CRUD operations). Another contextual differentiation is the operating mode of a system. Modes can involve creation and editing, running, maintenance, analysis, and deployment, among others.
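
One minimal way to capture these contexts while writing requirements is to tag each user interaction with the kind of operation and the mode in which it is allowed. The sketch below is illustrative; the mode names are my assumptions, not a standard list.

    from enum import Enum

    class Operation(Enum):
        CREATE = "create"   # CRUD manipulations of data items
        READ = "read"
        UPDATE = "update"
        DELETE = "delete"
        EVENT = "event"     # non-CRUD initiation of an action (start a run, pause, etc.)

    class Mode(Enum):
        EDIT = "edit"
        RUN = "run"
        MAINTENANCE = "maintenance"
        ANALYSIS = "analysis"
        DEPLOYMENT = "deployment"

    # Requirements-level statement: which operations are allowed in which modes.
    allowed = {
        Mode.EDIT: {Operation.CREATE, Operation.READ, Operation.UPDATE, Operation.DELETE},
        Mode.RUN: {Operation.READ, Operation.EVENT},
        Mode.ANALYSIS: {Operation.READ},
    }

    def is_allowed(mode: Mode, op: Operation) -> bool:
        """Check a proposed user interaction against the mode/operation table."""
        return op in allowed.get(mode, set())

    print(is_allowed(Mode.RUN, Operation.EVENT))   # True
    print(is_allowed(Mode.RUN, Operation.DELETE))  # False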

Skills Required for Different Phases

Identification of activities and values can be performed by anyone with good analysis skills, though it always helps to also have architecture and implementation skills. Business analysts should have the requisite skills to carry out all actions of the requirements phase.

The design phase is where practitioners with specific skills start to be needed. The skills needed are driven by the environment in which the solution will be implemented. Solutions come in many shapes and sizes. Processes for serving fancy coffees, making reservations at hospitality and entertainment venues, and manufacturing hand tools can be very different, and the skills needed to design solutions can vary widely.

A standard, BABOK-based business analyst should be able to analyze and design a process for serving customers directly in a store. Designing a computer-based solution will require different skills based on the scope, scale, and components of the solution envisioned. A solution based on a single desktop might require basic programming experience. A solution based on peer-to-peer networked computers might require experience in programming, inter-process communication, and real-time considerations. A simple, web-based system might require knowledge of basic web programming, different levels of understanding of tools like WordPress, or other automated web-builder tools. An enterprise-level, web-based system might require knowledge of DevOps, server virtualization, cloud computing, and clustering. People mostly seem to refer to this skillset when they use the term solutions architect, though I interpret this term more widely. Designing a manufacturing system might require knowledge of engineering, specific physical processes, and non-trivial knowledge of a wide range of computing subjects. A knowledge of simulation might be helpful for a lot of different solutions.

No matter what skills are required or thought to be required by the design practitioners, the business analyst needs to be able to work closely with them to serve as an effective bridge to business customers. I’m certainly capable of creating designs in most if not all of these contexts, even if I don’t have some of the specific implementation skills. No one knows everything, what’s important is to know what’s needed and how to work with and organize the requisite practitioners.

The skills required for the implementation phase are definitely more specific based on the specific nature of the solution. A business analyst needs to be able to communicate with all kinds of implementation practitioners (in both directions!) in order to serve as an effective liaison between those implementors and the business customers they serve.

Summary

Solving the problem abstractly, to me, means writing a comprehensive set of requirements that facilitates the creation of an effective design. The requirements are the abstract representation of the To-Be state while the design is the concrete representation of the To-Be state; the design describes what’s actually going to be built. The implementation results in the actual To-Be state.


Unified Theory of Business Analysis: Part Three

How The Most Commonly Used Software Tools Apply to the Solution Effort and the Engagement Effort

Continuing last week’s discussions I wanted to analyze how business analysts tend to apply their favorite software tools.

  • Excel (24)

    Microsoft Excel is such a general purpose tool that it can be used to support almost any activity. It is highly useful for manipulating data and supporting calculations for solutions but is equally useful for managing schedule, cost, status, and other information as part of any engagement. I’ve been using spreadsheets since the 80s and I know I’ve used them extensively for both kinds of efforts.

  • Jira (14)

    While Jira and related systems (e.g., Rally, which surprisingly never came up in my survey even once) are used to manage information about solutions, the usage is less about the solutions themselves than about keeping everything straight and aiding in communication and coordination. As such, I consider Jira to be almost entirely geared to supporting engagement efforts.

  • Visio (14)

    Visio could be used for diagramming work breakdown structures, critical paths, organizational arrangements, and so on, but it seems far more directed to representing systems and their operations as well as solutions and their designs. Therefore I classify Visio primarily as an aid to solution efforts.

  • Word (13)

    Like Excel, Microsoft Word is another general purpose tool that can be used to support both solution efforts and engagement efforts.

  • Confluence (8)

    Confluence is a bit of an odd duck. It’s really good for sharing details about rules, discovered operations and data, and design along with information about people and project statuses. It suffers from the weakness that the information entered can be difficult to track down unless some form of external order is imposed. The tool can readily be used to support both kinds of efforts.

  • Outlook (7)

    Microsoft Outlook is a general communication tool that manages email, meetings, facilities, and personnel availability. It is most often leveraged to support the engagement effort.

  • SharePoint (6)

    SharePoint is another Microsoft product that facilitates sharing of information among teams, this time in the form of files and notifications. This makes it most geared towards supporting engagement efforts.

  • Azure DevOps (5)

    I’m not very familiar with this tool, having only been exposed to it during a recent presentation at the Tampa IIBA Meetup. It seems to me that this product is about instantiating solutions while its sister product, Visual Studio Team Services (VSTS) is meant to serve the engagement effort. I would, of course, be happy to hear different opinions on this subject.

  • Team Foundation Server (4)

    I’m mostly classifying this as supporting the engagement effort, since it seems to be more about communication and coordination, but to the degree that it supports source code directly it might have to be considered as supporting solution efforts as well. Like I’ve said, feel free to pipe in with your own opinion.

  • PowerPoint (4)

    I’ve used PowerPoint for its effective diagramming capabilities, which sometimes allow you to do things that are more difficult to do in other graphics packages like Visio. That said, as primarily a presentation and communications tool, I think it should primarily be classified as supporting engagements.

  • Email (3)

    Email is a general form of communication and coordination and is definitely most geared toward engagement efforts.

  • Google Docs (3)

    These tools provide analogs to the more general Microsoft Office suite of tools and should support both solution and engagement efforts. However, I’m thinking these are more generally used to support the engagement side of things.

  • MS Dynamics (2)

    These tools seem to be mostly about supporting engagements, although the need to customize them for specific uses may indicate that they’re also something of a solution in themselves.

  • Visual Studio (2)

    Any tools meant to directly manipulate source code must primarily be used to support solution efforts.

  • Notepad (2)

    This general purpose tools can be used for almost anything, and is thus appropriate for supporting both solution and engagement efforts.

  • OneNote (2)

    This is another very generalized Microsoft tool that facilitates sharing of information that can be used to support solution and engagement efforts.

  • SQL Server (2)

    SQL Server is almost always part of a solution.

The survey respondents had identified 38 other software tools at the time of this writing, none of which were mentioned more than once. They were mostly variations on the tools discussed in detail here, and included tools for diagramming, communication and coordination, and analysis. A small number of explicit programming tools were listed (e.g., Python, R) along with some automated testing tools that are usually the province of implementation and test practitioners. It’s nice to see BAs that have access to a wider range of skills and wear multiple hats.

Here’s the summary of how I broke things down. Please feel free to offer suggestions for how you might classify any of these differently.

Software Tool             Survey Count   Effort Type
Excel                     24             Both
Jira                      14             Engagement
Visio                     14             Solution
Word                      13             Both
Confluence                 8             Both
Outlook                    7             Engagement
SharePoint                 6             Engagement
Azure DevOps               5             Solution
Team Foundation Server     4             Engagement
PowerPoint                 4             Engagement
Email                      3             Engagement
Google Docs                3             Engagement
MS Dynamics                2             Engagement
Visual Studio              2             Solution
Notepad                    2             Both
OneNote                    2             Both
SQL Server                 2             Solution

Unified Theory of Business Analysis: Part Two

How The Fifty Business Analysis Techniques Apply to the Solution Effort and the Engagement Effort

Yesterday I kicked off this discussion by clarifying the difference between the solution effort and the engagement effort. Again, the solution effort involves the work to analyze, identify, and implement the solution. Note that the solution should include not just the delivered operational capability but also the means of operating and governing it going forward. The engagement effort involves the meta-work needed to manage the process of working through the different phases. An engagement can be a project of fixed scope or duration or it can be an ongoing program (consisting of serial or parallel projects) or maintenance effort (like fixing bugs using a Kanban methodology). It’s a subtle difference, as I described, but the discussion that follows should make the difference more clear.

I want to do so by describing how the fifty business analysis techniques defined in the BABOK (chapter ten in the third edition) relate to the solution and engagement efforts. I want to make it clear that as a mechanical and software engineer, I’m used to doing the analysis to identify the needs, do the discovery, collect the data, and define the requirements; managing the process from beginning to end; and then doing the implementation, test, and acceptance. A lot of BAs don’t have the experience of doing the actual implementations. To me it’s all part of the same process.

The definitions for each business analysis technique are taken verbatim from the BABOK. The descriptions of the uses to which they can be put within the context of my analytical framework are my own. I also refer to different kinds of model components as I described here.

  1. Acceptance and Evaluation Criteria: Acceptance criteria are used to define the requirements, outcomes, or conditions that must be met in order for a solution to be considered acceptable to key stakeholders. Evaluation criteria are the measures used to assess a set of requirements in order to choose between multiple solutions.

    An acceptance criterion could be the maximum processing time of an item going through a process. An evaluation criterion might be that the item must exit a process within 22 minutes at least 85 percent of the time. Similarly, the processing cost per item might need to be less than US $8.50. These criteria can apply to individual stations or a system as a whole (e.g., a document scanning operation could be represented by a single station while a group insurance underwriting evaluation process would be made up of a whole system of stations, queues, entries, and exits). (A quick sketch of checking these criteria appears at the end of this item.)

    Identifying the acceptance criteria is part of the solution effort. Knowing this has to be done is part of the engagement effort. Note that this will be the case for a lot of the techniques.
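
    As a small, hedged sketch of how the evaluation criteria above might be checked against observed data, consider the following Python fragment. The sample times and costs are invented purely for illustration, and I’ve interpreted the cost criterion as an average per item.

      # Observed end-to-end times (minutes) and costs (US$) for a sample of items.
      times = [18.0, 21.5, 25.0, 19.2, 20.7, 23.9, 16.4, 21.0, 30.1, 19.8]
      costs = [7.90, 8.10, 8.45, 7.75, 8.60, 8.05, 7.95, 8.20, 8.30, 7.80]

      # Evaluation criteria from the discussion above.
      fraction_within_22 = sum(t <= 22.0 for t in times) / len(times)
      average_cost = sum(costs) / len(costs)

      meets_time_criterion = fraction_within_22 >= 0.85
      meets_cost_criterion = average_cost < 8.50

      print(f"{fraction_within_22:.0%} of items exited within 22 minutes "
            f"-> {'pass' if meets_time_criterion else 'fail'}")
      print(f"average cost ${average_cost:.2f} "
            f"-> {'pass' if meets_cost_criterion else 'fail'}")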

  2. Backlog Management: The backlog is used to record, track, and prioritize remaining work items.

    The term backlog is usually used in a Scrum context today, but the idea of lists of To-Do items, either as contract requirements or deficiency punch lists, has been around approximately forever. I’d bet my left arm they had written punch lists when they were building the pyramids in Egypt.

    Managing the backlog is primarily part of the engagement effort. Doing the work of the individual backlog items is part of the solution effort.

  3. Balanced Scorecard: The balanced scorecard is used to manage performance in any business model, organizational structure, or business process.

    This is a technique only a consultant could love. In my opinion it’s just a made up piece of marketing twaddle. Like so many other techniques, it’s just a formalized way of making sure you do what you should be doing anyway.

    OK, taking off my cranky and cynical hat I’ll say that techniques like these exist because somebody found them useful. While it’s clear that managers and practitioners should be evaluating and improving their capabilities from the point of view of customers, internal processes, learning and growth (internal and external), and financial performance, it never hurts to be reminded to do so explicitly. And heck, my own framework isn’t much more than an organized reminder to make sure you do everything you need to do in an organized way.

    I would place this activity primarily in the context of the engagement effort, since it strikes me as a meta-evaluation of an organization that defines the context that determines the kinds of solutions that might be needed.

  4. Benchmarking and Market Analysis: Benchmarking and market analysis are conducted to improve organizational operations, increase customer satisfaction, and increase value to stakeholders.

    This technique is about comparing the performance of your individual stations or activities and overall system (composed of collections of individual stations, queues, entries, and exits) to those of other providers in the market, by criteria you identify.

    This seems to lean more in the direction of an engagement effort, with the details of changes made based on the findings being part of solution efforts.

  5. Brainstorming: Brainstorming is an excellent way to foster creative thinking about a problem. The aim of brainstorming is to produce numerous new ideas, and to derive from them themes for further analysis.

    This technique involves getting (generally) a group of people together to think about a process to see what options exist for adding, subtracting, rearranging, or modifying its components to improve the value it provides. This can be done when improving an existing product or process or when creating a new one from scratch.

    A “process” in this context could be an internal organizational operation, something a product is or does, or something customers might need to improve their lives. It’s a very general and flexible concept.

    Brainstorming is mostly about the solution effort, though engagement issues can sometimes be addressed in this way.

  6. Business Capability Analysis: Business capability analysis provides a framework for scoping and planning by generating a shared understanding of outcomes, identifying alignment with strategy, and providing a scope and prioritization filter.

    Processes made up of stations, queues, entries, and exits represent the capabilities an organization has. They can be analyzed singly and in combination to improve the value an organization can provide.

    The write-up in the BABOK approaches this activity a bit differently and in an interesting way. It’s worth a read. That said, looking at things the way I propose, in combination with the other ideas described in this write-up, will provide the same results.

    Since this technique is about the organization and its technical capabilities, it might be considered a mix of engagement and solution work, but overall it tends toward the engagement side of things.

  7. Business Cases: A business case provides a justification for a course of action based on the benefits to be realized by using the proposed solution, as compared to the cost, effort, and other considerations to acquire and live with that solution.

    A business case is an evaluation of the costs and benefits of doing things a different way, or of doing a new thing or not doing it at all. Business cases are evaluated in terms of financial performance, capability, and risk.

    Business cases can be a mix of engagement and solution effort.

  8. Business Model Canvas: A business model canvas describes how an enterprise creates, delivers, and captures value for and from its customers. It is comprised of nine building blocks that describe how an organization intends to deliver value: Key Partnerships, Key Activities, Key Resources, Value Proposition, Customer Relationships, Channels, Customer Segments, Cost Structure, and Revenue Streams.

    Again, this is a fairly specific format for doing the kinds of analyses we’re discussing over and over again. It is performed at the level of an organization on the one hand, but has to be described at the level of individual processes and capabilities on the other. Once you’re describing individual processes and capabilities, you’re talking about operations in terms of process maps and components, where the maps include the participants, information flows, financial flows, resource flows, and everything else.

    This technique also applies to both engagement and solution efforts.

  9. Business Rules Analysis: Business rules analysis is used to identify, express, validate, refine, and organize the rules that shape day-to-day business behavior and guide operational business decision making.

    Business rules are incorporated in stations and process maps in the form of decision criteria. Physical or informational items are not allowed to move forward in a process until specified criteria are met. Business rules can be applied singly or in combination in whatever way works for the organization and the process. If mechanisms for applying business rules can be standardized and automated, then so much the better. (A minimal sketch of such a rule gate appears at the end of this item.)

    Business rules are almost always created and evaluated at the level of the solution.
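
    Here is a minimal sketch of such a gate, assuming a made-up rule set for an underwriting-style process; the rule names and thresholds are hypothetical.

      # Each rule is a named predicate over a work item; all must pass before
      # the item is allowed to move forward to the next station.
      rules = {
          "application signed": lambda item: item["signed"],
          "all documents present": lambda item: item["documents_missing"] == 0,
          "risk score within limit": lambda item: item["risk_score"] <= 700,
      }

      def may_advance(item: dict) -> bool:
          failed = [name for name, rule in rules.items() if not rule(item)]
          if failed:
              print("held at station, failed rules:", ", ".join(failed))
          return not failed

      work_item = {"signed": True, "documents_missing": 1, "risk_score": 640}
      print(may_advance(work_item))   # False: one document still missing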

  10. Collaborative Games: Collaborative games encourage participants in an elicitation activity to collaborate in building a joint understanding of a problem or a solution.

    This has never been one of my go-to techniques, but I admire the people who can think up things that are applicable to a given real-world problem. I’ve seen these used pretty much exclusively as teaching and team-building exercises, and it’s difficult to imagine how this activity could be mapped directly onto a process map or its components.

    That said, there are ways to gamify almost anything, even if it’s just awarding points for doing certain things with prizes awarded at intervals to individuals and teams who rack up the most. When I was in the Army a colleague came up with a way to have us remember a sequence of actions a weapon system crew had to call out to track and fire a missile, reinforced by calling out items as we passed a volleyball among ourselves.

    If anything this technique belongs more in the realm of engagement efforts.

  11. Concept Modeling: A concept model is used to organize the business vocabulary needed to consistently and thoroughly communicate the knowledge of a domain.

    This is an integral part of domain knowledge acquisition, which I consider to be a core function of business analysts.

    The description in the BABOK talks mostly about defining the relevant nouns and verbs in a domain, free of “data or implementation biases,” which is good advice based on my experience. The nouns and verbs of a process or domain should be identified (and understood) during the discovery phase of an engagement and then, only when that is done, the data should be captured during an explicit data collection phase. It’s possible to do both things at once, but only if you have a very clear understanding of what you’re doing.

    This detailed work is almost always performed in the context of solution efforts.

  12. Data Dictionary: A data dictionary is used to standardize a definition of a data element and enable a common interpretation of data elements.

    This technique goes hand in hand with the concept modeling technique described directly above. Instead of describing domain concepts, though, it is intended to catalog detailed descriptions of data items and usages. Where domain nouns and verbs identify things (including ideas) and actions, data items are intended to characterize those things and actions. For example, a domain concept would be an isolated inspection process, which is a verb. The location or facility at which the inspection is performed would be a noun. The time it takes to perform the inspection is a datum (or collection of data, if a distribution of times is to be used in place of a single, fixed value) that characterizes the verb, while the number of items that can be inspected simultaneously is a datum (or collection of data if, say, that number changes throughout the day according to a schedule or in response to demand) that characterizes the noun.

    The general type of data (e.g., continuous vs. discrete, integer vs. real, logical vs. numeric vs. enumerated type) may also be recorded in the dictionary. Specific information may also be recorded based on how the data is used in terms of the implementation. These may include numeric ranges (minimums and maximums), valid enumerated values (e.g., colors, months, descriptive types or classes of items or events), and detailed storage types (short and long integers, floating point variables of different lengths, string lengths, data structures descriptions, and so on).

    A data dictionary is definitely associated with the solution effort.
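
    To tie this to something executable, here is a small sketch of one possible dictionary entry format and a check of a value against it. The structure is my own assumption, not a prescribed BABOK layout.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class DictionaryEntry:
          name: str
          description: str
          data_type: str                       # e.g., "float", "int", "str", "enum"
          minimum: Optional[float] = None
          maximum: Optional[float] = None
          allowed_values: Optional[tuple] = None

          def is_valid(self, value) -> bool:
              if self.allowed_values is not None and value not in self.allowed_values:
                  return False
              if self.minimum is not None and value < self.minimum:
                  return False
              if self.maximum is not None and value > self.maximum:
                  return False
              return True

      inspection_minutes = DictionaryEntry(
          name="inspection_minutes",
          description="time to perform one primary inspection",
          data_type="float",
          minimum=0.5,
          maximum=60.0,
      )
      print(inspection_minutes.is_valid(12.0))   # True
      print(inspection_minutes.is_valid(-3.0))   # False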

  13. Data Flow Diagrams: Data flow diagrams show where data comes from, which activities process the data, and if the output results are stored or utilized by another activity or external entity.

    Data flow diagrams can be process maps of their own or can be embedded in process maps that also show the movement and processing of physical entities. Different representations might be used to make it clear which is which if both physical and informational entities are described in the same map, to improve clarity and understanding.

    This technique is used in the context of the solution effort.

  14. Data Mining: Data mining is used to improve decision making by finding useful patterns and insights from data.

    People typically think of “Big Data” or “AI” or “machine learning” when they talk of data mining, but the concept applies to data sets of any size. Intelligent perusal and analysis of data can yield a lot of insights, even if performed manually by a skilled practitioner.

    Graphical methods are very useful in this regard. Patterns sometimes leap right out of different kinds of plots and graphs. The trick is to be creative in how those plots are designed and laid out, and also to have a strong understanding of the workings and interrelationships of a system and what the data mean. The BABOK mentions descriptive, diagnostic, and predictive methods, which can all use different analytical sub-techniques.

    Although this technique can be used in support of the engagement effort if it’s complex enough, this kind of detailed, high-volume methodology is almost always applied to the solution.

  15. Data Modeling: A data model describes the entities, classes or data objects relevant to a domain, the attributes that are used to describe them, and the relationships among them to provide a common set of semantics for analysis and implementation.

    I see this as having a lot of overlap with the data dictionary, described above. It aims to describe data conceptually (as I describe in the Conceptual Model phase of my analytic framework), logically (in terms of normalization, integrity, authority, and so on), and physically (as instantiated or represented in working system implementations).

    This technique is definitely used in solution efforts.

  16. Decision Analysis: Decision analysis formally assesses a problem and possible decisions in order to determine the value of alternate outcomes under conditions of uncertainty.

    The key point here is uncertainty. Deterministic decisions made on the basis of known values and rules are far easier to analyze and implement. Those can be captured as business rules as described above in a comparatively straightforward manner.

    Uncertainty may arise when aspects of the problem are not clearly defined or understood and when there is disagreement among the stakeholders. The write-up in the BABOK is worth reviewing, if you have the chance. There is obviously a wealth of additional material to be found elsewhere on this subject.

    Interestingly, this kind of uncertainty is largely distinguished from the type of uncertainty that arises from the systemic variability stemming from concatenations of individually variable events, as found in processes described by Monte Carlo models. That kind of uncertainty is manageable within certain bounds and the risks are identifiable.

    This technique is always part of work on the solution.
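
    For the systemic-variability flavor of uncertainty mentioned above, a toy Monte Carlo comparison of two options might look like the sketch below; the distributions, costs, and margins are invented purely for illustration.

      import random

      random.seed(1)

      def simulate_option(fixed_cost, mean_demand, sd_demand, margin, runs=10_000):
          """Estimate expected profit when demand is uncertain (normally distributed)."""
          total = 0.0
          for _ in range(runs):
              demand = max(0.0, random.gauss(mean_demand, sd_demand))
              total += demand * margin - fixed_cost
          return total / runs

      # Two hypothetical alternatives: automate (higher fixed cost, higher margin)
      # versus keep the manual process (lower fixed cost, lower margin).
      automate = simulate_option(fixed_cost=50_000, mean_demand=12_000, sd_demand=3_000, margin=6.0)
      manual   = simulate_option(fixed_cost=10_000, mean_demand=12_000, sd_demand=3_000, margin=2.5)

      print(f"expected profit, automate: {automate:,.0f}")
      print(f"expected profit, manual:   {manual:,.0f}")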

  17. Decision Modeling: Decision modeling shows how repeatable business decisions are made.

    This technique contrasts with Decision Analysis in that it is generally concerned with decisions that are made with greater certainty, where the factors and governing values are well understood, and where the implementation of the required calculations is straightforward. Note that the processes and calculations can be of any complexity; the differentiation is about the level of certainty or decidability.

    This technique is also always part of the solution effort.

  18. Document Analysis: Document analysis is used to elicit business analysis information, including contextual understanding and requirements, by examining available materials that describe either the business environment or existing organizational assets.

    Document analysis can aid in domain knowledge acquisition, discovery, and data collection as described and linked previously. Documents can help identify the elements and behaviors of a system (its nouns and verbs), the characteristics of a system (adjectives and adverbs modifying the nouns and verbs), and the entities processed by the system. If the entities to be processed by the system are themselves documents, then they drive the data that the system must process, transport, and act upon.

    In all cases this work is performed during the solution effort.

  19. Estimation: Estimation is used by business analysts and other stakeholders to forecast the cost and effort involved in pursuing a course of action.

    Estimation is a mix of science and art, and probably a black art at that. Any aspect of a process or engagement can be estimated, and the nature and accuracy of the estimate will be based on prior experience, entrepreneurial judgment, gathered information, and gut feel.

    The BABOK describes a number of sub-techniques for performing estimations. They are top-down, bottom-up, parametric estimation, rough order of magnitude (ROM), Delphi, and PERT. Other methods exist but these cover the majority of those actually used.

    As described in the BABOK this technique is intended to be used as part of the engagement effort.
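
    Since PERT is listed, here is the standard three-point calculation as a quick sketch; the optimistic, most likely, and pessimistic figures are made up.

      def pert_estimate(optimistic, most_likely, pessimistic):
          """Classic PERT beta-distribution approximation of expected duration."""
          expected = (optimistic + 4 * most_likely + pessimistic) / 6
          std_dev = (pessimistic - optimistic) / 6
          return expected, std_dev

      expected, std_dev = pert_estimate(optimistic=4, most_likely=7, pessimistic=16)
      print(f"expected effort: {expected:.1f} days, std dev: {std_dev:.1f} days")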

  20. Financial Analysis: Financial analysis is used to understand the financial aspects of an investment, a solution, or a solution approach.

    Fixed and operating costs must be balanced against revenue generated for every process or group of processes. Costs may be calculated by working bottom-up from individual components and entities while revenues come from completed (through the sale) work items. The time value of money is often considered in such analyses; I took a course on this as a freshman in engineering school. I’m glad I gained this understanding so early. Risk is often calculated in terms of money, as well, and it should be remembered that uncertainty is always a form of risk.

    I’ve always enjoyed analyzing systems from a financial point of view, because it provides the most honest insight into what factors need to be optimized. For example, you can make something run faster at the expense of taking longer to build it, requiring more expensive materials, making it more expensive to build and maintain, and making it more difficult to understand. You can optimize along any axis beyond the point at which it makes sense. Systems can be optimized along many different axes but the ultimate optimization is generally financial. This helps you figure out the best way of making trade-offs between speed, resource usage, time, and quality, or among the Iron Triangle concepts of scope, cost, and schedule.

    Detailed cost data is sometimes hidden from some analysts, for a number of reasons. A customer or department might not want other employees or consultants knowing how much individual workers get paid. That’s a good reason in a lot of cases. Companies hiding or blinding data from outside consultants for reasons of competitive secrecy are also reasonable. A bad reason is when departments within an organization refuse to share data in order to protect their “rice bowls.” In some cases you’ll build the capability of doing financial calculations into a system and test it with dummy data, and then release it to organizations authorized to work with live data in a secure way.

    This is always an interesting area to me because four generations of my family have worked in finance in one capacity or another. As an engineer and process analyst I’m actually kind of a black sheep!

    Financial analysis is a key part of both the solution and engagement efforts.
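
    As a small illustration of the time-value-of-money point, a net present value sketch for a hypothetical improvement project might look like this; the cash flows and discount rate are invented.

      def net_present_value(rate, cash_flows):
          """Discount a series of yearly cash flows (year 0 first) back to today."""
          return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

      # Year 0: up-front cost; years 1-4: expected savings from the new process.
      flows = [-120_000, 40_000, 45_000, 45_000, 45_000]
      npv = net_present_value(0.08, flows)
      print(f"NPV at 8%: {npv:,.0f}")   # positive means the investment clears the hurdle rate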

  21. Focus Groups: A focus group is a means to elicit ideas and opinions about a specific product, service, or opportunity in an interactive group environment. The participants, guided by a moderator, share their impressions, preferences, and needs.

    Focus groups can be used to generate ideas or evaluate reactions to certain ideas, products, opportunities, and risks. In terms of mapped processes the group can think about individual components, the overall effect of the process or system, or the state of an entity — especially if it’s a person and even more so if it’s a customer (see this article on service design) — as it moves through a process or system.

    This technique is primarily used in solution efforts.

  22. Functional Decomposition: Functional decomposition helps manage complexity and reduce uncertainty by breaking down processes, systems, functional areas, or deliverables into their simpler constituent parts and allowing each part to be analyzed independently.

    Representing a system as a process map is a really obvious form of functional decomposition, is it not? A few more things might be said, though.

    I like to start by breaking a process down into the smallest bits possible. From there I do two things.

    One is that I figure out whether I need all the detail I’ve identified, and if I find that I don’t, I know I can aggregate activities at a higher level of abstraction or omit them entirely if appropriate. For example, a primary inspection at a land border crossing can involve a wide range of possible sub-activities, and some of those might get analyzed in more detailed studies. From the point of view of modeling the facility, however, only the time taken to perform the combined activities may be important. (A small aggregation sketch appears at the end of this item.)

    The other thing I do is group similar activities and see if there are commonalities I can identify that will allow me to represent them with a single modeling or analytical component or technique. It might even make sense to represent a wide variety of similar activities with a standard component that can be flexibly configured. This is part of the basis for object-oriented design.

    A more general benefit of functional decomposition is simply breaking complex operations down until they are manageable, understandable, quantifiable, and controllable. The discussion in the BABOK covers a myriad of ways that things can be broken down and analyzed. The section is definitely worth a read and there are, of course, many more materials available on the subject.

    Functional decomposition is absolutely part of the solution effort.
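
    The aggregation idea above can be sketched very simply: if only the combined time of the sub-activities matters at the facility level, roll them up. The activity names and durations here are hypothetical.

      # Decomposed sub-activities for a primary inspection (minutes each).
      primary_inspection = {
          "greet and collect documents": 0.5,
          "query license plate": 0.7,
          "question traveler": 1.5,
          "make referral decision": 0.3,
      }

      # For the facility-level model, only the aggregate matters.
      aggregate_minutes = sum(primary_inspection.values())
      print(f"primary inspection, aggregated: {aggregate_minutes:.1f} minutes")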

  23. Glossary: A glossary defines key terms relevant to a business domain.

    Compiling a glossary of terms is often a helpful aid to communication. It ensures that all parties are using an agreed-upon terminology. It helps subject matter experts know that the analysts serving them have sufficient understanding of their work. Note that this goes in both directions. SMEs are assured that the analysts serving them are getting it right, but the analysts are (or should be) bringing their own skills and knowledge to the working partnership, so their terms of art may also be added to the glossary so the SMEs being served understand how the analysts are doing it.

    The glossary is chiefly part of the solution effort.

  24. Interface Analysis: Interface analysis is used to identify where, what, why, when, how, and for whom information is exchanged between solution components or across solution boundaries.

    This is a particular area of interest for me, so I’m going to break it down across all the concepts described in the BABOK.

    • User interfaces, including human users directly interacting with the solution within the organization: Users and user interfaces can be represented as stations or process blocks in any system. They can be represented as processes, inputs, outputs, and possibly even queues in a process map. In other cases they can be represented as entities which move through and are processed by the system. (Alternatively, they can be described as being served by the system or process.) The data items transmitted between these components and entities are themselves entities that move through and are potentially transformed by the system.
    • People external to the solution such as stakeholders or regulators: Information or physical materials exchanged with external actors (like customers) are typically represented as entities, while the actors themselves can be represented as entities or fixed process components.
    • Business processes: Process maps can represent activities at any level of aggregation and abstraction. A high-level business operation might be represented by a limited number of fixed components (stations, queues, entries, and exits) while each component (especially stations) could be broken down into its own process map (with its own station, queue, entry, and exit blocks). An example of this would be the routing of documents and groups of documents (we can call those files or folders) through an insurance underwriting process, where the high-level process map shows how things flow through various departments and where the detailed process maps for each department show how individual work items are routed to and processed by the many individuals in that department.
    • Data interfaces between systems: The word “system” here can refer to any of the other items on this list. In general a system can be defined as any logical grouping of functionality. In a nuclear power plant simulator, for example, the different CPU cores, shared memory system, hard disks, control panels and controls and displays and indicators mounted on them, thermo-hydraulic and electrical system model subroutines, standardized equipment handler subroutines, tape backup systems, and instructor control panels are all examples of different systems that have to communicate with each other. As a thermo-hydraulic modeler, a lot of the interface work I did involved negotiating and implementing interfaces between my fluid models and those written by other engineers. Each full-scope simulator would include on the order of fifty different fluid systems along with a handful of electrical and equipment systems. Similarly, the furnace control systems I wrote had to communicate with several external systems that ran on other types of computers with different operating systems and controlled other machines in the production line. Data interfaces had to be negotiated and implemented between all of them, too. The same is true of the Node.js microservices that provided the back-end functionality accessed by users through web browsers and Android and iOS apps.
    • Application Programming Interfaces (APIs): APIs can be prepackaged collections of code in the form of libraries or frameworks, while microservices provide similarly centralized functionality. The point is that the interfaces are published so programs interacting with them can do so in known ways without having to worry (in theory) about the details of what’s going on in the background.
    • Hardware devices: Communication with hardware devices isn’t much different than the communications described above. In fact, the interfaces described above often involve hardware communications. (See a brief rundown of the 7-layer OSI model here.)

    I’ve written a lot about inter-process communication in this series of blog posts.

    Data interfaces can be thought of at an abstract level and at a concrete level. Business analysts will often identify the data items that need to be collected, communicated, processed, stored, and so on, while software engineers and database analysts will design and implement the systems that actually carry out the specified actions. Both levels are part of the solution effort.
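
    At the abstract level a business analyst might only list the fields exchanged across an interface; here is a hedged sketch of what that could look like once typed, using a made-up status message rather than an actual interface from any project mentioned here.

      import json
      from dataclasses import dataclass, asdict

      # A published message format agreed between two systems; the field names
      # are illustrative only.
      @dataclass
      class StationStatusMessage:
          station_id: str
          state: str            # e.g., "idle", "busy", "down"
          queue_length: int
          timestamp: str        # ISO-8601 text keeps the wire format language-neutral

      msg = StationStatusMessage("INSP-03", "busy", 4, "2019-06-01T10:30:00Z")
      wire = json.dumps(asdict(msg))                    # what actually crosses the interface
      print(wire)
      print(StationStatusMessage(**json.loads(wire)))   # receiving side reconstructs it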

  25. Interviews: An interview is a systematic approach designed to elicit business analysis information from a person or group of people by talking to the interviewee(s), asking relevant questions, and documenting the responses. The interview can also be used for establishing relationships and building trust between business analysts and stakeholders in order to increase stakeholder involvement or build support for a proposed solution.

    Interviews involve talking to people. In this context the discussions can be about anything related to a process as it is mapped, in terms of its functional and non-functional characteristics. The discussions can be about individual components or entities, subsets of items, or the behavior of a system in its entirety.

    I’ve interviewed SMEs, executives, line workers, engineers, customers, and other stakeholders to learn what they do, what they need, and how their systems work. I’ve done this in many different industries in locations all over the world. The main thing I’ve learned is to be pleasant, patient, respectful, interested, and thorough. Being thorough also means writing down your findings for review by the interviewees, so they can confirm that you captured the information they provided correctly.

    This work is definitely part of the solution effort.

  26. Item Tracking: Item tracking is used to capture and assign responsibility for issues and stakeholder concerns that pose an impact to the solution.

    In the context described by the BABOK, the items discussed aren’t directly about the mapped components and entities of a system of interest. They are instead about the work items that need to be accomplished in order to meet the needs of the customers and stakeholders being served by the analyst. In that sense we’re talking about tracking all the meta-activities that have to get done to create or improve a mapped process.

    I really enjoy using a Requirements Traceability Matrix (RTM) to manage these meta-elements. If done the way I describe in my blog post (just linked, and here generally, and also throughout the BABOK), the work items and the mapped components and entities will all be accounted for and linked forward and backward through all phases of the engagement. Indeed, the entire purpose of my framework is to ensure the needs of the stakeholders are met by ensuring that all necessary components of a system or process are identified and instantiated or modified. This all has to happen in a way that provides the desired value.

    In the context discussed in the BABOK, this technique is used as part of the engagement effort.

  27. Lessons Learned: The purpose of the lessons learned process is to compile and document successes, opportunities for improvement, failures, and recommendations for improving the performance of future projects or project phases.

    Like the item tracking technique described above, this is also kind of a meta-technique that deals with the engagement process and not the operational system under analysis. The point here is to review the team’s activities to capture what was done well and what could be done better in the future. This step is often overlooked by organizations, and teams are created and disbanded without a lot of conscious review. It’s one of those things where you never have time to do it right but you always have time to do it over.

    It’s supposed to be a formal part of the Scrum process on a sprint and project basis, but it really should be done periodically in every engagement, and especially at the end (if there is an identifiable end). Seriously, take the time to do this.

    As stated, this technique is also used during the engagement process.

  28. Metrics and KPIs: Metrics and key performance indicators measure the performance of solutions, solution components, and other matters of interest to stakeholders.

    These are quantifiable measures of the performance of individual components, groups of components, and systems or processes as a whole. They’re also used to gauge the meta-progress of work being performed through an engagement.

    The BABOK describes an indicator as a numerical measure of something, while a metric is a comparison of an indicator to a desired value (or range of values). For example, an indicator is the number of widgets produced during a week, while a metric is whether or not that number is above a specific target, say, 15,000.

    Care should be taken to ensure that the indicators chosen are readily collectible, can be determined at a reasonable cost, are clear, are agreed-upon, and are reliable and believed.

    Metrics and KPIs are identified as part of the solution effort. Note that the ongoing operation and governance of the solution by the customer, possibly after it has been delivered, is all part of the solution the BA and his or her colleagues should be thinking about.
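
    The indicator/metric distinction from the BABOK reduces to something as small as this sketch (the production numbers are invented):

      # Indicator: the raw measurement. Metric: comparison to a target.
      widgets_produced_this_week = 14_250      # indicator
      weekly_target = 15_000                   # agreed-upon target

      meets_target = widgets_produced_this_week >= weekly_target   # metric
      shortfall = max(0, weekly_target - widgets_produced_this_week)
      print(f"met target: {meets_target}, shortfall: {shortfall}")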

  29. Mind Mapping: Mind mapping is used to articulate and capture thoughts, ideas, and information.

    I’ve never used an explicit mind-mapping tool, and I’ve only encountered one person who actively used one (as far as I know). That said, there are a lot of ways to accomplish similar ends without using an explicit software tool.

    This technique actually has a lot in common with the part of the functional decomposition technique that involves organizing identified concepts into logical groups. Brainstorming with Post-It notes is a manual form of mind-mapping, and the whole process of discovery and domain knowledge acquisition (as I describe them) is supposed to inspire this activity.

    For most of my life I’ve taken notes on unlined paper so I have the freedom to write and draw wherever I like to show relationships and conceptual groupings. Another way this happens is through being engaged with the material over time. I sometimes refer to this as wallowing in the material. Insights are gained and patterns emerge in the course of your normal work as long as you are mentally and emotionally present. These approaches might not work for everyone, but they’ve worked for me.

    This technique is primarily applied as part of the solution effort.

  30. Non-Functional Requirements Analysis: Non-functional requirements analysis examines the requirements for a solution that define how well the functional requirements must perform. It specifies criteria that can be used to judge the operation of a system rather than specific behaviors (which are referred to as the functional requirements).

    These define what a solution is rather than what it does. The mapped components of a process or system (the stations, queues, entries, exits, and paths, along with the entities processed) define the functional behavior of the process or system. The non-functional characteristics of a process or system describe the qualities it must have to provide the desired value for the customer.

    Non-functional requirements are usually expressed in the form of -ilities. Examples are reliability, modularity, flexibility, robustness, maintainability, scalability, usability, and so on. Non-functional requirements can also describe how a system or process is to be maintained and governed.

    The figure at the bottom of this article graphically describes my concept of a requirements traceability matrix. Most of the lines connecting items in each phase are continuous from the Intended Use phase through to the Final Acceptance phase. These cover the functional behaviors of the system or process. The non-functional requirements, by contrast, are shown as beginning in the Requirements phase. I show them this way because the qualities the process or system needs to have are often a function of what it needs to do, and thus they may not be known until other things about the solution are known. That said, uptime and performance guarantees are often parts of contracts and identified project goals from the outset. When I was working in the paper industry, for example, the reliability (see there? a non-functional requirement!) of the equipment was improving to the point that turnkey system suppliers were starting to guarantee 95% uptime where the previous standard had been 90%. I know that plant utilization levels in the nuclear industry have improved markedly over the years, as competing firms have continued to learn and improve.

    I include a few other lines at the bottom of the figure to show that the engagement meta-process itself might be managed according to requirements of its own. However, I consider non-functional requirements analysis to be part of the solution effort.
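
    Uptime guarantees like the ones mentioned translate directly into allowable downtime, which is a handy way to make a non-functional requirement testable; a quick sketch:

      HOURS_PER_YEAR = 365 * 24

      def allowed_downtime_hours(uptime_fraction):
          """Hours per year a system may be down and still meet its uptime guarantee."""
          return HOURS_PER_YEAR * (1 - uptime_fraction)

      for guarantee in (0.90, 0.95):
          print(f"{guarantee:.0%} uptime allows about "
                f"{allowed_downtime_hours(guarantee):,.0f} hours of downtime per year")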

  31. Observation: Observation is used to elicit information by viewing and understanding activities and their context. It is used as a basis for identifying needs and opportunities, understanding a business process, setting performance standards, evaluating solution performance, or supporting training and development.

    Observation is basically the process of watching something to see how it works. It can be used during discovery and data collection activities.

    Observations are made as part of the solution effort.

  32. Organizational Modeling: Organizational modeling is used to describe the roles, responsibilities, and reporting structures that exist within an organization and to align those structures with the organization’s goals.

    Organizational models are often represented in the form of classic org charts. However, the different parts and functions within an organization can be mapped and modeled as stations (and other model components) or entities. Furthermore, entities are sometimes resources that are called from a service pool to a requesting process station to perform various actions. They may be called in varying numbers and for varying time spans based on the activities and defined business rules.

    This activity is performed during the solution effort.

  33. Prioritization: Prioritization provides a framework for business analysts to facilitate stakeholder decisions and to understand the relative importance of business analysis information.

    This technique refers to prioritizing work items through time and different phases of an engagement. It does not refer to the order in which operations must be carried out by the system under analysis, according to business and other identified rules.

    The BABOK describes four different approaches to prioritization. They are grouping, ranking, time boxing/budgeting, and negotiation. These are just specific approaches to doing a single, general thing.

    Prioritization in this context is absolutely part of the engagement effort.

  34. Process Analysis: Process analysis assesses a process for its efficiency and effectiveness, as well as its ability to identify opportunities for change.

    In a sense this is the general term for everything business analysts do when looking at how things are supposed to get done. That is, everything having to do with the functional parts of a process involves process analysis. Examination of the non-functional aspects of an operation, what things are as opposed to what they do, is still undertaken in support of a process, somewhere.

    In terms of a mapped process this technique involves identifying the components in or needed for a process, how to add or remove or modify components to change how it operates, how to move from a current state to a future state, and the impacts of proposed or real changes made.

    Interestingly, simulation is considered to be a method of process analysis and not process modeling. This is because the model is considered to be the graphical representation itself, according to the BABOK, while simulation is a means of analyzing the process. I suppose this distinction might matter on one question on a certification exam, but in the end these methods all flow together.

    Although the engagement effort is itself a process, this technique is intended to apply to the solution effort.

  35. Process Modeling: Process modeling is a standardized graphical model used to show how work is carried out and is a foundation for process analysis.

    Process models are essentially the graphical maps I’ve been describing all along, as they are composed of stations, queues, entries, exits, connections or paths, and entities. Information, physical objects, and activities of all kinds can be represented in a process map or model.

    When they say a picture is worth a thousand words, this is the kind of thing they’re talking about. In my experience, nothing enhances understanding and agreement more than a visual process model.

    The BABOK goes into some detail about the various methods that exist for creating process maps, but in my mind they’re more alike than different.

    This technique is clearly to be applied as part of the solution effort.
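
    To make the component vocabulary concrete, here is a minimal, assumed representation of a process map as data; it is only a sketch of the idea, not a simulation engine, and the station names and times are hypothetical.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Station:
          name: str
          minutes_per_item: float

      @dataclass
      class ProcessMap:
          entries: List[str] = field(default_factory=list)
          queues: List[str] = field(default_factory=list)
          stations: List[Station] = field(default_factory=list)
          paths: List[tuple] = field(default_factory=list)   # (from, to) connections
          exits: List[str] = field(default_factory=list)

      underwriting = ProcessMap(
          entries=["application received"],
          queues=["scanning backlog", "underwriting inbox"],
          stations=[Station("document scanning", 2.0),
                    Station("data entry", 6.5),
                    Station("underwriting review", 25.0)],
          paths=[("application received", "document scanning"),
                 ("document scanning", "data entry"),
                 ("data entry", "underwriting review"),
                 ("underwriting review", "policy issued")],
          exits=["policy issued"],
      )

      # A first, queue-free approximation of end-to-end processing time per item.
      print(sum(s.minutes_per_item for s in underwriting.stations), "minutes")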

  36. Prototyping: Prototyping is used to elicit and validate stakeholder needs through an iterative process that creates a model or design of requirements. It is also used to optimize user experience, to evaluate design options, and as a basis for development of the final business solution.

    A prototype is a temporary or provisional instantiation of part or all of a proposed solution. Prototypes are created so they can be understood and interacted with. The idea of a prototype is extremely flexible. They can range from temporary models made with throwaway materials to working bits of software to full-scale, flyable aircraft.

    A simulation of something can also serve as a prototype, especially simulations of systems with major physical components.

    Prototyping is absolutely part of the solution effort.

  37. Reviews: Reviews are used to evaluate the content of a work product.

    Work products in my framework are created in every phase of an engagement, and thus reviews should be conducted during every phase of the engagement. This is done to ensure that all parties are in agreement and that all work items have been addressed before moving on to the next phase. The review in each phase is performed iteratively until the work for that phase is agreed to be complete. If items are found to have been missed in a phase, the participants return to the previous phase and iteratively complete work and review cycles until it is agreed that attention can be returned to the current or future phases.

    Note that multiple items can be in different phases through any engagement. While each individual item is addressed through one or more phases, it is always possible for multiple work items to be in process in different phases simultaneously.

    Reviews are meta-processes that are part of the engagement effort.

  38. Risk Analysis and Management: Risk analysis and management identifies areas of uncertainty that could negatively affect value, analyzes and evaluates those uncertainties, and develops and manages ways of dealing with the risks.

    The subject of risk can be addressed at the level of individual components in mapped systems, especially when provisions for failures are built into models, but it is more often handled at the level of the system or the engagement. The BABOK gives a good overview of risk analysis and management, and many other materials are available.

    Risks are evaluated based on their likelihood of occurrence and their expected impact. There are a few straightforward techniques for calculating and tracking those, mostly involving risk registers and risk impact scales. The classic approaches to risk are generally categorized as avoid (take steps to prevent negative events from happening), transfer (share the risk with third parties, with the classic example being through insurance), mitigate (act to reduce the chances a negative event will occur or reduce the impact if it does occur), accept (deal with negative events as they occur and find workarounds), and increase (find ways to take advantage of new opportunities).

    Risk has been an important part of two classes of simulation work I’ve done. The first involved dealing with adverse occurrences in nuclear power plants. A variety of system failures (e.g., leaks and equipment malfunctions) were built into the fluid, electrical, and hardware models in the operator training simulators we built. If such failures happened in a real plant (see Three Mile Island, where a better outcome would have resulted if the operators had just walked away and let the plant shut down on its own), there was a real possibility that the operators would not know how to diagnose the problem in real time and take the proper corrective action, especially since any particular adverse event was rare. Building full scope training simulators reduced the risk of operating actual plants by giving operators the experience of reacting to a wide variety of failures they could learn how to diagnose and mitigate in a safe environment. In terms of mapped processes, the failure mechanisms were built into the model components themselves (stations, paths or connections, entries, exits, and so on) and were controlled by training supervisors.

    The second way I worked with risk in the context of simulation was with Monte Carlo models. These involve running a simulation through many iterations in which many of the outcomes are determined randomly. The combinations and permutations of how those many random events concatenate and interact can result in a wide variety of outcomes at the whole-system level. The purpose of these simulations was to learn how often negative system-level outcomes would occur (based on potentially thousands of possible failures occurring at known rates) and to work out ways to mitigate their effects. A small code sketch of the idea appears at the end of this item.

    Risks are managed at both the solution and the engagement level, and risk is one of the most important considerations for any analyst or manager.
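
    Here is the minimal Monte Carlo sketch mentioned above. The component names, failure rates, impact values, and loss threshold are all made up for illustration and have nothing to do with any real plant or client system; the point is only the shape of the loop, which samples random failures many times and counts how often the combined impact crosses a threshold:

```python
# Minimal, hypothetical Monte Carlo sketch: estimate how often a bad
# system-level outcome occurs given independent component failure rates.
# All rates, impacts, and the loss threshold are invented for illustration.

import random

components = {
    "pump":       {"failure_rate": 0.02, "impact": 40},
    "valve":      {"failure_rate": 0.05, "impact": 10},
    "controller": {"failure_rate": 0.01, "impact": 80},
}

LOSS_THRESHOLD = 50        # total impact above this counts as a bad outcome
ITERATIONS = 100_000

random.seed(42)            # repeatable runs for this example
bad_outcomes = 0

for _ in range(ITERATIONS):
    total_impact = sum(
        spec["impact"]
        for spec in components.values()
        if random.random() < spec["failure_rate"]
    )
    if total_impact > LOSS_THRESHOLD:
        bad_outcomes += 1

print(f"Estimated probability of a bad outcome: {bad_outcomes / ITERATIONS:.4f}")
```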

  39. Roles and Permissions Matrix: A roles and permissions matrix is used to ensure coverage of activities by denoting responsibility, to identify roles, to discover missing roles, and to communicate results of a planned change.

    This technique is used to determine which actors are empowered to take which actions. In terms of system maps the actors will be represented as process blocks or possibly as resources able to interact with and trigger other system events. A tiny code sketch of such a matrix appears at the end of this item.

    In the context of the BABOK this analysis is part of the solution effort. It deals with how the customers will interact with the provided solution.
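
    A roles and permissions matrix translates almost directly into code. This is the minimal sketch mentioned above, with made-up roles and actions, just to show the shape of the lookup:

```python
# Minimal, hypothetical roles and permissions matrix.
# Roles and actions are invented for illustration.

permissions = {
    "analyst":  {"view_report", "create_draft"},
    "approver": {"view_report", "approve_change"},
    "admin":    {"view_report", "create_draft", "approve_change", "manage_users"},
}

def can(role, action):
    """Return True if the given role is permitted to take the given action."""
    return action in permissions.get(role, set())

print(can("analyst", "approve_change"))   # False
print(can("approver", "approve_change"))  # True
```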

  40. Root Cause Analysis: Root cause analysis is used to identify and evaluate the underlying causes of a problem.

    Root cause analysis is always served by knowing more about the system being investigated. The more you know about the components and interactions of a system the easier it will be to determine the root cause of any outcome, whether desired or undesired.

    Process or system maps greatly aid in the understanding of a system and its behavior and the behavior of its components. Fishbone diagrams and The Five Whys are important sub-techniques used in this kind of analysis.

    This technique is used as part of the solution effort.

  41. Scope Modeling: Scope models define the nature of one or more limits or boundaries and place elements inside or outside those boundaries.

    Process or system maps make very clear what is within the scope and outside the scope of any system. Note that systems can be defined in many ways and that many subsystems can be aggregated to form a larger system. An example of this from my own experience is the creation of fifty or more separate fluid, electrical, and equipment handler models to construct a full scope operator training simulator.

    Scope boundaries are usually determined by grouping components together in a logical way so their interactions have their greatest effect within a specified range of space, time, function, and actors (people). The scope of different systems also defines responsibility for outcomes related to them.

    An example of scope determined by space would be hydrological models where the flow of water is governed by the topography and composition of the landscape. An example of scope determined by time is the decision to include hardware and components that affect system behavior when a plant is operating to generate power, and not to include equipment like sample and cleanout ports that are only used when the plant is in a maintenance outage. An example of scope determined by function is the isolation of the logical components that filter reactor water to remove elements like boron and other particulates. An example of scope determined by actors is the assignment of permissions (or even requirements) to give approval for certain actions based on holding a certain credential, as in requiring sign-off by a civil engineer with a PE license or requiring a brain surgeon to have an M.D. and malpractice insurance.

    Analyzing the system’s components is an aspect of the solution effort, while determining what components and responsibilities are in and out of scope might be part of the engagement effort, since this may involve negotiation of what work does and does not get done.

  42. Sequence Diagrams: Sequence diagrams are used to model the logic of usage scenarios by showing the information passed between objects in the system through the execution of the scenario.

    Sequence diagrams are a specific kind of process map that organizes events in time. The specific methods for creating them are defined in the UML standard (I bought this book back in the day). The diagrams don’t necessarily show why things happen, but they do give insight into when they happen and in what order. They also show the messages (or objects) that are passed between the different packages of functionality.

    This detailed activity takes place in the context of the solution effort.

  43. Stakeholder List, Map, or Personas: Stakeholder lists, maps, and personas assist the business analyst in analyzing stakeholders and their characteristics. This analysis is important in ensuring that the business analyst identifies all possible sources of requirements and that the stakeholder is fully understood so decisions made regarding stakeholder engagement, collaboration, and communication are the best choices for the stakeholder and for the success of the initiative.

    Stakeholders are affected by work done to build or modify the process or system under investigation and by the work done through the phases of the engagement itself. They may be included as components or resources in process models. The degree to which they are affected by the current scope of work or are able to exert influence over the capabilities under analysis can be represented by onion diagrams or stakeholder matrices (of which the RACI Matrix is a classic example). A minimal code sketch of a RACI matrix appears at the end of this item.

    This technique is definitely part of the engagement effort.
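
    Since the RACI Matrix came up above, here is the minimal sketch of one in code. The activities, stakeholders, and assignments are invented for illustration; R, A, C, and I stand for Responsible, Accountable, Consulted, and Informed:

```python
# Minimal, hypothetical RACI matrix: activity -> {stakeholder: RACI letter}.
# Activities and assignments are invented for illustration.

raci = {
    "Gather requirements": {"BA": "R", "Sponsor": "A", "Dev lead": "C", "QA lead": "I"},
    "Approve design":      {"BA": "C", "Sponsor": "A", "Dev lead": "R", "QA lead": "I"},
}

def stakeholders_with(role_letter, activity):
    """Who holds the given RACI letter for the given activity?"""
    return [who for who, letter in raci[activity].items() if letter == role_letter]

print(stakeholders_with("R", "Gather requirements"))   # ['BA']
print(stakeholders_with("A", "Approve design"))        # ['Sponsor']
```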

  44. State Modeling: State modeling is used to describe and analyze the different possible states of an entity within a system, how that entity changes from one state to another, and what can happen to the entity when it is in each state.

    A state model describes the conditions or configurations an object or component can be in, the triggers or events that cause the state to change, and the allowable states that can be transitioned to from any other state. Diagrams can show these transitions independent of any other process map or timing diagram, or the concepts can be embedded or otherwise combined. In terms of the elements I’ve defined, stations, paths or connections, entities, resources, and even entries and exits can all change state if it makes sense in a given context. A small code sketch of a state model appears at the end of this item.

    This work is always part of the solution effort.
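
    Here is the small sketch of a state model mentioned above. The work-item states and allowed transitions are invented for illustration; the important part is that the transition table, not the caller, decides which state changes are legal:

```python
# Minimal, hypothetical state model: allowed transitions for a work item.
# The states and transitions are invented for illustration.

TRANSITIONS = {
    "new":         {"in_progress", "cancelled"},
    "in_progress": {"in_review", "cancelled"},
    "in_review":   {"in_progress", "done"},
    "done":        set(),
    "cancelled":   set(),
}

class WorkItem:
    def __init__(self):
        self.state = "new"

    def transition(self, new_state):
        """Change state only if the state model allows the transition."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Cannot go from {self.state} to {new_state}")
        self.state = new_state

item = WorkItem()
item.transition("in_progress")
item.transition("in_review")
item.transition("done")
print(item.state)   # done
```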

  45. Survey or Questionnaire: A survey or questionnaire is used to elicit business analysis information—including information about customers, products, work practices, and attitudes—from a group of people in a structured way and in a relatively short period of time.

    The information gathered can relate to any aspect of work related to the solution or to the engagement.

  46. SWOT Analysis: SWOT analysis is a simple yet effective tool used to evaluate an organization’s strengths, weaknesses, opportunities, and threats to both internal and external conditions.

    These efforts are generally undertaken at the level of the behavior or capability of entire systems, though they can also be about technologies that can improve all or part of an existing system or even the work of one or more phases of an engagement.

    This technique is therefore applicable to both the solution effort and the engagement effort.

  47. Use Cases and Scenarios: Use cases and scenarios describe how a person or system interacts with the solution being modeled to achieve a goal.

    These are used in the design, requirements, and implementation phases of an engagement to describe user actions that achieve certain ends. There are many ways to describe the qualities and capabilities a solution needs to have, and use cases are only one.

    While use cases are similar in many ways to user stories, they aren’t always exactly the same thing.

    These techniques are applied to describe detailed aspects of the solution.

  48. User Stories: A user story represents a small, concise statement of functionality or quality needed to deliver value to a specific stakeholder.

    Like use cases, user stories are an effective means for capturing and communicating information about individual features of a solution. Note that neither user stories nor use cases are to be confused with formal requirements.

    The technique is associated with the Agile paradigm in general and with the Scrum methodology in particular. User stories are typically written in the following form, or something close to it: “As a user role, I would like to take some action so I can achieve some effect.” A tiny code sketch of this template appears at the end of this item.

    While this is a useful technique, I have the (possibly mistaken) impression that it is relied upon and otherwise required far in excess of its actual usefulness. For example, I would disagree with anyone who asserts that every backlog item must be written in the form of a user story.

    Like the use case technique from directly above, this technique is also applied as a detailed part of the solution effort.
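
    The user story template itself is simple enough to capture in a few lines of code, as promised above. This is only a hypothetical illustration of the “As a…, I would like…, so I can…” form, not a claim about how any particular tool stores its backlog:

```python
# Minimal, hypothetical representation of the user story template.
# The example role, action, and benefit are invented for illustration.

from dataclasses import dataclass

@dataclass
class UserStory:
    role: str
    action: str
    benefit: str

    def __str__(self):
        return f"As a {self.role}, I would like to {self.action} so I can {self.benefit}."

story = UserStory(
    role="claims adjuster",
    action="filter open claims by age",
    benefit="work the oldest ones first",
)
print(story)
```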

  49. Vendor Assessment: A vendor assessment assesses the ability of a vendor to meet commitments regarding the delivery and the consistent provision of a product or service.

    Vendors may be evaluated for the ability to perform any work or deliver any value related to the solution or to the engagement.

  50. Workshops: Workshops bring stakeholders together in order to collaborate on achieving a predefined goal.

    These collaborations with groups of stakeholders can be used to gather information about or review any aspect of work on the solution effort or any phase of the engagement.

Here is a summary listing of the fifty BA techniques and the type of effort they most apply to.

BA Technique: Effort Type
Acceptance and Evaluation Criteria: Solution
Backlog Management: Engagement
Balanced Scorecard: Engagement
Benchmarking and Market Analysis: Engagement
Brainstorming: Solution
Business Capability Analysis: Engagement
Business Cases: Both
Business Model Canvas: Both
Business Rules Analysis: Solution
Collaborative Games: Engagement
Concept Modelling: Solution
Data Dictionary: Solution
Data Flow Diagrams: Solution
Data Mining: Solution
Data Modelling: Solution
Decision Analysis: Solution
Decision Modelling: Solution
Document Analysis: Solution
Estimation: Engagement
Financial Analysis: Both
Focus Groups: Solution
Functional Decomposition: Solution
Glossary: Solution
Interface Analysis: Solution
Interviews: Solution
Item Tracking: Engagement
Lessons Learned: Engagement
Metrics and KPIs: Solution
Mind Mapping: Solution
Non-Functional Requirements Analysis: Solution
Observation: Solution
Organizational Modelling: Solution
Prioritization: Engagement
Process Analysis: Solution
Process Modelling: Solution
Prototyping: Solution
Reviews: Engagement
Risk Analysis and Management: Both
Roles and Permissions Matrix: Solution
Root Cause Analysis: Solution
Scope Modelling: Both
Sequence Diagrams: Solution
Stakeholder List, Map, or Personas: Engagement
State Modelling: Solution
Survey or Questionnaire: Both
SWOT Analysis: Both
Use Cases and Scenarios: Solution
User Stories: Solution
Vendor Assessment: Both
Workshops: Both

Here are the counts for each type of effort:

Effort Type: Number of Occurrences
Solution: 30
Engagement: 11
Both: 9