Iteration and Feedback: The Key to Making Projects and Teams Work

The Project Management Body of Knowledge (PMBOK) teaches that most project failures are caused by poor team dynamics. That may be true, but it’s just a specific case of a larger problem. The more foundational idea is that trouble arises when people don’t communicate sufficiently well or often, and that is true whether the communications are collegial or contentious. Everyone can be the best of friends, but if they’re sitting in their own rooms doing their own things they aren’t going to accomplish much as a team.

The idea underlying Agile and its specific techniques (Scrum, Kanban, SAFe, and their variants and hybrids), as well as related frameworks for managing complexity, is that there has to be an organized way for people to talk to each other so they can reach a common and correct understanding of what is, what should be, and how to get from one to the other. My framework for doing business analysis works through six major phases in ways that are adaptable to a given effort, organization, and management environment. All the elements from each phase are tracked and kept in sync by using a Requirements Traceability Matrix.

The diagram above was created to emphasize the need to get people together to talk to each other, figure out what they need, what’s possible, and how they can work together. In most situations they don’t do this just once; they do it iteratively, incorporating feedback and making corrections until everyone is in agreement (subject to real limitations of time, resources, and availability) at the end of each phase. The circles in the diagram represent the iterative cycles of planning, performing, review, feedback, and correction for each phase. The links forward represent moving to the next phase when the iterations for the current phase have succeeded. The links backward represent recognition that something was missed that needs to be added in a previous phase. Adding the missing elements involves iterations in the previous phase and then a return to the ongoing iterations in the current phase. (Ha ha, if done well, this all flows more naturally than it sounds in writing here.)

Since I’ve served in so many different roles in my career, often as a vendor or consultant providing specialized, highly technical products and services to larger organizations, I’ve had the chance to meet, work with, learn from, share with, and help a lot of different kinds of people in a lot of different environments. They all have a part to play, and their needs, contributions, thoughts, and opinions are all valuable.

Get them all talking to each other. My way isn’t the only way (though it’s a good one), but make sure people are talking in some way.


Cross-Browser Compatibility: My Website Animation, Part 3

Following up on the issue I discussed previously here and here, I finally bit the bullet and straightened out the problems caused in the landing page animation by a certain behavior of Android web browsers. After periodically digging around for a solution that addressed the problem of font boosting directly, and finding a lot of expressions of frustration about it wherever it’s discussed, I finally decided to just brute force it.

That means testing for the browser type and then, if it’s Android, adjusting the rendering code a bit to get the desired result. It turned out to be pretty straightforward.

Testing for the browser type is really simple, especially if you only have to test for one.
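The check itself can be as small as a user-agent sniff. Here’s a minimal sketch of the kind of test I mean, not the exact code from my page:

```javascript
// Minimal user-agent check for Android browsers (a sketch, not my exact code).
// navigator.userAgent is a standard browser property, so this runs anywhere.
var isAndroid = /Android/i.test(navigator.userAgent);
```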

After that I just needed to write different versions of the initiation calls for the moving text elements of the animation sequence. The graphic elements didn’t need to be changed in this way. I had been a bit inconsistent when I defined the font sizes in the original animation, so I had to do a bit of trial and error to get the Android-specific items the right size, but all 35 of them got done. I think it only took a couple of hours one evening while I watched horror movies hosted by Joe Bob Briggs. Who says programming can’t be fun?
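Here’s a rough sketch of what that branching looked like in spirit. The helper name and the pixel values are made up for illustration; the real animation code and sizes were different.

```javascript
// Hypothetical example of the Android-specific branching: same animation
// step, slightly different font size so the boosted rendering matches the
// intended layout. addTextItem() and the pixel values are stand-ins.
function initTitleText(scene) {
  if (isAndroid) {
    scene.addTextItem("title", { x: 40, y: 60, fontSizePx: 22 }); // tuned by trial and error
  } else {
    scene.addTextItem("title", { x: 40, y: 60, fontSizePx: 26 }); // original size
  }
}
```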

As browser compatibility issues go, this one was simple. If there isn’t a clean solution to whatever issue is causing you problems, then do what you need to do. It may feel like a hack but we don’t always get perfection. A little elbow grease on your part makes life a lot better and more consistent for your users, and usually the cost in terms of complexity and bandwidth is minimal. Your users come first, especially as the number of users grows.

I’m sure there are far weirder incompatibilities out there. Which ones have you seen?


Documentation

I’ve written a lot of different kinds of documentation in my career, and I list and describe the types here. Some types of documentation are formal, as in manuals written for various phases of a project, but others are more ad hoc, like punch lists used in place of formal tracking tools.

To be thorough I’m going to include some document types that may not usually be thought of as documentation per se, though they are still relevant types of documents, and they can include a lot of technical material.

Individual documents can (and should!) be generated for most of the phases of my business analysis framework, depending on the complexity and scope of the effort.

Sometimes the documentation is in the form of a word processing file and sometimes it is spread over many locations in a content management system like Confluence, WordPress, or some kind of wiki. In Agile projects a lot of the documentation, at least during the build phase of a project, will be hosted piecemeal in a distributed software system like Jira or Rally. There may be a lot of overlap, but every situation is unique, so let’s just jump on in.

  • User Manual: This one is kind of obvious. However, a system can have many different types of users, so individual manuals (or at least sections) can be written for users in every end user role as well as installers, maintainers, user managers, and so on. I often tried to include a story walkthrough in my user manuals. Rather than just describing what every field and control on every screen of every program did and meant, and how every input and output artifact was to be processed, conditioned, and interpreted, I provided a narrative describing how a user would exercise most of the features in a logical order.
  • Sales Proposal / Project Bid: These might not seem like normal kinds of documentation but these packages can be large, complex, and highly technical. I’ve prepared process and instrumentation drawings, heat and material balances, test descriptions, process guides, personnel information, budgets, company teaming agreements, schedules, and other materials for inclusion in sales and bid packages.
  • Technical Manual: These documents can include details on almost anything, but they are usually meant to contain information of a highly technical nature. Descriptions of collected data, assumptions, equations, references, techniques, code, and other specifications are just some examples of what can be included in documents with this name. You could buy the Technical Reference Manual for the original IBM PC, which contained some detailed specs but was mostly famous for providing the complete BIOS listing in assembly language. I’ve written technical manuals that described how all the data was collected or otherwise derived, how it was conditioned and processed, and then how it was used in complex systems. I sometimes managed and serially updated these documents over a period of years. In one case I built a system to automate the calculation of initial values, internal coefficients, listing of source materials, and output of governing equations (with equation numbers), variable listings, and value tables.
  • Procedural Manual: This document describes how activities that support an effort are to be carried out. For example, I wrote a Data Collection Manual to describe how my company’s data collectors were supposed to collect, process, and incorporate data from discovery and data collection trips to dozens of customer sites, as well as what and how much data needed to be collected. This kind of manual could then be delivered to the customer so they could use it to maintain and update the delivered system on their own. This was for a program whose purpose was to build a tool to generate models for every land border crossing with Canada or Mexico, and then collect data for the individual models.
  • Policy Guide / Business Rules: These lists of directives tend to describe organizational process requirements in terms of how certain decisions are made, the time periods allowed to complete certain activities, duties certain people must perform, and things that cannot be done. Policies may be imposed by outside agencies in the form of customer requirements or government laws and regulations.
  • Data Dictionary: These documents can describe all relevant data items, their provenance, their methods of collection, their meaning, their acceptable (ranges of) values, and so on. I have always included this information in other documents.
  • Glossary of Terms: This document describes the specialized terms of art for a project, system, practice, organization, or profession. I think I may have contributed to a standalone document of this type for one project, but I have often included this kind of information in other documents.
  • Intended Use Statement: I originally saw this described in a Navy specification for performing full-scope VV&A on simulations. Its purpose was to describe how a simulation was to be used to support a particular type of planning and scheduling analysis. I’ve appropriated this language to describe the purpose of a project involving a system that is to be automated, constructed, simulated, reengineered, upgraded, or otherwise improved.
  • Conceptual Model: This document describes what’s happening in a system or process that is going to be automated, constructed, simulated, reengineered, upgraded, or otherwise improved. This type of document is often referred to as describing the As-Is state of a system.
  • Requirements Document: This document can describe several things. One is what the system needs to be able to do at the end of the project. These items are referred to as functional requirements. Another thing that can be described is the qualities a system needs to have, usually in the form of -ilities (e.g., reliability, flexibility, maintainability, understandability, usability, configurability, modularity, etc.), and those are referred to as non-functional requirements. Procedural requirements, which are requirements for how the project, effort, or engagement should be carried out, are probably best considered a subclass of non-functional requirements. I’ve written and managed requirements in multiple contexts and dealt with requirements written by others in multiple contexts as well. This type of document is often referred to as describing the To-Be state of a system. In particular I think of requirements as an abstract description of the To-Be state.
  • Design Document: This document may describe the components, layout, configuration, operations, tools, interfaces, and methods that will comprise the solution. It can define what the solution will be and how it is to be implemented. There are as many ways to create design documents as there are projects to be completed. The details don’t always matter if everyone understands what is being proposed. There should be a balance between detail and flexibility depending on the type, scale, and complexity of the effort. A manufacturing line including specific equipment meant to provide a specific, contractually defined level of production should be specified up front and should probably be implemented using a Waterfall methodology. A software system whose details must be partially defined on the fly should likely be specified piecemeal using an Agile strategy. I’ve produced many items that were intended to serve as design documents and they were presented at varying levels of abstraction. A lot of communication in this area is taken for granted. Be sure to review and seek approval from the customer before proceeding. I think of this type of document as representing a more concrete description of the To-Be state of a system.
  • Implementation Document: This item could be a plan for how the implementation is to be accomplished (this was also discussed as possibly being part of the design document). This could include instructions for ongoing governance (e.g., change control procedures, communication and permissions, and so on). It could also be a record of how the implementation has been carried out. It can be in the form of an updatable text document or in the form of a data system of some kind, along the lines of Jira, Rally, or VersionOne.
  • Test Document: This item can describe test procedures and also progress attained, in which case it can also function as a checklist. I’ve seen some pretty comprehensive test plans in my career. A good formal test plan should exercise and verify every possible function and capability of a system. All appropriate artifacts and operations should be validated as well, but that often involves expert judgment and that can be difficult or impossible to automate.
  • Accreditation Documents: These documents could include plans for how accreditation is to be carried out (plan) and how it was carried out and the resulting recommendation (report). These artifacts can bookend similar documents describing how V&V is to be and was completed. One possible description of how this could work is here.
  • Requirements Traceability Matrix: This artifact serves as a unified checklist that crosslinks every item in every project phase (intended use, conceptual model, requirements, design, implementation, and test and acceptance in my framework) both forward to the next phase and back to the previous phase. Every element should be linked in both directions (except for the initial and final phases). If all items in all phases are accepted as complete by the customer then work may be regarded as finished for the entire project’s identified and agreed-upon intended uses and requirements. I’ve only used this formally on one large, very formal, two-year project where my company was part of a team acting as an independent agent for the V&V of a deterministic simulation tool the Navy was adopting to manage large fleets of its aircraft. Since I learned about this and earned my CBAP certification I’ve become a huge fan and advocate its use in every effort of sufficient scale to warrant it, and will use it informally on smaller efforts. (A small sketch of what an RTM entry might track appears after this list.)
  • Statement of Findings: This document may be generated to relay the findings of any kind of investigation one may conduct. This is intended to apply more to troubleshooting or forensic investigations than normal discovery efforts. I’ve prepared a few such reports for different situations over the years.
  • Status Report: These can be issued at various intervals and include varying amounts of information. Their contents may or may not be formally defined. They are usually targeted to defined distribution lists (which sometimes consist of just one manager or customer representative). I’ve written a ton of periodic and ad hoc status reports over the years. Automated tracking systems of various types can generate custom-formatted periodic and ad hoc reports of activity as well.
  • Field Report: These documents describe discoveries and accomplishments at remote locations. In my experience these have almost always been customer sites. There can be a large overlap between these and other report types in this list.
  • Recommendations: These items come in many forms and can be produced as the result of a formal investigation or audit (I wrote a couple of these for customers in the paper industry as a young process engineer) or as the result of an informal observation (I’ve proposed numerous internal process improvements to companies I’ve worked for, with this being one recent example).
  • Punch List: These are usually informal to-do lists of tasks to be accomplished. They are usually generated and worked off near the end of projects where formal management techniques and tracking tools aren’t being used. They can be for strictly internal use or reviewed at intervals with the customer. I saw a lot of these when I visited turnkey pulping lines my company was installing for customers in the paper industry and I reviewed them in morning meetings with some customers in the steel industry, for whom I was building and installing supervisory furnace control systems.
  • Specifications: These are often standard descriptions of individual pieces of physical equipment, but they can also describe systems of physical components or software.
  • Request for Proposal / Request for Quote: Organizations in some fields (especially government and heavy industry) will advertise that they need to have specific work performed or items provided, and potential vendors will respond with proposals describing how they will provide the requested goods and services. The issuing organizations may provide guidelines for how the response package must be formatted, and these specifications can be extremely long and complex. I’ve contributed artifacts to many bid packages in private industry and have worked on some insanely complex and formal packages for the federal government and the City of New York.
  • User Stories: User stories are one type of artifact that can be created to define requirements in Agile and Scrum projects. They can be written from the point of view of a human user, a piece of hardware, an automated process, or an external system. They often (but not always) take the form, “As a (type of user), I want to (perform some action), so I can (achieve some outcome).” On some projects user stories are the main or even only way that requirements are specified and tracked.
  • Contracts: Contracts can describe what is to be delivered and also the process by which the work is to be completed. It is generally appropriate to use a Waterfall methodology in cases where final deliverables are described up front in great detail and Agile methodologies where final deliverables are specified in a more open-ended way. Waterfall and Agile techniques need not be viewed as totally different and opposed. I feel they should be seen as usable on a hybrid, continuum basis, as I describe here. I’ve had to understand, manage to, and fulfill the terms of customer contracts of varying complexity for most of my career. I’ve even helped to draft sections of contracts related to my role and areas of expertise.
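Since the Requirements Traceability Matrix ties so many of these documents together, here’s a minimal sketch of what a single RTM entry might look like if you kept it as data. The phase names follow my framework; the field names and the array-of-records layout are assumptions for illustration, not a prescribed format.

```javascript
// One row of a hypothetical Requirements Traceability Matrix: each item
// links backward to the phase element it came from and forward to the
// elements that realize it. Field names are illustrative only.
const rtmEntries = [
  {
    id: "REQ-014",
    phase: "requirements",
    description: "Operator can review queued transactions before release",
    tracesBackTo: ["CM-007"],                // conceptual model element(s)
    tracesForwardTo: ["DES-022", "DES-023"], // design element(s)
    status: "accepted"                       // e.g., draft, in-review, accepted
  }
];

// A completeness check is then just a scan for items missing links.
const untraced = rtmEntries.filter(
  e => e.tracesBackTo.length === 0 || e.tracesForwardTo.length === 0
);
```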

So that’s what I’ve done over the years. What kinds of documentation have you written that I didn’t include?


Understanding and Monitoring Microservices Across Five Levels

Did I know anything about microservices (or DevOps, or…) when I landed at Universal recently? No, I did not. Did that stop me from figuring it out in short order? Nope. Did that stop me from being able to reverse-engineer their code stacks in multiple languages from day one? Of course not. Did that stop me from seeing things nobody else in the entire organization saw and designing a solution that would have greatly streamlined their operations and saved them thousands of hours of effort? Not a chance. Could I see what mistakes they had made in the original management and construction of the system and would I have done it differently? You betcha.

While there are definitely specific elements of tradecraft that have to be learned to make optimal use of microservices, and there would always be more for me to learn, the basics aren’t any different than what I’ve already been doing for decades. It didn’t take long before the combination of learning how their system was laid out, seeing the effort it took to round up the information needed to understand problems as they came up in the working system, and seeing a clever monitoring utility someone had created made it obvious they needed a capability that would help them monitor and understand the status of their entire system. Like I said, it didn’t require a lot of work; it was something I could just “see.” I suggested an expanded version of the monitoring tool they’d created, one that would let anyone see how every part of the system worked at every level.

Now don’t get me wrong, there were a lot of smart and capable and dedicated people there, and they understood a lot of the architectural considerations. So what I “saw” wasn’t totally new in detail, but it was certainly new in terms of its scope, in that it would unify and leverage a lot of already existing tools and techniques. As I dug into what made the system tick I saw that concepts and capabilities broke down into five different levels, but first some background.

Each column in the figure above represents a single microservices environment. The rightmost column shows a production environment, the live system of record that actually supports business operations. It represents the internal microservices and the external systems that are part of the same architectural ecosystem. The external items might be provided by a third party as standard or customized capabilities. They can be monitored and interacted with, but the degree to which they can be controlled and modified might be limited. The external systems may not be present in every environment. Environments may share access to external systems for testing, or may not have access at all, in which case interactions have to be stubbed out or otherwise simulated or handled.

The other columns represent other environments that are likely to be present in a web-based system. (Note that microservices can use any communication protocol, not just the HTTP used by web-facing systems.) The other environments are part of a DevOps pipeline used to develop and test new and modified capabilities. Such pipelines may have many more steps than what I’ve shown here, but new code is ideally entered into the development environment and then advanced rightward from environment to environment as it passes different kinds of tests and verifications. There may be an environment dedicated to entering and testing small hotfixes that can be advanced directly to the production environment with careful governance.

The basic structure of a microservice is shown above. I’ve written about monitoring processes here, and this especially makes sense in a complicated microservices environment. I’ve also written about determining what information needs to be represented and what calculations need to be performed before working out detailed technical instantiations here. A microservice can be thought of as a set of software capabilities that perform defined and logically related functions, usually in a distributed system architecture, in a way that ideally doesn’t require chaining across to other microservices (like many design goals or rules of thumb, this one need not be absolute; just know what you’re trying to do and why; or, you’ve got to know the rules before you can break them!).

I’ve listed the items for each level below in the raw form in which I first captured them. I never got to work with anyone to review and implement this stuff in detail (something came up every time we scheduled a meeting), so I’ve found inconsistencies as I worked on this article. I cleaned some things up, eliminated some duplications, and made the lists a bit more organized and rational in the more detailed write-up that follows.

Level 1 – Most Concrete
Hardware / Local Environment

  • number of machines (if cluster)
  • machine name(s)
  • cores / memory / disk used/avail
  • OS / version
  • application version (Node, SQL, document store)
  • running?
  • POC / responsible party
  • functions/endpoints on each machine/cluster

Level 2

Gateway Management (Network / Exposure / Permissioning)

  • IP address
  • port (different ports per service or endpoint on same machine?)
  • URL (per function/endpoint)
  • certificate
  • inside or outside of firewall
  • auth method
  • credentials
  • available?
  • POC / responsible party
  • logging level set
  • permissions by user role

Level 3

QA / Testing

  • tests being run in a given environment
  • dependencies for each path under investigation (microservices, external systems, mocks/stubs)
  • schedule of current / future activities
  • QA POC / responsible party
  • Test Incident POC / responsible party
  • release train status
  • CI – linting, automatically on checkin
  • CI – unit test
  • CI – code coverage, 20% to start (or even 1%), increase w/each build
  • CI – SonarQube Analysis
  • CI – SonarQube Quality Gate
  • CI – VeraCode scan
  • CI – compile/build
  • CI – deploy to Hockey App for Mobile Apps / Deploy Windows
  • CD – build->from CI to CD, publish to repository server (CodeStation)
  • CD – pull from CodeStation -> change variables in Udeploy file for that environment
  • CD – deploy to target server for app and environment
  • CD – deploy configured data connection, automatically pulled from github
  • CD – automatic smoke test
  • CD – automatic regression tests
  • CD – send deploy data to Kibana (deployment events only)
  • CD – post status to slack in DevOps channel
  • CD – roll back if not successful
  • performance test CD Ubuild, Udeploy (in own perf env?)

Level 4

Code / Logic / Functionality

  • code complete
  • compiled
  • passes tests (unit, others?)
  • logging logic / depth
  • Udeploy configured? (how much does this overlap with other items in other areas?)
  • test data available (also in QA area or network/environment area?)
  • messaging / interface contracts maintained
  • code decomposed so all events queue-able, storable, retryable so all eventually report individual statuses without loss
  • invocation code correct
  • version / branch in git
  • POC / responsible party
  • function / microservice architect / POC
  • endpoints / message formats / Swaggers
  • timing and timeout information
  • UI rules / 508 compliance
  • calls / is-called-by –> dependencies

Level 5 – Most Abstract

Documentation / Support / Meta-

  • management permissions / approvals
  • documentation requirements
  • code standards reviews / requirements
  • defect logged / updated / assigned
  • user story created / assigned
  • POC / responsible party
  • routing of to-do items (business process automation)
  • documentation, institutional memory
  • links to related defects
  • links to discussion / interactive / Slack pages
  • introduction and training information (Help)
  • date & description history of all mods (even config stuff)
  • business case / requirement for change

That’s my stream-of-consciousness take.

Before we go into detail let’s talk about what a monitoring utility might look like. The default screen of the standalone web app that served as my inspiration for this idea showed a basic running/not-running status for every service in every environment and looked something like this. It was straight HTML 5 with some JavaScript and Angular and was responsive to changes in the screen size. There were a couple of different views you could choose to see some additional status information for every service in an environment or for a single service across all environments. It gathered most of its status information by repeatedly sending a status request to the gateway of each service in each environment. Two problems with this approach were that the original utility pinged the services repeatedly without much or any delay, and that multiple people could (and did) run the app simultaneously, which hammered the network and the services harder than was desirable. These issues could be addressed by slowing down the scan rate and hosting the monitoring utility on a single website that users could load to see the results of the scans. That would still require a certain volume of refreshes across the network (and there are ways to minimize even those), but queries of the actual service endpoints would absolutely be minimized.
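Here’s a minimal sketch of that centralized polling approach as a small Node script. The service list, the status path, and the scan interval are assumptions for illustration; the real tool would read its inventory from the kind of repository discussed below.

```javascript
// Minimal sketch of a centralized status poller (Node 18+, no extra packages).
// One server polls each service on a slow interval and caches the results,
// so individual users' browsers never hammer the service gateways directly.
// The service list, /status path, and interval are illustrative assumptions.
const services = [
  { env: "dev",  name: "orders", url: "http://dev-orders.internal/status" },
  { env: "prod", name: "orders", url: "http://prod-orders.internal/status" },
];

const statusCache = {};          // latest result per service, served to the UI
const SCAN_INTERVAL_MS = 60000;  // one scan per minute is plenty for a dashboard

async function scanOnce() {
  for (const svc of services) {
    try {
      const res = await fetch(svc.url, { signal: AbortSignal.timeout(5000) });
      statusCache[`${svc.env}/${svc.name}`] = { running: res.ok, checkedAt: Date.now() };
    } catch {
      statusCache[`${svc.env}/${svc.name}`] = { running: false, checkedAt: Date.now() };
    }
  }
}

setInterval(scanOnce, SCAN_INTERVAL_MS);
scanOnce();  // a web front end would then render statusCache as the green/red grid
```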

It was real simple. Green text meant the service was running and red text meant it wasn’t. Additional formatting or symbology could be added for users who are colorblind.

Now let’s break this down. Starting from the most concrete layer of information…

Level 1 – Most Concrete
Hardware / Local Environment

This information describes the actual hardware a given piece of functionality is hosted on. Of course, this only matters if you are in control of the hardware. If you’re hosting the functionality on someone else’s cloud service then there may be other things to monitor, but it won’t be details you can see. Another consideration is whether a given collection of functionality requires more resources than a single host machine has, in which case the hosting has to be shared across multiple machines, with all the synchronization and overhead that implies.

When I was looking at this information (and finding that the information in lists posted in various places didn’t match) it jumped out at me that there were different versions of operating systems on different machines, so that’s something that should be displayable. If the web app had a bit more intelligence, it could produce reports on machines (and services and environments) that were running each OS version. There might be good reasons to support varied operating systems and versions, and you could include logic that identified differences against policy baselines for different machines for different reasons. The point is that you could set up the displays to include any information and statuses you wanted. This would provide at-a-glance insights into exactly what’s going on at all times.

Moreover, since this facility could be used by a wide variety of workers in the organization, this central repository could serve as the ground truth documentation for the organization. Rather than trying to organize and link to web pages and Confluence pages and SharePoint documents and manually update text documents and spreadsheets on individual machines, the information could all be in one place that everyone could find. And even if separate documents needed to be maintained by hand, such a unified access method could provide a single, trusted pointer to the correct version of the correct document(s). This particular layer of information might not be particularly useful for everyone, but we’ll see when we talk about the other layers that having multiple eyes on things allows anomalies to be spotted and rectified much more quickly. If all related information is readily available and organized in this way, and if people understood how to access it, then the amount of effort spent finding the relevant people and information when things went wrong would be reduced to an absolute minimum. In a large organization the amount of time and effort that could be saved would be spectacular. Different levels of permissions for viewing, maintenance, and reporting operations could also be included as part of a larger management and security policy.

I talked about pinging the different microservices, above. That’s a form of ensuring that things are running at the application level, or the top level of the seven-layer OSI model. The status of machines can be independently monitored using other utilities and protocols. In this case they might operate at a much lower level. If the unified interface I’m discussing isn’t made to perform such monitoring directly, it could at least provide a trusted link to or description of where to access and how to use the appropriate utility. I was exposed to a whole bunch of different tools at Universal, and I know I didn’t learn them all or even learn what they all were, but such a consistent, unified interface would tell people where things were and greatly streamline awareness, training, onboarding, and overall organizational effectiveness.

  • number of machines (if cluster)
  • machine name(s)
  • cores / memory / disk used/avail
  • OS / version
  • application version (Node, SQL, document store)
  • running (machine level)
  • POC / responsible party
  • functions/endpoints on each machine/cluster

Last but not least, the information tracked and displayed could include meta-information about who was responsible for doing certain things and who to contact at various times of day and days of week. Some information would be available and displayed on a machine-by-machine basis, some based on the environment, some based on services individually or in groups, and some on an organizational basis. The diagram below shows some information for every service for every environment, but other information only for each environment. Even more general information, such as that applying to the entire system, could be displayed in the top margin.

Level 2

Gateway Management (Network / Exposure / Permissioning)

The next layer is a little less concrete and slightly more abstract and configurable, and it involves the network configuration germane to each machine and each service. Where the information in the hardware layer wasn’t likely to change much (and might be almost completely obviated when working with third-party, virtualized hosting), this information can change more readily, and it is certainly a critical part of making this kind of system work.

The information itself is fairly understandable, but what gives it power in context is how it links downward to the hardware layer and links upward to the deployment and configuration layer. That is, every service in every environment at any given time is described by this kind of five-layer stack of hierarchical information. If multiple endpoints are run on a shared machine then the cross-links and displays should make that clear.

  • IP address
  • port (different ports per service or endpoint on same machine?)
  • URL (per function/endpoint)
  • SSL certificate (type, provider, expiration date)
  • inside or outside of firewall
  • auth method
  • credentials
  • available?
  • POC / responsible party
  • logging level set
  • permissions by user role

There are a ton of tools that can be used to access, configure, manage, report on, and document this information. Those functions could all be folded into a single tool, though that might be expensive, time-consuming, and brittle. As described above, however, links can be provided to the proper tools and various sorts of live information.

Level 3

QA / Testing

Getting still more abstract, we could also display the status of all service items as they are created, modified, tested, and moved through the pipeline toward live deployment. The figure below shows how the status of different build packages could be shown in the formatted display we’ve been discussing. Depending on what is shown and how it’s highlighted, it would be easy to see the progress of the builds through the system, when each code package started running, the status of various tests, and so on. You could highlight different builds in different colors and include extra symbols to show how they fit in sequence.

  • tests being run in a given environment
  • dependencies for each path under investigation (microservices, external systems, mocks/stubs)
  • schedule of current / future activities
  • QA POC / responsible party
  • Test Incident POC / responsible party
  • release train status
  • CI – linting, automatically on checkin
  • CI – unit test
  • CI – code coverage, 20% to start (or even 1%), increase w/each build
  • CI – SonarQube Analysis
  • CI – SonarQube Quality Gate
  • CI – VeraCode scan
  • CI – compile/build
  • CI – deploy to Hockey App for Mobile Apps / Deploy Windows
  • CD – build->from CI to CD, publish to repository server (CodeStation)
  • CD – pull from CodeStation -> change variables in Udeploy file for that environment
  • CD – deploy to target server for app and environment
  • CD – deploy configured data connection, automatically pulled from github
  • CD – automatic smoke test
  • CD – automatic regression tests
  • CD – send deploy data to Kibana (deployment events only)
  • CD – post status to slack in DevOps channel
  • CD – roll back if not successful
  • performance test CD Ubuild, Udeploy (in own perf env?)

A ton of other information could be displayed in different formats, as shown below. The DevOps pipeline view shows the status of tests passed for each build in an environment (and could include every environment, I didn’t make complete example displays for brevity, but it could also be possible to customize each display as shown). These statuses might be obtained via webhooks to the information generated by the various automated testing tools. As much or as little information could be displayed as desired, but it’s easy to see how individual status items can be readily apparent at a glance. Naturally, the system could be set up to only show adverse conditions (tests that failed, items not running, permissions not granted, packages that haven’t moved after a specified time period, and so on).

A Dependency Status display could show which services are dependent on other services (if there are any). This gives the release manager insight into what permissions can be given to advance or rebuild individual packages if it’s clear there won’t be any interactions. It also shows which functional tests can’t be supported if required services aren’t running. If unexpected problems are encountered in functional testing, it might be an indication that the messaging contracts between services need to be examined, if something along those lines was missed. In the figure below, Service 2 (in red) requires that Services 3, 5, and 9 must be running. Similarly, Service 7 (in blue) requires that Services 1, 5, and 8 are running. If any of the required services aren’t running then meaningful integration tests cannot be run. Inverting that, the test manager could also see at a glance which services could be stopped for any reason. If Services 2 and 7 both needed to be running, for example, then Services 4 and 6 could be stopped without interfering with anything else. There are doubtless more interesting, comprehensive, and efficient ways to do this, but the point is merely to illustrate the idea. This example does not show dependencies for external services, but those could be shown as well.
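Here’s a rough sketch of the computation behind that kind of display. The dependency sets mirror the example above (Service 2 needs 3, 5, and 9; Service 7 needs 1, 5, and 8); everything else is made up for illustration.

```javascript
// Sketch of the "which services can safely be stopped?" question from the
// dependency display. Dependency numbers mirror the example in the text;
// the rest is illustrative.
const dependencies = {
  2: [3, 5, 9],   // Service 2 requires Services 3, 5, and 9
  7: [1, 5, 8],   // Service 7 requires Services 1, 5, and 8
};

const allServices = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// Given the services that must stay up, return those that could be stopped.
function stoppableServices(mustStayUp) {
  const required = new Set(mustStayUp);
  for (const svc of mustStayUp) {
    for (const dep of dependencies[svc] || []) required.add(dep);
  }
  return allServices.filter(svc => !required.has(svc));
}

console.log(stoppableServices([2, 7]));  // -> [4, 6], matching the example above
```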

The Gateway Information display shows how endpoints and access methods are linked to hardware items. The Complete Requirements Stack display could show everything from the hardware on up to the management communications, documentation, and permissioning.

The specific tools used by any organization are likely to be different, and in fact are very likely to vary within an organization at a given time (Linux server function written in Node or Java, iOS or Android app, website or Angular app, or almost any other possibility), and will certainly vary over time as different tools and languages and methodologies come and go. The ones shown in these examples are notional, and different operations may be performed in different orders.

Level 4

Code / Logic / Functionality

The code, logic, and functionality are always my main concern and the area most closely matching my experience. I interacted with people doing all these things, but I naturally paid closest attention to what was going on in this area.

  • code complete: flag showing whether code is considered complete by developer and submitted for processing.
  • compiled: flag showing whether the code package is compiled. This might only apply to the dev environment if only compiled packages are handled in higher environments.
  • passes tests (unit, others?): flag(s) showing status of local (pre-submittal) tests passed
  • logging logic / depth: settings controlling level of logging of code for this environment. These settings might vary by environment. Ideally, all activity at all levels would be logged in the production environment, so complete auditing is possible to ensure no transactions ever fail to complete or leave an explanation of why they did not. Conversely, more information might be logged for test scenarios than for production. It all depends on local needs.
  • Udeploy configured: Flag for whether proper deployment chain is configured for this code package and any dependencies. (How much does this overlap with other items in other areas?)
  • test data available: Every code package needs to have a set of test data to run against. Sometimes this will be native to automated local tests and at other times it will involve data that must be provided by or available in connected capabilities through a local (database) or remote (microservice, external system, web page, or mobile app) communications interface. The difficulty can vary based on the nature of the data. Testing credit card transactions so all possible outcomes are exercised is a bit of a hassle, for example. It’s possible that certain capabilities won’t be tested in every environment (which might be redundant in any case), so what gets done and what’s required in each environment in the pipeline needs to be made clear.
  • messaging / interface contracts maintained: There should be ways to automatically test the content and structure of messages passed between different functional capabilities (a minimal sketch appears after this list). This can be done structurally (as in the use of JavaScript’s ability to query an object to determine what’s in it to see if it matches what’s expected, something other languages can’t always do) or by contents (as in the case of binary messages (see story 1) that could be tested based on having unique data items in every field). Either way, if the structure of any message is changed, someone has to verify that all code and systems that use that message type are brought up to date. Different versions of messages may have to be supported over time as well, which makes things even more complicated.
  • code decomposed so all events queue-able, storable, retryable so all eventually report individual statuses without loss and so that complete consistency of side-effects is maintained: People too often tend to write real-time systems with the assumption that everything always “just works.” Care must be taken to queue any operations that fail so they can be retried until they pass or, if they are allowed to fail legitimately, care must be taken to ensure that all events are backed out, logged, and otherwise rationalized. The nature of the operation needs to be considered. Customer-facing functions need to happen quickly or be abandoned or otherwise routed around, while in-house supply and reporting items could potentially proceed with longer delays.
  • invocation code correct: I can’t remember what I was thinking with this one, but it could have to do with the way the calling or initiating mechanism works, which is similar to the messaging / interface item, above.
  • version / branch in git: A link to the relevant code repository (it doesn’t have to be git) should be available and maintained. If multiple versions of the same code package are referenced by different environments, users, and so on, then the repository needs to be able to handle all of them and maintain separate links to them. The pipeline promotion process should also be able to trigger automatic migrations of code packages in the relevant code repositories. The point is that it should be easy to access the code in the quickest possible way for review, revision, or whatever. A user shouldn’t have to go digging for it.
  • POC / responsible party: How to get in touch with the right person depending on the situation. This could include information about who did the design, who did the actual coding (original or modification), who the relevant supervisor or liaison is, who the product owner is, or a host of other possibilities.
  • function / microservice architect / POC: This is just a continuation of the item above, pointing to the party responsible for an entire code and functionality package.
  • endpoints / message formats / Swaggers: This is related to the items about messaging and interface contracts but identifies the communication points directly. There are a lot of ways to exercise communication endpoints, so all could be referenced or linked to in some way. Swaggers are a way to automatically generate web-based interfaces that allow you to manually (or automatically, if you do it right) populate and send messages to and receive and display messages from API endpoints. A tool called Postman does something similar. There are a decent number of such tools around, along with ways to spoof endpoint behaviors internally, and I’ve hand-written test harnesses of my own. The bottom line is that more automation is better, but the testing tools used should be described, integrated, or at least pointed to. Links to documentation, written and video tutorials, and source websites are all useful.
  • timing and timeout information: The timing of retries and abandonments should be parameterized and readily visible and controllable. Policies can be established governing rules for similar operations across the entire stack, or at least in the microservices. Documentation describing the process and policies should also be pointed to.
  • UI rules / 508 compliance: This won’t apply to back-end microservices but would apply to front-end and mobile codes and interfaces. It could also apply to user interfaces for internal code.
  • calls / is-called-by –> dependencies: My idea is that this is defined at the level of the microservice, in order to generate the dependency display described above. This might not be so important if microservices are completely isolated from each other, as some designers opine that microservices should never call each other, but connections with third-party external systems would still be important.
  • running (microservice/application level): indication of whether the service or code is running on its host machine. This is determined using an API call.
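Here’s the minimal sketch of a structural contract check promised in the messaging item above. The expected shape and field names are assumptions; a real pipeline might use JSON Schema or consumer-driven contract tests instead.

```javascript
// Sketch of a structural message/contract check, leaning on JavaScript's
// ability to inspect an object at runtime. The expected shape shown here
// is illustrative, not an actual contract from any system I worked on.
const expectedShape = {
  orderId: "string",
  amountCents: "number",
  submittedAt: "string",   // e.g., an ISO-8601 timestamp
};

function violations(message, shape) {
  const problems = [];
  for (const [field, type] of Object.entries(shape)) {
    if (!(field in message)) problems.push(`missing field: ${field}`);
    else if (typeof message[field] !== type) problems.push(`wrong type for ${field}`);
  }
  return problems;
}

// Example: a producer quietly renamed a field; the check flags it.
console.log(violations({ orderID: "A-100", amountCents: 995, submittedAt: "2020-01-01" },
                       expectedShape));
// -> [ 'missing field: orderId' ]
```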

Level 5 – Most Abstract

Documentation / Support / Meta-

While the lower levels are more about ongoing and real-time status, this level is about governance and design. Some of the functions here, particularly the stories, defects, tracking, and messaging, are often handled by enterprise-type systems like Jira and Rally, but they could be handled by a custom system. What’s most important here is to manage links to documentation in whatever style the organization deems appropriate. Definition-of-done calculations could be performed here in such a way that builds, deployments, or advancements to higher environments can only proceed if all the required elements are updated and verified (a sketch of such a gate check appears after the list below). Required items could include documentation (written or updated, and approved), management permission given, links to the Requirements Traceability Matrix updated (especially to the relevant requirement), announcements made by the environment or release manager or other parties, and so on.

Links to discussion pages are also important, since they can point directly to the relevant forums (on Slack, for example). This makes a lot more of the conversational and institutional memory available and accessible over time. Remember, if you can’t find it, it ain’t worth much.

  • management permissions / approvals
  • documentation requirements
  • code standards reviews / requirements
  • defect logged / updated / assigned
  • user story created / assigned
  • POC / responsible party
  • routing of to-do items (business process automation)
  • documentation, institutional memory
  • links to related defects
  • links to discussion / interactive / Slack pages
  • introduction and training information (Help)
  • date & description history of all mods (even config stuff)
  • business case / requirement for change: linked forward and backward as part of the Requirements Traceability Matrix.
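As promised above, here’s a minimal sketch of a definition-of-done gate built from items like those in this list. The specific checklist fields are assumptions; each organization would define its own.

```javascript
// Sketch of a definition-of-done gate for promoting a package to the next
// environment. The checklist fields are illustrative; a real gate would pull
// these flags from the tracking systems discussed above.
const doneCriteria = [
  "documentationUpdated",
  "managementApproval",
  "rtmLinksUpdated",
  "codeReviewComplete",
  "testsPassed",
];

function canPromote(packageStatus) {
  const missing = doneCriteria.filter(item => !packageStatus[item]);
  return { allowed: missing.length === 0, missing };
}

// Example: one outstanding item blocks the promotion and says why.
console.log(canPromote({
  documentationUpdated: true,
  managementApproval: true,
  rtmLinksUpdated: false,
  codeReviewComplete: true,
  testsPassed: true,
}));
// -> { allowed: false, missing: [ 'rtmLinksUpdated' ] }
```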

Visually you could think of what I’ve described looking like the figure below. Some of the information is maintained in some kind of live data repository, from which the web-based, user-level-access-controlled, responsive interface is generated, while other information is accessed by following links to external systems and capabilities.

And that’s just the information for a single service in a single environment. The entire infrastructure (minus the external systems) contains a like stack for all services in all environments.
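To make the shape of that stack concrete, here’s a minimal sketch of how a repository might hold one service’s record in one environment, with the five levels as keys. Every value shown is a placeholder; the real repository would hold far more detail and link out to the external tools discussed above.

```javascript
// Sketch of one service's record in one environment, organized by the five
// levels described above. All values are placeholders for illustration.
const serviceRecord = {
  environment: "qa",
  service: "orders",
  level1_hardware: { machines: ["qa-app-03"], os: "RHEL 7.6", cores: 8, running: true },
  level2_gateway:  { url: "https://qa.example.internal/orders", port: 8443, insideFirewall: true },
  level3_qa:       { releaseTrain: "2020.04", lastSmokeTest: "passed" },
  level4_code:     { repo: "git@github.example.com:org/orders.git", branch: "release/2020.04" },
  level5_meta:     { poc: "orders-team@example.com", docsLink: "https://confluence.example.com/x/orders" },
};

// The full repository is then just this record repeated per service per environment:
//   repository[environment][service] = a serviceRecord-shaped object
```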

There’s a ton more I could write about one-to-many and many-to-one mapping in certain situations (e.g., when multiple services run on a single machine in a given environment or when a single service is hosted on multiple machines in a given — usually production — environment to handle high traffic volumes), and how links and displays could be filtered as packages moved through the pipeline towards live deployment (by date or build in some way), but this post is more than long enough as it is. Don’t you agree?


A Few Interesting Talks on Architecture and Microservices

These talks were recommended by Chase O’Neill, a very talented colleague from my time at Universal, and proved to be highly interesting. I’ve thought about things like this through my career, and they’re obviously more germane given where things are currently heading, so I’m sharing them here.

https://www.youtube.com/watch?v=KPtLbSEFe6c

https://www.youtube.com/watch?v=STKCRSUsyP0

https://www.youtube.com/watch?v=MrV0DqTqpFU

Let me know if you found them as interesting as I did.


Unified Theory of Business Analysis: Part Four

How Different Management Styles Work

Solutions (and let’s face it, these days those quite often involve the development and modification of software) can be realized in the context of many different types of organizations. A lot of bandwidth is expended insisting that everything be done in a specific Agile or Scrum context, and people can get fairly militant about how Scrum processes should be run. I tend to be a little more relaxed about all that, figuring that correctly understanding a problem and designing an excellent solution is more important than the details of any management technique, and here’s why.

Beginning at the beginning, my framework lists a group of major steps to take: intended use, conceptual model, requirements, design, implementation, and testing. (I discuss the framework here, among other places.) While there are a few ancillary steps involving planning, data, and acceptance, these are the major things that have to get done. The figure below shows that each step should be pursued iteratively until all stakeholders agree that everything is understood and complete before moving on. Moreover, the team can always return to an earlier step or phase to modify anything that might have been missed in an earlier phase. This understanding should be maintained even as we discuss different configurations of phases going forward.

The most basic way to develop software, in theory, is through a Waterfall process. In this style of development you march through each phase in order, one after another, and everyone goes home happy, right?

Riiiiiight…

No one ever really developed software that way. Or, more to the point, no one did so with much success on any but the most trivial projects. There was always feedback, learning, adjustment, recycling, and figuring stuff out as you go. The key is that people should always be working together and talking to each other as the work progresses, so all parties stay in sync and loose ends get tied up on the fly. There are a lot of ways for this process to fail, and most of them involve people not talking to each other.

A significant proportion of software projects fail. A significant proportion of software projects have always failed. And a significant proportion of software projects are going to go right on failing — and mostly for the same reason. People don’t communicate.

Running Scrum or Agile or Kanban or Scrumban or WaterScrum or anything else with a formal definition isn’t a magic bullet by itself. I’ve seen huge, complicated Scrum processes in use in situations where stuff wasn’t getting done very efficiently.

So let’s look at some basic organization types, which I diagram below, using the basic six phases I list above.

Waterfall is the traditional form of project organization, but as a practical matter nobody ever did it this way — if they wanted their effort to be successful. First of all, any developer will perform an edit-compile-test cycle on his or her local machine if at all possible. Next, individuals and teams will work in a kind of continuous integration and test process, building up from a minimum core of running code and slowly expanding until the entire deliverable is complete.

So what defines a waterfall process? Is it defined by when part or all of the new code runs, or by when new code is delivered? Developing full-scope nuclear power plant training simulators at Westinghouse was a long-term effort. Some code was developed and run almost from the beginning, and code was integrated into the system throughout the whole process, but the end result wasn’t delivered until the work was essentially complete, shipped to the site, and installed about three years later. The model-predictive control systems I wrote and commissioned were developed and deployed piecemeal, often largely on-site.

My point isn’t to argue about details, it’s to show there’s a continuum of techniques. Pure methodologies are rarely used.

The main point of Agile is to get people talking and get working code together sooner rather than later, so you aren’t just writing a whole bunch of stuff and trying to stick it together at the very end. The Agile Manifesto was a loose confederation of ideas by seventeen practitioners in 2001 who couldn’t agree on much else. The four major statements are preferences — not absolute commandments.

People get even more wound up in the practice of Scrum. It seems to me that people who’ve only been in one or two organizations think that what they’ve seen is the only way to do things. They tend to think that asking a few trivia questions is a meaningful way to gauge someone else’s understanding. Well, I’ve seen the variety of different ways organizations do things and I know that everything written in the Scrum manuals (the training manuals I got during my certification classes are shown above and all can be reviewed in 10-15 minutes for details once you know the material) should be treated more as a guideline or a rule of thumb rather than an unbreakable commandment. In fact, I will state that if you’re treating any part of the Scrum oeuvre as an unbreakable commandment you’re doing it wrong. There are always exceptions and contexts and you’d know that if you’ve seen enough.

I showed the Scrum process, in terms of my six major engagement phases, as a stacked and offset series of Waterfall-like efforts. This is to show that a) every work item needs to proceed through all phases, more or less in order, b) not all the work proceeds in order within a given sprint; items are pulled from a backlog generated during the intended use, conceptual model, and requirements phases, which are conducted separately (and which could be illustrated as a separate loop of ongoing activities), and c) work tends to overflow from sprint to sprint depending on how the specific organization and deployment pipeline is set up. A small team working on a desktop app might proceed through coding, testing, and deployment of all items in a single sprint. A larger or more distributed team, especially where coders create work items that are submitted to dedicated testers, might complete coding and local testing in one sprint, while the testers will approve and deploy those items in the next. If there is a long, multi-stage development and deployment process, supporting multiple teams working in parallel and serially, like you’d find in a DevOps environment supporting a large-scale web-based operation, then who knows how long it will take to get significant items through the entire pipeline? And how does that change if significant rework or clarification is needed?

A lot of people also seem to think their environment or their industry is uniquely complex, and that other types of experience aren’t really translatable. They don’t necessarily know about the complexities other people have to deal with. I’ve written desktop suites of applications that require every bit as much input, care, planning, real-time awareness, internal and external integration, mathematical and subject matter acumen, cyclic review, and computer science knowledge as the largest organizational DevOps processes out there. What you’re doing ain’t necessarily that special.

Projects, even in software development, are not one-size-fits-all. As I stress in the talks I give about my business analysis framework, the BABOK is written in a diffuse way precisely because the work comes in so many different forms. There is no cookie-cutter formula to follow. Even my own work is just restating the material in a different context. Maybe the people I’ve been interacting with know that, but whatever they think I’m not communicating to them, there’s surely been a lot they haven’t communicated to me. Maybe that’s just because communication about complex subjects is always difficult (and even simple things, all too often), or maybe a lot of people are just thinking inside their own boxes.

This figure above can also be used to represent an ongoing development or support process as it’s applied to an already working system. This can take the form of Scrum, Kanban, Scrum-Ban, or any other permutation of these ideas. Someone recently asked me what I thought the difference between Scrum and Kanban was, and I replied that I didn’t think it was particularly clear-cut. I’ve seen a lot of descriptions (including talks at IIBA Meetups) about how each methodology can be used for new development and for ongoing support. The biggest difference I’d gathered was that Kanban was more about work-metering, or managing in the context of being able to do a relatively fixed amount of work in a given time with a relatively fixed number of resources, while Scrum was more flexible and scalable depending on the situation. A search on the differences between Scrum and Kanban turns up a fairly consistent list, none of which I care about very much. Is there anything in any of those descriptions that experienced practitioners couldn’t discuss and work out in a relatively short conversation? Is that material anywhere near as important as being able to understand customers’ problems and develop phenomenal solutions to them in an organized, thorough, and even stylish way, which is the actual point of what I’ve been doing for the last 30-plus years in a variety of different environments? And, referring to my comment above that if you’re doing things strictly by the book you’re probably doing it wrong, I have to ask whether things don’t always end up as a hybrid anyway, and whether you don’t adapt as needed. I’ve commented previously that the details of specific kinds of Agile organizations are about one percent of doing the actual analysis and solutioning that provides value. That’s not likely to make a lot of people happy, least of all the screeners who congratulate themselves for supposedly being able to spot fraudulent Scrum practitioners, but that’s where I come down on it. While those folks are asking about things that don’t matter much, they’re failing to understand things that do.

Another problem with the Agile and Scrum ways of thinking is that some people might believe you only need to do some initial planning in a Sprint Zero and then you can be off and running and making meaningful deliveries shortly thereafter. I hope people don’t think that, but I wanted to show a hybrid Waterfall-Scrum-like process to illustrate how Scrum techniques can be used in a larger context where an overall framework is definitely needed. In such a scenario, a significant amount of intended use, conceptual modeling, requirements, and design work has to be done before any implementation can begin, and there will need to be significant integration and final acceptance testing at the end, some of which can’t be conducted until all features are implemented. In the middle of that, however, a Scrum-like process can be used to strategically order work items for incremental development, with some time allowed for low-level, local design and unit and integration testing. The point is never to follow a prescription from someone’s book, but to do what makes sense based on what you actually need.

If you’re doing simulation, either as the end goal of an engagement or as a supporting part of an engagement, the construction of a simulation is a separate project on its own, and that separate project requires a full progression through all the standard phases. I don’t show a specific phase where the simulation is used; that’s implied by the phase or phases in which the construction of the simulation is embedded. If the individual simulations used in each analysis have to be built or modified by a tool that builds simulations, then the building (and maintenance) of the tool is yet another level of embedding. Tools to create simulations can be used for one-offs by individual customers, but they are often maintained and modified over long periods of time to support an ongoing series of analyses of existing processes or projects.

In the diagram I show the construction of the simulation as being embedded in the design and implementation phases of a larger project, but that’s just a starting point. Let’s look at some specific uses of simulation (I’ve done all of these but the last two, almost always with custom simulations stick-built in various high-level languages, often written by me personally in whole or in part), and see how the details work out. I describe the following applications of simulation in detail here, and I can tell you ahead of time that there’s going to be some repetition, but let’s work through the exercise to be thorough.

  • Design and Sizing: In this case the construction and use of the simulation is all within the design phase of the larger project. I used an off-the-shelf package called GEMS to simulate prospective designs for pulping lines at my first engineering job. It was the only off-the-shelf package I ever used. Everything else was custom.

  • Operations Research: This usually, but not always, involved modelling a system to see how it would behave in response to changes in various characteristics. Here are a few of the efforts we applied it to at two of my recent stops.

    • We used a suite of model-building tools called BorderWizard, CanSim, and SimFronteras to build models of almost a hundred land border ports of entry on both sides of both U.S. borders — and one crossing on Mexico’s southern border with Guatemala. The original tool was created and improved over time and used to create baseline models of all the ports in question. That way, when an analysis needed to be done for a port, the existing baseline model could be modified and run and the output data analyzed.

      The creation, maintenance, and update of the tools was a kind of standalone effort. The creation of the individual baselines all sprang from a common intended use phase but proceeded through all the other phases separately. The building of each crossing’s baseline is embedded as a separate project in the analysis project’s conceptual model phase.

      The individual analyses are what is represented in the header diagram for this section. Each standalone analysis project proceeded through its own intended use, conceptual model, and requirements phases. Updating the configuration of the model based on the needs of the particular analysis was embedded in the design phase of the analysis project. Using the results of the model runs to guide the modifications to the actual port and its operations was embedded in the implementation phase of the analysis project. The test phase of the analysis or port improvement project involved seeing whether the changes made met the intended use.

      I participated in a design project to determine the requirements for and test the performance of proposed designs for the facilities on the U.S. side of an entirely new crossing on the border between Maine and Canada. This work differed from our typical projects only in that we didn’t have a baseline model to work from.

    • The work we did on aircraft maintenance and logistics using (what we called) the Aircraft Maintenance Model (AMM) proceeded in largely the same way, except that we didn’t have a baseline model to work from. Instead, the input data to the one modeling engine that was used for all analyses were constructed in modular form, and the different modules could be mixed and matched and modified to serve the analysis at hand. We did have a few baseline input data components, but most of the input components had to be created from scratch from historical data and prevailing policy for each model run.

    • I had a PC version of the simulation I used to install on DEC machines at plants requesting that hardware. I’d develop it on the PC first to get all the parameters right, and then I’d re-code that design for the new hardware, in whatever language the customer wanted (the PC code was in Pascal, the customers wanted FORTRAN, C, or C++ at that time). One of our customers, in a plant in Monterrey, Mexico, which I visited numerous times over the course of two installations (my mother died while I was in the computer room there), saw the PC version and wondered if it could be used to do some light operations research, since it included all the movement and control mechanisms along with the heating mechanisms, and it had a terrific graphic display. We ended up selling them a version of the desktop software as an add-on, and they apparently got a lot of use out of it.

      My creation of the PC version of the software for design and experimentation was embedded in the conceptual model, requirements, and design phases of larger control system projects. Adapting it so it could be used by the customer was a standalone project in theory, though it didn’t take much effort.

    • We built simulations we used to analyze operations at airports and to determine staffing needs in response to daily flight schedule information we accessed (the latter could be thought of as a form of real-time control in a way). That simulation was embedded in a larger effort, probably across the requirements, design, and implementation phases.

  • Real-Time Control: Simulations are used in real-time control in a lot of different contexts. I’ve written systems that contained elements that ran anywhere from four times a second to once per day. If decisions are made and actions taken based on the result of a time-bound simulation, then it can be considered a form of real-time control.

    The work I did in the metals industry involved model-predictive simulation. The simulation was necessary because the control variable couldn’t be measured and thus had to be modeled. In this case the simulation wasn’t really separate from or embedded within another project, it was, from the point of view of a vendor providing a capability to a customer, the point of the deliverable. The major validation step of the process, carried out in the test phase of my framework, is ensuring that the surface temperatures of the discharged workpiece match what the control system supposedly calculated and controlled to. The parameters I was given and the methods I employed were always accurate, and the calculated values always matched the values read by the pyrometer. The pyrometer readings could only capture the surface temperature of the workpieces, so a second validation was whether the workpieces could be properly rolled so the final products had the desired properties and the workpieces didn’t break the mill stands.

  • Operator Training: Simulations built for operator training are usually best thought of as ends in themselves, and thus often should be thought of as standalone projects. However, the development and integration of simulations for this purpose can be complex and change and grow over extended periods of time, so there can be multiple levels of context.

    • The nuclear power plant simulators I helped build worked the same way as the control systems I wrote for the metals industry, except that they were more complicated, involved more people, and took longer to finish. The simulation was itself the output, along with the behavior of the control panels, thus building the simulations was part and parcel of the finished, standalone product. They weren’t something leveraged for a higher end. The same is true for flight simulators, military weapon simulators, and so on.

    • We made a plug-in for an interactive, multi-player training simulator used to examine various threat scenarios in different kinds of buildings and open spaces. The participants controlled characters in the simulated environment much like they would in a video game. Our plug-in autonomously guided the movements of large numbers of people as they evacuated the defined spaces according to defined behaviors. In terms of our customer’s long-term development and utilization of the larger system, our effort would be considered to be embedded in the design and implementation phases of an upgrade to their work. In terms of our developing the plug-in for our customer, it seemed like a standalone project.

  • Risk Analysis: Risk can be measured and analyzed in a lot of ways, but the most common I’ve encountered is in the context of Monte Carlo simulations, where results are expressed in terms of the percentage of times a certain systemic performance standard was achieved, and in operator training, where the consequences of making mistakes can be seen and trainees can learn the importance of preventing adverse scenarios from happening.

  • Economic Analysis: Costs and benefits can be assigned to every element and activity in a simulation, so any of the contexts described here can be leveraged to support economic calculations. Usually these are performed in terms of money, which is pretty straightforward (if you can get the data), but other calculations were performed as well. For example, we modified one of our BorderWizard models to total up the amount of time trucks sat at idle as they were waiting to be processed in the commercial section of a border crossing in Detroit, so an estimate could be made of the amount of pollution that was being emitted, and how much it might be reduced if the trucks could be processed more quickly.

  • Impact Analysis: Assessing the change in behavior or outputs in response to changes in inputs is essentially gauging the impact of a proposed change. This applies to any of the contexts listed here.

  • Process Improvement: These are generally the same as operations research simulations. The whole point of ops research is process improvement and optimization, though it sometimes involves studies of feasibility and cost (to which end see economic analysis, above).

  • Entertainment: Simulations performed for entertainment can be thought of as being standalone projects if the end result is used directly, as in the case of a video game or a virtual reality environment. If the simulation is used to create visuals for a movie then it might be thought of as an embedded project during the implementation phase of the movie. If the simulation is used in support of something else that will be done for a movie, for example, the spectacular car-jump-with-a-barrel-roll stunt in the James Bond movie “The Man With the Golden Gun,” then it might be thought of as being embedded in the larger movie’s design phase. The details of what phase things are in aren’t all that important, as the context is intuitively obvious when you think about it. We’re just going through the examples as something of a thought experiment.

  • Sales: Simulations are often used to support sales presentations, and as such might have to look a little spiffier than the simulations working analysts (who understand the material and are more interested in the information generated than they are in the visual pizazz) tend to work with. If an existing simulation can’t be used, then the process of creating a new version to meet the new requirement may be thought of as a standalone project.

If a project involves improving an existing process, then the work of the analyst has to begin with understanding what’s already there. This involves performing process discovery, process mapping, and data collection in order to build a conceptual model of the existing system. Work can then proceed to assess the methods of improvements and their possible impacts.

If a project involves creating a new operation from the ground up, then the conceptual modeling phase is itself kind of embedded in the requirements and design phases of the new project. The discovery, mapping, and data collection phases will be done during requirements and design. That said, if some raw material can be gleaned from existing, analogous or competitive projects elsewhere, then this initial background research could be thought of as the conceptual modeling phase.

Conclusion

This exercise might have been a bit involved, but I think it’s important to be able to draw these distinctions so you have a better feel for what you’re doing and when and why. I slowly learned about all these concepts for years as I was doing them, but I didn’t have a clear understanding of what I was doing or what my organization was doing. That said, getting my CBAP certification was almost mindlessly easy because I’d seen everything before and I had ready references and examples for every concept the practice of business analysis supposedly involves. The work I’ve done since then, here in these blog posts and in the presentations I’ve given, and all I’ve learned from attending dozens of talks by other BAs, Project Managers, and other practitioners, has continued to refine my understanding of these ideas.

Looking back based on what I know now, I see where things could have been done better. It might be a bit megalomaniacal to describe these findings as a “Unified Theory of Business Analysis,” but the more this body of knowledge can be arranged and contextualized so it can be understood and used easily and effectively, the more useful I think it will be.

If you think this material is useful, tell everyone you know. If you think it sucks, tell me!

Posted in Management | Leave a comment

Using Data In My Framework and In Simulations

I recently wrote about how data is collected and used in the different phases of my business analysis framework. After giving the most recent version of my presentation on the subject I was asked for clarification about how the data is used, so I wanted to write about that today.

I want to start by pointing out that data comes in many forms. It can be highly numeric, which is more the case for simulations and physical systems, and it can be highly descriptive, which is often more the case for business systems. Make no mistake, though, it’s all data. I’ll describe how data came into play in several different jobs I did to illustrate this point.

Sprout-Bauer (now Andritz) (details)

My first engineering job was as a process engineer for an engineering and services firm serving the pulp and paper industry. Our company (which was part of Combustion Engineering at the time, before being acquired by Andritz) sold turnkey mechanical pulping lines based on thermomechanical refiners. I did heat and material balances and drawings for sales proposals, and pulp quality and mill audits to show that we were making our quality and quantity guarantees and to serve as a basis for making process improvement recommendations. Data came in three major forms:

  • Pulp Characteristics: Quite a few of these are defined in categories like freeness (Canadian Standard Freeness is approximately the coolest empirical measure of anything, ever!), fiber properties, strength properties, chemical composition, optical properties, and cleanliness. We’d analyze the effectiveness of the process by analyzing the progression of about twenty of these properties at various points in the production line(s). I spent my first month in the company’s research facility in Springfield, Ohio learning about the properties they typically used. It seems that a lot of these measures have been automated, which must be really helpful for analysts at the plants. It used to be that I’d go to a plant to draw samples from various points in the process (you’d have to incrementally fill buckets at sample ports about hourly over the course of a day), then dewater them, seal them in plastic bags, label them, and ship them off to Springfield, where the lab techs would analyze everything and report the results. Different species of trees and hard and soft woods required different processing as well, and that was all part of the art and science of designing and analyzing pulping processes. One time we had to send wood chips back in 55-gallon drums. Somehow this involved huge bags and a pulley hanging over the side of a 100-foot-high building. My partners held me by the back of my belt as I leaned out to move the pulley in closer so we could feed a rope through it. So yeah, data.
  • Process volumes and contents: Pulp-making is a continuous process so everything is expressed on a rate basis. For example, if the plant was supposed to produce 280 air dried metric tons per day it might have a process flow somewhere with 30,000 gallons per minute at 27% consistency (the percentage of the mass flow composed of wood fiber with the remainder being steam, a few noncondensable gases, chemicals like liquors and bleaches, and some dirt or other junk). Don’t do the math, I’m just giving examples here. The flow conditions also included temperatures (based on total energy or enthalpy), and pressures, which allowed calculation of the volume flows and thus the equipment and pipe sizing needed to support the desired flow rates. The thermodynamic properties of water (liquid and gaseous) are a further class of data needed to perform these calculations. They’ve been compiled by researchers over the years. The behavior of flow through valves, fittings, and pipe is another form of data that has been compiled over time.
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. Many pieces of equipment came in standard sizes because it was too difficult to make custom-sized versions. This was especially true of pulp refiners, which came in several different configurations. Other items were custom made for each application. Examples of these included conveyors, screw de-watering presses, and liquid phase separators. Some items, like screens and cleaners, were made in standard sizes and various numbers of them were used in parallel to support the desired flow rates. Moreover, the screens and cleaners would often be arranged in multiple phases. I didn’t calculate flows based on equipment sizes for the most part, I calculated them based on the need to produce a certain amount of pulp. The equipment and piping would later be sized to support the specified flows.
  • The fourth item in this three-item list is money. In the end, every designed process had to be analyzed in terms of installed, fixed, and operating costs vs. benefits from sales. I didn’t do those calculations as a young engineer but they were definitely done as we and our customers worked out the business cases. I certainly saw how our proposals were put together and had a decent idea of what things cost. I’d learn a lot more about how those things are done in later jobs.

All these data elements have to be obtained through observation, testing, research, and elicitation (and sometimes negotiation), and all must be understood to analyze the process.

Westinghouse Nuclear Simulator Division (details)

Here I leveraged a lot of the experience I gained analyzing fluid flows and doing thermodynamic analyses in the paper industry. Examples of how I/we incorporated thermodynamic properties are here, here, and here. In this case the discovery was done for the modellers in that the elements to be simulated were already identified. This meant that we started with data collection, which we performed in two phases. We started by visiting the plant control room and recording the readings on every dial and indicator and the position of every switch, button, and dial. This gave us an indication of a few of the flows, pressures, and temperatures at different points in the system. The remainder of those values had to be calculated based on the equipment layouts and the properties of the fluids.

  • Flow characteristics: These were mostly based on the physical properties of water and steam but we sometimes had to consider noncondensables, especially when they made up the bulk of the flow, as they did in the building spaces and the offgas system I worked on. We also had to consider concentrations of particulates like boron and radioactive elements. The radiation was tracked as an abstract emittance level that decayed over time. We didn’t worry about the different kinds of radiation and the particles associated with them. (As much as I’ve thought about this work in the years since I did it I find it fascinating that I never really “got” this detail until just now as I’m writing this.) As mentioned above, the thermodynamic properties of the relevant fluids have all been discovered and compiled over the years.
  • Process volumes and contents: The flow rates were crucial and were driven by pressure differentials and pump characteristics and affected by the equipment the fluid flowed through.
  • The specifications and sizes of different pieces of equipment were also part of the data describing each system. We needed to do detailed research through a library of ten thousand documents to learn the dimensions and behavior of all the pipes, equipment items, and even rooms in the containment structure.

Beyond the variables describing process states and equipment characteristics, the simulation equations required a huge number of coefficients. These all had to be calculated from the steady-state values of the different process conditions. I spent so much time doing calculations and updating documents that I found it necessary to create a tool to manage and automate the process.

Another important usage of data was in the interfaces between models. These had to be negotiated and documented, and the different models had to be scheduled to run in a way that would minimize race conditions as each model updated its calculations in real-time.

CIScorp (details)

In this position I did the kind of business process analysis and name-address-phone number-account number programming I’d been trying to avoid, since I was a hard core mechanical engineer, and all. Who knew I’d learn so much and end up loving it? This position’s contrast to most others I worked in the first half of my career taught me more about performing purposeful business analysis than any other single thing I did. I’m not sure I understood the oeuvre as a whole at the time, but it certainly gave me a solid grounding and a lot of confidence for things I did later. Here I write about how I integrated various insights and experiences over time to become the analyst and designer that I am today.

The FileNet document imaging system is used to scan documents so their images are moved around almost for free while the original hardcopies are warehoused. We’d do a discovery process to map an organization’s activities, say, the disability underwriting section of an insurance company, to find out what was going on. We interviewed representatives of the groups of workers in each process area to identify each of the possible actions they could take in response to receipt of any type of document or group of documents. This gave us the nouns and verbs of the process. Once we knew that, we’d gather up the adjectives of the process, the data that describe the activities, the entities processed (the documents), and the results generated. We gathered the necessary data mostly through interviews and reviews of historical records.

The first phase of the effort involved a cost-benefit analysis that only assessed the volumes and process times associated with the section’s activities. Since this was an estimation we collected process times via a minimum of observation and descriptions of SMEs. As a cross-check we reviewed whether our findings made sense in light of what we knew about how many documents were processed daily by each group of workers and the amount of time taken per action. Since the total amount of time spent per day per worker usually totaled up to just around eight hours we assumed our estimates were on target.

The next step was to identify which actions no longer needed to be carried out by workers, since all operations involving the physical documents were automated. We also estimated the time needed for the new actions of scanning and indexing the documents as they arrived. Finally, given assumptions for average pay rates for each class of worker, we were able to calculate the cost of running the As-Is process and the To-Be automated process and estimate the total savings that would be realized. We ended up having a long discussion about whether we’d save one minute per review or two minutes per review of collated customer files by the actual underwriting analysts, which was the most important single activity in the entire process. We ultimately determined that we could make a sufficient economic case for the FileNet solution by assuming a time savings of only one minute per review. The customer engaged two competitors, each of whom performed similar analyses, and our solution was judged to realize the greater net savings, about thirty percent per year on labor costs.
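
Since the economic case hinged on a handful of rates and times, here’s a toy sketch of the shape of that calculation. None of these numbers are the client’s; the document volumes, process times, and pay rate are placeholders I made up for illustration.

    // Toy sketch of the As-Is vs. To-Be labor comparison; every constant here is hypothetical.
    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double docsPerDay        = 2000.0; // assumed daily document volume
        const double reviewsPerDay     = 400.0;  // assumed underwriter file reviews per day
        const double loadedRatePerMin  = 0.60;   // assumed loaded labor cost, $ per minute

        const double handlingMinPerDoc  = 3.0;   // physical handling eliminated by imaging
        const double scanIndexMinPerDoc = 1.2;   // new scanning and indexing overhead

        // The disputed figure: minutes saved per underwriter review of a collated file.
        for (double savedPerReview : {1.0, 2.0}) {
            double dailySavings =
                docsPerDay * (handlingMinPerDoc - scanIndexMinPerDoc) * loadedRatePerMin +
                reviewsPerDay * savedPerReview * loadedRatePerMin;
            std::printf("Assuming %.0f minute(s) saved per review: ~$%.0f per day\n",
                        savedPerReview, dailySavings);
        }
        return 0;
    }

The point of running it both ways is the same point we argued then: the case needed to hold even under the conservative one-minute assumption.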

The data items identified and analyzed were similar to those I worked with in my previous positions. They were:

  • Document characteristics: The type of document determined how it needed to be processed. The documents had to be collated into patient files, mostly containing medical records, that would be reviewed and scored. This would determine the overall risk for a pool of employees a potential customer company wanted to provide disability coverage for. The insurer’s analysis would determine whether it would agree to provide coverage and what rate would be charged for that pool.
  • Process volumes and contents: These flows were defined in terms of documents per day per worker and per operation, with the total number arriving for processing each day being known.
  • The number and kind of workers in each department is analogous to the equipment described in the systems above. The groups of workers determined the volume of documents that could be processed and the types of transformations and calculations that could be carried out.

Once the initial phase was completed we examined the documents more closely to determine exactly what information had to be captured from them in order to collate them into files for each employee and how those could be grouped with the correct company pool. This information was to be captured as part of the scanning and indexing operation. The documents would be scanned and automatically assigned a master index number. Then, an index operation would follow which involved reading information identifying an employee so it could be entered into a form on screen. Other information was entered on different screens about the applying company and its employee roster. The scores for each employee file, as assigned by the underwriters, also had to be included. The data items needed to design the user and control screens all had to be identified and characterized.

Bricmont (details)

The work I did at Bricmont was mapped a little bit differently than the work at my previous jobs. It still falls into the three main classifications in a sense but I’m going to describe things differently in this case. For additional background, I describe some of the detailed thermodynamic calculations I perform here.

  • Material properties of metals being heated or even melted: As in previous descriptions, the properties of materials are obtained from published research. Examples of properties determined as a function of temperature are thermal conductivity and specific heat capacity. Examples of properties that remained constant were density, the emissivity of (steel) workpieces, and the Stefan-Boltzmann constant.
  • Geometry of furnaces and metal workpieces being heated: The geometry of each workpiece determines the layout of the nodal network within it. The geometry of the furnace determines how heat, and therefore temperature, is distributed at different locations. The location of workpieces relative to each other determines the amount of heat radiation that can be transferred to different sections of the surface of the workpieces (this obviously doesn’t apply for heating by electrical induction). This determines viewing angles and shadows cast.
  • Temperatures and energy inputs: Energy is transferred from furnaces to workpieces usually by radiative heat transfer, except in the cases where electric induction heating is used. Heat transfer is a function of temperature differential (technically the difference between the fourth power of the absolute temperature of the furnace and the fourth power of the absolute temperature of the workpiece) for radiative heating methods and a function of the electrical inputs minus losses for inductive methods. (See the short sketch after this list.)
  • Contents of messages received from and sent to external systems: Information received from external systems included messages about the nature of workpieces or materials loaded into a furnace, the values of instrument readings (especially thermocouples that measure temperature), other physical indicators that result in the movement of workpieces through the furnace, and permissions to discharge heated workpieces to the rolling mill, if applicable. Information forwarded to other systems included messages to casters or slab storage to send new workpieces, messages to the rolling mill about the workpiece being discharged, messages to the low-level control system defining what the new temperature or movement setpoints should be, and messages to higher-level administrative and analytic systems about all activities.
  • Data logged and retrieved for historical analysis: Furnace control systems stored a wide range of operating, event, and status data that could be retrieved for further analysis.
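
As a quick illustration of the radiative term in the temperatures bullet above, here’s a minimal sketch. It isn’t the Bricmont model itself; it’s the basic Stefan-Boltzmann gray-body form with an assumed emissivity, and it ignores the view factors and shadowing described in the geometry bullet, which the real nodal models account for.

    // Minimal sketch of radiative heat flux to a workpiece surface (not the production model).
    #include <cmath>
    #include <cstdio>

    // Simplified gray-body exchange; real furnace models also apply view factors and shadowing.
    double radiativeFluxWPerM2(double furnaceTempK, double workpieceTempK, double emissivity) {
        const double sigma = 5.670374419e-8; // Stefan-Boltzmann constant, W/(m^2*K^4)
        return emissivity * sigma *
               (std::pow(furnaceTempK, 4.0) - std::pow(workpieceTempK, 4.0));
    }

    int main() {
        // Illustrative values only: 1250 C furnace, 300 C slab surface, emissivity 0.8.
        double q = radiativeFluxWPerM2(1250.0 + 273.15, 300.0 + 273.15, 0.8);
        std::printf("net flux is roughly %.0f W/m^2\n", q);
        return 0;
    }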

The messages relayed between systems employed a wide variety of inter-process communication methods.

American Auto-Matrix (details)

In this position I worked with low-level controllers that exchanged messages with each other to control HVAC devices, manage other devices and setpoints, and record and analyze historical data. The platforms I worked on were mostly different from those at previous jobs but in principle they did the same things. This was mostly interesting because of the low-level granularity of the controllers used and the variety of communication protocols employed.

Regal Decision Systems and RTR Technologies (details here and here)

The major difference between the work I did with these two companies and all the previous work I’d done is that I switched from doing continuous simulation to discrete-event simulation. I discuss some of the differences here, though I could always go into more detail. At a high level the projects I worked on incorporated the same three major classes of data as what I’ve described above (characteristics of the items being processed, the flow and content of items being processed, and the characteristics of the subsystems where the processing occurs). However, while discrete-event simulation can be deterministic, its real power comes from being able to analyze stochastic processes. This is accomplished by employing Monte Carlo methods. I described how those work in great detail yesterday.

To review, here are the major classes of data that describe a process and are generated by a process:

  • Properties of items or materials being processed
    • physical properties of materials that affect processing
    • information content of items that affect processing
    • states of items or materials that affect processing
    • contents of messages that affect processing
  • Volumes of items or materials being processed
  • Characteristics of equipment or activities doing the processing
  • Financial costs and benefits
  • Output data, items, materials, behaviors, or decisions generated by the process
  • Historical data recorded for later analysis
Posted in Engineering, Tools and methods | Leave a comment

Discrete-Event Simulations and Monte Carlo Techniques

“It was smooth sailing” vs. “I hit every stinkin’ red light today!”

Think about all the factors that might affect how long it takes you to drive in to work every day. What are the factors that might affect your commute, bearing in mind that models can include both scheduled and unscheduled events?

  • start time of your trip
  • whether cars pull out in front of you
  • presence of children (waiting for a school bus or playing street hockey)
  • presence of animals (pets, deer, alligators)
  • timing of traffic lights
  • weather
  • road conditions (rain, slush, snow, ice, hail, sand, potholes, stuff that fell out of a truck, shredded tires, collapsed berms)
  • light level and visibility
  • presence of road construction
  • occurrence of accidents
  • condition of roadways
  • condition of alternate routes
  • mechanical condition of car
  • your health
  • your emotional state (did you have a fight with your significant other? do you have a big meeting?)
  • weekend or holiday (you need to work on a banker’s holiday others may get)
  • presence of school buses during the school year, or crossing guards for children walking to school
  • availability of bus or rail alternatives (“The Red Line’s busted? Again?”)
  • distance of trip (you might work at multiple offices or with different clients)
  • timeliness of car pool companions
  • need to stop for gas/coffee/breakfast/cleaning/groceries/children
  • special events or parades (“What? The Indians finally won the Series?”)
  • garbage trucks operating in residential areas

So how would you build such a simulation? Would you try to represent only the route of interest and apply all the data to that fixed route, or would you build a road network of an entire region to drive (possibly) more accurate conditions on the route of interest? (SimCity simulates an entire route network based on trips taken by the inhabitants in different buildings, and then animates moving items proportional to the level of traffic in each segment.)

Now let’s try to classify the above occurrences in a few ways.

  • Randomly generated outcomes may include:
    • event durations
    • process outcomes
    • routing choices
    • event occurrences (e.g., failure, arrival; Poisson function)
    • arrival characteristics (anything that affects outcomes)
    • resource availability
    • environmental conditions
  • Random values may be obtained by applying methods singly and in combination, which can result in symmetrical or asymmetrical results:
    • data collected from observations
    • continuous vs. discrete function outputs
    • single- and multi-dice combinations
    • range-defined buckets
    • piecewise linear curve fits
    • statistical and empirical functions (the SLX programming language includes native functions for around 40 different statistical distributions)
    • rule-based conditioning of results

Monte Carlo is a famous gambling destination in the Principality of Monaco. Gambling, of course, is all about random events in a known context. Knowing that context — and setting ultimate limits on the size of bets — is how the house maintains its advantage, but that’s a discussion for another day! When applied to simulation, the random context comes into play in two ways. First, the results of individual (discrete) events are randomized, so a range of results are generated as the simulation runs over multiple iterations. The random results are generated based on known distributions of possible outcomes. Sometimes these are made by guesstimating, but more often they are based on data collected from actual observations. (I’ll describe how that’s done, below.) The second way the random context comes into play is when multiple random events are incorporated into a single simulation so their results interact. If you think about it, it wouldn’t be very interesting to analyze a system based on a single random distribution, because the output distribution would be essentially the same as the input distribution. It’s really the interaction of numerous random events that make such analyses interesting.
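
To make that concrete, here’s a toy sketch of several random commute factors interacting over many iterations. The distributions, counts, and seed are all made up for illustration; the point is only that the spread of the totals comes from the way the individual draws stack up, not from any single distribution.

    // Toy Monte Carlo driver: several independent random delays stack up each iteration.
    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42); // fixed seed so runs are repeatable
        std::normal_distribution<double>       baseDrive(20.0 * 60.0, 90.0); // ~20 minute drive, in seconds
        std::uniform_real_distribution<double> lightDelay(0.0, 90.0);        // wait per red light
        std::exponential_distribution<double>  incidentDelay(1.0 / 60.0);    // occasional slowdowns

        std::vector<double> minutes;
        for (int run = 0; run < 1000; ++run) {       // one iteration = one simulated commute
            double t = std::max(0.0, baseDrive(rng));
            for (int light = 0; light < 8; ++light)  // eight traffic lights on the route
                t += lightDelay(rng);
            t += incidentDelay(rng);                 // weather, construction, and so on
            minutes.push_back(t / 60.0);
        }
        std::sort(minutes.begin(), minutes.end());
        std::printf("median %.1f minutes, 95th percentile %.1f minutes\n",
                    minutes[minutes.size() / 2], minutes[minutes.size() * 95 / 100]);
        return 0;
    }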

First, let’s describe how we might collect the data from which we’ll build a random distribution. We’ll start by collecting a number of sample values using some form of the observation technique from the BABOK, ensuring that we capture a sufficient number of values.

What we do next depends on the type of data we’re working with. In the two classic cases we order the data and figure out how many occurrences of each kind of value occur. In both cases we start by arranging the data items in order. If we’re dealing with a limited number of specific values, examples of which could be the citizenship category of a person presenting themselves for inspection at a border crossing or the number of separate deliveries it will take to fulfill an order, then we just count up the number of occurrences of each measured value. If we’re dealing with a continuous range of values that has a known upper and lower limit, with examples being process times or the total value of an order, then we break the ordered data into “buckets” across intervals chosen to capture the shape of the data accurately. Sometimes data is collected in raw form and then analyzed to see how it should be broken down, while in other cases a judgment is made about how the data should be categorized ahead of time.

The data collection form below shows an example of how continuous data was pre-divided into buckets ahead of time. See the three rightmost columns for processing time. Note further that items that took less than two minutes to process could be indicated by not checking any of the boxes.

In the case where the decision of how to analyze data is made after it’s collected we’ll use the following procedures. If we identify a limited number of fixed categories or values we’ll simply count the number of occurrences of each. Then we’ll arrange them in some kind of order (if that makes sense) and determine the cumulative number of readings and the proportion of each result’s occurrence. A random number (from zero to one) can be generated against the cumulative distribution shown below, which determines the bucket and the related result.

Given data in a spreadsheet like this…

…the code could be initialized and run something like this:
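
Here’s a minimal sketch of the idea in C++, with made-up categories and proportions standing in for the spreadsheet values. The mechanism is the one described above: draw a number from zero to one and walk the cumulative column until you find the bucket that contains it.

    // Minimal discrete-distribution lookup; the categories and proportions are illustrative only.
    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    struct Bucket {
        double cumulative;   // upper edge of this bucket's share of the 0.0-1.0 range
        std::string result;  // outcome returned when the draw lands in this bucket
    };

    // Hypothetical proportions that sum to 1.0 (0.62 + 0.23 + 0.10 + 0.05).
    static const std::vector<Bucket> kBuckets = {
        {0.62, "U.S. citizen"},
        {0.85, "Canadian citizen"},
        {0.95, "Visa holder"},
        {1.00, "Other"},
    };

    std::string drawResult(std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform01(0.0, 1.0);
        double roll = uniform01(rng);
        for (const Bucket& b : kBuckets)   // simple linear scan through the table
            if (roll <= b.cumulative)
                return b.result;
        return kBuckets.back().result;     // guard against floating-point edge cases
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        for (int i = 0; i < 5; ++i)
            std::printf("%s\n", drawResult(rng).c_str());
        return 0;
    }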

There are always things that could be changed or improved. For example, the data could be sorted in order from most likely to least likely to occur, which would minimize the execution time as the function would generally loop through fewer possibilities. Alternatively, the code could be changed so some kind of binary search is used to find the correct bucket. This would make the function run times highly consistent at the cost of making the code more complex and difficult to read.
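
The binary search variation might look something like the sketch below, using std::upper_bound over the same made-up cumulative values. It trades a little readability for lookups that take about the same time no matter which bucket is hit.

    // Binary-search variation of the same lookup (illustrative data again).
    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    static const std::vector<double> kCumulative = {0.62, 0.85, 0.95, 1.00};
    static const std::vector<std::string> kResults = {
        "U.S. citizen", "Canadian citizen", "Visa holder", "Other"};

    std::string drawResultBinary(std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform01(0.0, 1.0);
        double roll = uniform01(rng);
        // Find the first cumulative value greater than the roll.
        auto it = std::upper_bound(kCumulative.begin(), kCumulative.end(), roll);
        size_t index = (it == kCumulative.end()) ? kCumulative.size() - 1
                                                 : static_cast<size_t>(it - kCumulative.begin());
        return kResults[index];
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        for (int i = 0; i < 5; ++i)
            std::printf("%s\n", drawResultBinary(rng).c_str());
        return 0;
    }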

This is pretty straightforward and the same methods could be used in almost any programming language. That said, some languages have specialized features to handle these sorts of operations. Here’s how the same random function declaration would look in GPSS/H, which is something of an odd beast of a language that looks a little bit like assembly in its formatting (though not for its function declarations). GPSS/H is so unusual, in fact, that the syntax highlighter here doesn’t recognize that language and color it consistently.

In this case the 12345 is an example of the function index, the handle by which the function is called. RN1 indicates that the function draws a random value between 0.0 and 0.999999 and compares it against the first value in each pair; when the correct bucket is found, it returns the second value. D0010 indicates that the array has ten elements. The defining values are given as a series of x,y value pairs separated by slashes. The first number of each pair is the independent value while the second is the dependent value.

The language defines five different types of functions. Some function types limit the output values to the minimum or maximum if the input is outside the defined range. I’m not doing that in my example functions because I’ve limited the possible input values.

So that’s the simple case. Continuous functions do mostly the same thing but also perform a linear interpolation between the endpoints of each bucket’s dependent value. Let’s look at the following data set and work through how we might condition the information for use.

In this case we just divide the range of observed results (18-127 seconds) by 20 buckets and see how many results fall into each range. We then do the cumulations and proportions like before, though we need 21 values so we have upper and lower bounds for all 20 buckets for the interpolation operation. If longer sections of the results curve appear to be linear then we can omit some of the values in the actual code implementation. If we do this then we want to shape a piecewise linear curve fit to the data. The red line graph superimposed on the bar chart above shows that the fit including only the items highlighted in orange in the table is fairly accurate across items 5 through 12, 12 through 18, and 18 through 20.

The working part of the C++ code would look like this:
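
Here is a minimal sketch of that interpolation step. The table below is a shortened, made-up stand-in for the fitted values rather than the actual 21-point curve; each pair is a cumulative probability and a process time in seconds, and the draw is interpolated linearly within whichever bucket it lands in.

    // Minimal piecewise-linear sampling sketch; the table values are illustrative, not the fitted data.
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Point { double cumulative; double seconds; };

    // Hypothetical fit spanning the observed 18-127 second range.
    static const std::vector<Point> kCurve = {
        {0.00,  18.0},
        {0.25,  42.0},
        {0.60,  61.0},
        {0.90,  88.0},
        {1.00, 127.0},
    };

    double drawProcessTime(std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform01(0.0, 1.0);
        double roll = uniform01(rng);
        for (size_t i = 1; i < kCurve.size(); ++i) {
            if (roll <= kCurve[i].cumulative) {
                const Point& lo = kCurve[i - 1];
                const Point& hi = kCurve[i];
                double fraction = (roll - lo.cumulative) / (hi.cumulative - lo.cumulative);
                return lo.seconds + fraction * (hi.seconds - lo.seconds);
            }
        }
        return kCurve.back().seconds; // floating-point safety net
    }

    int main() {
        std::mt19937 rng(7);
        for (int i = 0; i < 5; ++i)
            std::printf("%.1f seconds\n", drawProcessTime(rng));
        return 0;
    }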

Looking at the results of a system simulated in this way you can get a range of answers. Sometimes you get a smooth curve that might be roughly bell-shaped but bunched toward the shorter durations with a long tail to the right. This would indicate that much of the time your commute duration fell in a typical window but was occasionally a little shorter and sometimes longer, sometimes by a lot. Sometimes the results can be discontinuous, which means you sometimes get a cluster of really good or really bad overall results if things stack up just right. Some types of models are so granular that the variations kind of cancel each other out, and thus we didn’t need to run many iterations. This seemed to be the case with the detailed traffic simulations we built and ran for border crossings. In other cases, like in the more complicated aircraft support logistics scenarios we analyzed, the results could be a lot more variable. This meant we had to run more iterations to be sure we were generating a representative output set.

Interestingly, exceptionally good and exceptionally poor systemic results can be driven by the exact same data for the individual events. It’s just how things stack up from run to run that makes a given iteration good or bad. If you are collecting data during a day when an exceptionally good or bad result was obtained in the real world, the granular information obtained should still give you the more common results in the majority of cases. This is a very subtle thing that took me a while to understand. And, as I’ve explained elsewhere, there are a lot of complicated things a practitioner needs to understand to be able to do this work well. The teams I worked on had a lot of highly experienced programmers and analysts and we had some royal battles over what things meant and how things should have been done. In the end I think we ended up with a better understanding and some really solid tools and results.

Posted in Simulation | Leave a comment

Combined Survey Results (late March 2019)

The additional survey results from yesterday are included in the combined results here.

List at least five steps you take during a typical business analysis effort.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance
  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability
  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings
  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance
  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders ensersy requirements
  8. other notes
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps
    • no standard
  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope
  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing
  1. planning
  2. elicitation
  3. requirements
  4. specification writing
  5. QA
  6. UAT
  1. identify the problem
  1. studying subject matter
  2. planning
  3. elicitation
  4. functional specification writing
  5. documentation
  1. identify stakeholders
  2. assess working approach (Waterfall, Agile, Hybrid)
  3. determine current state of requirements and maturity of project vision
  4. interview stakeholders
  5. write and validate requirements
  1. problem definition
  2. value definition
  3. decomposition
  4. dependency analysis
  5. solution assessment
  1. process mapping
  2. stakeholder interviews
  3. write use cases
  4. document requirements
  5. research
  1. listen – to stakeholders and customers
  2. analyze – documents, data, etc. to understand things further
  3. repeat back what I’m hearing to make sure I’m understanding correctly
  4. synthesize – the details
  5. document – as needed (e.g., Visio diagrams, PowerPoint decks, Word, tools, etc.)
  6. solution
  7. help with implementing
  8. assess and improve – if/as needed
  1. understand the problem
  2. understand the environment
  3. gather the requirements
  4. align with IT on design
  5. test
  6. train
  7. deploy
  8. follow-up
  1. watch how it is currently done
  2. listen to clients’ pain points
  3. define goals of project
  1. critical path tasks
  2. pros/cons of tasks
  3. impacts
  4. risks
  5. goals
  1. discovery – high level
  2. analysis / evaluation
  3. presentation of options
  4. requirements gathering
  5. epic / feature / story definition
  6. prioritization
  1. who is driving the requirements?
  2. focus on what is needed for project
  3. who is going to use the product?
  1. elicit requirements
  2. hold focus groups
  3. create mock-ups
  4. test
  5. write user stories
  1. analyze
  2. document process
  3. identify waste (Lean)
  4. communicate
  5. document plan / changes
  1. meeting
  2. documentation
  3. strategy
  4. execution plan
  5. reporting plan
  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream) – how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • A description of “weird” usually goes along with a particular person I am working with rather than a project. Some people like things done a certain way or they need things handed to them or their ego stroked. I accommodate all kinds of idiosyncrasies so that I can get the project done on time.
  • adjustments in project resources
  • after initial interview, began prototyping and iterated through until agreed upon design
  • built a filter
  • create mock-ups and gather requirements
  • create strategy to hit KPIs
  • data migration
  • data dictionary standardization
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • design sprint
  • design thinking
  • developers and I create requirements as desired
  • did my own user experience testing
  • document requirements after development work has begun
  • documented non-value steps in a process new to me
  • explained project structure to stakeholders
  • For a client who was unable to clearly explain their business processes and where several SMEs had to be consulted to form the whole picture, I drew workflows to identify inputs/outputs, figure out where the gaps in our understanding existed, and identify the common paths and edge cases.
  • got to step 6 (building) and started back at step 1 multiple times
  • guided solutioning
  • identified handoffs between different contractors
  • identify end results
  • interview individuals rather than host meetings
  • investigate vendor-provided code for business process flows
  • iterative development and delivery
  • made timeline promises to customers without stakeholder buy-in/signoff
  • make executive decisions without stakeholder back-and-forth
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved heavy equipment
  • moved servers from one office to another
  • observe people doing un-automated process
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • physically simulate each step of an operational process
  • process development
  • regular status reports to CEO
  • resources and deliverables
  • reverse code engineering
  • review production incident logs
  • showed customer a competitor’s website to get buy-in for change
  • simulation
  • start with techniques from junior team members
  • starting a project without getting agreed funding from various units
  • statistical modeling
  • surveys
  • team up with PM to develop a plan to steer the sponsor in the right direction
  • town halls
  • track progress in PowerPoint because the sponsor insisted on it
  • train the team how to read use case diagrams
  • translating training documents into Portuguese
  • travel to affiliate sites to understand their processes
  • understanding cultural and legal requirements in a foreign country
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings
  • worked with a mechanic
  • write requirements for what had been developed

Name three software tools you use most.

  • Excel (27)
  • Visio (18)
  • Jira (17)
  • Word (15)
  • Confluence (8)
  • Outlook (7)
  • PowerPoint (7)
  • SharePoint (6)
  • Azure DevOps (5)
  • Google Docs (4)
  • MS Team Foundation Server (4)
  • email (3)
  • MS Teams (3)
  • Draw.io (2)
  • MS Dynamics (2)
  • MS Office (2)
  • MS Visual Studio (2)
  • Notepad (2)
  • OneNote (2)
  • Siebel (2)
  • Slack (2)
  • SQL Server (2)
  • Version One (2)
  • Adobe Reader (1)
  • all MS products (1)
  • ARC / Knowledge Center(?) (Client Internal Tests) (1)
  • Balsamiq (1)
  • Basecamp (1)
  • Blueprint (1)
  • Bullhorn (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Enbevu(?) (Mainframe) (1)
  • Enterprise Architect (1)
  • Gephi (dependency graphing) (1)
  • Google Calendar (1)
  • Google Drawings (1)
  • illustration / design program for diagrams (1)
  • iRise (1)
  • Kingsway Soft (1)
  • Lucid Chart (1)
  • LucidChart (1)
  • Miro Real-Time Board (1)
  • MS Office tools (1)
  • MS Project (1)
  • MS Word developer tools (1)
  • NUnit (1)
  • Pendo (1)
  • Power BI (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • RoboHelp (1)
  • Scribe (1)
  • Scrumhow (?) (1)
  • Skype (1)
  • SnagIt (1)
  • SQL (1)
  • Tableau (1)
  • Visible Analyst (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • interviews (4)
  • communication (3)
  • brainstorming (2)
  • meetings (2)
  • process mapping (2)
  • prototyping (2)
  • relationship building (2)
  • surveys (2)
  • wireframing (2)
  • “play package” (1)
  • 1-on-1 meetings to elicit requirements (1)
  • active listening (1)
  • analysis (1)
  • analyze audience (1)
  • apply knowledge of psychology to figure out how to approach the various personalities (1)
  • business process analysis (1)
  • business process modeling (1)
  • calculator (1)
  • change management (1)
  • charting on whiteboard (1)
  • coffees with customers (1)
  • coffees with teams (1)
  • collaboration (1)
  • conference calls (1)
  • conflict resolution and team building (1)
  • costing out the requests (1)
  • critical questioning (1)
  • critical questioning (ask why five times), funnel questioning (1)
  • data analysis (1)
  • data modeling (1)
  • decomposition (1)
  • design thinking (1)
  • develop scenarios (1)
  • development efforts (1)
  • diagramming/modeling (1)
  • document analysis (1)
  • documentation (1)
  • documenting notes/decisions (1)
  • drinking (1)
  • elicitation (1)
  • expectation level setting (1)
  • face-to-face technique (1)
  • facilitation (1)
  • fishbone diagram (1)
  • Five Whys (1)
  • focus groups (1)
  • handwritten note-taking (1)
  • hermeneutics / interpretation of text (1)
  • impact analysis (1)
  • individual meetings (1)
  • informal planning poker (1)
  • initial mockups / sketches (1)
  • interview (1)
  • interview end user (1)
  • interview stakeholders (1)
  • interview users (1)
  • interviewing (1)
  • JAD sessions (Joint Application Development Sessions) (1)
  • job shadowing (1)
  • listening (1)
  • lists (1)
  • meeting facilitation (prepare an agenda, define goals, manage time wisely, ensure notes are taken and action items documented) (1)
  • mind mapping (1)
  • notes (1)
  • note-taking (1)
  • observation (1)
  • organize (1)
  • paper (1)
  • paper easels (1)
  • pen and paper (1)
  • phone calls and face-to-face meetings (1)
  • Post-It notes (For any type of planning or breaking down of a subject, I use different colored Post-Its, writing with a Sharpie, on the wall. This allows me to physically see an idea from any distance. I can also move and categorize at will. When done, take a picture.) (1)
  • prioritization (MoSCoW) (1)
  • process decomposition (1)
  • process design (1)
  • process flow diagrams (1)
  • process modeling (1)
  • product vision canvas (1)
  • prototyping (can be on paper) (1)
  • recognize what are objects (nouns) and actions (verbs) (1)
  • requirements elicitation (1)
  • requirements meetings (1)
  • requirements verification and validation (1)
  • requirements workshop (1)
  • responsibility x collaboration using index cards (1)
  • rewards (food, certificates) (1)
  • Scrum Ceremonies (1)
  • Scrums (1)
  • shadowing (1)
  • SIPOC (1)
  • sketching (1)
  • spreadsheets (1)
  • stakeholder analysis (1)
  • stakeholder engagement (1)
  • stakeholder engagement – visioning to execution and post-assessment (1)
  • stakeholder interviews (1)
  • swim lanes (1)
  • taking / getting feedback (1)
  • taking notes (1)
  • test application (1)
  • training needs analysis (1)
  • use paper models / process mapping (1)
  • user group sessions (1)
  • user stories (1)
  • visual modeling (1)
  • walking through client process (1)
  • whiteboard diagrams (1)
  • whiteboard workflows (1)
  • whiteboarding (1)
  • whiteboards (1)
  • workflows (1)
  • working out (1)
  • workshops (1)

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • add enhancements to work flow app
  • adding feature toggles for beta testing
  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual form with a workflow
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate and ease reporting (new tool)
  • automate highly administrative, easily repeatable processes which have wide reach
  • automate manual process
  • automate new process
  • automate risk and issue requirements
  • automate the contract management process
  • automate the process of return goods authorizations
  • automate workflow
  • automate workflows
  • automation
  • block or restore delivery service to areas affected by disasters
  • bring foreign locations into a global system
  • build out end user-owned applications into IT managed services
  • business process architecture
  • clear bottlenecks
  • consolidate master data
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • data change/update
  • data migration
  • design processes
  • develop a new process to audit projects in flight
  • develop an interface between two systems
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • develop new software
  • document current inquiry management process
  • enhance current screens
  • enhance system performance
  • establish standards for DevOps
  • establish vision for various automation
  • I work for teams implementing Dynamics CRM worldwide. I specialize in data migration and integration.
  • implement a data interface with two systems
  • implement new software solution
  • implement software for a new client
  • implement vendor software with customizations
  • improve a business process
  • improve system usability
  • improve the usage of internal and external data
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • integrate a new application with current systems/vendors
  • maintain the MD Product Evaluation List (online)
  • map geographical data
  • merge multiple applications
  • migrate to a new system
  • move manual Excel reports online
  • new functionality
  • process data faster
  • process HR data and store records
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide business recommendations
  • provide new functionality
  • recover fuel-related cost fluctuations
  • redesign
  • redesign a system process to match current business needs
  • reduce technical debt
  • re-engineer per actual user requirements
  • reimplement solution using newer technology
  • replace current analysis tool with new one
  • replace legacy system
  • replace manual tools with applications
  • replatform legacy system
  • rewrite / redesign screens
  • simplify / redesign process
  • simplify returns for retailer and customer
  • standardize / simplify a process or interface
  • system integration
  • system integration / database syncing
  • system performance improvements
  • system-to-system integration
  • technical strategy for product
  • transform the customer experience (inside and outside)
  • UI optimization
  • update a feature on mobile app
  • update the e-commerce portion of a website to accept credit and debit cards
Posted in Tools and methods | Leave a comment

A Simulationist’s Framework for Business Analysis: Round Five

Today I gave this talk at the Project Summit – Business Analyst World conference in Orlando. The slides I used for this presentation are here. The most recent version of the slides, which includes links to detailed discussions of more topics, is at:

http://rpchurchill.com/presentations/SimFrameForBA/index.html

In this version I further developed my concept of the Unified Theory of Business Analysis. I also collected more survey responses, the results of which are reported below.

List at least five steps you take during a typical business analysis effort.

  1. requirements gathering
  2. delivery expectations
  3. user experience work with customer
  4. process mapping
  5. system and user testing
  6. system interaction (upstream and downstream) – how does a change affect my process?
  7. understanding stakeholders
  1. stakeholder elicitation
  2. brainstorming
  3. requirements analysis
  4. wireframing
  5. process / flow diagrams
  1. current state analysis
  2. future state
  3. gap analysis
  4. requirements gathering
  5. success metrics
  1. interview users
  2. gather requirements
  3. document business rules
  4. business process flow
  5. mock-ups
  1. UX design review
  2. requirements gathering
  3. vision gathering / understanding
  1. requirements elicitation
  2. gap analysis
  1. shadow users
  2. follow up to verify understanding of business and need
  3. mockups, high-level design concept
  4. present mockup, design concept
  5. create and maintain stories and acceptance criteria
  1. brainstorming
  2. external stakeholder feedback
  3. internal stakeholder feedback
  4. break down epics
  5. user stories
  6. building
  1. stakeholder analysis
  2. elicitation activity plan
  3. requirements tracing
  4. prototyping
  5. document analysis
  1. research
  2. requirements analysis
  3. state chart diagram
  4. execution plan
  5. reporting plan

List some steps you took in a weird or non-standard project.

  • data migration
  • did my own user experience testing
  • got to step 6 (building) and started back at step 1 multiple times
  • mapped a process flow on a meeting room wall and had developers stick up arrows and process boxes like I would create in Visio to get engagement and consensus
  • moved servers from one office to another
  • process development
  • showed customer a competitor’s website to get buy-in for change

Name three software tools you use most.

  • Visio (4)
  • Excel (3)
  • Jira (3)
  • MS Teams (3)
  • PowerPoint (3)
  • Draw.io (2)
  • Word (2)
  • Balsamiq (1)
  • Google Docs (1)
  • iRise (1)
  • Lucid Chart (1)
  • Miro Real-Time Board (1)
  • MS Office (1)
  • Pendo (1)
  • Siebel (1)
  • Slack (1)
  • Visual Studio (1)
  • Visual Studio Team Server (1)
  • Vocera EVA (1)

Name three non-software techniques you use most.

  • brainstorming
  • brainstorming
  • business process modeling
  • charting on whiteboard
  • document analysis
  • interview
  • interviews
  • job shadowing
  • mind mapping
  • note-taking
  • paper easels
  • product vision canvas
  • requirements elicitation
  • requirements workshop
  • SIPOC
  • surveys
  • taking / getting feedback
  • walking through client process
  • whiteboards
  • workshops

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • adding feature toggles for beta testing
  • automate manual process
  • automate risk and issue requirements
  • enhance current screens
  • new functionality
  • product for new customer
  • prototype mobile app for BI and requirements
  • provide new functionality
  • replace legacy system
  • rewrite / redesign screens
  • system performance improvements
  • system-to-system integration
  • UI optimization
Posted in Tools and methods | Leave a comment