A Simulationist’s Framework for Business Analysis: Combined Survey Results

These results are combined from versions of this talk given in Pittsburgh, DC, and Baltimore.

List at least five steps you take during a typical business analysis effort.

I would list the steps I take in conducting an analysis project as follows.

Project Planning
Intended Use (Identify or Receive Business Needs)
Assumptions, Capabilities, and Risks and Impacts
Conceptual Model (As-Is State)
Data Sources
Requirements (To-Be State: Abstract)
      –Functional (What it Does)
      –Non-Functional (What it Is, plus Maintenance and Governance)
Design (To-Be State: Detailed)
Implementation
Test
      –Operation and Usability (Verification)
      –Outputs (Validation)
Acceptance (Accreditation)
Project Closeout

As I point out in the presentation, the knowledge areas in the BABOK and the ideas they contain map roughly to these steps, though the steps are necessarily a bit generalized. The audience members whose surveys I collected reported that they follow the same rough procedures, in whole or in part; they just tend to use different language in many cases.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance
  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability
  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings
  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance
  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders ensersy(?) requirements
  8. other notes
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps
    • no standard
  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope
  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing

List some steps you took in a weird or non-standard project.

I would classify these as specific activities that fall into place in the normal framework; they are only listed as non-standard because the individuals reporting them hadn’t done them often enough to see that. I’ve done almost all of these things at one time or another. There is also the possibility that people listed these things as non-standard simply because they were asked to.

As the respondents were of many ages and levels of experience, it makes me wonder how people come to be business analysts. It seems to me that most people transition to the practice from other functions, though I’ve met people who began their careers doing it. Certainly the IIBA is making allowances for training beginning professionals in the practice. I am somewhere in between. I began as an engineer and software developer who just happened to analyze systems using the same techniques. Most of the systems were fluid or thermodynamic systems, but there was an interval where I worked for a company that did business process reengineering using a document imaging capability.

One other item that comes up in this list is the need to iron out problems with individuals and managers, which sometimes involves working around them. If there’s anything more common than needing to iron out problems with people, I don’t know what it is. That comes up in every profession and every relationship. I can report that fulfilling my professional duties, like all other duties, has gotten easier as I’ve learned to work with people more cooperatively, smoothly, and supportively.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • Adjustments in project resources
  • after initial interview, began prototyping and iterated through until agreed upon design
  • create mock-ups and gather requirements
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • Developers and I create requirements as desired
  • documented non-value steps in a process new to me
  • explained project structure to stakeholders
  • guided solutioning
  • identified handoffs between different contractors
  • interview individuals rather than host meetings
  • iterative development and delivery
  • Made timeline promises to customers without stakeholder buy-in/signoff
  • make executive decisions without stakeholder back-and-forth
  • observe people doing un-automated process
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • physically simulate each step of an operational process
  • Regular status reports to CEO
  • simulation
  • starting a project without getting agreed funding from various units
  • statistical modeling
  • surveys
  • town halls
  • Travel to affiliate sites to understand their processes
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings
  • Write requirements for what had been developed

Name three software tools you use most.

The frequency of reported tool use matches closely with my own experience. Excel is such a generalized and approachable tool that people tend to use it for almost everything; it’s no wonder BAs explicitly report using it more often than anything else. Word and PowerPoint are almost as ubiquitous, and like SharePoint, Outlook, and similar tools they organize and enhance communication and mutual understanding. Jira and Confluence are often used to manage requirements and work products in a standard relational tool, and they come up more and more as shops adopt Agile methods in general and Scrum and its cousins in particular.

Specialized tools like SQL and R come up less often than I might have expected, but we’re working with a small sample size, and there may be cases where audience members use them, just not as often as the other tools they reported. Several specialized development tools were listed, showing that there is an overlap between analysis and development skills.

  • Excel (15)
  • Word (11)
  • Visio (8)
  • Jira (7)
  • Confluence (5)
  • SharePoint (4)
  • PowerPoint (3)
  • MS Outlook (2)
  • Notepad (2)
  • SQL Server (2)
  • Adobe Reader (1)
  • all MS products (1)
  • ARC / Knowledge Center(?) (Client Internal Tests) (1)
  • Azure (1)
  • Basecamp (1)
  • Blueprint (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Email (1)
  • e-mail (1)
  • Enbevu(?) (Mainframe) (1)
  • Enterprise Architect (1)
  • Google Docs (1)
  • Google Drawings (1)
  • illustration / design program for diagrams (1)
  • LucidChart (1)
  • MS Office (1)
  • MS Project (1)
  • MS Visual Studio (1)
  • MS Word developer tools (1)
  • NUnit (1)
  • OneNote (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • RoboHelp (1)
  • Scrumhow (?) (1)
  • SnagIt (1)
  • SQL (1)
  • Tableau (1)
  • Team Foundation Server (1)
  • Visible Analyst (1)
  • Visual Studio (MC) (1)

Name three non-software techniques you use most.

It’s a bit more difficult to group and count these. The ideas of interviewing and meeting come up more than anything else, though almost never in exactly the same language. Various forms of diagramming, modeling, and decomposition come up often as well, and those are tools I emphasize in my own practice and in the slides. It might make a fun group exercise to sort these into categories, though I’m sure the results would not be surprising.

  • communication (3)
  • meetings (2)
  • prototyping (2)
  • Scrum Ceremonies (2)
  • “play package” (1)
  • 1-on-1 meetings to elicit requirements (1)
  • active listening (1)
  • analysis (1)
  • analyze audience (1)
  • apply knowledge of psychology to figure out how to approach the various personalities (1)
  • business process analysis (1)
  • calculator (1)
  • conflict resolution and team building (1)
  • costing out the requests (1)
  • data modeling (1)
  • decomposition (1)
  • develop scenarios (1)
  • diagramming/modeling (1)
  • documentation (1)
  • elicitation (1)
  • expectation level setting (1)
  • facilitation (1)
  • fishbone Diagram (1)
  • five Whys (1)
  • handwritten note-taking (1)
  • hermeneutics / interpretation of text (1)
  • impact analysis (1)
  • individual meetings (1)
  • initial mockups / sketches (1)
  • interview end user (1)
  • interview stakeholders (1)
  • interview users (1)
  • interviews (1)
  • JAD sessions (Joint Application Development Sessions) (1)
  • listening (1)
  • lists (1)
  • Notes (1)
  • organize (1)
  • paper (1)
  • pen and paper (1)
  • phone calls and face-to-face meetings (1)
  • process decomposition (1)
  • process flow diagrams (1)
  • process mapping (1)
  • process Modeling (1)
  • recognize what are objects (nouns) and actions (verbs) (1)
  • requirements meetings (1)
  • responsibility vs. collaboration using index cards (1)
  • rewards (food, certificates) (1)
  • shadowing (1)
  • Spreadsheets (1)
  • surveys (1)
  • swim lanes (1)
  • taking notes (1)
  • test application (1)
  • training needs analysis (1)
  • use paper models / process mapping (1)
  • user group sessions (1)
  • user stories (1)
  • whiteboard diagrams (1)
  • whiteboard workflows (1)
  • wireframing (1)
  • workflows (1)

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

We unsurprisingly find our goals to be, roughly: automate, create, change, develop, improve, process, redesign/re-engineer, replace, simplify, update. Ha ha, it’s almost like we’re talking about business analysis and process improvement!

  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate and ease reporting (new tool)
  • automate new process
  • automate the contract management process
  • automation
  • block or restore delivery service to areas affected by disasters
  • clear bottlenecks
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • data change/update
  • data migration
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • document current inquiry management process
  • enhance system performance
  • implement new software solution
  • improve a business process
  • improve system usability
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • maintain the MD Product Evaluation List (online)
  • map geographical data
  • move manual Excel reports online
  • process data faster
  • process HR data and store records
  • provide business recommendations
  • recover fuel-related cost fluctuations
  • redesign
  • reduce technical debt
  • re-engineer per actual user requirements
  • reimplement solution using newer technology
  • replace current analysis tool with new one
  • “replatform” legacy system (?)
  • simplify returns for retailer and customer
  • system integration
  • system integration / database syncing
  • update a feature on mobile app

A Simulationist’s Framework for Business Analysis: Round Three

I was finally able to give this talk to the IIBA’s Baltimore Chapter. The results of the newest survey are below, and the results from the initial talks are here and here. The link to the current version of the presentation follows directly.

https://www.rpchurchill.com/presentations/SimFrameForBA/index.html

Here are the latest survey results.

List at least five steps you take during a typical business analysis effort.

  1. stakeholder analysis
  2. requirements gathering
  3. requirements analysis
  4. requirements management – storage and updates
  5. communication – requirements and meetings
  1. analyze evidence
  2. design application
  3. develop prototype
  4. implement product
  5. evaluate product
  6. train users
  7. upgrade functionality
  1. read material from previous similar projects
  2. talk to sponsors
  3. web search on topic
  4. play with current system
  5. ask questions
  6. draw BPMs
  7. write use cases
  1. document current process
  2. identify users
  3. meet with users; interview
  4. review current documentation
  5. present proposed solution or iteration
  1. meeting with stakeholders
  2. outline scope
  3. research
  4. write requirements
  5. meet and verify with developers
  6. test in development and production
  7. outreach and maintenance with stakeholders
  1. As-Is analysis (current state)
  2. write lightweight business case
  3. negotiate with stakeholders
  4. write user stories
  5. User Acceptance Testing
  6. cry myself to sleep 🙂
  1. initiation
  2. elicitation
  3. discussion
  4. design / user stories / use cases
  5. sign-off
  6. sprints
  7. testing / QA
  8. user acceptance testing

List some steps you took in a weird or non-standard project.

  • documented non-value steps in a process new to me
  • guided solutioning
  • identified handoffs between different contractors
  • iterative development and delivery
  • make executive decisions without stakeholder back-and-forth
  • personally evaluate how committed management was to what they said they wanted
  • phased delivery / subject areas
  • starting a project without getting agreed funding from various units
  • work around manager who was afraid of change – had to continually demonstrate the product, ease of use, and savings

Name three software tools you use most.

  • Excel (4)
  • Jira (3)
  • Word (3)
  • Confluence (2)
  • PowerPoint (2)
  • e-mail (1)
  • Google Docs (1)
  • Google Drawings (1)
  • MS Word developer tools (1)
  • RoboHelp (1)
  • SharePoint (1)
  • SnagIt (1)
  • Visio (1)

Name three non-software techniques you use most.

  • analysis
  • analyze audience
  • apply knowledge of psychology to figure out how to approach the various personalities
  • communication
  • expectation level setting
  • JAD sessions (Joint Application Development Sessions)
  • meetings
  • phone calls and face-to-face meetings
  • “play package”
  • process flow diagrams
  • prototyping
  • test application
  • training needs analysis
  • user stories
  • whiteboard diagrams
  • wireframing
  • workflows

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

  • automate and ease reporting (new tool)
  • automate the contract management process
  • automation
  • block or restore delivery service to areas affected by disasters
  • create a “how-to” manual for training condo board members
  • create a means to store and manage condo documentation
  • create a reporting mechanism for healthcare enrollments
  • develop data warehouse
  • develop effort tracking process
  • develop new functionality
  • document current inquiry management process
  • maintain the MD Product Evaluation List (online)
  • move manual Excel reports online
  • process HR data and store records
  • recover fuel-related cost fluctuations
  • redesign
  • reduce technical debt
  • re-engineer per actual user requirements
  • replace current analysis tool with new one
  • simplify returns for retailer and customer

A compilation of results from all three surveys will be posted shortly.


Structured Thought: Problem Solving

On Monday I attended a Meetup at the IIBA’s Pittsburgh Chapter for a presentation about structured problem solving given by a gentleman named Greg Acton.

The presentation included an introduction of about twenty minutes, a small group exercise for another twenty minutes, and a follow-on group discussion for a further twenty minutes, all of which were highly engaging. Mr. Acton added another twenty minutes or so sharing an interesting story about how the analysis technique he described worked on a different problem at a well-known organization.

I enjoyed the entire affair because it lines up so closely with my own experiences and body of knowledge. To begin, Mr. Acton and I have a number of shared or parallel experiences. We both spent some early years in the military (he on Aegis Missile cruisers, I in Air Defense Artillery, where I worked with several different missile systems; I also supported several Navy programs over the years and was at one point offered a job building maintenance training simulations of Aegis cruisers, which for various reasons I declined). We both trained as Six Sigma Black Belts. He plied his Six Sigma training at Amazon in the early 2000s. I made it through several rounds of programming tests and phone interviews with Amazon in the late 90s, with a goal of programming large-scale optimizations of product distribution and warehouse pre-positioning involving 60,000 variables at a pop, before deciding that 70 hours a week working on Unix systems in Seattle was less appealing than other options I had at the time (which themselves sometimes involved huge commitments of time). Finally, we both attended Carnegie Mellon University (he in computer science followed by an MBA, I in mechanical engineering with a bunch of computing courses).

One of the courses I enjoyed most during my time there was called Analysis, Synthesis, and Evaluation (ASE; it’s no longer offered under that name but the ideas remain in force all through the university’s verbiage), which was all about problem solving and creative thinking. One of the difficult things for me to assimilate at that time was the idea that many of the parameters of a problem could be estimated just by thinking about them. I was stuck on the concept of having to go look and measure things. Having looked at and measured countless things over the years I have come to appreciate the idea of getting an order-of-magnitude hack at a problem without a vast amount of moving around.

The example I’ll offer is one our course instructor gave, which was estimating how much water a pendulum-driven bilge pump might have to remove from a boat during the course of a day. Figuring one wave-driven pump cycle every few seconds and a volume of water measuring twenty feet by five feet by four inches (give or take in any direction), you might end up with 57,600 cubic inches of water. Hmmm. Given 86,400 seconds per day we might need a bilge pump to remove on the order of two cubic inches per cycle. That seems like a lot. If we assume ten feet by two feet by three inches we get something like half a cubic inch per cycle. That could work for smaller boats and for less leakage. When the professor worked it out in class he suggested the amount of water moved per cycle was roughly equivalent to spitting one time. That seemed tractable for a simple, inexpensive, unpowered device for small boats.
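
The arithmetic is simple enough to sketch in a few lines of code. Here is a minimal JavaScript version of the estimate, assuming a three-second wave cycle to stand in for the “every few seconds” above; every input is a rough guess, not a measurement:

```javascript
// Order-of-magnitude bilge pump estimate; all inputs are rough guesses.
const cubicInches = (lengthFt, widthFt, depthIn) =>
  lengthFt * 12 * widthFt * 12 * depthIn;

const secondsPerDay = 86400;
const secondsPerCycle = 3; // one wave-driven pump cycle every few seconds

// Larger boat, more leakage: 20 ft x 5 ft x 4 in of water per day.
const large = cubicInches(20, 5, 4); // 57,600 cubic inches
console.log(large / (secondsPerDay / secondsPerCycle)); // 2 cubic inches/cycle

// Smaller boat, less leakage: 10 ft x 2 ft x 3 in per day.
const small = cubicInches(10, 2, 3); // 8,640 cubic inches
console.log(small / (secondsPerDay / secondsPerCycle)); // 0.3 cubic inches/cycle
```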

One thing we didn’t get to do in that course, which I very much looked forward to, was participate in the bridge building exercise. The activity illustrates that engineers are essentially economists who do physics: engineering decisions are driven by a constrained optimization process. The goal is to build a bridge that spans a specified distance (about 24 or 30 inches) and holds a flat platform/roadway. Weights are then hung from the roadway. Points are scored for the amount of weight supported, up to 35 pounds, and for the inverse weight of the bridge. That is, the lighter the bridge, the more points scored. Structures were typically made out of combinations of balsa wood, fishing line, cardboard, and glue. I saw older classmates do this before my junior year but for whatever reason my class never did. (I remain somewhat cranky that I spent four years doing random differential equations without ever feeling like I was actually doing anything, but I guess that’s why I enjoyed my computing courses and the hands-on projects I did with my fraternity during those years.)

I heard that a similar contest in earlier years involved building a contraption to help an egg survive being dropped from the nearby Panther Hollow Bridge. (We jokingly referred to this as the Panther Hollow Drop Test and applied it to many items in a humorous vein.) Points in this context were awarded increasingly with decreasing weight and decreasing fall time. A parachute-based device would gain points for being light but lose points for falling slowly. A flexible, water-containing device would fall quickly but be heavy. You get the idea…

One of the techniques described in the ASE class was to brainstorm every possible variable that could have an effect on the operation of a system or process. We tended to list things in groups which were plotted at right angles (imagine a chessboard in two dimensions or a Rubik’s Cube in three) to define combinations of variables we’d have to consider, which brings us back to the presentation.

We were asked to analyze why certain classes of Lyft drivers achieved slightly different outcomes. (I’m going to be deliberately vague about the details.) The specific method Mr. Acton described involved starting with one or more root factors and then branching off through more and more sub-factors, listing possible outcomes of each (in this case each factor could reasonably be divided into only two cases). In our group I suggested listing every possible factor that could affect a trip (or fare) and a driver’s willingness to volunteer for it. The speaker described a left-to-right breakdown of the problem (one to many) but allowed that a right-to-left approach could also be valid (start with many and organize). That turns out to be a common approach, and brainstorming is, of course, a standard BABOK technique. We knew about surge pricing, where prices increase to incentivize drivers to meet demand in more difficult situations, so we concentrated on listing factors which different drivers would approach differently. We didn’t get our list diagrammed out the way we were shown but we had a good conversation. And, obviously, one can only expect to get so far in twenty minutes.
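
The “chessboard” framing lends itself to a quick sketch. Here is a toy JavaScript enumeration of binary factor combinations of the kind our group was listing; the factor names are invented for illustration, not the ones from the exercise:

```javascript
// Toy sketch: enumerate every combination of binary factors, per the
// chessboard/Rubik's Cube idea above. Factor names are invented.
const factors = {
  timeOfDay: ['peak', 'off-peak'],
  tripLength: ['short', 'long'],
  surgePricing: ['active', 'inactive'],
};

// Build the cross-product of all factor values.
const combos = Object.entries(factors).reduce(
  (acc, [name, values]) =>
    acc.flatMap((combo) => values.map((v) => ({ ...combo, [name]: v }))),
  [{}]
);

console.log(combos.length); // 8 combinations to examine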

Next came the open discussion, where each of the six groups described what they had come up with. It was interesting that more or less every group came up with at least some unique factors to consider in analyzing the problem. When Mr. Acton described the ultimate findings they made sense. The findings also noted that the system was “working as designed” and so there wasn’t necessarily a “problem.” The kinds of analyses each group was doing could ultimately have led them to the correct answer, though I can’t remember offhand if any of the groups had listed either of the two factors that ultimately drove the variation that inspired the analysis in the first place.

On the theory that if all you have is a hammer everything looks like a nail I was reminded all through the evening of a number of techniques I’ve used and how they could also be applied successfully.

The first was called Force Field Analysis, which I learned about from a Total Quality Management (TQM) consultant in the mid-90s. I don’t know if that could have solved the whole problem, but I was reminded of it during the small group session as we were thinking about how different factors affected different types of drivers.

As a simulationist I naturally spent a lot of time thinking both about how I might build a simulation of the Lyft ecosystem that could generate statistics to illuminate the problem and about how I would conduct the discovery and data collection needed to build it. The latter activity alone might well allow me (or anyone) to identify the factors that turned out to be decisive.

I was finally reminded of the idea of analyzing equations to see the effects of modifying every possible term. Back in the day I wrote a tool to calculate pressure drops through sections of piping systems of the type found in paper mills and nuclear power plants. I had to do a bit of digging in old code and documentation and a bit of web searching to recreate the crime 🙂 but the basic equation is shown below.
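
For reference, the standard form of the Darcy-Weisbach equation for the pressure drop across a straight run of pipe is:

$$ \Delta p = \lambda \cdot \frac{L}{D} \cdot \frac{\rho v^2}{2} $$

where L is the length of the run, D is the pipe diameter, ρ is the fluid density, and v is the flow velocity.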

That’s not all there is to it: the Darcy-Weisbach friction coefficient (represented by the lambda character) itself needs to be calculated (and is in my code). One method for doing so is linked from the source page. I’d give you what I had in my engineering notes from December of 1989 but that particular file is somewhere in storage. Anyway, the point is that you can see what happens if you change things. The length term is in the numerator, so it’s clear that pressure drops will be larger in longer runs of pipe. The diameter of the pipe is in the denominator, which indicates that a larger (wider) pipe will incur a lower pressure drop. The density term is in the numerator, which means that the more mass you need to move, the more pressure you need. The velocity term is also in the numerator, and it is squared, which means that you would need four times the pressure to move the fluid at double the speed. The coefficient is not constant, and varies with factors like the roughness of the inside of the pipe (which itself may be a function of its age) and the viscosity of the fluid being pushed (molasses is more viscous than water or steam and thus requires more pressure to push).

This kind of analysis is great when the terms are well understood and verified through experiment and long practice. When you’re trying to write your own equations, however, you can run into trouble.

Consider the effects of several factors on the price of a good in a market. The best economic analysts know you can’t write an accurate equation to describe this, but as a thought experiment we can write something like:
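
The original equation image doesn’t reproduce here, but per the discussion below the thought-experiment form is:

$$ \text{price of good} \;\propto\; \frac{\text{supply of money} \times \text{demand for good}}{\text{demand for money} \times \text{supply of good}} $$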

This isn’t a “real” equation in any sense but it can be a useful tool for understanding the factors that may affect prices. If the supply of money or demand for a good increase, those terms being in the numerator, the price of the good should, in theory, increase. If the demand for money or the supply of a good increase then the price of the good would tend to decrease. Like everything in economics it’s way more complicated but remember it’s a thought experiment, a tool.

There are a few other things we can do with this rough tool. We can look at all of the terms individually and see what factors affect the values of each. Economic theorists define the supply of money in many different ways: the Federal Reserve publishes a whole range of values for different formulations of the money supply, and other economists define even more. When it comes to demand for money, how does one even assign a numeric value to it? The same applies to the demand for a good. How do substitution effects figure in? What special situations occur? Do you measure the supply of a good by what’s available to consumers, what’s sitting in warehouses (or in DeBeers’ diamond safes being purposefully held off the market), or what’s planned in terms of production capacity or reserves in the ground?

Analysts often have trouble defining concepts like inflation, and many different formulations and definitions exist. As citizens and observers we feel in our gut that it’s a comparison between the supply and demand of money vs. the supply and demand of goods on a macro scale, and we might be able to consider the problem in terms of the above equation. The use of such an equation is problematic but again we can do some useful thought exercises. If the money supply goes down then we would usually expect that prices would go down (we’d call this negative inflation or deflation). So far, so good, right? But what if the demand for money goes down even more? This would (again, in theory) cause prices to rise. This would be counterintuitive to many economists and many other observers.

Looking at the problem from a different angle we could ask whether trying to come up with a single index value for inflation across an entire economy even has meaning. Consider the following graph of price changes in different sectors over a recent twenty year period.

If prices in different sectors are changing at such different rates then what does a single, aggregate index number mean? Are the sector indices even meaningful given that they are also based on a myriad of individual prices?

A really good monetary theorist would account for all of these terms but some prominent economists appear to omit terms, like the demand for money. How would that affect an analysis?

Returning to our work as system, process, or business analysts we see that we have to try very hard to ensure we identify every relevant factor. To do this we may have to identify every possible factor and explicitly determine whether it’s relevant. I like to invoke this cover photo from an interesting book to illustrate the point.

This image cleverly demonstrates why it’s important to look at problems from all angles. The many techniques listed in the BABOK represent a diffuse way to look at problems from many angles, while something like Unified Modeling Language (UML) is a more organized way.

Mr. Acton also referred to some prickly problems that involved large numbers of interrelated values, and he used the term cross-product to describe that interrelatedness. I don’t remember the exact details or context but it made me remember a series of mechanisms in a large scale maintenance logistics model I worked with for nearly five years while supporting Navy and Marine Corps aviation activities. A number of very smart people analyzed, pondered, modified, and argued over the model the entire time with a near-religious fervor (which I loved, by the way) and we ended up making some fairly substantive changes during the time I was there. One change was based on an understanding I came to of the context of how a complicated cross-product actually generated outcomes in the heart of the model. Again, I’m not sure we were thinking about the same things but we’re clearly both thinking of complex, interrelated systems and effects.

The speaker described a single technique but also explained that he works with an organization that employs and teaches a wide range of problem-solving techniques that are all geared toward the practice of peeling problems apart so they can be properly analyzed and effectively understood and improved. I imagine that the entire body of knowledge would be worthwhile to explore.


Of Pencils, the Division of Labor, Building Teams, and Knowing What the Heck Is Going On

The conversation at the watering hole after this evening’s CharmCityJS Meetup was interesting as usual. In between talking with and encouraging some of the younger attendees and speakers, I spoke with a senior developer about a contract he was finishing up in which his ability to generate an effective outcome was severely compromised by the dysfunction and lack of communication in the customer’s organization.

He described how one senior manager worried only about contracts and money while another worried only about lists of work items, and there was no communication between them. He also described how no individual or group seemed to be thinking about the architecture as a whole, which included seven different legacy systems talking to one database with no real intermediation or controls. The individual developers kind of did their own thing, and changes made without considering their effects from a higher level often broke other parts of the system. The contractor couldn’t get much in the way of requirements, didn’t have time to map things out and bring order to the system so he could make meaningful headway on his assignments, and tried without success to communicate the need for some sort of top-down organization and communication. He talked about how he did his best to mentor the individual assigned to act as ScrumMaster, even though that individual didn’t have the experience or insight to fill the role effectively. At least he asked for help and tried to take advantage of the guidance offered.

I observed that this kind of thing comes up often in organizations in general and in software efforts in particular. A few disconnects are survivable (e.g., the managers of funds might not share all information with line managers and architects) but too many can only mean trouble.

We also discussed some issues we’ve encountered in hiring in the tech industry (a topic I also discussed with one of this evening’s speakers, with a follow-up planned for next week). We observed that the existence of automated search tools gives people the illusion that they can find individuals with long lists of specific skills just because it’s possible to search for them across large numbers of candidates. This illusion leads people to ask for both long lists of skills and extensive experience in many or all of those skills. One need only review discussions on LinkedIn or Quora to get an idea of how this is working out. It’s not that there isn’t a real problem with finding qualified people (there are plenty of job seekers who will, shall we say, pad their qualifications), but what’s missing from many search tools is a sense of context, and an appreciation that certain skills really are emphasized over others when screening candidates. I get a lot of contacts based on misinterpretations of what’s in my various profiles. I describe things in more detail on my website, but even that material awaits further clarification.

The topics inspired me to refer my interlocutor to two resources that seem to say different things.

One was a famous essay in economics about the division of labor in society that illustrates a valuable insight. The essay is called I, Pencil and was written by a gentleman named Leonard Read in 1958. The piece is written in the first person from the point of view of a simple lead pencil with an eraser attached with a metal band. Picture a standard, yellow, number two, wooden, non-mechanical pencil that can be found all over the world for a minimal cost. What makes the pencil so interesting is the incredible amount of knowledge and number of skills it takes to identify, procure, process, and arrange the assembly of the parts into a single, unified whole. What seems a simple, commonplace object is actually so complex that no one person can ever hope to know enough to produce even a single one from scratch on their own. It takes the cooperation of large numbers of people with specialized training and interests to make it happen. Someone must oversee the final assembly in an organized way, but no one can hope to know enough to do it all.

The other resource came from something I learned at an IIBA Meetup in Pittsburgh last June (2017). I described what I learned in this blog post, which involved building teams using members who know something about many things but who know a lot about a few things. More importantly, a team should have a member whose greatest interest and focus is in each of nine different components needed to complete a full Systems Development Life Cycle (SDLC). Again, someone has to be in charge to coordinate the activities, but no one can or should be expected to do it all. I described in the blog post linked just above that my goal is to be an analyst who solves business problems, and therefore my interests and passions lead me to concentrate on the nine areas Ms. Hardy described with the following priorities:

I’m no longer trying to be an expert programmer, although I can and do continue to learn new software, tools, and techniques all the time. I’m not as interested in the development tools and deployment and test chains, though I appreciate their importance and try to ensure they are properly looked after by other specialists on the team. I don’t concentrate so much on security because that’s a specialty all its own, though again I want to know it’s being addressed by a qualified team member. I care about the business value and the user’s control of the situation, and I care that all logical and informational aspects of the process are identified, documented, agreed to, included, verified, and validated. I’ve done that across countless product, project, and program cycles over several decades. I care that the needs are understood and agreed to by the customers, stakeholders, and team members. I work to ensure that everything is mapped, documented, tracked, and communicated in a way that everyone understands. I use pictures and diagrams wherever possible, because a picture is worth a thousand words.

The upshot is that one person can’t do it all, but on a small scale things don’t just magically come together. On a large scale an economy will coordinate itself using the price system without top-down oversight. On a human or organizational or project scale there has to be conscious coordination and review. People with an identified range of specific skills need to be able to cooperate in an environment of mutual respect and support. People need to be trained and incentivized to continue learning on their own.

I’ve seen parts of this done well and I’ve seen it done poorly. My goal is always to ensure it’s done well, but doing so requires clear communication, time, and realistic expectations. If you’ve got time to fail or time to do it over, you’ve got time to do it right in the first place.

Try to do it right in the first place. Ask me for help if you need it. 🙂


Bye Bye Black Belt, It Was Nice Knowin’ Ya…

My Lean Six Sigma Black Belt certification from Villanova University rolled off yesterday, and I’ve been pondering whether or not to pursue a replacement. If I do I’d get it through ASQ (American Society for Quality) and it would only involve studying again and passing the exam. When I was studying for the Villanova cert I did a practice exam for the ASQ one and passed it, but I’d want to read one of the ASQ’s big study guides to make sure I was comfortable with some of their question styles.

I did the original study and certification with Villanova because of the project requirement for certification. ASQ required documentation of two formal Six Sigma-type projects, which I didn’t think I could provide. Villanova’s process involved a mock project as part of completing the Black Belt training course (the Green Belt and Lean courses were also required). Not knowing the subject matter as well back then, I went with what I knew I could get verified.

Now, however, I recognize that quite a lot of the work I’ve done over the years has had Six Sigma components, and has even involved Six Sigma methodologies. Even more of my work has involved Lean techniques, so I’m guessing I should have exactly zero problem getting some of that work described, attested to by project champions, and accepted by ASQ. Passing the exam will be nothing but elbow grease.

I also found that Villanova’s process for recertification involved continuing to take a bunch of expensive classes rather than a more rational process of continuing study and participation or reexamination like ASQ and, ohhhh… everyone else uses.

When I read about the requirements I saw that I’m right near the end of an application window and there won’t be time to get everything done. If I proceed I’ll do so in the next window, after I give the final IIBA presentation I have scheduled in Baltimore on February 13th.


Cross-Browser Compatibility: My Website Animation, Part 2

Digging into yesterday’s problem some more, I found references to a procedure called font boosting, and a number of ways to potentially control it or turn it off. Font boosting makes text larger by default on certain mobile devices in certain situations.

I put together a simple test page I could use to easily see the effects of different changes and clearly verify which CSS was in effect.

The suggestions from this page didn’t seem to work. The main suggestion from this page is to set max-height or min-height parameters to very large or very small values (e.g., 999999px or 1px, respectively) as a way to short-circuit the font boosting.

This suggestion didn’t appear to work, either. The remote debugger shows the 16px font size and the -webkit-text-size-adjust lines are properly in effect.

I considered writing some JavaScript to test for Android and then apply a multiplier to make the relevant text items smaller, but a review of the sizing commands I use shows that any multiplier I might apply would tend to be inconsistent. It would merely override whatever multiplier was already in place. The workaround for that would be to set the relative text size of every text element using an individual font-size command, and then apply a universal multiplier in em or percent to do a universal scaling. That would work, but seems to represent a nuclear option. I’d really like to find a better solution. (An even more nuclear option, fusion instead of fission, perhaps, is to punt the opening animation altogether.)
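
For concreteness, the detect-and-multiply idea mentioned above might look like the following minimal sketch; the 0.8 factor is a placeholder guess, not a measured value, and as noted it would fight with the em multipliers already in place:

```javascript
// Sketch of the considered workaround: detect Android and shrink the intro
// text by a compensating factor. The 0.8 multiplier is a placeholder.
if (/Android/i.test(navigator.userAgent)) {
  document.querySelectorAll('.introPanel *').forEach((el) => {
    const px = parseFloat(getComputedStyle(el).fontSize); // computed size in px
    el.style.fontSize = px * 0.8 + 'px';
  });
}
```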

Another thing that confuses this whole effort is the fact that users can set the default size of text larger or smaller than the standard, which would probably obviate this whole discussion anyway. Hmm, maybe the nuclear option is the way to go. This is particularly true in light of the fact that I’ve been thinking of redoing the landing page into more of a copywritten story format, rather than being mostly a rehash of the Roles page. We shall see.


Cross-Browser Compatibility: My Website Animation, Part 1

The introductory animation on my website has been a source of annoyance for me for some time. It never scaled properly on Android devices. It worked fine on every other browser I was able to test (IE, Edge, Firefox, Chrome, Opera, and Safari on Windows and Mac, and iOS and even Chrome on iPhone and iPad). Whenever I looked at it on a friend’s Android phone the text elements being animated all looked askew, as if the parent div was too small or the text items were too large.

Here’s how the first couple pages of the animations look — and should look — on any desktop browser:

Here’s how they look — and again should look — on iOS devices:

Here’s how they get incorrectly rendered on Android:

The only Android device I own is a Kindle Fire 8, and I had to do screen captures on my desktop while remotely debugging the landing page of my website. I suppose I could have done screencaps directly from the Fire and transferred them to my PC (like I did with the iPhone) but I was already using that setup to inspect the size and CSS provenance of elements so that was easier. That investigation showed me that Android wasn’t resizing the parent div but was instead rendering the text too large. The upper left corners of the text elements always end up in the right place, but the letters run too long and the single-line elements often wrap to a second line.

As an aside, I’m going to ignore some of the other relative scaling issues that are going on. Keen observers will note that the graphic elements (the circles) take on different sizes on the three platforms relative to the text in the header (and in the animation). Something you can’t see is that the Android device compresses things so that two columns of articles are displayed below the circles. Only one column is typically displayed at the resolution of the iPhone or a similar width of a desktop browser. That’s because the pixel width of the Android/Fire/Silk browser is greater than the media setting for that page. The graphic and other elements are rendered slightly differently on each platform. A lot of this has to do with a meta setting in the header section of the HTML.

The decisions the different browsers make on those issues are agreeable and I don’t concern myself with them. The scaling of text in the animation section, however, is not agreeable, and that’s what I’m trying to fix.

So why is this happening? My initial reading and experimentation suggests that the Android browser assumes a different default value for font-size than the other browsers. Most browsers assume, I think, 16px for the default font-size, while the Android browser assumes something a bit larger, in the neighborhood of 20px or 24px.

I tried a few things, like setting the font-size to 16px on the html element in the site’s global stylesheet.css file, but that didn’t solve the problem. The text elements in the animation, like all of the other elements, are created using the bcInitElement function, the final parameter of which contains the text string that defines the HTML and inline CSS definition of the element. See the following example (and see here for the history of how this got developed).
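
The code figure doesn’t reproduce here, but a call of this general shape conveys the idea; every parameter except the final one is invented for illustration, and the real bcInitElement signature differs:

```javascript
// Hypothetical call; the real bcInitElement signature is different. The
// point is the final parameter: a text string of HTML and inline CSS that
// defines the element to be animated.
bcInitElement(
  'introLine1', // invented element id
  100, 40,      // invented position/size values
  '<span class="introText" style="font-size:1.2em;">Some animated text</span>'
);
```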

The function creates a div with a specific width that gets moved around during the animation process. The text (or other element, like an image) gets defined and modified in the HTML parameter text. You can see above that I alternately apply different font-size parameters using class definitions and em multiples. Inspecting the text elements in the remote debugger shows that all previous size definitions are overridden so these are the only ones having an effect.

Well, almost the only ones. The .introPanel class applies a font-size multiple of 1.3 to all of the text elements.

If I comment out that multiple of 1.3 the text looks about right on the Android devices (not exactly, but much closer) but looks much smaller than it should be on everything else.

I also tried setting the initial-scale factor in the meta line described above to 1.0 instead of 0.5. That changed the relative scaling of certain things, but not the text items I wanted to change.

I’m out of time for today, but tomorrow I’ll do some organized experiments to see just what’s cascading off of what. If I were doing the same kind of thing using HTML canvas I would have far more information about the text elements rendered, because I can query the pixel width of a run of text directly. When doing HTML/CSS/JavaScript animations there’s a lot more machinery in the way, and that may limit the ultimate placement accuracy I can achieve on all platforms. Oh well, I think I’d hardly be the first person to have to accept a certain amount of drift in presentations like this.


Generic Sensor API

On Wednesday I attended a Meetup hosted by Pittsburgh Code & Supply. This particular event was organized by Brian Kardell (see also here) and was formally titled with the prefix “[Chapters Web Standards]:”, I think to indicate that the talk is part of a series or a larger effort.

The talk itself (see slides here) was delivered remotely from Denmark by Kenneth R. Christiansen, who works for Intel on web standards.

Here is the current working draft standard for the Generic Sensor API.

I drove up to Pittsburgh to see the talk because of my experience working with real-time, real-world systems that included sensors and actuators. I had even run into some of the specific issues discussed when I was experimenting with Three.js and WebGL in preparation for the talk I gave at CharmCityJS. I wrote about my investigations here.

That preamble out of the way, the presentation was really interesting. It was also very dense and delivered very quickly. The speaker demanded a lot of his audience as he made it through all 67 slides in less than an hour. This worked because the audience probably self-selected for interest in the subject and because the slides are posted here.

The talk described efforts to create a standard way of exposing and accessing sensors of various kinds. Many of these are built into the devices themselves (like the three-axis accelerometers built into handheld devices like phones that allow sensing of orientation and other things) but the talk also described how to incorporate sensors built into external devices. One example involved an external Arduino board connected through the Chrome browser’s Bluetooth API as shown here:


I had known about the Bluetooth API but learned there was also a USB API, which also seems to be implemented only on Chrome.

The talk included a number of highly informative graphics. The illustration of how accelerometers work was a classic case of a picture being worth a thousand words. I’d never taken the time to think about how they worked but Ken’s 25th slide led to an immediate “aha!” moment.

Subsequent images showed how gyroscopes and magnetometers work with similar verve.

The most interesting parts of the discussion involved derived and fusion sensors, with the latter being fusions of physical sensors into unified, abstract sensors implemented in code. (See slides 14 and following.) Several examples were given about how fusion is accomplished, including text and code.
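
As a concrete taste of the API, here is a minimal sketch per the draft spec; it only runs in browsers that implement the spec and grant the appropriate permissions:

```javascript
// Construct, subscribe, start: the basic Generic Sensor API pattern.
// Fusion sensors like AbsoluteOrientationSensor follow the same shape.
if ('Accelerometer' in window) {
  const sensor = new Accelerometer({ frequency: 10 }); // readings per second
  sensor.addEventListener('reading', () => {
    console.log(`x=${sensor.x} y=${sensor.y} z=${sensor.z}`);
  });
  sensor.addEventListener('error', (e) => console.error(e.error.name));
  sensor.start();
}
```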

The talk went into security concerns, which are obviously important.

I had also never heard the term “polyfill” before. It refers to code that implements features found in some browsers in other browsers that do not support those features.
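
A trivial (toy) example of the pattern, unrelated to sensors:

```javascript
// Toy polyfill: supply Array.prototype.includes where a browser lacks it.
// (A real polyfill would handle NaN and fromIndex; this is just the shape.)
if (!Array.prototype.includes) {
  Array.prototype.includes = function (item) {
    return this.indexOf(item) !== -1;
  };
}
```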

The Zephyr.js project, which implements a limited, full-stack (Node.js-based) version of JavaScript on small, IOT-type devices (like Arduino boards), was referenced, as well as the Johnny Five project, which does something similar. I’ve been playing around with Arduino kits a little bit and am looking forward to trying these.

I haven’t included a huge amount of text here but have probably set a new record for links. This is an indication that I found the talk hugely satisfying. It should provide a lot of food for thought going forward.


A Simulationist’s Framework for Business Analysis: Round Two

On Tuesday I gave this talk again, this time for the IIBA’s Metro DC Chapter. Here are the results from the updated survey. The results from the first go-round are here.

List at least five steps you take during a typical business analysis effort.

Everyone uses slightly different language and definitely different steps, but most of the items listed are standard techniques or activities described in the BABOK. Remember that these are things that actual practitioners report doing, and that there are no wrong answers.

Some of the BAs report steps through an entire engagement from beginning to end. Other BAs report steps only to a certain point, for example from kickoff to handoff to the developers. Some start off trying to identify requirements and some end there. Some talk about gathering data and some don’t. Some talk about solutions and some don’t.

What do you take away from these descriptions?

  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope

List some steps you took in a weird or non-standard project.

It’s interesting to see what different people consider to be out of the ordinary. Over time they’ll find that there isn’t a single formula for doing things and that many engagements will need to be customized to a greater or lesser degree. This is especially true across different projects, companies, and industries.

I think it’s always a good idea for people involved in any phase of an analysis/modification process to be given some sort of overview of the entire effort. This lets people see where they fit in and builds enthusiasm for participating in something that may have a meaningful impact on what they do. It can be done in kickoff and introduction meetings and through written descriptions distributed to or posted for the relevant individuals.

The most interesting “weird” item to me was the “use a game” entry. I’d love to hear more about that.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • after initial interview, began prototyping and iterated through until agreed upon design
  • create mock-ups and gather requirements
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • explained project structure to stakeholders
  • interview individuals rather than host meetings
  • observe people doing un-automated process
  • physically simulate each step of an operational process
  • simulation
  • statistical modeling
  • surveys
  • town halls
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years

Name three software tools you use most.

The usual suspects show up at the top of the list, and indeed a lot of a BA’s work involves describing findings and tabulating results. There are a lot of tools for communicating, organizing, and sharing information, whether qualitative findings (nouns and verbs discovered during process mapping), quantitative findings (adjectives and adverbs compiled during data collection), graphics (maps, diagrams, charts), or project status (requirements, progress, participants, test results). A few heavy-duty programming tools are listed as well. These seem geared more to efforts involving heavy data analysis, though Excel is surprisingly powerful in the hands of an experienced user, particularly one who also knows programming and error detection.

  • Excel (8)
  • Visio (7)
  • Word (7)
  • Jira (4)
  • Confluence (3)
  • SharePoint (3)
  • MS Outlook (2)
  • Adobe Reader (1)
  • all MS products (1)
  • Azure (1)
  • Basecamp (1)
  • Blueprint (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Enterprise Architect (1)
  • illustration / design program for diagrams (1)
  • LucidChart (1)
  • MS Project (1)
  • MS Visual Studio (1)
  • PowerPoint (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • Scrumhow (?) (1)
  • SQL (1)
  • Tableau (1)
  • Visible Analyst (1)

Name three non-software techniques you use most.

I was surprised that there was so little repetition here. Different forms of interviewing come up most often, and a few ideas are expressed in more than one way, but otherwise the answers rarely overlap. One might expect people to do many of the same things; it’s really a question of how each individual views the work from their own vantage point. “Business process analysis,” for example, is a high-level, abstract concept, while other items are lower-level, detailed techniques. Again, all of these items are valid; this just illustrates how differently people think about doing these sorts of analyses, and why the BABOK is necessarily written in a general way.

  • active listening
  • business process analysis
  • calculator
  • communication
  • conflict resolution and team building
  • costing out the requests
  • data modeling
  • decomposition
  • develop scenarios
  • diagramming/modeling
  • facilitation
  • Five Whys
  • handwritten note-taking
  • hermeneutics / interpretation of text
  • impact analysis
  • individual meetings
  • initial mock-ups / sketches
  • interview end user
  • interview stakeholders
  • interview users
  • interviews
  • listening
  • organize
  • paper
  • pen and paper
  • process decomposition
  • process mapping
  • prototyping
  • requirements meetings
  • rewards (food, certificates)
  • Scrums
  • shadowing
  • surveys
  • swim lanes
  • taking notes
  • use paper models / process mapping
  • user group sessions
  • whiteboard workflows

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

The most common goals listed were automation and improvement, which is to be expected. In fact, nearly every item on the list represents a process improvement of some kind, and driving such improvements is largely the point of what business analysts do.

  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate new process
  • clear bottlenecks
  • data change/update
  • data migration
  • enhance system performance
  • implement new software solution
  • improve a business process
  • improve system usability
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • map geographical data
  • process data faster
  • provide business recommendations
  • reimplement solution using newer technology
  • “replatform” legacy system (?)
  • system integration
  • system integration / database syncing
  • update a feature on mobile app

I’m scheduled to give this presentation in Baltimore in a few weeks, and may have still more opportunities to do so. I’ll repeat and report new survey results after each occasion, and I’ll report the combined results as well.

I’d love to hear any observations you have on these findings and answer any questions you may have.


Applications for Simulation

Simulation can be used for many different purposes, and I wanted to describe a few of them in detail, paying special attention to the ones I’ve actually worked with during my career. Note that these ideas inevitably overlap to some degree.

Design and Sizing: Simulation can be used to evaluate the behavior of a system before it’s built. This allows designers to head off costly mistakes in the design stage rather than having to fix problems identified in a working system. There are two main aspects of a system that will typically be evaluated.

Behavior describes how a system works and how all the components and entities interact. This might not be a big deal for typical or steady-state operations but it can be very important when there are many variations and interactions and when systems are extremely complex. I’ve done this for many different applications, including iteratively calculating the concentration of chemicals in a pulp-making process, analyzing layouts for land border crossings, and examining the queuing, heating, and delay behavior of steel reheat furnaces.

Sizing a system involves ensuring that it can handle the desired throughput. For continuous or fluid-based systems this may involve determining the diameter of pipes and the size of tanks and chests meant to store material as buffers. For a discrete system like a border crossing, there has to be enough space for traffic to move and queue.

The number of parallel operations for certain process stages needs to be determined for all systems. For example, if a pulp mill requires a cleaning stage, the overall flow is 10,000 gallons per minute, and the capacity of each cleaner is only 100 gallons per minute, then you’ll need a bank of at least 100 cleaners. That’s a calculation you can do without simulation, per se, but other situations are more complex.
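
The static version of that calculation is a one-liner. The design margin and availability factor below are invented for illustration; it’s exactly this kind of arithmetic that stops working once arrivals and processing times vary.

```python
import math

design_flow = 10_000      # gal/min through the cleaning stage
unit_capacity = 100       # gal/min per cleaner
margin = 1.15             # made-up 15% design margin
availability = 0.90       # made-up per-unit availability

cleaners = math.ceil(design_flow * margin / (unit_capacity * availability))
print(f"install {cleaners} cleaners")   # 128 with these assumptions
```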

If an inspection process needs to have a waiting period of no longer than thirty minutes, the average inspection time is two minutes (but may vary between 45 seconds and 20 minutes), and there are 30 arrivals per hour, then how many inspection booths do you need? There isn’t actually enough information to know. The design flow in a paper mill can be pinned down, but the arrival rate at a border crossing may vary by time of day, time of year, weather, special events, the state of the economy, and who knows how many other factors. The size of the queue that builds up depends on how long, and by how much, arrivals exceed the inspection rate. That’s not something you can predict in a deterministic way, which is why Monte Carlo techniques are used.

It’s also why performance standards (also known as KPIs or MOEs for Key Performance Indicators or Measures of Effectiveness) are expressed with a degree of uncertainty. The performance standard for a border crossing might actually be set as thirty minutes or less 85% of the time.
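
Here is a minimal Monte Carlo sketch of the booth question, written as a simple multi-server queue. The Poisson arrivals, the clipped lognormal service times, and the twelve-hour day are all my assumptions, since no distributions are given above; the point is just that you pick the booth count by running the day many times and checking what percentage of waits beat the threshold. With one booth the system is saturated (30 arrivals per hour times two minutes each is 100% utilization), so the interesting answers start at two.

```python
import heapq
import random

def simulate_day(booths, hours=12, arrivals_per_hour=30):
    """Return the wait (in minutes) of every arrival over one simulated day."""
    free = [0.0] * booths             # time at which each booth next comes free
    heapq.heapify(free)
    t, end, waits = 0.0, hours * 60.0, []
    while True:
        t += random.expovariate(arrivals_per_hour / 60.0)  # Poisson arrivals
        if t >= end:
            return waits
        # Assumed service model: lognormal, ~2-minute mean, clipped to the
        # 45-second-to-20-minute range mentioned above.
        service = min(max(random.lognormvariate(0.29, 0.9), 0.75), 20.0)
        start = max(t, heapq.heappop(free))   # wait if every booth is busy
        waits.append(start - t)
        heapq.heappush(free, start + service)

random.seed(1)
for booths in (1, 2, 3):
    waits = [w for _ in range(200) for w in simulate_day(booths)]
    pct = 100.0 * sum(w <= 30.0 for w in waits) / len(waits)
    print(f"{booths} booth(s): {pct:4.1f}% of waits were 30 minutes or less")
```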

Operations Research is sometimes also known as tradespace analysis (see the first definition here), in that it attempts to analyze the effect of changing multiple, tightly linked, interacting processes. When I did this for aircraft maintenance logistics we included the effects of reliability, supply quantities and replenishment times, staff levels, scheduled and unscheduled maintenance procedures, and operational tempo. That particular simulation was written in GPSS/H and is said to be the most complex model ever executed in that language.

Real-Time Control systems take actions to ensure that a measured quantity, like the temperature in your house, stays as close as possible to a target or setpoint value, like the setting on your thermostat. In this example we say the system is controlling for temperature and that temperature is the control variable. In most cases the control variable can be measured directly, in which case you just need a feedback and control loop that looks at the value read by a mechanical or electrical sensor. In some cases, though, the control variable cannot be measured directly, in which case the control value or values have to be calculated using a simulation.
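
The directly measurable case is the familiar one. As a toy illustration, here is a bang-bang thermostat loop with a little hysteresis; the plant model and every coefficient in it are invented.

```python
def thermostat_step(temp, setpoint, heating, band=0.5):
    """Bang-bang control with hysteresis: furnace on below setpoint - band,
    off above setpoint + band, otherwise keep the current state."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heating

# Toy plant: the house leaks heat toward 5 C outdoors and gains heat
# while the furnace runs. One loop iteration is one minute.
temp, heating, setpoint = 15.0, False, 20.0
for minute in range(240):
    heating = thermostat_step(temp, setpoint, heating)
    temp += 0.25 if heating else 0.0    # furnace heat input
    temp -= (temp - 5.0) * 0.01         # loss proportional to indoor/outdoor gap
print(f"after 4 hours: {temp:.1f} C, furnace {'on' if heating else 'off'}")
```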

I did this for industrial furnace control systems using both combustion and induction heating. Simulation was necessary chiefly because the temperature inside a piece of metal cannot be measured easily or cost-effectively in a mass-production environment. Instead, the internal and external temperatures through each workpiece were calculated from known information, including the furnace temperature, view factors, the thermal properties of the materials such as conductivity and heat capacity (which themselves change with temperature), the dimensions and mass density of the metal, and radiative heat transfer coefficients. Temperatures were calculated for between seven and 147 points (nodes) along the surface and through the interior of each piece, depending on the geometry of the piece and the furnace. This allowed calculation of both the average temperature of a piece and its differential temperature (highest minus lowest). The system might be set to heat the steel to 2250 degrees Fahrenheit on average with a maximum differential temperature of 20 degrees, so that each piece was known to be thoroughly heated inside and out before being sent to the rolling mill.
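
The production models were far richer than anything that fits here, but a deliberately tiny 1-D sketch shows the shape of the calculation: a slab of steel resolved into a handful of nodes, radiative heating at the two exposed faces, conduction inside, and a soak criterion on both average and differential temperature. Every property below is a constant I picked for illustration; the targets roughly correspond to the 2250 degree / 20 degree figures above, converted to kelvins.

```python
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2-K^4
k, rho, cp, eps = 40.0, 7800.0, 600.0, 0.8  # steel-ish properties, held constant
thickness, nodes = 0.10, 7         # 0.10 m slab resolved into 7 nodes
dx = thickness / (nodes - 1)
alpha = k / (rho * cp)
dt = 0.2 * dx * dx / alpha         # comfortably stable explicit time step

T = [300.0] * nodes                # piece starts near room temperature (K)
T_furnace = 1600.0                 # furnace temperature (K)

t = 0.0
while max(T) - min(T) > 11.0 or sum(T) / nodes < 1505.0:
    Tn = T[:]
    for i in range(1, nodes - 1):  # interior nodes: conduction only
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2.0*T[i] + T[i+1])
    for i, j in ((0, 1), (nodes - 1, nodes - 2)):   # the two exposed faces
        q = eps * SIGMA * (T_furnace**4 - T[i]**4) + k * (T[j] - T[i]) / dx
        Tn[i] = T[i] + q * dt / (rho * cp * dx / 2.0)  # half-cell energy balance
    T, t = Tn, t + dt

print(f"soak took {t/60:.0f} min: avg {sum(T)/nodes:.0f} K, "
      f"diff {max(T) - min(T):.1f} K")
```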

Training using simulation comes in many forms.

Operator training involves interacting with a piece of equipment or a system like an airplane or a nuclear power plant. I trained on two different kinds of air defense artillery simulators in the Army (the Roland and the British Tracked Rapier). More interestingly, I researched, designed, and implemented thermohydraulic models for full-scope nuclear power plant training simulators for the Westinghouse Nuclear Simulator Division. These involved building a full-scale mockup of every panel, screen, button, dial, light, switch, alarm, recorder, and meter in the plant control room. Instead of being connected to a live plant, the control room was connected to a simulation of every piece of equipment that affected, or was affected by, anything the operators could see or touch. The simulation included the operation of equipment like valves, pumps, sensors, and control and safety circuits; fluid models that simulated flow, heat transfer, and state changes; and electrical models that simulated power, generators, and bus conditions.

Participatory training involves moving through an environment, often with other participants. One company I worked with built evacuation simulations which were later modified to be incorporated into multi-player training systems for event security and emergency response. I defined the system and behavior characteristics that needed to be included and designed the screen controls that allowed users to set and modify the parameters. I also wrote real-time control and communication modules to allow our systems to communicate and integrate with partner systems in a distributed environment.

Risk Analysis can be performed using simulations combined with Monte Carlo techniques. This provides a range of results across multiple runs rather than a single point result, and allows analysis of how often certain events occur relative to a desired threshold, expressed as a percentage. I’ve done this as part of the aircraft maintenance logistics simulation described above.

Economic Analysis may be carried out by adding cost factors to all relevant activities, multiplying them by the number of occurrences, and totaling everything up. Note that economic effects can only be calculated for processes that can truly be quantified. Human action in unbounded activities can never be accurately quantified, both because humans have an effectively infinite number of choices and because, even if all possible activities could be identified, the data could never be collected; simulation of unbounded economies and actors is therefore not possible. Simulating the cost of a defined and limited activity like an inspection or manufacturing process is possible because the possible actions are limited, definable, and collectable. I built this feature directly into the system I created for building simulations of medical practices.
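
The mechanics are simple enough to show in a few lines; the activities, counts, and unit costs below are entirely made up.

```python
# Hypothetical cost roll-up for a small medical-practice process model:
# each activity's unit cost times its occurrence count, then a total.
unit_costs = {"check-in": 4.50, "triage": 12.00, "exam": 35.00, "billing": 6.25}
occurrences = {"check-in": 1800, "triage": 1650, "exam": 1500, "billing": 1800}

total = 0.0
for activity, cost in unit_costs.items():
    line = cost * occurrences[activity]
    total += line
    print(f"{activity:>8}: {occurrences[activity]:>5} x ${cost:>6.2f} = ${line:>10,.2f}")
print(f"   total: ${total:>10,.2f}")
```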

Interestingly, cost data can be hard to acquire. This is understandable in the case of external cost data but less so from other departments within the same organization. Government departments are notorious for protecting their “rice bowls.” Employee costs are another sensitive area. They can be coded or blinded in some way, for example by dividing all amounts by a set factor so relative costs may be discerned but not absolute costs. Spreadsheets containing occurrence counts with costs left blank can be provided to customers or managers to fill out and analyze on their own.

Impact Analysis involves the assessment of changes in outcomes resulting from changes to inputs or parameters. Many of the simulations I’ve worked with have been used in this way.

Process Improvement (including BPR) is based on assessing the impacts of changes that make a process better in terms of throughput, loss, error, accuracy, cost, resource usage, time, or capability.

Entertainment is a fairly obvious use. Think movies and video games.

Sales can also be driven by simulations, particularly for demonstrating benefits. Simulations can also show how things work visually, which can be impressive to people in certain situations.

Funny story: One company did a public demo of a 3D model of a border crossing. It was a nice model that included some random trees and buildings around the perimeter of the property for effect. Some of the notional buildings were houses that weren’t intended to be particularly accurate as far as design or placement. A lady viewing the demo said the whole thing was probably wrong because her house wasn’t the right color. She wouldn’t let it go.

You never know what some people will think is important.
