Generic Sensor API

On Wednesday I attended a Meetup hosted by Pittsburgh Code & Supply. This particular event was hosted by Brian Kardell (see also here) and was formally titled with the prefix “[Chapters Web Standards]:”, I think to indicate that the talk is part of a series or a larger effort.

The talk itself (see slides here) was delivered remotely from Denmark by Kenneth R. Christiansen, who works for Intel on web standards.

Here is the current working draft standard for the Generic Sensor API.

I drove up to Pittsburgh to see the talk because of my experience working with real-time, real-world systems that included sensors and actuators. I had even run into some of the specific issues discussed when I was experimenting with Three.js and WebGL in preparation for the talk I gave at CharmCityJS. I wrote about my investigations here.

That preamble out of the way, the presentation was really interesting. It was also very dense and delivered very quickly. The speaker demanded a lot of his audience as he made it through all 67 slides in less than an hour. This worked because the audience probably self-selected for interest in the subject and because the slides are posted here.

The talk described efforts to create a standard way of exposing and accessing sensors of various kinds. Many of these are built into the devices themselves (like the three-axis accelerometers built into handheld devices like phones that allow sensing of orientation and other things), but the talk also described how to incorporate sensors built into external devices. One example involved an external Arduino board connected through the Chrome browser’s Bluetooth API as shown here:


I had known about the Bluetooth API but learned there was also a USB API, which also seems to be implemented only on Chrome.
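Access in the browser follows a constructor-plus-events pattern defined by the draft standard. Here is a minimal sketch using the `Accelerometer` interface; the feature detection and the 10 Hz frequency are my own illustrative choices, not anything from the talk:

```javascript
// Sketch of the Generic Sensor API's constructor-plus-events pattern.
// Feature-detects so it is a no-op (returns null) where the draft
// standard isn't implemented, including Node.js.
function startAccelerometer(onReading) {
  if (typeof Accelerometer === 'undefined') return null; // unsupported
  const sensor = new Accelerometer({ frequency: 10 });   // 10 readings/sec
  sensor.addEventListener('reading', () =>
    onReading(sensor.x, sensor.y, sensor.z));            // m/s^2 per axis
  sensor.addEventListener('error', (e) => console.error(e.error.name));
  sensor.start();
  return sensor;
}
```

In a supporting browser the callback fires ten times per second with the current acceleration along each axis.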

The talk included a number of highly informative graphics. The illustration of how accelerometers work was a classic case of a picture being worth a thousand words. I’d never taken the time to think about how they worked but Ken’s 25th slide led to an immediate “aha!” moment.

Subsequent images showed how gyroscopes and magnetometers work with similar verve.

The most interesting parts of the discussion involved derived and fusion sensors, with the latter being fusions of physical sensors into unified, abstract sensors implemented in code. (See slides 14 and following.) Several examples were given about how fusion is accomplished, including text and code.
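As a small illustration of a derived sensor (my own sketch, not code from the talk), device tilt can be computed from a single raw accelerometer reading by looking at where the gravity vector falls:

```javascript
// Derived-sensor sketch: estimate pitch and roll from one raw
// accelerometer reading by comparing the gravity vector's components.
function tiltFromAccel(ax, ay, az) {
  const pitch = Math.atan2(-ax, Math.hypot(ay, az));
  const roll = Math.atan2(ay, az);
  return { pitch, roll }; // radians
}
const deg = (r) => (r * 180) / Math.PI;
// Device standing on its edge: gravity entirely along -x reads as 90° pitch.
const { pitch } = tiltFromAccel(-9.81, 0, 0);
console.log(deg(pitch)); // 90
```

Fusing this with gyroscope and magnetometer data is what turns noisy raw readings into the stable, abstract orientation sensors the talk described.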

The talk also went into security concerns, which are obviously important.

I had also never heard the term “polyfill” before. It refers to code that implements a feature natively supported in some browsers so that it also works in browsers lacking that support.
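The pattern is simple feature detection followed by a fallback implementation. A classic sketch, using `Array.prototype.includes` purely as an illustrative target:

```javascript
// Polyfill pattern sketch: feature-detect, then patch in a fallback
// only where the native implementation is missing.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (value) {
    return this.indexOf(value) !== -1;
  };
}
console.log([1, 2, 3].includes(2)); // true
```

Code written against the feature then works the same way whether the browser supplied it or the polyfill did.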

The Zephyr.js project, which implements a limited, full-stack (Node.js-based) version of JavaScript on small, IoT-type devices (like Arduino boards), was referenced, as was the Johnny-Five project, which does something similar. I’ve been playing around with Arduino kits a little bit and am looking forward to trying these.

I haven’t included a huge amount of text here but have probably set a new record for links. This is an indication that I found the talk hugely satisfying. It should provide a lot of food for thought going forward.


A Simulationist’s Framework for Business Analysis: Round Two

On Tuesday I gave this talk again, this time for the IIBA’s Metro DC Chapter. Here are the results from the updated survey. The results from the first go-round are here.

List at least five steps you take during a typical business analysis effort.

Everyone uses slightly different language and definitely different steps, but most of the items listed are standard techniques or activities described in the BABOK. Remember that these are things that actual practitioners report doing, and that there are no wrong answers.

Some of the BAs report steps through an entire engagement from beginning to end. Other BAs report steps only to a certain point, for example from kickoff to handoff to the developers. Some start off trying to identify requirements and some end there. Some talk about gathering data and some don’t. Some talk about solutions and some don’t.

What do you take away from these descriptions?

  1. identify stakeholders / Stakeholder Analysis
  2. identify business objectives / goals
  3. identify use cases
  4. specify requirements
  5. interview Stakeholders
  1. project planning
  2. user group sessions
  3. individual meetings
  4. define business objectives
  5. define project scope
  6. prototype / wireframes
  1. identify audience / stakeholders
  2. identify purpose and scope
  3. develop plan
  4. define problem
  5. identify objective
  6. analyze problems / identify alternative solutions
  7. determine solution to go with
  8. design solution
  9. test solution
  1. gathering requirements
  2. assess stakeholder priorities
  3. data pull
  4. data scrub
  5. data analysis
  6. create summary presentation
  1. define objective
  2. research available resources
  3. define a solution
  4. gather its requirements
  5. define requirements
  6. validate and verify requirements
  7. work with developers
  8. coordinate building the solutions
  1. requirements elicitation
  2. requirements analysis
  3. get consensus
  4. organizational architecture assessment
  5. plan BA activities
  6. assist UAT
  7. requirements management
  8. define problem to be solved
  1. understand the business need of the request
  2. understand why the need is important – what is the benefit/value?
  3. identify the stakeholders affected by the request
  4. identify system and process impacts of the change (complexity of the change)
  5. understand the cost of the change
  6. prioritize the request in relation to other requests/needs
  7. elicit business requirements
  8. obtain signoff on business requests / validate requests
  1. understanding requirements
  2. writing user stories
  3. participating in Scrums
  4. testing stories
  1. research
  2. requirements meetings/elicitation
  3. document requirements
  4. requirements approvals
  5. estimation with developers
  6. consult with developers
  7. oversee UAT
  8. oversee business transition
  1. brainstorming
  2. interview project owner(s)
  3. understand current state
  4. understand need / desired state
  5. simulate / shadow
  6. inquire about effort required from technical team
  1. scope, issue determination, planning
  2. define issues
  3. define assumptions
  4. planning
  5. communication
  6. analysis – business and data modeling
  1. gather data
  2. sort
  3. define
  4. organize
  5. examples, good and bad
  1. document analysis
  2. interviews
  3. workshops
  4. BRD walkthroughs
  5. item tracking
  1. ask questions
  2. gather data
  3. clean data
  4. run tests
  5. interpret results
  6. visualize results
  7. provide conclusions
  1. understand current state
  2. understand desired state
  3. gap analysis
  4. understand end user
  5. help customer update desired state/vision
  6. deliver prioritized value iteratively
  1. define goals and objectives
  2. model As-Is
  3. identify gaps/requirements
  4. model To-Be
  5. define business rules
  6. conduct impact analysis
  7. define scope
  8. identify solution / how
  1. interview project sponsor
  2. interview key stakeholders
  3. read relevant information about the issue
  4. form business plan
  5. communicate and get buy-in
  6. goals, objectives, and scope

List some steps you took in a weird or non-standard project.

It’s interesting to see what different people consider to be out of the ordinary. Over time they’ll find that there isn’t a single formula for doing things and that many engagements will need to be customized to a greater or lesser degree. This is especially true across different projects, companies, and industries.

I think it’s always a good idea for people involved in any phase of an analysis/modification process to be given some sort of overview of the entire effort. This allows people to see where they fit in and can build enthusiasm for participating in something that may have a meaningful impact on what they do. This can be done in kickoff and introduction meetings and by written descriptions that are distributed to or posted for the relevant individuals.

The most interesting “weird” item to me was the “use a game” entry. I’d love to hear more about that.

  • Steps:
    1. Why is there a problem? Is there a problem?
    2. What can change? How can I change it?
    3. How to change the process for lasting results
  • after initial interview, began prototyping and iterated through until agreed upon design
  • create mock-ups and gather requirements
  • describing resource needs to the customer so they better understand how much work actually needs to happen and that there isn’t enough staff
  • explained project structure to stakeholders
  • interview individuals rather than host meetings
  • observe people doing un-automated process
  • physically simulate each step of an operational process
  • simulation
  • statistical modeling
  • surveys
  • town halls
  • use a game
  • using a ruler to estimate level of effort to digitize paper contracts in filing cabinets gathered over 40 years

Name three software tools you use most.

The usual suspects show up at the top of the list, and indeed a lot of a BA’s work involves describing findings and tabulating results. There are a lot of tools for communicating, organizing, and sharing information, whether qualitative findings (nouns and verbs discovered during process mapping), quantitative findings (adjectives and adverbs compiled during data collection), graphics (maps, diagrams, charts), or project status (requirements, progress, participants, test results). A few heavy-duty programming tools are listed. These seem more geared to efforts involving heavy data analysis, though Excel is surprisingly powerful in the hands of an experienced user, particularly one who also knows programming and error detection.

  • Excel (8)
  • Visio (7)
  • Word (7)
  • Jira (4)
  • Confluence (3)
  • SharePoint (3)
  • MS Outlook (2)
  • Adobe Reader (1)
  • all MS products (1)
  • Azure (1)
  • Basecamp (1)
  • Blueprint (1)
  • CRM (1)
  • database, spreadsheet, or requirement tool for managing requirements (1)
  • Doors (1)
  • Enterprise Architect (1)
  • illustration / design program for diagrams (1)
  • LucidChart (1)
  • MS Project (1)
  • MS Visual Studio (1)
  • PowerPoint (1)
  • Process 98 (1)
  • Python (1)
  • R (1)
  • requirements repositories, e.g., RRC, RTC (1)
  • Scrumhow (?) (1)
  • SQL (1)
  • Tableau (1)
  • Visible Analyst (1)

Name three non-software techniques you use most.

I was surprised that there was so little repetition here. Different forms of interviewing come up most, and a couple of thoughts are expressed in different ways, but the question asked what non-software techniques were used most. One might expect that people would do many of the same things, but it’s a question of how each individual looks at things from their own point of view. “Business process analysis,” for example, is a high-level, abstract concept, while other items are lower-level, detailed techniques. Again, all of these items are valid; this just illustrates how people think about doing these sorts of analyses differently, and why the BABOK is necessarily written in a general way.

  • active listening
  • business process analysis
  • calculator
  • communication
  • conflict resolution and team building
  • costing out the requests
  • data modeling
  • decomposition
  • develop scenarios
  • diagramming/modeling
  • facilitation
  • Five Whys
  • handwritten note-taking
  • hermeneutics / interpretation of text
  • impact analysis
  • individual meetings
  • initial mock-ups / sketches
  • interview end user
  • interview stakeholders
  • interview users
  • interviews
  • listening
  • organize
  • paper
  • pen and paper
  • process decomposition
  • process mapping
  • prototyping
  • requirements meetings
  • rewards (food, certificates)
  • Scrums
  • shadowing
  • surveys
  • swim lanes
  • taking notes
  • use paper models / process mapping
  • user group sessions
  • whiteboard workflows

Name the goals of a couple of different projects (e.g., automate a manual process, interface to a new client, redesign screens, etc.)

The most common goals listed were automation and improvement, which is to be expected. In fact, pretty much every item on the list represents a process improvement of some kind, which is pretty much the point of what business analysts do.

  • adhere to regulatory requirements
  • adjusting solution to accommodate the needs of a new/different user base
  • automate a manual login/password generation and dissemination to users
  • automate a manual process
  • automate a manual process, reduce time and staff to accomplish a standard organizational function
  • automate a paper-based contract digitization process
  • automate new process
  • clear bottlenecks
  • data change/update
  • data migration
  • enhance system performance
  • implement new software solution
  • improve a business process
  • improve system usability
  • improve user interface
  • include new feature on mobile application
  • increase revenue and market share
  • map geographical data
  • process data faster
  • provide business recommendations
  • reimplement solution using newer technology
  • “replat form” legacy system (?)
  • system integration
  • system integration / database syncing
  • update a feature on mobile app

I’m scheduled to give this presentation in Baltimore in a few weeks, and may have still more opportunities to do so. I’ll repeat and report new survey results after each occasion, and I’ll report the combined results as well.

I’d love to hear any observations you have on these findings and answer any questions you may have.


Applications for Simulation

Simulation can be used for many different purposes, and I wanted to describe a few of them in detail. I pay special attention to the ones I’ve actually worked with during my career. Note that these ideas inevitably overlap to some degree.

Design and Sizing: Simulation can be used to evaluate the behavior of a system before it’s built. This allows designers to head off costly mistakes in the design stage rather than having to fix problems identified in a working system. There are two main aspects of a system that will typically be evaluated.

Behavior describes how a system works and how all the components and entities interact. This might not be a big deal for typical or steady-state operations but it can be very important when there are many variations and interactions and when systems are extremely complex. I’ve done this for many different applications, including iteratively calculating the concentration of chemicals in a pulp-making process, analyzing layouts for land border crossings, and examining the queuing, heating, and delay behavior of steel reheat furnaces.

Sizing a system involves ensuring that it can handle the desired throughput. For continuous or fluid-based systems this may involve determining the diameter of pipes and the size of tanks and chests meant to store material as buffers. For a discrete system like a border crossing there has to be enough space to move and queue.

The number of parallel operations for certain process stages needs to be determined for all systems. For example, if a pulp mill requires a cleaning stage and the overall flow is 10,000 gallons per minute but the capacity of each cleaner is only 100 gallons per minute, then you’ll need a bank of at least 100 cleaners. That’s a calculation you can do without simulation, per se, but other situations are more complex.

If an inspection process needs to have a waiting period of no longer than thirty minutes and the average inspection time is two minutes (but may vary between 45 seconds and 20 minutes) and there are 30 arrivals per hour, then how many inspection booths do you need? There’s not actually enough information to know. The design flow in a paper mill can be known but the arrival rate at a border crossing may vary by time of day, time of year, weather, special events, the state of the economy, and who knows how many other reasons. The size of a queue that builds up over time is based on the number of arrivals exceeding the inspection rate over a period of time. That’s not something you can predict in a deterministic way, which is why Monte Carlo techniques are used.

It’s also why performance standards (also known as KPIs or MOEs for Key Performance Indicators or Measures of Effectiveness) are expressed with a degree of uncertainty. The performance standard for a border crossing might actually be set as thirty minutes or less 85% of the time.
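A toy Monte Carlo version of the booth question above might look like the following sketch. The clamped exponential service time (mean two minutes, bounded at 45 seconds and 20 minutes) and the evenly spaced arrivals are illustrative assumptions, not calibrated data:

```javascript
// Toy Monte Carlo sizing run: one day of arrivals at an inspection
// station, reporting the share of travelers waiting 30 minutes or less.
function simulateDay(booths, arrivalsPerHour = 30, hours = 12) {
  const freeAt = new Array(booths).fill(0); // minute each booth frees up
  let within30 = 0, total = 0;
  for (let t = 0; t < hours * 60; t += 60 / arrivalsPerHour) {
    const b = freeAt.indexOf(Math.min(...freeAt));  // next available booth
    const start = Math.max(t, freeAt[b]);
    // Exponential service, mean 2 min, clamped to [45 sec, 20 min].
    const service = Math.min(20, Math.max(0.75, -2 * Math.log(Math.random())));
    freeAt[b] = start + service;
    total += 1;
    if (start - t <= 30) within30 += 1;
  }
  return within30 / total;
}
console.log(`3 booths: ${(simulateDay(3) * 100).toFixed(1)}% within 30 minutes`);
```

Running this many times for different booth counts yields a distribution of outcomes, which is exactly what a threshold like “thirty minutes or less 85% of the time” requires.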

Operations Research is sometimes also known as tradespace analysis (see the first definition here), in that it attempts to analyze the effect of changing multiple, tightly linked, interacting processes. When I did this for aircraft maintenance logistics we included the effects of reliability, supply quantities and replenishment times, staff levels, scheduled and unscheduled maintenance procedures, and operational tempo. That particular simulation was written in GPSS/H and is said to be the most complex model ever executed in that language.

Real-Time Control systems take actions to ensure that a measured quantity, like the temperature in your house, stays as close as possible to a target or setpoint value, like the setting on your thermostat. In this example we say the system is controlling for temperature and that temperature is the control variable. In most cases the control variable can be measured directly, in which case you just need a feedback and control loop that looks at the value read by a mechanical or electrical sensor. In some cases, though, the control variable cannot be measured directly, in which case the control value or values have to be calculated using a simulation.

I did this for industrial furnace control systems using combustion heating processes and induction heating processes. Simulation was necessary for two reasons in these systems. One is that the temperature inside a piece of metal cannot be measured easily or cost-effectively in a mass-production environment, so the internal and external temperatures through each workpiece were calculated based on known information, including the furnace temperature, view factors, thermal properties of the materials including conductivity and heat capacity (which themselves changed with temperature), dimensions and mass density of the metal, and radiative heat transfer coefficients. The temperature was calculated for between seven and 147 points (nodes) along the surface and through the interior of each piece depending on the geometry of the piece and the furnace. This allows for calculation of both the average temperature of a piece and the differential temperature of a piece (highest minus lowest temperature). The system might be set to heat the steel to 2250 degrees Fahrenheit on average with a maximum differential temperature of 20 degrees. This was done so each piece was known to be thoroughly heated inside and outside before being sent to the rolling mill.
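The nodal idea can be sketched with a toy one-dimensional explicit finite-difference model. The node count, step coefficient, boundary treatment, and temperatures here are illustrative stand-ins for the production calculation:

```javascript
// Toy 1-D explicit finite-difference model of a workpiece heating in a
// furnace: node 0 faces the furnace, the far face is insulated.
function stepSlab(T, furnaceT, a = 0.1) {
  const n = T.length, next = T.slice();
  next[0] = T[0] + a * (furnaceT - T[0]) + a * (T[1] - T[0]); // heated face
  for (let i = 1; i < n - 1; i++) {
    next[i] = T[i] + a * (T[i - 1] - 2 * T[i] + T[i + 1]);    // conduction
  }
  next[n - 1] = T[n - 1] + a * (T[n - 2] - T[n - 1]);         // insulated face
  return next;
}
let T = new Array(7).fill(70);                // seven nodes at ambient, °F
for (let s = 0; s < 2000; s++) T = stepSlab(T, 2250);
const avg = T.reduce((x, y) => x + y, 0) / T.length;
const diff = Math.max(...T) - Math.min(...T); // differential temperature
console.log(`average ${avg.toFixed(0)} °F, differential ${diff.toFixed(1)} °F`);
```

The control decision then compares the computed average and differential temperatures against the setpoints (e.g., 2250 °F average, 20 °F maximum differential) before releasing a piece to the mill.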

Training using simulation comes in many forms.

Operator training involves interacting with a piece of equipment or a system like an airplane or a nuclear power plant. I trained on two different kinds of air defense artillery simulators in the Army (the Roland and the British Tracked Rapier). More interestingly I researched, designed, and implemented thermohydraulic models for full-scope nuclear power plant training simulators for the Westinghouse Nuclear Simulator Division. These involved building a full-scale mockup of every panel, screen, button, dial, light, switch, alarm, recorder, and meter in the plant control room. Instead of being connected to a live plant it was connected to a simulation of every piece of equipment that affected or was affected by an item the operators could see or touch. The simulation included the operation of equipment like valves, pumps, sensors, and control and safety circuits; fluid models that simulated flow, heat transfer, and state changes; and electrical models that simulated power, generators, and bus conditions.

Participatory training involves moving through an environment, often with other participants. One company I worked with built evacuation simulations which were later modified to be incorporated into multi-player training systems for event security and emergency response. I defined the system and behavior characteristics that needed to be included and designed the screen controls that allowed users to set and modify the parameters. I also wrote real-time control and communication modules to allow our systems to communicate and integrate with partner systems in a distributed environment.

Risk Analysis can be performed using simulations combined with Monte Carlo techniques. This provides a range of results across multiple runs rather than a single or point result, and allows analysis of how often certain events occur relative to a desired threshold, expressed as a percentage. I’ve done this as part of the aircraft support logistics simulation I described above.

Economic Analysis may be carried out by adding cost factors to all relevant activities, multiplying them by the number of occurrences, and totaling everything up. Note that economic effects can only be calculated for processes that can truly be quantified. Human action in unbounded activities can never be accurately quantified, both because humans have an infinite number of choices and because it would be impossible to collect data even if all possible activities could be identified, so simulation of unbounded economies and actors is not possible. Simulating the cost of a defined and limited activity like an inspection or manufacturing process is possible because the possible actions are limited, definable, and collectable. I built this feature directly into the system I created for building simulations of medical practices.
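The mechanics of the roll-up are straightforward: multiply each quantifiable activity's occurrence count by a unit cost and total. A sketch with made-up figures:

```javascript
// Cost roll-up sketch: occurrence counts from simulation runs times
// unit costs. All names and figures are made up for illustration.
const activities = [
  { name: 'primary inspection', count: 1200, unitCost: 14.5 },
  { name: 'secondary x-ray', count: 300, unitCost: 42.0 },
];
const totalCost = activities.reduce((sum, a) => sum + a.count * a.unitCost, 0);
console.log(totalCost); // 30000
```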

Interestingly, cost data can be hard to acquire. This is understandable in the case of external cost data but less so from other departments within the same organization. Government departments are notorious for protecting their “rice bowls.” Employee costs are another sensitive area. They can be coded or blinded in some way, for example by dividing all amounts by a set factor so relative costs may be discerned but not absolute costs. Spreadsheets containing occurrence counts with costs left blank can be provided to customers or managers to fill out and analyze on their own.

Impact Analysis involves the assessment of changes in outcomes resulting from changes to inputs or parameters. Many of the simulations I’ve worked with have been used in this way.

Process Improvement (including BPR) is based on assessing the impacts of changes that make a process better in terms of throughput, loss, error, accuracy, cost, resource usage, time, or capability.

Entertainment is a fairly obvious use. Think movies and video games.

Sales can also be driven by simulations, particularly for demonstrating benefits. Simulations can also show how things work visually, which can be impressive to people in certain situations.

Funny story: One company did a public demo of a 3D model of a border crossing. It was a nice model that included some random trees and buildings around the perimeter of the property for effect. Some of the notional buildings were houses that weren’t intended to be particularly accurate as far as design or placement. A lady viewing the demo said the whole thing was probably wrong because her house wasn’t the right color. She wouldn’t let it go.

You never know what some people will think is important.


Architectural Considerations for Simulation

The simulations I’ve written, designed, specified, and utilized have incorporated a number of different features. I found it interesting that I was able to describe them in opposing pairs.

Continuous vs. Discrete-Event

I’ve gone into detail about continuous and discrete-event simulations here and here, among other places, so I won’t go into major detail now. I will say that continuous simulations are based on differential equations integrated stepwise over time. They tend to run using time steps of a constant size. If multiple events occur in a given time interval they are all processed at once during the next execution cycle. Discrete-event simulations process events one-by-one, in time order, and in intervals of any duration. They can also handle wait..until conditions. Hybrid architectures are also possible.
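The core of a discrete-event engine is a time-ordered event queue and a clock that jumps from one event to the next. A minimal sketch (my own, not any particular product's engine):

```javascript
// Minimal discrete-event loop: pop the earliest event, jump the clock
// to its timestamp, run its handler; handlers may schedule new events.
function runSim(initialEvents) {
  const queue = [...initialEvents];
  const log = [];
  while (queue.length) {
    queue.sort((a, b) => a.time - b.time);
    const ev = queue.shift();        // earliest pending event
    const clock = ev.time;           // intervals of any duration
    log.push(`${clock}: ${ev.name}`);
    if (ev.handler) queue.push(...ev.handler(clock));
  }
  return log;
}
const trace = runSim([
  { time: 1, name: 'arrive', handler: (t) => [{ time: t + 2.5, name: 'depart' }] },
]);
console.log(trace); // [ '1: arrive', '3.5: depart' ]
```

Note how the clock advances by 2.5 units in a single hop; a continuous engine would instead have marched through that interval in fixed steps.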

Interactive vs. Fire-And-Forget

Interactive simulations can accept inputs from users or external processes at any time. They can be paused and restarted, and can sometimes be run at multiples or fractions of the base speed. Examples of interactive simulations are training simulators, real-time process control simulations, and video games.

Non-Interactive or Fire-and-Forget simulations typically run at one hundred percent duty cycle for the fastest possible completion. This type of simulation is generally used for design or analysis.

Real-Time vs. Non-Real-Time

Real-Time systems include wait states so they run at the same rate as time in the real world. This means that the code meant to run in a given time interval absolutely must complete within the time allotted for that interval.

When a control system I wrote for a steel furnace in South Carolina started throwing alarms for not keeping up, I used the Performance Monitor program in Windows to determine that a database process running on the same machine would consume all the execution time for a couple of minutes a couple of times a day. This prevented the simulation from running at full speed, which would have caused unwanted calculation and event-handling errors. I ended up being able to install the control code on a different system where it didn’t have to compete for resources.

Non-Real-Time systems can run at any speed. Fire-and-forget systems tend to run at full speed using all available CPU resources. Interactive simulations may run faster or slower than real-time. The computer game SimCity, for example, simulates up to 200 years of game activity in a few tens of hours of game play. Scientific simulations of nano-scale physical events may simulate a few microseconds of real activity over dozens of hours of computing time.

Single-Platform vs. Distributed

Single-Platform simulations run on a single machine and a single CPU. This is typical of single-threaded desktop programs.

Distributed systems run across multiple CPUs or multiple machines. This arrangement incurs communication and synchronization overhead. I’ve written code for several kinds of distributed systems.

The first time I encountered a multi-processor architecture was for the nuclear plant trainers hosted on Gould/Encore SEL 32/8000-9000 series computers. These systems featured four 32-bit CPUs that all addressed (effectively) the same memory.

The next time was for the real-time control systems I wrote for industrial furnaces for the metals industry. They included two different kinds of distributed processing. One involved running multiple processes on a single, time-slicing CPU, that communicated with each other using shared memory. In some ways that was like using the Gould/Encore systems. The other kind of distributed processing involved communicating with numerous other computing systems in the plant, at least one of which was itself a simulation in some cases. This kind of architecture employed many different forms of inter-process communication. The diagram below describes this kind of system, which I implemented on VMS and Windows systems.

I wrote the communication code that integrated an interactive evacuation system with a larger threat detection and mitigation system written by other vendors. It used interface techniques similar to those described above.

I did the same thing for the integration of another evacuation simulation using the HLA or High-Level Architecture protocol. This technique is often used to integrate multiple weapons simulators into a unified battlefield scenario.

Deterministic vs. Stochastic

Deterministic simulations always generate the same outputs given the same inputs and parameters. They are intended to provide point results. The thermal and fluid simulation systems I wrote were all deterministic.

Stochastic simulations incorporate random elements to generate probabilistic results. If numerous iterations are run this type of simulation will generate a distribution of results. Monte Carlo simulations are stochastic.


Verification and Validation: Observations for Simulation and Business Analysis

Verification and validation are complex subjects on their own (and see here and here more specifically for this discussion), and the simplest way I’ve found to describe the difference between them is that verification tests whether a solution works while validation tests whether a solution solves the problem.

Verification is the process of testing whether the system (software or otherwise) works without generating run-time errors, or at least that it’s able to handle and recover from errors. Individual operations can be verified formally by mathematical proof and informally through manual or automated testing. These can include send/receive operations, read/write operations (a subset of send/receive), calculations, branching and call operations, and user interface activities.
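Informal verification of, say, a send/receive pair can be as simple as an automated round-trip check. The `encode`/`decode` functions here are hypothetical stand-ins, not from any system described in this post:

```javascript
// Informal verification sketch: automated round-trip test of a
// send/receive (encode/decode) pair. Hypothetical stand-in functions.
function encode(msg) { return JSON.stringify(msg); }
function decode(buf) { return JSON.parse(buf); }

const msg = { id: 7, reading: 98.6 };
const roundTrip = decode(encode(msg));
if (roundTrip.id !== msg.id || roundTrip.reading !== msg.reading) {
  throw new Error('verification failed: round-trip altered the message');
}
console.log('round-trip verified');
```

A battery of such checks verifies that the operations work; it says nothing about whether the system built from them meets the business need, which is validation's job.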

Validation in general is the process of testing whether the system (again, software or otherwise) is fit for its intended use, whether it meets the business need.

For simulation the idea of validation has two meanings. One is that the simulation itself accurately recreates observed behaviors for known cases. This gives confidence that it will produce accurate behaviors in novel cases. This is a subset of the test for whether the simulation then fulfills its intended use. Potential intended uses of simulations are described in detail here, but examples include design, real-time control, and training.

For business analysis the above discussion applies to computer-based and non-computer-based systems as normal. The BABOK (see also here), however, applies the terms not to the software systems, or systems in general, that business analysts work with, but to requirements. It gives the following definitions (page 134 of the 3rd ed.):

Verify Requirements: ensures that a set of requirements or designs has been developed in enough detail to be usable by a particular stakeholder, is internally consistent, and is of high quality.

Validate Requirements: ensures that a set of requirements or designs delivers business value and supports the organization’s goals and objectives.

These definitions definitely have the same character as those discussed above. They say that verification is about whether the requirements are usable in a way that will get something working, while validation is about whether the working solution provides the intended value.

I would argue that the BABOK’s wording could be more precise along these lines. The biggest problem I have is with the idea of “high quality,” which is nebulous and undefined. Philip B. Crosby, in his book Quality Is Free, asserts that quality means meeting the established requirements. This definition implies that requirements have to be defined in a way that can be tested on an objective, pass/no-pass basis. (Subjective enterprises like art cannot be judged in this way, although their components can be.) I was introduced to this book during my first job in 1989, when the company was introducing the then-current ideas of Total Quality Management. I think these ideas have been subsumed by the ideas of Lean Six Sigma, although the Wikipedia article suggests that its ideas were more directly supplanted by ISO 9000, which a later company I worked for adopted in the late ’90s.

I propose that the BABOK definitions be reworded as follows:

Verify Requirements: ensures that a set of requirements or designs is internally consistent and has been developed in enough detail to support the implementation of a working system.

Validate Requirements: ensures that a set of requirements or designs supports the implementation of a working system that delivers business value and supports the organization’s goals and objectives.

This emphasizes that the requirements and designs need to lead to something concrete, but doesn’t demand that the implementation has been carried out before the verification and validation can be completed.

Posted in Tools and methods

A Simulationist’s Framework for Business Analysis: Round One

Yesterday I gave my newly prepared talk on business analysis at the Pittsburgh IIBA Chapter Meetup, the first of three times I’m scheduled to give it. The presentation, prepared using the Reveal.js framework, is at the link:

http://rpchurchill.com/presentations/SimFrameForBA/

I’m sure the snow limited the turnout but it was still fun to do, and good practice. The talk covers the latest update to my analysis framework, which I’ve described previously here and here. I’ve changed the terminology to refer to the process steps rather than the documents produced during each step. The current listing is:

Project Planning
Intended Use (Identify or Receive Business Needs)
Assumptions, Capabilities, and Risks and Impacts
Conceptual Model (As-Is State)
Data Sources
Requirements (To-Be State: Abstract)
      – Functional (What it Does)
      – Non-Functional (What it Is, plus Maintenance and Governance)
Design (To-Be State: Detailed)
Implementation
Test
      – Operation and Usability (Verification)
      – Outputs (Validation)
Acceptance (Accreditation)
Project Closeout

I tried to get the audience a bit involved, beyond the normal interactions and questions during the talk, by asking them to fill out a brief survey before I started. It asked four things, in one-to-three-word answers:

  • List 5-8 steps you take during a typical project.
  • List some steps you took on a weird or non-standard project.
  • Name three software tools you use most.
  • Name three non-software techniques you use most.

I asked a few audience members to read off their steps in answer to the first question. When I heard their answers I realized I had slightly mis-worded the question, because many of them described the steps of a project and not an analysis. There is admittedly an overlap, but since this is a talk about business or process analysis I was trying to get them to talk about that. I will therefore change the wording of the first survey item to: “List 5-8 steps you take during a typical analysis effort.” I’ll reword the second item in the same way, as well. As I think about it I’m also going to add: “List the goals of some efforts for which you performed analysis.”

I’ll report the findings from later talks when they happen, but here are the responses from the sheets I was able to collect:

List 5-8 steps you take during a typical project.

  1. Requirements Gathering
  2. Initiation
  3. Testing
  4. QA
  5. Feedback
  6. User acceptance

  1. Requirement Elicitation
  2. UX Design
  3. Software Design for Testability

  1. Identify Business Goal
  2. ID Stakeholders
  3. Make sure necessary resources are available
  4. Create Project Schedule
  5. Conduct regular status meetings

  1. Meet with requester to learn needs/wants
  2. List details/wants/needs
  3. Rough draft of Project/proposed solutions
  4. Check in with requester on rough draft
  5. Make edits/adjustments | test
  6. Regularly schedule touch-point meeting
  7. Requirement analysis/design | functional/non-functional
  8. Determine stakeholders | user acceptance

  1. List the stakeholders
  2. Read through all documents available
  3. Create list of questions
  4. Meet regularly with the stakeholders
  5. Meet with developers
  6. Develop scenarios
  7. Ensure stakeholders endorse(?) requirements
  8. Other notes:
    • SMART PM milestones
    • know players
    • feedback
    • analysis steps

List some steps you took on a weird or non-standard project.

  • Made timeline promises to customers without stakeholder buy-in/signoff
  • Regular status reports to CEO
  • Adjustments in project resources
  • Travel to affiliate sites to understand their processes
  • Developers and I create requirements as desired
  • Write requirements for what had been developed

Name three software tools you use most.

  • Excel (3)
  • Notepad (2)
  • SQL Server (2)
  • Email
  • Visual Studio (MC)
  • NUnit
  • Team Foundation Server
  • OneNote
  • Word
  • MS Office
  • ARC / Knowledge Center(?) (Client Internal Tests)
  • Enbevu(?) (Mainframe)

Name three non-software techniques you use most.

  • Elicitation
  • Communication
  • Documentation
  • Recognize what are objects (nouns) and actions (verbs)
  • Responsibility x Collaboration using index cards
  • Scrum Ceremonies
  • 1-on-1 meetings to elicit requirements
  • Spreadsheets
  • Meetings
  • Notes
  • Process Modeling
  • Fishbone Diagram
  • Lists

The point of the survey was to illustrate that different analysts use the same basic techniques but don’t always apply them in the same order or the same way. It therefore makes sense that the BABOK’s concepts are organized and presented in a rather general way. This became clear to me as I was preparing materials for this talk. The BABOK material was really easy for me to learn because I’d essentially seen it all before; the differences in organization didn’t become apparent until I tried to map its content, in terms of its six major knowledge areas, to the steps of my framework. You can see this in the slide titled “BABOK Knowledge Areas vs. Bob’s Framework.” It roughly goes from upper left to lower right, so that’s good, but the devil is in the details.

The biggest confusion is in the middle steps: Conceptual Model, Data Sources, Requirements, and Design, though I also differentiate between primary mappings (with bold, capital X’s) and secondary mappings (with grayed, lower-case x’s). The problem further extends into the Implementation step, which involves elements from all six BABOK knowledge areas to a greater or lesser degree. I ended up creating a slide illustrating the complexity of the Concept-to-Design cycle to show this; it is not intended to convey the same idea as the later slide, which shows how every step should be reviewed with the customer until agreement or acceptance, and how newly identified elements may require the creation of a new source element in the previous step.

Clarification: Conceptual Model through Design

Customer Feedback Cycle

The steps I’ve developed grew out of my specific experience doing simulations of many different kinds. I’m going to write supporting blog posts for many of the slides and add links to them from the slides and the accompanying PDF file. Look for them in the coming weeks.

In the meantime I’m really looking forward to gathering more data during future talks and reporting on it here. Stay tuned!

Posted in Tools and methods

Who’s the Boss?

An interesting subject came up at the post-presentation hangout of this evening’s CharmCityJS Meetup. I was talking with a fellow attendee about the fact that I’d rather be an analyst, requirements engineer, and architect than a full-time coder (not that I don’t want to be deeply involved with said code, even to the point of helping write and automate it) while he had transitioned back to being a full-time developer from being an analyst and architect.

For him the sweet spot was in the implementation, and he never liked the push and pull of his charges and his bosses providing input on and feedback about his designs. He felt that the inputs coming to him from above and below conflicted often enough that he could rarely make everyone happy.

My thought is that I want to be analyzing and engineering requirements to address the needs of the customers (external and internal), and meet the needs of the bosses. My thought about making the developers happy, and this came up in a conversation I had with another insightful individual last week, is based on writing the requirements in such a way that the developers have maximum leeway (subject to company policy, tooling, and so on) to give me what I want in the way they want. The way to do that is to write the requirements at a high enough level of abstraction that they can use a lot of different hows but I can’t help but get my what.

I can comment almost endlessly here. To begin, how does one describe this level of abstraction ahead of time? The answer, of course, is that one can’t, really. You just have to have a feel for it on a case-by-case basis. I do know what you can’t do, though.

At the last Baltimore chapter IIBA meeting someone gave a presentation on a requirements authoring tool. It was a flowcharting program with a lot of automated analysis and testing functionality that could be used to represent a design at any level of abstraction. The speaker worked through the demo at a level of detail that was very close to writing the code. I suppose some organizations may want to work that way (bigger companies may have the mindset that they want to pre-specify as much as possible so each part of the value-added chain has the least leeway and can thus make the fewest mistakes), but that methodology didn’t sit right with me. I’ve never written requirements that way unless I’ve been asked to write out specific algorithms or calculations.

Remember that the analyst/engineer/architect is going to be working with the developers in an iterative, cooperative way so they can both provide their best inputs into the solution. I have some pretty strong ideas about things sometimes but I’m not going to just fling work requests over the transom to the next group and expect my wishes to be fulfilled exactly without interaction. Not only is the interaction better for the product and project at hand in the short term, but it’s better for understanding, camaraderie, respect, education, and — dare I say it — fun, in the longer term. Working this way with customers should be a given, and one always hopes to be able to work with bosses in the same way.

The insight from the conversation was that everyone has a boss. Richard Branson and Bill Gates have bosses. They’re called customers, and they’re ultimately the most demanding bosses of all. The question is about how you like to receive and process your “orders” (from all directions, and again, internally and externally) and how much leeway you have to fulfill them.

The fellow I talked with didn’t like to receive his orders the way they came in his role as an analyst and architect, but he does like receiving them as a developer. I can go either way, given the right style of communication and the right environment, but at this point my passion is to make things easier at the sweet spot where I think I can do the most good. I think serving in the analyst/engineer/architect role allows me to shape the interactions in all directions to provide the most effective, cooperative, productive environment possible. This is based on my experience serving in many roles in the SDLC and as a manager/mentor and subordinate/mentee, and the success I’ve had in identifying, implementing, and delivering effective solutions.

Now I wonder if my companion might have preferred his original role if the communication or environment had been different. I’ll try to remember to ask him next month. Until then I’ll be thinking about how I can create the best communication and environment I can no matter what I’m doing, which has been my true mission for the last few years, if not longer. Getting the technical part right is usually much easier than getting the people part right.

Posted in Management

Service Design

Manning the phone in the battery headquarters late one night in Basic Training, I whiled away some time with a book I found. I read how Colonel Chamberlain’s extensive drilling of his men allowed them to execute a wheel maneuver in pitched combat that resulted in the successful defense of Little Round Top at the Battle of Gettysburg. I also read about Napoleon giving his men campaign awards as a source of motivation, observing, perhaps cynically, that “A soldier will fight long and hard for a bit of colored ribbon” and “Give me enough ribbons to place on the tunics of my soldiers and I can conquer the world.”

What these events have in common is the recognition that individuals respond to incentives and that positive, reward-based incentives lead to better outcomes. Chamberlain and Napoleon were effective leaders and process designers who knew how to instill confidence and satisfaction that would survive under extreme stress. Compared to the horrors of combat, keeping customers happy, informed, incentivized, and engaged while buying a fancy coffee ought to be a piece of cake, right?

Along these lines at this evening’s IIBA Pittsburgh Chapter Meetup, Jean-Marie Sloat discussed a field called Service Design. I’ll link to her presentation when it’s posted but for now I wanted to report that the content was extremely interesting, especially regarding the technique of Customer Journey Maps. It involved many of the concepts I’ve used in my own career as a process analyst, business analyst, and process improvement specialist. My formal training in Project Management, Lean Six Sigma, and Business Analysis (and even Scrum and Agile, which feel like more limited disciplines) overlapped across many of the same ideas, and Service Design seems to do the same thing. Indeed, it is known as being very cross-disciplinary. The disciplines the field sees itself as touching on or including are:

  • Systems Thinking
  • Design Research
  • Business Modeling & Strategy
  • Prototyping
  • Multi-disciplinary (this is more of a description than a field, no?)
  • Design
  • Facilitation
  • Visual Design

The talk was really interesting to me because I could see my concerns, techniques, and emphases underlying everything Ms. Sloat described, and those considerations are doubtless addressed by her and her colleagues in the course of their work, but they were not her main area of concern, at least for the purpose of her talk. Her concentration, beyond the many interesting tools she described (which I plan to write about shortly and which will make this idea really pop), was on the emotions and reactions of participants in a process. She noted that she’s a fairly empathetic person and that listening to and addressing the needs of people is an important part of her m.o.

I usually try to analyze a process by approaching it like a simulation, and for typical processes involving people and businesses this means discrete-event simulation. Whether a simulation is actually going to be built and run is immaterial; the analysis starts by mapping out flows of material and information in a process, including states, decisions, calculations, transformations, routings, and so on. Six Sigma improvements usually involve some combination of reducing variation, improving centering in a range, root cause analysis, and prevention of errors. Lean improvements usually involve a combination of compression, rearrangement, automation, and elimination.
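To make the discrete-event idea concrete, here is a toy sketch of the simplest possible process map: entities arriving at a single-server step, waiting if it is busy, then being served. The arrival times and service duration are invented for illustration; a real analysis would add routings, decisions, and multiple resources.

```python
def single_server_queue(arrivals, service_time):
    """Toy discrete-event model of a single-server process step.

    arrivals: sorted arrival times of entities (customers, parts, etc.).
    Each entity waits if the server is busy, then is served for
    service_time. Returns each entity's departure time.
    """
    free_at = 0.0  # time at which the server next becomes available
    departures = []
    for t in arrivals:
        start = max(t, free_at)      # wait in queue if the server is busy
        free_at = start + service_time
        departures.append(free_at)
    return departures

# Three entities arrive at t=0, 1, and 5; each needs 3 time units of service.
print(single_server_queue([0.0, 1.0, 5.0], 3.0))  # [3.0, 6.0, 9.0]
```

Even a model this small yields the bloodlessly technocratic measures discussed below (wait time, cycle time, utilization), which is exactly why the question of whether they capture participants’ emotions is worth asking.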

It’s easy to imagine such techniques yielding improvements in bloodlessly technocratic measures like cycle time, loss and rework percentage, readiness, and cost savings. The question is whether any of these indicators reflect the emotions of the participants in terms of satisfaction, stress, abandonment of the process, shortcutting the process, or generating positive or negative referrals. The question is also whether these factors are considered when identifying the need for change in the first place.

I mentioned earlier that Service Design approaches the same problems as other groups of techniques but does so from a slightly different viewpoint, which is analogous to the way UML analysis drives you to examine computer systems from multiple viewpoints. If all of the techniques of UML are used you’re at least likely to consider a wide range of factors. This is no guarantee that you won’t do anything wrong, but such frameworks provide an organized way to make sure you’re being thorough. This seems to be a characteristic of many formalized bodies of knowledge. My feeling, after reflecting on tonight’s material, is that traditional techniques do, indeed, consider the emotions and reactions and well-being of process participants both as drivers and output measures of improved operations.

  • My Six Sigma training included the Kano Model, the Likert Scale, and probably others I can’t remember away from my notes and references. These deal specifically with the needs and desires of customers.
  • I’ve certainly designed user interfaces to improve control, understanding, and situational awareness, which results in improved confidence, efficiency, and even safety of users and other parties. This deals with the needs of providers.
  • Establishing measures of effectiveness (MOEs) in terms of maximum wait times is certainly about customer well-being.
  • Asking what people want and need must consider their emotions and needs at various levels (I learned about Maslow’s Hierarchy of Needs when I was in Junior Achievement in high school, so I’ve essentially always been aware of this).
  • Having managers take suggestions from workers is an example of this.
  • Paying people based on production or profits and not just hours worked takes emotions, well-being, and incentives into account pretty directly.
  • A Total Quality Management (TQM) consultant I met in the mid-90s described how Ford Motor Company’s motto, “Quality is job one,” was doubly effective because it not only reassured customers but also motivated Ford’s workers at all levels.

Indeed, most, if not all, process improvements are about making people’s lives better in some way.

We engineers and process analysts may seem dry, removed, technical, and unresponsive, particularly when we’re doing something geeky, complicated, and opaque that involves computers, but we’re ultimately trying to help people. Considering things from the viewpoint of Service Design may be a way to be thorough, to relate to the people we’re trying to help in a way that they “get.” You know. Emotionally.

Posted in Tools and methods

Ship Technology from the Late 1500s

Today I visited a historical exhibit on Roanoke Island in the Outer Banks of North Carolina. The most interesting part of the park was the Elizabeth II, a replica modeled after the late-1500s sailing ships used to bring colonists and explorers to the New World.

People have an idea of how these ships work from movies and touring exhibits. My favorite was always the replica of the HMS Bounty that was constructed in 1960 for the 1962 movie Mutiny on the Bounty, featuring Marlon Brando, Trevor Howard, and Richard Harris. (The possibly more famous Clark Gable version was from 1935.) I visited the replica many times as a child when she was moored at the St. Petersburg Municipal Pier in Florida and later when she visited Annapolis, Maryland shortly before her unfortunate demise off the Outer Banks in Hurricane Sandy.


Loss of the HMS Bounty replica in 2012

The Bounty replica seemed downright spacious compared to the Elizabeth II. I’d always read about how small Christopher Columbus’ ships were and how small the Mayflower was, but I’d never been on replicas that were that size. I’ve seen the USS Constitution in Boston and the USS Constellation in Baltimore, both of which are quite large. The ships I’ve seen in media and in person all had steering wheels. I recently learned that the Bounty replica, while very accurate in its reconstruction, was made twice as large as the original to accommodate the film crews and equipment. It’s easy to call up an image of an intrepid seaman lashing himself to the wheel in a raging storm, high on the raised rear deck, holding forth against wind and rain and wave. (Never mind that the Bounty had only a single deck…)


The Elizabeth II, happily afloat!

The problem was, when I went looking for said wheel on the Elizabeth II, I didn’t find one. There was a raised rear deck, alright, but there wasn’t a wheel up there. It was then that I noticed the large lever set into the floor in a compartment just below the raised, rear deck. I asked the costumed docent about this and was told that the lever used to steer the ship was called a whipstaff and that using wheels to control rudders was a later invention. Well I’ll be dipped in peanut butter and rolled in Corn Flakes! I hadn’t known that and I found it fascinating. It also makes perfect sense. Ship designers would use the simplest possible designs that would work until the vessels got big enough to require more oomph.


Looking toward the bow


Looking toward the stern. Where’s that wheel? Oh… you can see the whipstaff through the opening down below.


Close-up view


Pivot mounting on the deck


View from the opposite side

Like the steam locomotives of a later age, sailing ships were the most complex engineered objects of their time. They were living, breathing entities with stories and “souls” all their own. The energy, investment, study, and thought that went into building and improving them rivals anything going on in high tech now. They were the Internet of their day, and made the world smaller in much the same way.

Posted in Life

Added CBAP Certification Today

Today I sat for and passed the Certified Business Analysis Professional (CBAP) exam issued by the International Institute of Business Analysis (IIBA). I sought this certification simply to communicate my experience throughout my career. I now list this credential as the first item in the list of my certifications. I found the exam easy to prepare for, though the test itself was as annoyingly convoluted as the PMP exam.

The credential requires 7,500 hours of documented business analysis experience over the previous ten years (I could have listed a lot more, plus even more over the prior eighteen years), recommendations from two references (provided by the owners of the last two companies I worked for), and 35 contact hours of formal study with an approved provider (I took part in an online, interactive seminar series provided by Adaptive US), as well as the exam. The seminar provided by Adaptive was fairly well done and required a decent amount of interaction from the students, including needing to give short presentations on the subjects we were studying. I was able to leverage my experiences and resources on my website to demonstrate my experience very clearly.

Adaptive’s customer service was flat-out amazing. I’ve never encountered a company that does all the little things they do with such speed. Examples include paying your membership fees to IIBA and your exam fees, providing feedback, and forwarding information.

Adaptive’s training materials include a number of online question banks. I went through about 300 questions and reviewed the answers after each session. I skimmed through the BABOK Guide (v3) to get a feel for the structure of the material. I spent a few hours putting together a matrix of techniques used in the different knowledge areas as well as my own version of the outline grid. I noticed that Adaptive’s materials could use a bit of editing and cleaning up but, even more interestingly, I think I may have found an inconsistency in how one of the Guidelines and Tools is labeled in different knowledge areas (Requirements Management Tools / Repository in the Requirements Life Cycle Management knowledge area vs. Requirements Life Cycle Management Tools in the Requirements Analysis and Design Definition knowledge area). I plan to write to the IIBA to ask if this needs to be clarified.

Here is the two-sheet Excel file I made up to organize my thoughts. (Ignore the scratchwork to the right side of the first sheet.) I’m not sure the end result was useful per se, but the process of creating it got me used to the structure and vocabulary, which was probably helpful. I made up a similar study grid for the PMP exam in 2009, which is here. That actually proved to be more useful for me. I’ll update it some time after the new PMBOK Guide (Sixth Edition) is released in September.

I used this experience to extend my PMP certification out to 2021 and recently paid to have my Scrum certs extended to 2019. For my next trick I need to find out how to extend my Lean Six Sigma Black Belt certification beyond early 2018.

Posted in Tools and methods