“Where in the Framework Do You Think About How To Add Value?”

I was recently asked this by a very intelligent and insightful individual. My expanded answer follows.

The short answer is that I differentiate between the solution and the engagement in several different ways. The solution, and the analysis performed beforehand that helps to decide whether to even pursue a solution, is where the value is generated. The engagement describes the management environment in which the solution is generated. So, if I’m talking about the management environment, I’m not likely to be talking about the details of the solution itself.

It’s amazing how long you can think about a thing and still have so many unstated assumptions in play. It is clear in my mind that the framework I’ve developed is intended to guide work in any engagement with customers and problems, enhance situational awareness within such engagements, and so on. I often speak and write about the kinds of work that are done within each phase of an engagement, but I had not done so in the conversation where this question was posed to me, so some important context was missing. I’ve also had the feeling that, if people have failed to understand what I’ve been trying to communicate with all this, they may think it is just another form of content-free consultant-y blather. My antipathy towards fluffy consultantspeak is as strong as anyone’s, so let us definitely understand the missing context.

I recently discussed the factors that can drive potential solutions. Many of those factors consider how value is evaluated and generated before and outside of any particular engagement. If those factors are not expected to drive value, through the application of prior experience, a priori reasoning, or other entrepreneurial judgment, then why would anyone even begin an engagement?

It is certainly true that projects fail for various reasons, but projects quite often succeed, too, so let’s look at how that happens.

When I did business process reengineering using FileNet document imaging systems, it was pretty well understood that properly designed and implemented systems could generate substantial cost savings. So although engagements meant to analyze, automate, and reconfigure customer operations were known going in to provide (or at least have a really good chance of generating) benefits — for certain document-heavy classes of business processes — the engagements themselves still needed to work through all the phases in detail to figure out where and how the benefits would be realized for each customer’s specific process, and to figure out the exact value of the benefit.

The specific benefits of solutions like the foregoing may be readily calculated, but it may be far more difficult to assign specific monetary values generated in other situations, for example by the creation and employment of nuclear power plant training simulators. Their use is likely to lead to more efficient and effective operations on an ongoing basis, though those savings would be difficult to identify and quantify. The biggest reason they are used, however, is to prevent catastrophic failures that result in the loss of entire plants and extended losses in the surrounding regions. Economic analysts can (and do) put prices on those things, but the goal is largely to forestall the unthinkable.

Other types of benefits may not be strictly monetary, but may instead provide psychic or quality-of-life benefits. This is probably more likely to be true for consumer products than for capital goods or process improvements. It is also true that the producer of these kinds of products must provide them at a price consumers are willing to pay, because the customers will perform their own, subjective, cost-benefit analysis, whether they do so explicitly or not.

Business analysts, project managers, executives, and individuals in every other role can contribute to analyses of whether projects and product development efforts should be undertaken. However, it’s important to be able to do the necessary costing and accounting that enables determinations of which efforts are worthwhile and which aren’t.

We can also state that doing project and product work more efficiently always drives value. This not only enables a team to deliver outputs at lower cost, and probably higher quality, but those savings may make the difference between whether customers accept those outputs or not.

The initial phase, whether we call it the intended use, problem statement, or something else, is mostly about defining the goals of the engagement and setting up the management mechanisms in preparation for doing the work. I assume that this step is only undertaken if the effort has already been judged to be beneficial.

Value is generated in the conceptual model phase differently based on when it occurs. If it is carried out at or near the beginning to determine the As-Is state for a process improvement, then generating the most thorough and accurate picture of the current state will lead to the best results in later phases. This work includes learning everything possible about extant assumptions and risks. If the conceptual model work is conducted as part of the design phase, when there is no current state and the goal is to build something entirely new, then thorough and accurate information is still important; it just happens in a different way.

The requirements phase is where the envisaged solution is fitted to the specific needs of the customer, and where the needs of the solution and applicable regulations and standards are folded in. The more accurately and completely the needs can be identified, the more value the solution will be able to provide.

Value can be added in the design phase through elements that embody robustness, modularity, efficiency, novelty, and effectiveness in many ways. Elements can involve improved materials, effective rearrangements, the leveraging of new scientific discoveries, updated machines and components, and more.

Improving the efficiency and effectiveness of activities in the implementation phase adds value by consuming less time and fewer resources. The resource savings can be realized both while carrying out the actual work and during the operation of the delivered solution. Alternatively, value may be enhanced by building in ways to generate more outputs with similar inputs. One possible example of this is efficient computer calculations that produce more granular, accurate, and frequent outputs. Another example is improved machining methods that produce tighter tolerances, resulting in tighter seals and reduced friction and wear, which allow an engine to produce more power and less pollution over a longer period of time while requiring less maintenance. It should also be understood that effective deployment is an integral part of the implementation phase.

It occurs to me that the line between requirements, design, and implementation can be rather blurry. Harkening back to the last example, I would ask whether the improved ability to machine surfaces is an element of design or implementation. I’d love to hear your thoughts on this distinction.

Activities in the test (and acceptance) phase add value in at least two ways. One is simply by reducing the time and effort taken to conduct (all necessary) testing. Theoretically this applies to all forms of iterative review and incorporation of feedback embedded in every previous phase, as well. The other value-add comes from ensuring testing is thorough so that the chance of passing deficiencies on to customers is reduced to the greatest degree possible.

Finally, just as evaluation and selection activities before engagements can add value, ongoing review and consideration can add value after the initial engagement ends. This involves thinking in terms of the extended life cycle of a solution, and continually looking to improve the solution and the means of delivering it.

In conclusion, while most of the explicit value is generated by the specific analyses and decisions made before a solution effort is undertaken, during the actual work of generating a solution, and ongoing review and improvement of the solution, the environment in which the work is performed and the framework used to guide it should not be overlooked. The solution and the engagement, which again are often referred to as the product and the process, may be thought of as a pair of scissors. You need both halves working together to get the best results.


Three Layers of Architecture, and Three Dimensions of Iterative Phase Loops

As I’ve been developing my engagement framework over the past few years, I have sometimes struggled to classify the exact phase in which certain activities take place. Or, more to the point, I have sometimes had difficulty contextualizing multiple different activities that may take place in the same phase. Although the framework has held up well against several years of material from the practices of business analysis, project management, Agile and Scrum, and related disciplines, there seemed to be just a small thing missing, a bit of confusion I couldn’t quite resolve. A couple of posts where I wrestled with this are here and here.

Much of this particular confusion was lifted when a gentleman named Kieth Nolen, MBA, CBAP gave a presentation on business architecture to our weekly IIBA certification study group, as part of a professional series we did on different fields and activities business analysts need to be aware of. His presentation may be viewed at or downloaded from the Tampa IIBA chapter’s shared drive here. Look in the IIBATampaBayBABOKStudyGroup directory for the file named BABOKStudyGroup20220802.mp4 (which alas I cannot link to directly).

Mr. Nolen provided a wealth of fascinating and useful information, including this table of perspectives to consider when analyzing or designing a system.

What really clicked for me was his description of a tool called ArchiMate. It enforces certain (necessary) rules on the creation of diagrams expressing business architecture designs. The thing that made it pop, especially given what had preceded it, was its segregation of elements into three major layers. The tool refers to these as the business layer, the application layer, and the technology layer, as shown in the diagram below.

As the oodles of drawings elsewhere on my website will attest, I have spent many years expressing different aspects of architecture, design, and processes from different perspectives. I’ve always had an innate understanding of how to communicate what I needed to, but it’s always helpful to keep reading, listening, and learning so I find more and different takes on materials of interest. The longer you do something, assuming you’re on the right track, the more of what you see should fit into your existing understanding. This makes it all the more interesting when you find something that actually appears to be new. Such findings can lead to important clarifications and breakthroughs, and I believe this happened with me.

I use slightly different language than Mr. Nolen and ArchiMate do, but it’s clear where the idea came from.

The standardized representation I’ve developed for my engagement and analysis framework is below. It has to be understood that this is a highly stylized and streamlined representation of what practitioners actually do when solving a problem for an internal or external customer.

This is meant to depict the iterative nature of the work within and between each phase at a high level. Implicit in this is that many activities can occur in each phase both in parallel (if several individuals or teams are doing similar things at the same time) and serially (if one individual or team performs several successive different tasks in the same phase).

Parallel activities may be depicted this way, by showing additional iterative cycles represented in depth:

Serial activities might be depicted this way, by showing additional iterative cycles represented horizontally:

The breakthrough came from expanding the representation in the vertical direction, as shown next:

This is where I’ve used slightly different language for the three layers. Where ArchiMate goes with business, application, and technology, I prefer what I feel are more general terms. Those are business, abstract, and implementation.

The business layer can include elements like business units or departments, people as individuals or in groups with similar responsibilities and permissions, and overall systemic responsibilities. The abstract layer includes descriptions of the processes, communication channels, data elements, activities, calculations, and decisions. Those can be determined logically through the discovery and data collection activities in the conceptual model phase, and also through various activities in the requirements, design, and implementation phases. Finally, the implementation layer (as differentiated from the implementation phase) describes all aspects of an implementation, in terms of the hardware, software, operating systems, governance and maintenance procedures and so on. While one is generally tempted to visualize IT systems first, foremost, and always in these situations, the considerations are meant to be more general than that. That is the primary reason for my using different language than what ArchiMate uses.

Moreover, I find that separating activities into layers in each phase can actually be done in an almost completely arbitrary way, and we aren’t limited to only the three layers listed. The specific insight provided by Mr. Nolen’s presentation has led me to a more general conceptual understanding of how to proceed.

So how does this represent a breakthrough for my understanding and the interpretation of my framework? Let’s look at how activities may be broken down into layers within each phase.

  • Intended Use (Problem Statement): This phase doesn’t immediately seem to break down into layers in an easy or obvious way. Problem statements and project charters are usually described at a high level with few items. These can be added to and clarified over time, as understanding increases through iteration between phases over the course of an engagement or program, but their general expression tends to be brief. Additionally, a lot of the context may be subsumed within the very nature of the proposed solution approach.
  • Conceptual Model: Discovery and data collection activities can be carried out to map and characterize existing (or new) systems along the lines of the standard three layers. Other possibilities exist, however. One example I can think of was a two-part analysis we did of a complex, large-scale business process, where we did a first pass of discovery and data collection to determine what and how much was going on and how long it took, in order to complete an initial economic justification, and then a second pass of discovery and data collection to learn the details of each activity so an automated replacement system could be developed and deployed.
  • Requirements: Several possible divisions exist here. Functional requirements describe what a system does (or will do) while non-functional requirements describe what a system is (or will be). Methodological requirements may govern how the engagement itself is to be run, how communications must occur, how things must be written, and so on. Requirements can exist for accuracy, for what must be considered in or out of scope, for what hardware must be used, and more.
  • Design: Designs can certainly cover the standard three layers, but I can think of multiple possible subdivisions within each layer. Particularly in the abstract (or application) layer, I look back to my days writing thermo-hydraulic models for nuclear power plant simulators, and see that the governing equations and the solution methods could be considered very distinct parts of the design. Multiple engineers could all agree that the governing equations are written correctly (those could actually be considered part of the conceptual model, by the way), but implement the actual performance of calculations in entirely different ways; a toy sketch of this distinction follows the list. (Development of my framework has made it clear to me that, even though Westinghouse got good scores in an evaluation of its project management practices, lack of review and socialization of methods in this area was a major weakness of the organization.)
  • Implementation: Beyond the three standard layers, the construction and deployment of a new system (or modifications of an existing system) can be seen as entirely separate subdivisions within at least the abstract and implementation layers. While I often write that I consider system maintenance and governance a subset of non-functional requirements, I also see the CI/CD or release train process for a system or capability as needing as much effort, thought, care, and feeding as any other part of a developed and deployed system. Thus, this can be seen as its own layer through all of the phases (but especially this one), or as an integral part of all other work. Either way, awareness of the need for this capability is crucial.
  • Test (and Acceptance): Testing and acceptance can occur at so many places through every phase of the process that it almost should be left (per the forward and backward linkages required in the Requirements Traceability Matrix) to be driven by the linkages to the implementation items. Conversely, the different kinds of tests can be considered separately, especially when it comes to the differences between verification (does the thing we built work as intended?) and validation (did we build the right thing to address the identified need, and is it sufficiently accurate?).
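
Here is the toy sketch promised in the Design bullet, in C. It is a hypothetical example, not one of the actual simulator models: the same governing equation, a first-order lag dT/dt = (T_in − T) / tau, advanced by two different solution methods that a group of engineers could legitimately disagree over.

    #include <stdio.h>

    /* One governing equation, dT/dt = (T_in - T) / tau, solved two ways. */
    int main(void)
    {
        const double T_in = 100.0;  /* inlet temperature */
        const double tau  = 5.0;    /* time constant     */
        const double dt   = 0.5;    /* time step         */
        double T_explicit = 20.0, T_implicit = 20.0;

        for (int step = 0; step < 20; step++) {
            /* Method 1: explicit Euler -- simple, but can go unstable
               if dt is large relative to tau */
            T_explicit += dt * (T_in - T_explicit) / tau;

            /* Method 2: implicit Euler -- solve for the new value;
               unconditionally stable for this equation */
            T_implicit = (T_implicit + dt * T_in / tau) / (1.0 + dt / tau);
        }
        printf("explicit: %.3f  implicit: %.3f\n", T_explicit, T_implicit);
        return 0;
    }

Both runs approach the same steady state, but they take different paths getting there, which is exactly the kind of difference that deserves review and socialization.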

Coming to understand the possible vertical divisions within each phase makes my framework much more adaptable and powerful in my mind. I’m sure I will be considering the subtleties and ramifications of these new insights for a long time to come.

I will also share that this insight has allowed me to put clearly into words the space in which I seek to work and help people. In the end, the main area I like to think in is the middle or abstract (or application) layer. I can certainly support analysis and development of the business layer driven by that. Even more to the point, I’m not very interested in writing code or developing databases or mucking around with the details of deployment at this stage, and I’ve never been an expert at security, though I recognize its indispensability, but I’ve certainly done and learned enough to be able to work with the relevant specialists of every kind as needed.


Approach Contexts for Potential Solutions

As I’ve been pondering different aspects of my engagement framework, the management environments I’ve experienced, and the nature of the solutions I’ve helped create or that I’ve learned about through other means, it occurs to me that potential solutions are driven by a number of factors. Solutions come in many contexts, so let’s look at some of the considerations. I don’t know if any overarching classification exists, so let’s just list what we can and see if some kind of order or hierarchy suggests itself.

  • Internal vs. external customers: How a solution is approached may differ based on whether the solution team is serving an internal or external customer. External customers must be engaged with on a more formal and structured basis, with the activities and behaviors often driven by contractual terms of various kinds. Internal customers are much more likely to be served with ad hoc solutions as opposed to more standardized solutions that may be the raison d'être of an organization serving primarily external customers.
  • Standard solution vs. open-ended solution: Most of the companies I worked for were smaller, specialized, highly technical organizations serving larger, commercial firms with somewhat standardized solutions in a vendor or consultative capacity. None of the vendors or consultancies I worked for provided truly standardized solutions. What they did was provide solutions using a certain set of tools, components, and approaches, as opposed to going in cold and trying to generate solutions from a completely open-ended blank slate. The solutions were always highly customized to the customers’ specific needs and situations. I think the differentiation here has to do with the degree to which the possible solution space is constrained. (The more an organization’s solution offerings are constrained, the more I tend to refer to them as hammer-and-nail companies, because if all you have is a hammer, everything looks like a nail.) That said, the range of provided solutions from a single vendor may be quite wide, but they will all tend to be related to a single area of endeavor. For example, I worked for two different companies that provided customized, turnkey production lines (or parts thereof) to industrial clients in the paper and metals industries. Both sold systems, individual pieces of equipment, service, electronic control systems, and independent analytical services for individual situations. However, they almost always related to each other in the knowledge and process areas addressed. By contrast, I know of other consultants or vendors that provide more generalized, open-ended kinds of services. I guess it’s just a matter of how the various offerings are related. Finally, constraints may take many forms. Companies like IBM will tend to serve larger organizations that can afford to pay high rates for large, complex solutions, and providers tend to scale down from there. Every organization offering solutions is constrained in some way.
  • Imposed by competition: There are a lot of reasons organizations will seek solutions, but the primary one is competition. An organization seeks and applies solutions to improve its processes and offerings on an ongoing basis, or it may (will!) suddenly find itself losing money as competitors do so and end up serving customers better. Individuals and organizations should always be looking to continually improve every aspect of their business.
  • Opportunity from new technology: The appearance of new technologies can lead to all sorts of opportunities for improvement. Some will be blindingly obvious, while others may require more thought or lateral association, or even luck. I once created a really fun and successful solution based on an item I had recently seen that was new to me. That set off a whole chain of events that led to a pretty spectacular outcome. By contrast, and I intend to write about this more in the near future, some solutions and methodologies are well understood, but await a breakthrough that makes their implementation possible. The design of the Polaris missile and the submarines that launched them, for example, allocated certain volumes to house various electronic systems that couldn’t actually fit in the spaces provided at the time. The project managers counted on improvements in technology during the development process to make everything work. Similarly, the mathematical techniques for performing certain physics and Monte Carlo analyses (using continuous and discrete-event simulation techniques, respectively) were long known but awaited the development of digital computers to finally make their application practical.
  • Automation: Automation is the process of using machines (including computers) to perform or at least assist with operations that were once done manually. I often built tools to automate calculations and even write code and documentation that would have been extremely tedious to write out or even type by hand. Similarly, I used to analyze business processes so they could use scanned images of documents instead of the paper documents themselves. The goal in those instances was to preserve the operations where humans had to apply their expertise and reasoning power in creative ways, and automate the repetitive actions the computer could be instructed to do from context. We added scanning, indexing, and archiving steps to an existing, complex business process for about ten percent of the original operating cost, and were able to automate away thirty to forty percent of the operating cost, for a savings of twenty to thirty percent overall, in addition to the elimination of a lot of human drudgery.
  • Standardization / Modularization: I must have played with almost every modular construction toy in existence growing up, so I was used to putting creative things together out of standard components from an early age. The trick to doing this as a professional analyst and designer is to recognize opportunities for creating standard components. One way to do this is by using affinity grouping (think of the exercise where you brainstorm a bunch of ideas on Post-It notes and then group them by how similar certain ones are to each other). Another way is simply to recognize when you are doing the same or very similar things over and over in different situations. Standardization can be accomplished in a number of different ways. Policies and procedures can be established that enforce consistent best practices. Or, reusable (physical or logical) components can be built that enforce standard approaches or operations for any system in which they are incorporated. It is sometimes a good idea to make the created standard components configurable, so they may be adapted to individual situations as needed. In that case, designers and users need to continually reevaluate the boundary between making something configurable and when it would be best to create a new standard component.
  • Rearrangement or streamlining: Sometimes all the right things are done for the right reasons and with the right operations, and all that is needed is to put them in a different order so they can be accomplished more quickly and efficiently. A classic case of this is when a series of production machines are located far apart from each other, necessitating a lot of transport time and resources between operations. If the production operations are all relocated to be near each other and in the correct order, the waste associated with transport and storage can be reduced or eliminated, making the overall process more efficient. More complex examples of this are possible. It may, for example, improve overall efficiency (and quality) to add a step to a process which performs a kind of standardized preparation and setup, so the value-added operations can be performed more cleanly and effectively, with less chance of error or unexpected occurrence. Another way to realize improvements is to re-route items so they interfere with each other less, and still another way is to relocate or otherwise automate certain operations. The latter approach may not lessen the amount of effort involved in completing some operations (for example, processing paperwork for import brokers in cross-border trucking operations), but will generate benefits in other locations (by reducing or eliminating certain sources of congestion at the border-crossing facilities themselves). In these cases it’s all about eliminating constraints wherever possible (and then identifying and addressing whatever becomes the new constraint…).
  • Lean vs. Six Sigma: Lean is doing more or the same with less, while Six Sigma is about eliminating variability (and hence waste) so you effectively do more with the same. Tightening up a machine or creating a jig or structuring a computer’s user interface to improve clarity and situational awareness leading to a reduction in errors and thus loss or rework is what Six Sigma is all about. Lean is about automation, standardization, and rearrangement or streamlining as described directly above.
  • Profiling (Theory of Constraints): Software developers often instrument their code to see which operations are taking the most time (or consuming the most memory or bandwidth or other relevant resource) in order to identify which code to devote time toward making more efficient. (I have written about extreme loop unrolling in some of my own work, for example; a minimal timing sketch appears after this list.) Code profilers have been around for a long time. A number of standard techniques exist to improve the speed (and other consumption of resources) of code, and a balance must be struck between the efficiencies to be gained vs. the cost to gain them. However, this approach is applicable to far more activities than computer code. A couple of years ago I listened with interest as a very successful entrepreneur described how he noticed that his biggest expense for a particular product was in the materials used to build it. The entrepreneur then asked an expert in that class of materials if anything existed that would do the same job at a lower cost. Upon learning that the answer was yes, the entrepreneur adopted the new material, thus lowering costs significantly while retaining quality, durability, and performance. As discussed above, this is just another exploration of the Theory of Constraints.
  • Incremental vs. The Big Kill: Many kinds of improvements are possible. Sometimes it is necessary to make a lot of small improvements to eliminate losses and small errors, and sometimes it is possible — and even necessary — to do something completely new or different and make a major improvement in one quantum leap. Individuals and organizations should continually look for ways to do both.
  • End-of-Life, replacement, and repurposing: Change is sometimes necessitated by circumstance. One possible example is that existing capabilities stop functioning due to failures, and another example is that the vendor of the capability goes out of business or otherwise withdraws support (think versions of software, fixed-duration warranties, exhaustion of spare parts, and more). In these cases the user is driven to adopt a new solution or abandon that capability entirely. It is also possible that a capability that becomes uneconomic for its original use can be repurposed for an alternative use where the economics make sense. When I worked in the paper industry in the late-80s the major emphasis and source of market growth was newsprint (my, how things have changed!). When modern production lines are turning out bright, strong rolls of paper, forty feet wide at sixty miles an hour, it can be impossible to run older production lines at a profit. However, it was often possible to repurpose a line, with minimal tooling and rework, to instead produce specialty grades of paper (like tissue) in a way that could still be profitable. I also heard of entire lines being purchased to be shipped to developing countries for greatly reduced capital outlay (vs. a new system), so that it made economic sense to operate there. Much of this discussion is probably more germane to the use of capital equipment than to software, but readers should be alert to all relevant possibilities.
  • Entirely new solutions (market opportunities from both growth and development): Sometimes the opportunity to be exploited isn’t in doing the same thing better, but in doing something entirely new. These opportunities may be driven by technology, as described above, but may also be driven by economic growth, changing tastes or practices, or other reasons. “New” can mean a number of things in this context. New can mean more of the same, or increasing scale. New can mean different, or increasing scope. Writer Jane Jacobs, the noted urban theorist (among other things), differentiated between growth and development along these lines. Another form of new can mean moving to new locations, whether adding to the total stock of capability, or replacing capability lost elsewhere. However it is defined, doing new things is going to be different than improving existing things.
  • Improved management techniques: This type of improvement can take many forms. It is subsumed in a lot of the other items described above, but I will add a couple more here as a kind of a catch-all. One possible management improvement is in how maintenance is conducted. Collecting new data or leveraging existing data can allow organizations to implement or fine tune preventive or condition-based maintenance procedures to head off major failures, improve uptime, and reduce overall maintenance costs and materials. Other benefits to be realized include employee and customer support and engagement, communications of various kinds, improved training to make the workforce more aware of all of the foregoing, and more.
  • General technical solutions: There are large classes of problems that may be addressed in fairly standard ways, all of which can provide or improve value when judiciously applied. I wrote this article in response to an interesting presentation I saw a few years ago on the ARIZ (formerly TRIZ) method. Sometimes, of course, there are no solutions, only trade-offs. Many solutions that come out of the practice of ARIZ overlap with items discussed previously in this article.
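
Here is the minimal timing sketch promised in the Profiling bullet, in C. The step_a and step_b functions are hypothetical stand-ins for real operations; a real project would more likely use a proper profiler, but the principle is the same: measure first, then spend optimization effort on the step that dominates.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-ins for two candidate hot spots. */
    static void step_a(void)
    {
        volatile double x = 0.0;
        for (long i = 0; i < 50000000L; i++) x += i * 0.5;
    }

    static void step_b(void)
    {
        volatile double x = 1.0;
        for (long i = 0; i < 5000000L; i++) x *= 1.0000001;
    }

    int main(void)
    {
        clock_t t0 = clock();
        step_a();
        clock_t t1 = clock();
        step_b();
        clock_t t2 = clock();

        /* Devote tuning effort to whichever step consumes the most time. */
        printf("step_a: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("step_b: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }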

So, that’s everything I can think of for the time being. Can you think of anything I might add? I’d love to hear your suggestions.


On Entrepreneurship

There are many different definitions of what it means to be an entrepreneur, but to me the one that is most salient has to do with operating under conditions of uncertainty. That is, the entrepreneurial function involves offering new products to consumers and seeing whether consumers will buy them.

Let’s pull this definition apart. We’ll start with the definition of “new.” New could refer to an entirely new category of product (home computers in the late-70s and early-80s), class of product within a category (candy bar form factor smartphones in the mobile phone space in 2007), major feature of a product (anti-lock brakes on cars), minor characteristic of a product (striped, multi-flavor Hershey Kisses), new providers of existing products (especially fungible products with minimal differentiation), new locations where products become available (mobile phones in Africa, often bypassing the need to ever build out a wired network), or new materials for existing products (replacing metal components with carbon fiber ones).

Different forms of “newness” come with different forms of uncertainty. The process for doing market research to determine whether and where to place a new location in a chain (Walgreens or McDonalds) is fairly standard and accurate. It considers population, demographics, existing coverage, the state of the economy, and so on. The process of determining demand for something entirely new (satellite communications) is far more open ended and uncertain. How much traffic do previous satellite communication networks (Iridium) actually carry (there are niche uses like for DeLorme GPS devices with limited messaging features), and which will be the first to really take off?

Entrepreneurship also generally requires a certain amount of creativity and insight. The examples above all required a certain amount of lateral and imaginative thinking. Interestingly, customers are not always able to identify their own needs. They can and do, however, recognize a good idea when they see one. Henry Ford is said to have quipped that if he had asked customers what they wanted, they’d have asked for a better horse.

It is important to note that products do not succeed in the marketplace solely on the basis of their qualities and potential utility. The price at which they are offered is a crucial consideration as well. It may be possible to produce some products long before they actually become available — at some (probably exorbitant) cost — but it generally makes no sense to do so. Few people would be able to afford the product (the process of early adoption on the basis of wealth, adventurousness, and technical knowledge is well understood), it may be brittle and unreliable, and the cost-benefit calculation may not make sense.

Austrian economics (the economic school I believe comes closest to being “right”) describes the economy as a giant, ongoing, competitive (and cooperative) discovery and simultaneous auction process. People and organizations have unlimited desires but limited resources. (Per Thomas Sowell, “The first law of economics is scarcity. The first law of politics is to ignore the first law of economics.”) As such, they must choose which products and services they acquire based on which they believe will address their most pressing needs and desires on the margin. This applies not only to consumer products but also to resources needed to produce other products. The price system is the means by which society as a whole coordinates these activities by allowing people to value different resources and perform profit-and-loss calculations that determine which activities make sense and which do not.

A secondary consideration of entrepreneurs is how to arrange resources to provide the goods and services they offer. A wide range of skills and analyses can be brought to bear on this problem, but when writing about entrepreneurial activities from an economic standpoint the details are somewhat beside the point. Like I noted with respect to Agile considerations yesterday, I will assert that the discussion of the details and techniques associated with improving quality and efficiency are important, but aren’t strictly about the entrepreneurial function, which is more about the decision to apply that effort than the nature of the effort itself.

A further observation people make is the difference between being an entrepreneur and merely making oneself a job. In a job you simply do the same thing over and over without applying a huge amount of creativity and judgment, and you may also do what you are asked to by others. Even if you run your own business, if it offers an undifferentiated good or service in an existing industry and you aren’t continually trying new things and finding new customers in creative ways, you have a job and are not truly an entrepreneur. If, by contrast, you are always trying to figure out how to do more and different things in novel ways, you are functioning as an entrepreneur even if you don’t run your own company. (If you are someone else’s employee, however, you may be limited in how much you get to apply your creativity and independent judgment.)

As I see it, entrepreneurial actions can be driven by several factors. One is that they identify a need, whether of individuals or groups of people, and they try to find ways to address that need. During a class called Analysis, Synthesis, and Evaluation in my junior year in college, for example, the professor told us about an individual who watched poor people in Africa wash their clothes by hand by dipping them in a local stream and wringing them out on a rock. That person reasoned that a small, plastic agitator could make the process of washing clothes a lot easier for people in those conditions. Another is that entrepreneurs may become aware of (or invent!) some new technology or methodology that improves efficiency or makes something possible that previously had not been. One of the interviews I had while looking for my first engineering job was with a company called Copeland Compliant Scrolls. The idea of making an air compressor using a scroll design had been around for a while, but Copeland was either the first or one of the first companies that figured out how to manufacture them with tolerances tight enough to make them practical. Another driver is that conditions change that make certain activities economically feasible where they previously had not been. This can be caused by changes in prices or income driven by other factors, improved organization, increased demand, or other effects. Another potential driver is that some competitive or regulatory pressures may come to bear in ways that harm an entrepreneur’s business. The entrepreneur then has to respond by changing or improving in some way or face loss of profits at best or going out of business entirely at worst. People often figure things out that surprise them, simply because they have to.

So the process of being an entrepreneur comes down to repeatedly asking the following questions, and probably others, in any order appropriate to the situation. (A cool diagram can probably be made from this idea. Feel free to give it a shot and tell me what you come up with.)

  • How can I help someone? Will this help someone? Who might this help?
  • Can this be done?
  • Can I use this new idea in a different and useful way?
  • Can this actually be sold? Can the benefits be effectively communicated?
  • Can I make this better or do this better?
  • Is this effort profitable (and hence sustainable)?
  • Is some external pressure driving a need to change?

Another Take On Communication Styles

Years ago I was with my uncle, an extremely technically adept retired Coast Guard officer, bleeding the water out of a fuel line for a small outboard motor. He told me to turn the petcock counter-clockwise to open it and I asked him whether that’s counter-clockwise looking down at it or up at it. He pointed out that ships have been lost for that reason.

Communication is hard for *everyone*.

Communication has long been something of a challenge for me, so I’ve learned to keep my eyes and ears open for ways to do it better. I also like frameworks and systems, so I was naturally drawn to the information Nikki (Heise) Evans (see her company’s website) presented during a recent online series of project management webinars organized by the Project Management Institute (PMI) regarding a new (to me) way of thinking about communication styles and motivations.


Note that the Goal Setting quadrant should include bullets for Vision, Options, Bottom Line, and Discussion. I apologize for not capturing the optimal rendition of this slide.

She describes two different preference axes: one for speed (moves carefully vs. fast paced) and one for results vs. relationships orientation. The latter could also be thought of as engagement vs. solution, product vs. process, or people vs. things. Where one falls on the two axes points to a preferred concern orientation as one works. This works like many different personality tests.

If one prefers to move quickly with a results orientation, one is said to have a goal setting outlook that primarily asks what the focus is. If one prefers to move quickly but with a relationship orientation, that points to a lifestyle outlook that tries to make everyone comfortable. If one prefers to move more deliberately and with a relationship orientation, that indicates a stability outlook concerned with how different ideas will sound. Not surprisingly, I prefer to proceed somewhat deliberately and with a results orientation, which is described as having an information outlook that tries to identify what customers need.

I am always an analyst first, and then everything flows from that. Even if I don’t appear to be oriented toward people in an obvious way, developing excellent solutions is ultimately in service of meeting their needs, so we see everyone is trying to get to the same place.

It is important to note that good leaders and analysts will be able to adapt to conditions, team members, and customers as needed, and that these categories of outlook are not hard and fast, but only serve to understand how one is likely to proceed when starting out. Over time, effective communication, analysis, iteration, feedback, and correction should lead individuals and teams to create similarly effective solutions. I wonder, however, if Nikki and her colleagues have noticed that the nature of the solutions and the problems typically encountered tend to vary with the orientation of different types of leaders and approaches. I believe I will ask them that very question, and I will report back when I learn more.

Noting my affinity for frameworks and systems, I’d like to see where this fits in with similar insights made by others. Let me list and briefly describe some of them, and then compare the present insight.

I first encountered the above communication model in a leadership school in the Army, and I have encountered additional materials on communication and information theory since (for example, this one was interesting). This started to give me a feel for how complex communication can be and the problems that can be encountered.

From my work in distributed real-time systems I learned that communications must often be retried until success is achieved. Given the number of communication channels that may exist, the effects of poor communications can be highly problematic.
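
As a minimal sketch of that retry pattern in C (send_message is a hypothetical transport call, stubbed here, and the attempt limit and backoff values are illustrative only):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical transport call: returns true on acknowledged delivery.
       Stubbed to fail randomly so the retry loop has work to do. */
    static bool send_message(const char *msg)
    {
        (void)msg;
        return rand() % 4 == 0;  /* pretend 1 in 4 attempts succeeds */
    }

    static bool send_with_retry(const char *msg, int max_attempts)
    {
        unsigned delay_ms = 10;
        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            if (send_message(msg))
                return true;                 /* acknowledged             */
            usleep(delay_ms * 1000);         /* back off before retrying */
            if (delay_ms < 500)
                delay_ms *= 2;               /* exponential backoff, capped */
        }
        return false;  /* caller must escalate: log, alarm, or fail over */
    }

    int main(void)
    {
        srand(7);
        puts(send_with_retry("status update", 10) ? "delivered" : "gave up");
        return 0;
    }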

This is why my engagement and analysis framework has evolved the way it has, and how I’ve chosen to represent it. The circles in the top figure represent the iteration within each phase of all the relevant activities and communications as they proceed to completion. The two-way connections between the circles represent the iteration between phases, both forward as the effort progresses and backward as subsequent discoveries induce changes in the understanding of previous phases. The lower image shows how the analysis process proceeds in the context of a project management effort.

A gentleman named Dave Thomas, who was one of the original authors of the Agile Manifesto, gave a well-known talk provocatively titled Agile is Dead, in which he admonished practitioners that the cottage industry that has grown up around Agile and Scrum (and Kanban and so on) is largely beside the point, and that what Agile should really involve is getting people to talk to each other and change course as necessary. It’s not that all the other things taught don’t have value. All the practices (which came from the eXtreme Programming or XP oeuvre that was in vogue about the time the Agile Manifesto was written) are useful, but they aren’t the major innovation. They are things you should be doing anyway if you want to run an effective shop, whether you are consciously “doing Agile” or not. The real innovation was in seeking and incorporating feedback and correction purposefully, early, and often. In short, communicate more and better and act on it.

A gentleman named Matthew Green shared his extensive knowledge of Agile and its related practices at two of the recent Tampa IIBA chapter’s weekly certification study group webinars. His presentations, knowledge, and speaking quality were exceptional, and he made the best arguments I’ve ever heard as to why the various software development practices should be thought of as integral to the Agile oeuvre. His argument is that the ability to react quickly and actually deliver value quickly is dependent on knowing and practicing the relevant development techniques. While I agree with his point and have complete respect (and admiration) for his skills and experience, I still view the practices as mostly separate and independent. ScrumMasters don’t necessarily need to know much of the development oeuvre, and Product Owners need to know it even less. (It never hurts if they do, of course.) The advanced techniques they teach in the Certified Scrum Developer track (at least through Scrum Alliance) make a worthy point of crossover, however. Somebody should know the development practices well, but it is somewhat specialized knowledge. ScrumMasters, Product Owners, and other team members do not necessarily need to concentrate on that knowledge. They have their hands full with other duties, after all. It should usually be enough for people to communicate with each other effectively so the necessary knowledge is shared and acted upon as needed. That, to me, is the essential thrust of Agile.

All that said, you are invited to review the meeting recordings here, and tell me what you think. Specifically check out the videos from August 23rd and August 30th, 2022 (BABOKStudyGroup20220823.mp4 and BABOKStudyGroup20220830.mp4). I believe they are sessions 50 and 51.

I encountered two more interesting frameworks while attending many dozens of business analysis Meetup sessions hosted by different IIBA chapters.

Kara Sundar’s presentation on Communicating with Leaders described different concerns of leaders in ways that may illuminate how to understand and communicate with them. This is similar to the framework Ms. Evans presented, as described above. I’m sure it will also affect the nature of the working environment those leaders create.

Kim Hardy’s presentation on Nine SDLC Cross-Functional Areas, by contrast, described how to build a team based on the concerns of the team members. Rather than building a team by gathering up people with skill-based job titles like Sponsor (Product Owner), Team Lead (Architect), ScrumMaster, Developer, DBA, UI/UX Designer, Graphic Artist, Tester, Business Analyst, Specialists in Security/Deployment/Documentation, and so on, one can identify team members based on concerns like Business Value, User Experience, Process Performance, Development Process, System Value, System Integrity, Implementation, Application Architecture, and Technical Architecture. One doesn’t typically recruit for people with the latter labels or titles, but this is a way to ensure that attention is paid to the qualities of the process and the solution. This should happen anyway, but having an organized way of thinking about it can definitely increase the chances of success.

I have also had the privilege of learning about the Herrmann Brain Dominance Instrument, which can help managers communicate more effectively with team members. Other popular personality typing systems like Myers-Briggs and the Big Five provide similar insights.

At one point I thought that Ms. Hardy’s framework helped address the non-functional aspects of the solution while building a team by standard title was geared more toward the functional aspects of the solution. Most of the other frameworks and tools seem geared more toward aspects of the engagement and the environments in which they operate. I still think that idea has some merit, but it is not particularly important. In any case, I know all of these practitioners are very knowledgeable and experienced and can provide value across an entire spectrum of concerns, but each presenter has a limited time to advance a theme and make a limited but powerful point. I think all of these insights provide valuable tools for improving communication and mutual understanding.


A Fascinating Simulation-Based Method for Risk Analysis

While piling up the final PDUs I need to renew my PMP, I encountered a fascinating discussion of risk (and budgeting) analysis and management techniques, presented by the internationally recognized risk management expert Dr. David T. Hulett. The video I watched is hosted on ProjectManagement.com, which I believe is behind a paywall. The video below, on YouTube, appears to cover roughly the same material.

I first learned about techniques like this from my friend Richard Frederick, in lecture 14 of his 20-part series on agile project management and data management, but hadn’t learned a lot more about it until now.

The interesting part of Dr. Hulett’s work is not just the application of Monte Carlo techniques to the analysis of risk (and project costs), but the wide range of additional considerations the technique can include, address, and illuminate. These include lead-lag effects, dependencies, correlations, the fact that risk elements are far more likely to represent threats (negative) than opportunities (positive), the fact that — for valid reasons usually involving complexity — risks and costs are almost certainly greater than are usually estimated, and more.
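
To give a flavor of the core technique, here is a minimal Monte Carlo sketch in C. The three task-duration distributions are hypothetical, and the tasks are treated as independent, so this omits the correlations, dependencies, and lead-lag effects Dr. Hulett layers on top; the point is only to show how sampled totals yield a distribution, and thus percentile estimates, rather than a single number.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Sample a triangular distribution with min a, mode m, max b
       by inverting its CDF. */
    static double triangular(double a, double m, double b)
    {
        double u  = (double)rand() / ((double)RAND_MAX + 1.0);
        double fc = (m - a) / (b - a);
        if (u < fc)
            return a + sqrt(u * (b - a) * (m - a));
        return b - sqrt((1.0 - u) * (b - a) * (b - m));
    }

    static int cmp(const void *p, const void *q)
    {
        double d = *(const double *)p - *(const double *)q;
        return (d > 0) - (d < 0);
    }

    #define TRIALS 10000

    int main(void)
    {
        static double total[TRIALS];
        double sum = 0.0;
        srand(42);

        for (int i = 0; i < TRIALS; i++) {
            /* Three hypothetical tasks: (optimistic, likely, pessimistic) days */
            total[i] = triangular(10, 12, 20)
                     + triangular( 5,  8, 16)
                     + triangular( 8,  9, 15);
            sum += total[i];
        }
        qsort(total, TRIALS, sizeof total[0], cmp);

        printf("mean: %.1f days\n", sum / TRIALS);
        printf("P80:  %.1f days\n", total[(int)(0.8 * TRIALS)]);
        return 0;
    }

Note how the skewed (threat-heavy) distributions pull the mean and the 80th percentile well above the sum of the most likely values, which is one of the central lessons of this style of analysis.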

I’m sure I could profitably watch this multiple times, but for now it has given me many useful concepts to chew on.


How To Address A Weakness

In this evening’s discussion in our weekly Tampa IIBA certification study group we touched on the subject of dealing with weaknesses. This initially grew out of a discussion about SWOT analyses. Based on things I’ve read and my proclivity to try to look at problems from as many angles as possible, I am aware of two main approaches.

The obvious one is to make the weakness less weak. There are numerous ways to do this, depending on the nature of the weakness. One is to learn more or otherwise develop or add to the skill or capability that is lacking. This can involve bringing in new people (from within and from outside your organization), obtaining information (in books and papers and online), training in new tools or technologies (via courses, videos, and friends), and many other methods.

The other, and less intuitive, approach is to enhance your strengths so that the weakness matters less. If you or your organization is so good at something that it provides a significant competitive advantage, then it may be wisest to concentrate on maintaining or improving that strength.

In general, it’s best to take the approach — or combination of approaches — that provides the highest marginal benefit. That is, go with the solution that gives the greatest bang for the buck.

Think of a football team. (We’re talking American football here, not the game we crazy Americans call soccer and the rest of the world calls football!) Every team gets to put eleven players on the field on defense at one time. Assuming the level of overall talent among teams is roughly equal, we might observe that some teams, because they have better players at certain positions or better coaching or better or different schemes, will be stronger at some facets of defense and weaker in others, while other teams are stronger or weaker in different areas.

For example, one team may have a very strong defensive line that is reasonably able to stop opposing teams’ rushing attacks and can really put a lot of pressure on the quarterback. A different team may have a less proficient defensive line but a much stronger secondary. If the first team’s defensive line can pressure the opposing quarterback enough, it may not matter that its secondary cannot cover the receivers as well, because the quarterback won’t be able to get the ball to them, anyway. If the second team’s secondary is very strong and is able to blanket the opposing receivers, it may not matter if its defensive line is weaker, because even if the quarterback has more time to throw, the receivers will never be open to throw to.

There are other ways to cover for a weak aspect of a defense. One is to improve the offense so the defense is on the field less or otherwise does not have to be as effective. Another is to modify the stadium and motivate the crowd so opposing offenses have to play in a louder environment, which should reduce their effectiveness. The number of factors that can be considered in this kind of analysis is nearly infinite, so in order to keep things simple, we’re only going to consider the problem from the two dimensions of a defense’s line vs. its secondary.

Looking at the first case of a defense with a strong line and a weaker secondary (the weakness we intend to discuss), we can see that we can either improve the secondary (the obvious approach), or (less obviously) we can maintain or improve the defensive line even more. Remember that resources are limited (the number of players on a team and the number on the field at any one time are fixed, there is a salary cap, there are recruiting restrictions, and so on) and the solution must be optimized within defined constraints. Not all problems are constrained in this exact way, but there are no problems which are not constrained in some way. It’s always a question of making the best use of the resources you have. The approach you take should be based on what works best for the current situation.


Basic Framework Presentation

I found it necessary to put together a shorter introduction to my business analysis framework than my normal, full-length presentation(s). The link is here.


Decision Modeling

Many business processes require decisions to be made. Decision modeling is about the making and automating of repeatable decisions. Some decisions require unique human judgment. They may arise from unusual or entrepreneurial situations and involve factors, needs, and emotions that cannot reasonably be quantified. Other decisions should be made the same way every time, according to definable rules and processes.

Decisions are made using methods of widely varying complexity. Many of the simulation tools I created and used were directed to decision support. The most deterministic and automatable decisions tend to use the techniques toward the lower-left end of the complexity-and-abstraction trend for data analysis techniques shown above, although the ability to automate decisions is slowly creeping up and to the right. I discussed some aspects of this in a recent write-up on data mining.

Decision processes embody three elements. Knowledge is the procedures, comparisons, and industry context of the decision. Information is the raw material and comparative parameters against which a decision is made. The decision itself is the result of correctly combining the first two. Business rules can involve methods, parameters, or both.

Let’s see how some of these methods may work in practice.

Decision tables, an example of which is shown above, list the relevant rules in terms of operations and parameters. The rules shown above involve simple comparisons, but more complex definitional and behavioral rules can apply. A definitional rule may describe the way a calculation should be performed, while a behavioral rule might require a payment address to match a shipping address. The procedures and comparisons can be as complex as the situation demands: the optimization routines Amazon uses to determine how to deliver multiple items ordered at the same time, in one or many shipments, on one or many days, and from one or more fulfillment centers involved up to 60,000 variables in the late 1990s, and are likely to be even larger now.

The same set of rules can be drawn in the form of a decision tree as shown below.

These rules can be described during the discovery and data collection activities of the conceptual model phase, and also during the requirements and design phases. It is fascinating how many different ways such rules can be brought to life in the implementation phase.

The most direct and brute force way is shown below, in the C language.
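The sketch below is a minimal illustration of that hard-coded style; the discount tiers, thresholds, and function name are invented for this example rather than taken from any actual rule set.

    #include <stdio.h>

    /* Brute-force style: every comparison parameter is hard-coded
       directly into the logic. */
    double discount_pct(double order_total, int is_member)
    {
        if (is_member) {
            if (order_total >= 100.0) return 15.0;
            if (order_total >= 50.0)  return 10.0;
            return 5.0;
        }
        if (order_total >= 100.0) return 10.0;
        if (order_total >= 50.0)  return 5.0;
        return 0.0;
    }

    int main(void)
    {
        printf("%.0f%%\n", discount_pct(75.0, 1));  /* prints 10% */
        return 0;
    }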

This way also works, but looks totally different.
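Here is a sketch of that second style for the same hypothetical rules, with every comparison parameter pulled out into a structure that could be initialized on the fly from a file, a database, or some other source.

    #include <stdio.h>

    /* Parameterized style: the thresholds and percentages live in a
       struct rather than in the logic itself. */
    typedef struct {
        double high_threshold;    /* order total for the best discount  */
        double low_threshold;     /* order total for the basic discount */
        double high_pct;
        double low_pct;
        double member_bonus_pct;  /* extra discount for members */
    } DiscountParams;

    static DiscountParams load_params(void)
    {
        /* Hard-wired here only for brevity; in production these values
           would come from an external, maintainable source. */
        DiscountParams p = { 100.0, 50.0, 10.0, 5.0, 5.0 };
        return p;
    }

    double discount_pct(const DiscountParams *p, double order_total,
                        int is_member)
    {
        double pct = 0.0;
        if (order_total >= p->high_threshold)     pct = p->high_pct;
        else if (order_total >= p->low_threshold) pct = p->low_pct;
        if (is_member) pct += p->member_bonus_pct;
        return pct;
    }

    int main(void)
    {
        DiscountParams p = load_params();
        printf("%.0f%%\n", discount_pct(&p, 75.0, 1));  /* prints 10% */
        return 0;
    }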

The number of ways this can be done is endless. The first method “hard codes” all of the definitional parameters needed for comparison. This can be somewhat opaque and hard to maintain and update. The second method defines all the parameters as variables that can be redefined on the fly, or initialized from a file, a database, or some other source. The latter is easier to maintain and is generally preferred. It is extremely important to maintain good documentation, including in the code itself. I’ve omitted most comments for clarity, but I would definitely include a lot in production code. I would also include references to the governing documents, RTM item index values, and so on to maintain tight connections between all of the sources, trackers, documents, and implementations.

In order to understand these, you’d have to know a reasonable amount about programming; failing that, you should at least know how to define tests that exercise every relevant case. For example, you would want to define tests that not only supply inputs in the middle of each range, but also supply inputs on the boundaries of each range, so you can fully ensure the greater-than-or-equal-to or just greater-than logic tests work exactly the way you and your customers intend. Setting the requirements for these situations may require understanding of organizational procedures and preferences, industry practices, competitive considerations, risk histories and profiles, and governing regulations and statutes. None of these considerations are trivial.
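Continuing the hypothetical discount example, boundary tests might look like the following, probing just below and exactly on each threshold to pin down the greater-than-or-equal-to versus greater-than behavior.

    #include <assert.h>

    /* The hard-coded version is repeated here so this test file
       stands alone. */
    static double discount_pct(double order_total, int is_member)
    {
        if (is_member) {
            if (order_total >= 100.0) return 15.0;
            if (order_total >= 50.0)  return 10.0;
            return 5.0;
        }
        if (order_total >= 100.0) return 10.0;
        if (order_total >= 50.0)  return 5.0;
        return 0.0;
    }

    int main(void)
    {
        assert(discount_pct(49.99, 0) ==  0.0);  /* just below low cutoff  */
        assert(discount_pct(50.00, 0) ==  5.0);  /* exactly on low cutoff  */
        assert(discount_pct(99.99, 0) ==  5.0);  /* just below high cutoff */
        assert(discount_pct(100.0, 0) == 10.0);  /* exactly on high cutoff */
        assert(discount_pct(50.00, 1) == 10.0);  /* member on low cutoff   */
        assert(discount_pct(100.0, 1) == 15.0);  /* member on high cutoff  */
        return 0;
    }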

You will also want to work with your implementers and testers to ensure they test for invalid, nonsensical, inconsistent, or missing inputs. It’s up to the analysts and managers to be aware of what it takes to make systems as robust, flexible, understandable, and maintainable as possible. Some programmers may not want to do these things, but the good and conscientious ones will clamor to inject the widest variety of their concerns and experiences into the process. As such, it’s important to foster good relationships between all participants in the working process and have them contribute to as many engagement phases as possible.

Finally, data comes in many forms and is used and integrated into organizations’ working processes in many ways. I discuss some of them here and here, and visually suggest some in the figure below. Some of this data is used for other purposes and doesn’t directly drive decisions, and I would not assert that one kind is more important than another. In the end, it all drives decisions and it is all important to get right.


Decision Analysis

Some decisions are fairly straightforward to make. This is true when the number of factors is limited and when their contributions are well defined. Other decisions, by contrast, involve more factors whose contributions are less clear. It is also possible that the desired outcomes or actions leading to or resulting from a decision are not well defined, or that there is disagreement among the stakeholders.

The process of making decisions is similar to the entire life cycle of a business analysis engagement, writ small. It involves some of the same steps, including defining the problem, identifying alternatives, evaluating alternatives, choosing the alternative to implement, and implementing the chosen alternative. The decision-makers and decision criteria should also be identified.

Let’s look at a few methods of making multi-factor decisions at increasing levels of complexity. It is generally best to apply the simplest possible method that can yield a reasonably effective decision, because more time and effort is required as the complexity of analysis increases. I have worked on long and expensive programs to build and apply simulations to support decisions of various kinds. Simulations and other algorithms themselves vary in complexity, and using or making more approachable and streamlined tools makes them more accessible, but one should still be sure to apply the most appropriate tool for a job.

  • Pros vs. Cons Analysis: This simple approach involves identifying points for and against each alternative, and choosing the one with the most pros, fewest cons, or some combination. This is a very flat approach.
  • Force Field Analysis: This is essentially a weighted form of the pro/con analysis. Each consideration is given a score within an agreed-upon scale on the pro or con side, and the side totals are compared for each option. The method is called a force field analysis because it is sometimes drawn as a (horizontal or vertical) wall or barrier with arrows of different lengths or widths pushing against it perpendicularly from either side, with larger arrows indicating considerations with more weight. The side with the greatest total weight of arrows “wins” the decision.
  • Decision Matrices: A simple form of the decision matrix assigns scores to multiple criteria for each option and adds them up to select the preferred alternative (presumably the one with the highest score). A weighted decision matrix does the same thing, but multiplies the individual criteria scores by factor weightings. A combination of these techniques was used to compile the ratings for the comparative livability of American cities in the 1985 Places Rated Almanac. See further discussion of this below, and a minimal coded sketch following this list.
  • Decision Tables: This technique involves defining groups of values and the decisions to be made given different combinations of them. The input values are laid out in tables and are very amenable to being automated through a series of simple operations in computer code.
  • Decision Trees: Directed tree structures are constructed in which internal nodes represent mathematically definable sub-decisions and terminal nodes represent end results for the overall decision. The process incorporates a number of values that serve as parameters for the comparisons, and another set of working values that are compared in each step of the process.
  • Comparison Analysis: This is mentioned in the BABOK but not described. A little poking around on the web should give some insights, but I didn’t locate a single clear and consistent description for guidance.
  • Analytic Hierarchy Process (AHP): Numerous comparisons of options are made by multiple participants, with the options ranked hierarchically by priority across potentially multiple considerations.
  • Totally-Partially-Not: This identifies which actions or responsibilities are within a working entity’s control. An activity is totally within, say, a department’s control, partially within its control, or not at all in its control. This helps pinpoint the true responsibilities and capabilities related to the activity, which in turn can guide how to address it.
  • Multi-Criteria Decision Analysis (MCDA): An entire field of study has grown up around the study of complex, multiple-criteria problems, mostly beginning in the 1970s. Such problems are characterized by conflicting preferences and other tradeoffs, and ambiguities in the decision and criteria spaces (i.e., input and output spaces).
  • Algorithms and Simulations: Much of the material on this website discusses applications of mathematical modeling and computer simulation. There are many, many subdisciplines within this field, of which the discrete-event, stochastic simulation using Monte Carlo techniques that I have worked on is just one.
  • Tradespace Analysis: Most of the above methods of analysis involve evaluating trade-offs between conflicting criteria, so there is a need to balance multiple considerations. It is often true, especially for complex decisions, that there isn’t a single optimal solution to a problem. And in any case there may not be time and resources to make the best available decision, so these methods provide a way to at least bring some consideration and rationality to the process. Decision-making is ultimately an entrepreneurial function (making decisions under conditions of uncertainty).
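
As a concrete illustration of the weighted decision matrix mentioned in the list above, here is a minimal sketch in C. The options, criteria scores, and weights are entirely invented; a real analysis would source them from stakeholder scoring sessions.

    #include <stdio.h>

    #define NUM_OPTIONS  2
    #define NUM_CRITERIA 3

    int main(void)
    {
        /* Invented scores (1-10) for two options on three criteria,
           and the weights the decision-makers agreed to assign. */
        const char *options[NUM_OPTIONS] = { "Option A", "Option B" };
        double scores[NUM_OPTIONS][NUM_CRITERIA] = {
            { 7.0, 5.0, 9.0 },   /* Option A */
            { 6.0, 8.0, 6.0 },   /* Option B */
        };
        double weights[NUM_CRITERIA] = { 0.5, 0.3, 0.2 };

        int best = 0;
        double best_total = -1.0;
        for (int i = 0; i < NUM_OPTIONS; i++) {
            double total = 0.0;
            for (int j = 0; j < NUM_CRITERIA; j++)
                total += weights[j] * scores[i][j];
            printf("%s: %.2f\n", options[i], total);  /* A: 6.80, B: 6.60 */
            if (total > best_total) { best_total = total; best = i; }
        }
        printf("Preferred: %s\n", options[best]);
        return 0;
    }

Re-running this with different weights is exactly the kind of sensitivity exercise the Places Rated worksheet, discussed below, invited its readers to perform.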

The Places Rated Almanac

I’ve lived in a lot of places in my life, but I consider Pittsburgh to be my “spiritual” hometown. I spent many formative and working years there and I have a great love for the city, even with my clear understanding of its problems. So, I and other Pittsburghers were shocked and delighted when the 1985 edition of Rand McNally’s Places Rated Almanac (see also here) rated our city as the most livable in America. Not that we didn’t love it, and not that it doesn’t have its charms, but it pointed out a few potential issues with ranking things like this.

That edition ranked the 329 largest metropolitan areas in the United States on nine categories: ambience, housing, jobs, crime, transportation, education, health care, recreation, and climate. Pittsburgh scores well on health care because it has a ton of hospitals and a decent amount of important research happens there (much of it driven by the University of Pittsburgh). It similarly gets good scores for education, probably driven by Pitt and Carnegie Mellon, among many other alternatives. I can’t remember what scores it got for transportation, but I can tell you that the topography of the place makes navigation a nightmare. Getting from place to place involves as much art as science, and often a whoooole lot of patience.

It also gets high marks for climate, even though its winters can be long, leaden, gray, mucky, and dreary. So why is that? It turns out that the authors assigned scores that favored mean temperatures closest to 65 degrees, and probably favored middling amounts of precipitation as well. Pittsburgh happens to have a mean temperature of about 65 degrees, alright, but it can be much hotter in the summer and much colder in the winter. San Francisco, which ranked second or third overall in that edition, also has a mean temperature of about 65 degrees, but the temperature is very consistent throughout the year. So which environment would you prefer, and how do you capture that in a single metric? Alternatively, how might you create multiple metrics representing different and more nuanced evaluation criteria? How might you perform different analyses in all nine areas than what the authors did?
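To make the single-metric problem concrete, here is a toy sketch in C. The monthly temperatures are invented (they are not actual climate data for Pittsburgh, San Francisco, or anywhere else), constructed so both series have the same annual mean; a mean-only score cannot tell them apart, while a simple variability metric immediately can.

    #include <stdio.h>
    #include <math.h>

    /* Invented monthly mean temperatures (deg F) for two hypothetical
       cities: identical annual means, very different spreads. */
    static const double steady[12] =
        { 62, 63, 64, 65, 66, 67, 68, 67, 66, 65, 64, 63 };
    static const double swingy[12] =
        { 41, 44, 53, 64, 74, 83, 88, 86, 79, 67, 56, 45 };

    static double mean(const double t[12])
    {
        double s = 0.0;
        for (int i = 0; i < 12; i++) s += t[i];
        return s / 12.0;
    }

    static double std_dev(const double t[12])
    {
        double m = mean(t), s = 0.0;
        for (int i = 0; i < 12; i++) s += (t[i] - m) * (t[i] - m);
        return sqrt(s / 12.0);
    }

    int main(void)
    {
        /* Both print |mean - 65| = 0.0; the standard deviations are
           roughly 1.8 versus 16.4. */
        printf("steady: |mean-65| = %.1f, std dev = %.1f\n",
               fabs(mean(steady) - 65.0), std_dev(steady));
        printf("swingy: |mean-65| = %.1f, std dev = %.1f\n",
               fabs(mean(swingy) - 65.0), std_dev(swingy));
        return 0;
    }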

If I recall correctly, the authors also weighted the nine factors equally, but provided a worksheet in an appendix that allowed readers to assign different weights to the criteria they felt might be more important. I don’t know if it supported re-scoring individual areas for different preferences. I can tell you, for example, that the weather where I currently live in central Florida is a lot warmer and a lot sunnier and a lot less snowy than in Pittsburgh, and I much prefer the weather here.

Many editions of the book were printed, in which the criteria were continually reevaluated, and that resulted in modest reshufflings of the rankings over time. Pittsburgh continued to place highly in subsequent editions, but I’m not sure it was ever judged to be number one again. More cities were added to the list over the years as different towns grew beyond the lower population threshold required for inclusion in the survey. Interestingly, the last-place finisher in that 1985 ranking was Yuba City, California, prompting its officials to observe, “Yuba City isn’t evil, it just isn’t much.”

One thing you can do with methods used to make decisions is to grind through your chosen process to generate an analytic result, and then step back and see how you “feel” about it. This may be appropriate for personal decisions like choosing a place to live, but might lead to bad outcomes for public competitions with announced criteria and scoring systems that should be adhered to.
