On Entrepreneurship

There are many different definitions of what it means to be an entrepreneur, but to me the most salient one has to do with operating under conditions of uncertainty. That is, the entrepreneurial function involves offering new products to consumers and seeing whether consumers will buy them.

Let’s pull this definition apart. We’ll start with the definition of “new.” New could refer to an entirely new category of product (home computers in the late-70s and early-80s), class of product within a category (candy bar form factor smartphones in the mobile phone space in 2007), major feature of a product (anti-lock brakes on cars), minor characteristic of a product (striped, multi-flavor Hershey Kisses), new providers of existing products (especially fungible products with minimal differentiation), new locations where products become available (mobile phones in Africa, often bypassing the need to ever build out a wired network), or new materials for existing products (replacing metal components with carbon fiber ones).

Different forms of “newness” come with different forms of uncertainty. The process for doing market research to determine whether and where to place a new location in a chain (Walgreens or McDonald’s) is fairly standard and accurate. It considers population, demographics, existing coverage, the state of the economy, and so on. The process of determining demand for something entirely new (satellite communications) is far more open-ended and uncertain. How much traffic do existing satellite communication networks (like Iridium) actually carry (there are niche uses, like DeLorme GPS devices with limited messaging features), and which application will be the first to really take off?

Entrepreneurship also generally requires a certain amount of creativity and insight. The examples above all required lateral and imaginative thinking. Interestingly, customers are not always able to identify their own needs. They can and do, however, recognize a good idea when they see one. Henry Ford is said to have quipped that if he had asked customers what they wanted, they’d have asked for a faster horse.

It is important to note that products do not succeed in the marketplace solely on the basis of their qualities and potential utility. The price at which they are offered is a crucial consideration as well. It may be possible to produce some products long before they become commercially viable — at some (probably exorbitant) cost — but it generally makes no sense to do so. Few people would be able to afford the product (the process of early adoption on the basis of wealth, adventurousness, and technical knowledge is well understood), it may be brittle and unreliable, and the cost-benefit calculation may not make sense.

Austrian economics (the economic school I believe comes closest to being “right”) describes the economy as a giant, ongoing, competitive (and cooperative) discovery and simultaneous auction process. People and organizations have unlimited desires but have limited resources. (Per Thomas Sowell, “The first law of economics is scarcity. The first law of politics is to ignore the first law of economics.”) As such, they must choose which products and services they acquire based on which they believe will address their most pressing needs and desires on the margin. This applies not only to consumer products but also to resources needed to produce other products. The price system is the means by which society as a whole coordinates these activities by allowing people to value different resources and perform profit-and-loss calculations that determine which activities make sense and which do not.

A secondary consideration of entrepreneurs is how to arrange resources to provide the goods and services they offer. A wide range of skills and analyses can be brought to bear on this problem, but when writing about entrepreneurial activities from an economic standpoint the details are somewhat beside the point. As I noted with respect to Agile considerations yesterday, the discussion of the details and techniques associated with improving quality and efficiency is important, but it isn’t strictly about the entrepreneurial function, which is more about the decision to apply that effort than the nature of the effort itself.

A further observation people make is the difference between being an entrepreneur and merely making oneself a job. In a job you simply do the same thing over and over without applying a huge amount of creativity and judgment, and you may also do what you are asked to by others. Even if you run your own business, if it offers an undifferentiated good or service in an existing industry and you aren’t continually trying new things and finding new customers in creative ways, you have a job and are not truly an entrepreneur. If, by contrast, you are always trying to figure out how to do more and different things in novel ways, you are functioning as an entrepreneur even if you don’t run your own company. (If you are someone else’s employee, however, you may be limited in how much you get to apply your creativity and independent judgment.)

As I see it, entrepreneurial actions can be driven by several factors. One is that entrepreneurs identify a need, whether of individuals or groups of people, and try to find ways to address it. During a class called Analysis, Synthesis, and Evaluation in my junior year in college, for example, the professor told us about an individual who watched poor people in Africa wash their clothes by hand by dipping them in a local stream and wringing them out on a rock. That person reasoned that a small, plastic agitator could make the process of washing clothes a lot easier for people in those conditions.

Another is that entrepreneurs may become aware of (or invent!) some new technology or methodology that improves efficiency or makes something possible that previously had not been. One of the interviews I had while looking for my first engineering job was with a company called Copeland Compliant Scrolls. The idea of making an air compressor using a scroll design had been around for a while, but Copeland was either the first or one of the first companies that figured out how to manufacture them with tolerances tight enough to make them practical.

Another driver is that conditions change to make certain activities economically feasible where they previously had not been. This can be driven by changes in prices or income, improved organization, increased demand, or other effects.

Another potential driver is that competitive or regulatory pressures may come to bear in ways that harm an entrepreneur’s business. The entrepreneur then has to respond by changing or improving in some way, or face loss of profits at best and going out of business entirely at worst. People often figure things out that surprise them, simply because they have to.

So the process of being an entrepreneur comes down to repeatedly asking the following questions, and probably others, in any order appropriate to the situation. (A cool diagram can probably be made from this idea. Feel free to give it a shot and tell me what you come up with.)

  • How can I help someone? Will this help someone? Who might this help?
  • Can this be done?
  • Can I use this new idea in a different and useful way?
  • Can this actually be sold? Can the benefits be effectively communicated?
  • Can I make this better or do this better?
  • Is this effort profitable (and hence sustainable)?
  • Is some external pressure driving a need to change?
Posted in Management

Another Take On Communication Styles

Years ago I was with my uncle, an extremely technically adept retired Coast Guard officer, bleeding the water out of a fuel line for a small outboard motor. He told me to turn the petcock counter-clockwise to open it and I asked him whether that’s counter-clockwise looking down at it or up at it. He pointed out that ships have been lost for that reason.

Communication is hard for *everyone*.

Communication has long been something of a challenge for me, so I’ve learned to keep my eyes and ears open for ways to do it better. I also like frameworks and systems, so I was naturally drawn to the information Nikki (Heise) Evans (see her company’s website) presented during a recent online series of project management webinars organized by the Project Management Institute (PMI) regarding a new (to me) way of thinking about communication styles and motivations.


Note that the Goal Setting quadrant should include bullets for Vision, Options, Bottom Line, and Discussion. I apologize for not capturing the optimal rendition of this slide.

She describes two different preference axes: one for speed (moves carefully vs. fast paced) and one for results vs. relationships orientation. The latter could also be thought of as engagement vs. solution, product vs. process, or people vs. things. Where one falls on the two axes points to a preferred concern orientation as one works, much as in many other personality typing systems.

If one prefers to move quickly with a results orientation, one is said to have a goal setting outlook that primarily asks what the focus is. If one prefers to move quickly but with a relationship orientation, that points to a lifestyle outlook that tries to make everyone comfortable. If one prefers to move more deliberately and with a relationship orientation, that indicates a stability outlook concerned with how different ideas will sound. Not surprisingly, I prefer to proceed somewhat deliberately and with a results orientation, which is described as having an information outlook that tries to identify what customers need.

I am always an analyst first, and everything flows from that. Even if I don’t appear to be oriented toward people in an obvious way, developing excellent solutions is ultimately in service of meeting their needs, so everyone is trying to get to the same place.

It is important to note that good leaders and analysts will be able to adapt to conditions, team members, and customers as needed, and that these categories of outlook are not hard and fast, but only serve to suggest how one is likely to proceed when starting out. Over time, effective communication, analysis, iteration, feedback, and correction should lead individuals and teams to create similarly effective solutions. I wonder, however, if Nikki and her colleagues have noticed that the nature of the solutions and the problems typically encountered tend to vary with the orientation of different types of leaders and approaches. I believe I will ask them that very question, and I will report back when I learn more.

Noting my affinity for frameworks and systems, I’d like to see where this fits in with similar insights made by others. Let me list and briefly describe some of them, and then compare the present insight.

I first encountered the above communication model in a leadership school in the Army, and I have encountered additional materials on communication and information theory since (for example, this one was interesting). This started to give me a feel for how complex communication can be and the problems that can be encountered.

From my work in distributed real-time systems I learned that communications must often be retried until success is achieved. Given the number of communication channels that may exist, the effects of poor communication can be highly problematic.

This is why my engagement and analysis framework has evolved the way it has, and how I’ve chosen to represent it. The circles in the top figure represent the iteration within each phase of all the relevant activities and communications as they proceed to completion. The two-way connections between the circles represent the iteration between phases, both forward as the effort progresses and backward as subsequent discoveries induce changes in the understanding of previous phases. The lower image shows how the analysis process proceeds in the context of a project management effort.

A gentleman named Dave Thomas, one of the original authors of the Agile Manifesto, gave a well-known talk provocatively titled Agile is Dead, in which he admonished practitioners that the cottage industry that has grown up around Agile and Scrum (and Kanban and so on) is largely beside the point, and that what Agile should really involve is getting people to talk to each other and change course as necessary. It’s not that all the other things taught don’t have value. All the practices (which came from the eXtreme Programming or XP oeuvre that was in vogue about the time the Agile Manifesto was written) are useful, but they aren’t the major innovation. They are things you should be doing anyway if you want to run an effective shop, whether you are consciously “doing Agile” or not. The real innovation was in seeking and incorporating feedback and correction purposefully, early, and often. In short, communicate more and better and act on it.

A gentleman named Matthew Green shared his extensive knowledge of Agile and its related practices at two of the recent Tampa IIBA chapter’s weekly certification study group webinars. His presentations, knowledge, and speaking quality were exceptional, and he made the best arguments I’ve ever heard as to why the various software development practices should be thought of as integral to the Agile oeuvre. His argument is that the ability to react quickly and actually deliver value quickly is dependent on knowing and practicing the relevant development techniques. While I agree with his point and have complete respect (and admiration) for his skills and experience, I still view the practices as mostly separate and independent. ScrumMasters don’t necessarily need to know much of the development oeuvre, and Product Owners need to know it even less. (It never hurts if they do, of course.) The advanced techniques they teach in the Certified Scrum Developer track (at least through Scrum Alliance) make a worthy point of crossover, however. Somebody should know the development practices well, but it is somewhat specialized knowledge. ScrumMasters, Product Owners, and other team members do not necessarily need to concentrate on that knowledge. They have their hands full with other duties, after all. It should usually be enough for people to communicate with each other effectively so the necessary knowledge is shared and acted upon as needed. That, to me, is the essential thrust of Agile.

All that said, you are invited to review the meeting recordings here, and tell me what you think. Specifically check out the videos from August 23rd and August 30th, 2022 (BABOKStudyGroup20220823.mp4 and BABOKStudyGroup20220830.mp4). I believe they are sessions 50 and 51.

I encountered two more interesting frameworks while attending many dozens of business analysis Meetup sessions hosted by different IIBA chapters.

Kara Sundar’s presentation on Communicating with Leaders described different concerns of leaders in ways that may illuminate how to understand and communicate with them. This is similar to the framework Ms. Evans presented, as described above. I’m sure it will also affect the nature of the working environment those leaders create.

Kim Hardy’s presentation on Nine SDLC Cross-Functional Areas, by contrast, described how to build a team based on the concerns of the team members. Rather than building a team by gathering up people with skill-based job titles like Sponsor (Product Owner), Team Lead (Architect), ScrumMaster, Developer, DBA, UI/UX Designer, Graphic Artist, Tester, Business Analyst, Specialists in Security/Deployment/Documentation, and so on, one can identify team members based on concerns like Business Value, User Experience, Process Performance, Development Process, System Value, System Integrity, Implementation, Application Architecture, and Technical Architecture. One doesn’t typically recruit for people with the latter labels or titles, but this is a way to ensure that attention is paid to the qualities of the process and the solution. This should happen anyway, but having an organized way of thinking about it can definitely increase the chances of success.

I have also had the privilege of learning about the Herrmann Brain Dominance Instrument, which can help managers communicate more effectively with team members. Other popular personality typing systems like Myers-Briggs and the Big Five provide similar insights.

At one point I thought that Ms. Hardy’s framework helped address the non-functional aspects of the solution while building a team by standard title was geared more toward the functional aspects of the solution. Most of the other frameworks and tools seem geared more toward aspects of the engagement and the environments in which they operate. I still think that idea has some merit, but it is not particularly important. In any case, I know all of these practitioners are very knowledgeable and experienced and can provide value across an entire spectrum of concerns, but each presenter has a limited time to advance a theme and make a limited but powerful point. I think all of these insights provide valuable tools for improving communication and mutual understanding.

Posted in Management

A Fascinating Simulation-Based Method for Risk Analysis

While piling up the final PDUs I need to renew my PMP, I encountered a fascinating discussion of risk (and budgeting) analysis and management techniques, presented by the internationally recognized risk management expert Dr. David T. Hulett. The video I watched is hosted on ProjectManagement.com, which I believe is behind a paywall. The video below, on YouTube, appears to cover roughly the same material.

I first learned about techniques like this from my friend Richard Frederick, in lecture 14 of his 20-part series on agile project management and data management, but hadn’t learned a lot more about it until now.

The interesting part of Dr. Hulett’s work is not just the application of Monte Carlo techniques to the analysis of risk (and project costs), but the wide range of additional considerations the technique can include, address, and illuminate. These include lead-lag effects, dependencies, correlations, the fact that risk elements are far more likely to represent threats (negative) than opportunities (positive), the fact that — for valid reasons usually involving complexity — risks and costs are almost certainly greater than are usually estimated, and more.

I’m sure I could profitably watch this multiple times, but for now it has given me many useful concepts to chew on.

Posted in Management

How To Address A Weakness

In this evening’s discussion in our weekly Tampa IIBA certification study group we touched on the subject of dealing with weaknesses. This initially grew out of a discussion about SWOT analyses. Based on things I’ve read and my proclivity to try to look at problems from as many angles as possible, I am aware of two main approaches.

The obvious one is to make the weakness less weak. There are numerous ways to do this, depending on the nature of the weakness. One is to learn more or otherwise develop or add to the skill or capability that is lacking. This can involve bringing in new people (from within and from outside your organization), obtaining information (in books and papers and online), training in new tools or technologies (via courses, videos, and friends), and many other methods.

The other, and less intuitive, approach is to enhance your strengths so that the weakness matters less. If you or your organization is so good at something that it provides a significant competitive advantage, then it may be wisest to concentrate on maintaining or improving that strength.

In general, it’s best to take the approach — or combination of approaches — that provides the highest marginal benefit. That is, go with the solution that gives the greatest bang for the buck.

Think of a football team. (We’re talking American football here, not the game we crazy Americans call soccer and the rest of the world calls football!) Every team gets to put eleven players on the field on defense at one time. Assuming the level of overall talent among teams is roughly equal, we might observe that some teams, because they have better players at certain positions or better coaching or better or different schemes, will be stronger at some facets of defense and weaker in others, while other teams are stronger or weaker in different areas.

For example, one team may have a very strong defensive line that is reasonably able to stop opposing teams’ rushing attacks and can really put a lot of pressure on the quarterback. A different team may have a less proficient defensive line but a much stronger secondary. If the first team’s defensive line can pressure the opposing quarterback enough, it may not matter that its secondary cannot cover the receivers as well, because the quarterback won’t be able to get the ball to them anyway. If the second team’s secondary is very strong and is able to blanket the opposing receivers, it may not matter if its defensive line is weaker, because even if the quarterback has more time to throw, the receivers will never be open.

There are other ways to cover for a weak aspect of a defense. One is to improve the offense so the defense is on the field less or otherwise does not have to be as effective. Another is to modify the stadium and motivate the crowd so opposing offenses have to play in a louder environment, which should reduce their effectiveness. The number of factors that can be considered in this kind of analysis is nearly infinite, so in order to keep things simple, we’re only going to consider the problem from the two dimensions of a defense’s line vs. its secondary.

Looking at the first case of a defense with a strong line and a weaker secondary (the weakness we intend to discuss), we can see that we can either improve the secondary (the obvious approach), or (less obviously) we can maintain or improve the defensive line even more. Remember that resources are limited (the number of players on a team and the number on the field at any one time are fixed, there is a salary cap, there are recruiting restrictions, and so on) and the solution must be optimized within defined constraints. Not all problems are constrained in this exact way, but there are no problems which are not constrained in some way. It’s always a question of making the best use of the resources you have. The approach you take should be based on what works best for the current situation.

Posted in Management

Basic Framework Presentation

I found it necessary to put together a shorter introduction to my business analysis framework than my normal, full-length presentation(s). The link is here.

Posted in Tools and methods

Decision Modeling

Many business processes require decisions to be made. Decision modeling is about the making and automating of repeatable decisions. Some decisions require unique human judgment. They may arise from unusual or entrepreneurial situations and involve factors, needs, and emotions that cannot reasonably be quantified. Other decisions should be made the same way every time, according to definable rules and processes.

Decisions are made using methods of widely varying complexity. Many of the simulation tools I created and used were directed to decision support. The most deterministic and automatable decisions tend to use the techniques toward the lower left end of the trend toward complexity and abstraction for data analysis techniques shown above, although the ability to automate decisions is slowly creeping up and to the right. I discussed some aspects of this in a recent write-up on data mining.

Decision processes embody three elements. Knowledge is the procedures, comparisons, and industry context of the decision. Information is the raw material and comparative parameters against which a decision is made. The decision itself is the result of correctly combining the first two. Business rules can involve methods, parameters, or both.

Let’s see how some of these methods may work in practice.

Decision tables, an example of which is shown above, list the relevant rules in terms of operations and parameters. The rules shown above involve simple comparisons, but more complex definitional and behavioral rules can apply. The optimization routines Amazon uses to determine how to deliver multiple items ordered at the same time in one or many shipments on one or many days and from one or more fulfillment centers involved up to 60,000 variables in the late-90s and are likely to be even larger now. A definitional rule may describe the way a calculation should be performed, while a behavioral rule might require a payment address to match a shipping address. The procedures and comparisons can be as complex as the situation demands.

The same set of rules can be drawn in the form of a decision tree as shown below.

These rules can be described during the discovery and data collection activities of the conceptual model phase, and also during the requirements and design phases. It is fascinating how many different ways such rules can be brought to life in the implementation phase.

The most direct and brute force way is shown below, in the C language.

This way also works, but looks totally different.

The number of ways this can be done is endless. The first method “hard codes” all of the definitional parameters needed for comparison. This can be somewhat opaque and hard to maintain and update. The second method defines all the parameters as variables that can be redefined on the fly, or initialized from a file, a database, or some other source. The latter is easier to maintain and is generally preferred. It is extremely important to maintain good documentation, including in the code itself. I’ve omitted most comments for clarity, but I would definitely include a lot in production code. I would also include references to the governing documents, RTM item index values, and so on to maintain tight connections between all of the sources, trackers, documents, and implementations.

In order to understand these, you’d have to know a reasonable amount about programming; failing that, you should know how to define tests that exercise every relevant case. For example, you would want to define tests that not only supply inputs in the middle of each range, but also supply inputs on the boundaries of each range, so you can fully ensure the greater-than-or-equal-to or just greater-than logic tests work exactly the way you and your customers intend. Setting the requirements for these situations may require understanding of organizational procedures and preferences, industry practices, competitive considerations, risk histories and profiles, and governing regulations and statutes. None of these considerations are trivial.

You will also want to work with your implementers and testers to ensure they test for invalid, nonsensical, inconsistent, or missing inputs. It’s up to the analysts and managers to be aware of what it takes to make systems as robust, flexible, understandable, and maintainable as possible. Some programmers may not want to do these things, but the good and conscientious ones will clamor to inject the widest variety of their concerns and experiences into the process. As such, it’s important to foster good relationships between all participants in the working process and have them contribute to as many engagement phases as possible.

Finally, data comes in many forms and is used and integrated into organizations’ working processes in many ways. I discuss some of them here and here, and visually suggest some in the figure below. Some of this data is used for other purposes and doesn’t directly drive decisions, and I would not assert that one kind is more important than another. In the end, it all drives decisions and it is all important to get right.

Posted in Tools and methods

Decision Analysis

Some decisions are fairly straightforward to make. This is true when the number of factors is limited and when their contributions are well defined. Other decisions, by contrast, involve more factors whose contributions are less clear. It is also possible that the desired outcomes or actions leading to or resulting from a decision are not well defined, or that there is disagreement among the stakeholders.

The process of making decisions is similar to the entire life cycle of a business analysis engagement — writ small. It involves some of the same steps, including defining the problem, identifying alternatives, evaluating alternatives, choosing the alternative to implement, and implementing the chosen alternative. The decision-makers and decision criteria should also be identified.

Let’s look at a few methods of making multi-factor decisions at increasing levels of complexity. It is generally best to apply the simplest possible method that can yield a reasonably effective decision, because more time and effort is required as the complexity of analysis increases. I have worked on long and expensive programs to build and apply simulations to support decisions of various kinds. Simulations and other algorithms themselves vary in complexity, and using or making more approachable and streamlined tools makes them more accessible, but one should still be sure to apply the most appropriate tool for a job.

  • Pros vs. Cons Analysis: This simple approach involves identifying points for and against each alternative, and choosing the one with the most pros, fewest cons, or some combination. This is a very flat approach.
  • Force Field Analysis: This is essentially a weighted form of the pro/con analysis. In this case each consideration is given a score within an agreed-upon scale on the pro or con side, and the scores are totaled for each option. This method is called a force field analysis because it is sometimes drawn as a (horizontal or vertical) wall or barrier with arrows of different lengths or widths pushing against it perpendicularly from either side, with larger arrows indicating considerations with more weight. The side with the greatest total weight of arrows “wins” the decision.
  • Decision Matrices: A simple form of the decision matrix assigns scores to multiple criteria for each option and adds them up to select the preferred alternative (presumably the one with the highest score). A weighted decision matrix does the same thing, but multiplies the individual criteria scores by factor weightings. A combination of these techniques was used to compile the ratings for the comparative livability of American cities in the 1985 Places Rated Almanac. See further discussion of this below.
  • Decision Tables: This technique involves defining groups of values and the decisions to be made given different combinations of them. The input values are laid out in tables and are very amenable to being automated through a series of simple operations in computer code.
  • Decision Trees: Directed, tree structures are constructed where internal nodes represent mathematically definable sub-decisions and terminal nodes represent end results for the overall decision. The process incorporates a number of values that serve as parameters for the comparisons, and another set of working values that are compared in each step of the process.
  • Comparison Analysis: This is mentioned in the BABOK but not described. A little poking around on the web should give some insights, but I didn’t locate a single clear and consistent description for guidance.
  • Analytic Hierarchy Process (AHP): Numerous comparisons are made by multiple participants of options that are hierarchically ranked by priority across potentially multiple considerations.
  • Totally-Partially-Not: This identifies which actions or responsibilities are within a working entity’s control. An activity is totally within, say, a department’s control, partially within its control, or not at all in its control. This helps pinpoint the true responsibilities and capabilities related to the activity, which in turn can guide how to address it.
  • Multi-Criteria Decision Analysis (MCDA): An entire field of study has grown up around complex, multiple-criteria problems, mostly beginning in the 1970s. Such problems are characterized by conflicting preferences and other tradeoffs, and by ambiguities in the decision and criteria spaces (i.e., the input and output spaces).
  • Algorithms and Simulations: Much of the material on this website discusses applications of mathematical modeling and computer simulation. There are many, many subdisciplines within this field, of which the discrete-event, stochastic simulation using Monte Carlo techniques that I have worked on is just one.
  • Tradespace Analysis: Most of the above methods of analysis involve evaluating trade-offs between conflicting criteria, so there is a need to balance multiple considerations. It is often true, especially for complex decisions, that there isn’t a single optimal solution to a problem. And in any case there may not be time and resources to make the best available decision, so these methods provide a way to at least bring some consideration and rationality to the process. Decision-making is ultimately an entrepreneurial function (making decisions under conditions of uncertainty).
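A weighted decision matrix of the kind described above can be sketched in a few lines of Python. The options, criteria, weights, and scores below are purely illustrative, not drawn from any real analysis:

```python
# Hypothetical criteria weights (must sum to 1.0) and 1-10 scores per option.
WEIGHTS = {"cost": 0.5, "quality": 0.3, "speed": 0.2}

options = {
    "Option A": {"cost": 7, "quality": 5, "speed": 9},
    "Option B": {"cost": 4, "quality": 9, "speed": 6},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Multiply each criterion score by its weight and sum the results."""
    return sum(WEIGHTS[criterion] * s for criterion, s in scores.items())

# The preferred alternative is simply the one with the highest total.
best = max(options, key=lambda name: weighted_score(options[name]))
for name, scores in options.items():
    print(name, round(weighted_score(scores), 2))
print("Preferred:", best)
```

Setting every weight equal reduces this to the simple (unweighted) decision matrix, which is one reason the weighted form is the more general tool.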

The Places Rated Almanac

I’ve lived in a lot of places in my life, but I consider Pittsburgh to be my “spiritual” hometown. I spent many formative and working years there and I have a great love for the city, even with my understanding of its problems. So, I and other Pittsburghers were shocked and delighted when the initial, 1985 edition of Rand McNally’s Places Rated Almanac (see also here) rated our city as the most livable in America. Not that we didn’t love it, and not that it doesn’t have its charms, but the result pointed out a few potential issues with rankings like this.

The initial work ranked the 329 largest metropolitan areas in the United States on nine categories: ambience, housing, jobs, crime, transportation, education, health care, recreation, and climate. Pittsburgh scores well on health care because it has a ton of hospitals and a decent amount of important research happens there (much of it driven by the University of Pittsburgh). It similarly gets good scores for education, probably driven by Pitt and Carnegie Mellon, among many other alternatives. I can’t remember what scores it got for transportation, but I can tell you that the topography of the place makes navigation a nightmare. Getting from place to place involves as much art as science, and often a whoooole lot of patience.

It also gets high marks for climate, even though its winters can be long, leaden, gray, mucky, and dreary. So why is that? It turns out that the authors assigned scores that favored mean temperatures closest to 65 degrees, and probably favored middling amounts of precipitation as well. Pittsburgh happens to have a mean temperature of about 65 degrees, alright, but it can be much hotter in the summer and much colder in the winter. San Francisco, which ranked second or third overall in that first edition, also has a mean temperature of about 65 degrees, but the temperature is very consistent throughout the year. So which environment would you prefer, and how do you capture it in a single metric? Alternatively, how might you create multiple metrics representing different and more nuanced evaluation criteria? How might you perform different analyses in all nine areas than what the authors did?
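To see how a mean-only metric hides variability, here is a small Python sketch comparing a “closest to the ideal mean” score with one that also penalizes month-to-month swings. The monthly temperatures are rough illustrative figures I made up for the comparison, not official climate data, and the scoring functions are my own, not the almanac’s actual formulas:

```python
import statistics

# Hypothetical monthly mean temperatures (degrees F); illustrative only.
pittsburgh = [30, 34, 44, 55, 65, 74, 78, 77, 70, 58, 47, 36]
san_francisco = [51, 54, 55, 56, 58, 60, 60, 61, 63, 61, 56, 51]

IDEAL = 65

def mean_only_score(temps: list[float]) -> float:
    """Score peaks when the annual mean is closest to the ideal."""
    return 100 - abs(statistics.mean(temps) - IDEAL)

def variance_aware_score(temps: list[float]) -> float:
    """Also penalize large month-to-month swings around that mean."""
    return mean_only_score(temps) - statistics.pstdev(temps)

for name, temps in [("Pittsburgh", pittsburgh), ("San Francisco", san_francisco)]:
    print(name, round(mean_only_score(temps), 1), round(variance_aware_score(temps), 1))
```

With mean alone the two cities score within a couple of points of each other; once the standard deviation enters the picture, the steadier climate pulls well ahead, which is exactly the distinction a single-number metric fails to capture.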

If I recall correctly, the authors also weighted the nine factors equally, but provided a worksheet in an appendix that allowed readers to assign different weights to the criteria they felt might be more important. I don’t know if it supported re-scoring individual areas for different preferences. I can tell you, for example, that the weather where I currently live in central Florida is a lot warmer and a lot sunnier and a lot less snowy than in Pittsburgh, and I much prefer the weather here.

Many editions of the book were printed, in which the criteria were continually reevaluated, and that resulted in modest reshufflings of the rankings over time. Pittsburgh continued to place highly in subsequent editions, but I’m not sure it was ever judged to be number one again. More cities were added to the list over the years as different towns grew beyond the lower threshold of population required for inclusion in the survey. Interestingly, the last-place finisher in the initial ranking was Yuba City, California, prompting its officials to observe, “Yuba City isn’t evil, it just isn’t much.”

One thing you can do with methods used to make decisions is to grind through your chosen process to generate an analytic result, and then step back and see how you “feel” about it. This may be appropriate for personal decisions like choosing a place to live, but might lead to bad outcomes in public competitions with announced criteria and scoring systems that should be adhered to.

Posted in Tools and methods

Business Analysis Embedded Within Project Management

I often describe the process of business analysis as being embedded in a process of project management. I’ve also described business analysis as an activity that takes place within a project management wrapper. Engagements (projects) may be run in many different styles, as shown below, but the mechanics of project management remain more or less the same.

What changes in the different regimes is the way the actual work gets done. And here I find it necessary to make yet another bifurcation. While I talk about business analysis as taking place across the entire engagement life cycle, through all of its phases, the function of the BA is different in each phase, and the level of effort is also different in each phase.

I think of essentially three groups as being involved in each engagement: the managers, the analysts, and the implementers (and testers). Let’s look at their duties and relative levels of participation across each of the phases. The descriptions are given as if every role was filled by different people with siloed skill sets, but individuals can clearly function in multiple roles simultaneously. I’ve done this in some instances and it will almost inevitably be the case in smaller teams and organizations.

  1. Intended Use (Problem Definition): This is where the project’s ultimate goals are defined, resources are procured, and governance structures are established. This work is primarily done by project and program managers, product owners and managers, and the sponsors and champions. Analysts, as learning and translating machines, can serve in this phase by understanding the full life cycle of an effort and how the initial definition and goals may be modified over time. It may be that only senior analysts participate in this phase. Implementers can contribute their knowledge of their methods and solution requirements and how they need to interact with customers.
  2. Conceptual Model: This is where the analysts shine and drive the work. The managers may need to facilitate the mechanics of the discovery and data collection processes, but the analysts will be the ones to carry it out, document their findings, and review them with the customers, making changes and corrections until all parties have reached agreement. The implementers will generally be informed about events, and may participate lightly in discovery activities or do brief site visits to get a feel for who they are serving and the overall context of the work.
  3. Requirements: This works very much like the conceptual model phase, where the analysts find out what the customers need through elicitation and review and feedback. The implementers will be a little more involved to the degree that their solutions inject their own requirements into the process. Managers facilitate the time, resources, introductions, and permissions for the other participants.
  4. Design: There are two aspects to the design. The abstract design may be developed primarily by the analysts, while the more concrete aspects of the design are likely to be developed by the implementers. I often describe the requirements phase as developing the abstract To-Be state and the design as developing the concrete To-Be state, but even the “concreteness” of the design has different levels. The abstract (but concrete!) part of the design describes the procedures, equations, data items, and outputs for the solution, while the concrete (really, really concrete!) part of the design specifies how the foregoing is implemented. I know from painful experience that you can have a really good idea what you need a system to do, but being able to implement your desires correctly and effectively can be difficult, indeed. See here, here, and here for further discussion. The latter item is especially germane.
  5. Implementation: The implementers clearly do most of the work here. The analysts serve as liaisons between the implementers and customers by facilitating ongoing communication, understanding, and correction. The managers support the process and the environment in which the work is conducted.
  6. Test (and Acceptance): The implementers (and testers) also expend most of the effort in this phase. The managers facilitate and protect the environment and verify final acceptance of all items. The analysts facilitate communication between all participants and the customer, and also continually attempt to improve the flow of the entire working process.

I tend to express the phases of my analysis framework in a streamlined form of a more involved process. I start with everything that gets done:

  • Project Planning
  • Intended Use
  • Assumptions, Capabilities, and Risks and Impacts
  • Conceptual Model (As-Is State)
  • Data Sources, Collection, and Conditioning
  • Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  • Design (To-Be State: Detailed)
  • Implementation
  • Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  • Acceptance (Accreditation)
  • Project Close
  • Operation and Maintenance
  • End-of-Life and Replacement

Then I drop the management wrapping at the beginning and end (with the understanding that it not only remains but is an active participant through all phases of an engagement or project/product/system life cycle) simply because it’s not explicitly part of the business analysis oeuvre.

  •   Intended Use
  •   Conceptual Model (As-Is State)
  •   Data Sources, Collection, and Conditioning
  •   Requirements (To-Be State: Abstract)
    • Functional (What it Does)
    • Non-Functional (What it Is, plus Maintenance and Governance)
  •   Design (To-Be State: Detailed)
  •   Implementation
  •   Test
    • Operation, Usability, and Outputs (Verification)
    • Outputs and Fitness for Purpose (Validation)
  •   Operation and Maintenance
  •   End-of-Life and Replacement

Then we simplify even further, since the data melts into the other phases and we don’t always worry about the full life cycle.

Now let’s consider the practice of project management in its own language. The Project Management Body of Knowledge (PMBOK) is the Project Management Institute’s (PMI) analog to the IIBA’s BABOK. It defines five phases of a project: initiating, planning, executing, monitoring and controlling, and closing.

The figure above shows the five phases proceeding from left to right through the course of the project. The practice embodies management of ten different areas of concern, each of which comes into play during some or all of the project’s phases. (This was true through the sixth edition of the PMBOK. The recently released seventh edition replaces the ten knowledge areas with twelve principles, including extensive coverage of Agile practices. I will update this article accordingly at some point in the future.)

The project is defined and kicked off during the initiating phase, during which the requisite stakeholders are identified and involved. The project charter is developed in this phase, shown in the integration management area in the upper left. BAs can definitely be part of the process of creating and negotiating the charter and helping to shape the project and its environment. The project charter defines the key goals of the project, the major players, and something about the process and environment.

The planning phase is where the bulk of the preparation gets done in terms of establishing the necessary aspects of the working environment and methodologies for the project. The actual work gets done in the executing phase, with the monitoring and controlling phase proceeding concurrently, but which is devoted to monitoring, feedback, and correction separate from the actual work. The closing phase ends the project, records lessons learned, archives relevant materials, celebrates its successes (hopefully…), and releases resources for other uses. The methods and concerns in each of the ten management areas all overlap with the practice of business analysis, and BAs should absolutely be involved with that work.

In the figure below I show that, once the engagement (or effort or project or venture or whatever) is set up, most of the work of the business analysis (as well as the implementation and testing) oeuvre is accomplished during the executing phase and the monitoring and controlling phase. This includes the intended use phase (which also includes the activities in the project charter), because it may change as the result of developments, discovery, and feedback over the course of the engagement.

Don’t take the location of the phases too literally. I’m not saying the first three BA phases occur during executing and the remaining three during monitoring and controlling. Rather, I’m saying that all phases of BA work are conducted during the concurrent executing and monitoring and controlling phases. Seen in this light, the initiating, planning, and closing phases from the project management oeuvre are the “wrapper” within which the bulk of an engagement’s actual work is done.

I’ll end by emphasizing a few things again. These general concepts apply no matter what project approach may be taken (e.g., Waterfall, Agile, Scrum, Kanban, SAFe, or hybrid). Individuals may wear multiple hats depending on the organization and situation. All parties should work together to bring their strengths and unique abilities together. Few participants are likely to participate through all phases of an engagement, but they should be made aware of the full context of their work. Greater understanding of the roles of all participants and job functions will greatly aid cooperation and understanding. And finally, and most importantly, that greater understanding will lead to greater success!

Posted in Management

Estimation

Estimation is used to try to predict future outcomes related to the iron triangle elements of time, money, and, to a lesser degree, quality (or features or performance). The BABOK essentially only discusses the first two. Estimates can be made of both costs and benefits. While all aspects of this process are in a sense entrepreneurial, the biggest component of entrepreneurial judgment is predicting future benefits, particularly for potential sales.

Any aspect of an effort or solution may be estimated for any part of its full life cycle. Examples include the time, cost, and effort (in terms of staff and materials) of any activity; capital, project, and fixed and variable costs of delivered solutions; potential benefits (e.g., sales, savings, reduced losses); and net performance (projected benefits minus projected costs).

The most important thing to know about estimation is that it tends to be more accurate when more information is available. This is especially true when making estimates about outcomes in situations very similar to those encountered in the past.

There are many methods of estimation including:

  • Top-down and Bottom-up: Estimates can be performed from both ends depending on what is known about the engagement and the solution (the project and the product). Breakdowns can be made from the highest levels down to more detailed levels, or aggregations can be made from detailed low-level information which is then grouped and summed.
  • Parametric Estimation: This method has a lot in common with bottom-up estimation. It attempts to multiply lesser-known input information (how many of A, B, and C) by better-known parametric information (e.g., the known prices for each individual example of A, B, and C). Levels of skill and experience can figure into such calculations as well.
  • Rough Order of Magnitude (ROM): This is basically an educated guess, based on experience, impressions, and entrepreneurial judgment. There are a few pithier names for this method!
  • Rolling Wave: This involves making continuous estimates of elements throughout an engagement, which ideally become more accurate over time as more is known and less is unknown.
  • Delphi: This technique seeks estimates from a wide variety of participants, potentially over multiple iterations, until a consensus is reached. This allows certain kinds of knowledge to be shared across the participants. As an example, think of a group of coders bidding on tasks during sprint planning. Most participants might make similar judgments of the complexity of a task, but if one or two team members make very different estimates they could share that they’re aware of a simple or existing solution to the problem that will reduce the effort required, or know about hidden requirements and other stumbling blocks that will increase the effort required. As another example, the first issue of Omni Magazine included a Delphic poll of its readership asking about when certain developments, discoveries, and accomplishments might take place. The results were published in a subsequent issue.
  • PERT: This technique asks participants to estimate best-case, expected, and worst-case outcomes, which are then averaged, with the expected outcome given a weighting of four times, i.e., result = (best + 4*expected + worst) / 6.
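The PERT formula above is simple enough to express directly in code. The example task durations here are hypothetical:

```python
def pert_estimate(best: float, expected: float, worst: float) -> float:
    """Three-point (PERT) estimate: the expected case is weighted four times."""
    return (best + 4 * expected + worst) / 6

# e.g., a task estimated at best 2, expected 5, and worst 14 days
print(pert_estimate(2, 5, 14))  # → 6.0
```

Note how the long tail on the worst case pulls the result above the expected value of 5, which is the point of the technique: it bakes a little pessimism into single-number estimates.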

As mentioned above, the accuracy of estimates is likely to improve when more information is available. This information can come from similar or analogous situations, historical information, expert judgment, or a combination of any or all of these.

Estimates can be given as point values or as a range, the latter of which also indicates the degree of uncertainty. The expected range of outcomes is described by a confidence interval, and the associated confidence is generally expressed as (1 – expected maximum error), where the expected maximum error is a percentage of the central value. For example, an estimate of 100 plus or minus 10 would indicate a confidence of 90%. In the case of 100 plus or minus five, the confidence would be 95%. Certain statistical and Monte Carlo techniques generate confidence intervals. In these two examples, the maximum absolute error in one direction is sometimes called the half-width, because it is half of the full range of possible outcomes (though the upper and lower bounds do not have to be the same distance from the expected value). This information can come into play when determining needed sample sizes.
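One common way to compute a half-width from sample data, and the sample size needed to shrink it to a target, is the normal approximation. This is a sketch under that assumption (the z value of 1.96 corresponds to roughly 95% confidence; the function names are my own):

```python
import math
import statistics

def interval_from_samples(samples: list[float], z: float = 1.96) -> tuple[float, float]:
    """Return (point estimate, half-width) under a normal approximation."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))  # standard error
    return mean, z * sem

def samples_needed(stdev: float, target_half_width: float, z: float = 1.96) -> int:
    """Sample size required to shrink the half-width to a target value."""
    return math.ceil((z * stdev / target_half_width) ** 2)
```

Because the half-width shrinks with the square root of the sample size, halving the uncertainty requires roughly four times the samples, which is why Monte Carlo runs get expensive quickly.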

Estimates should generally be made by those responsible for the outcome of the effort for which the estimate was performed. These can, however, be checked against estimates from additional parties.

Posted in Tools and methods

Item Tracking

Item tracking is how participants in an effort monitor what concerns, issues, and tasks are valid and need to be addressed, and who has responsibility. Items can arise in any phase of an engagement and be tracked through any other phase, including during the extended operation and maintenance phase.

Items may incorporate the following attributes, according to the BABOK. I think some of these are redundant, but tracking systems like Jira and Rally embody them by default, and can be customized to include the others. More importantly, if you look back to your own experience, you can see that most of these are implicitly present even if not formally acknowledged.

  • Item Identifier: A unique identifier that serves as a key so the item can be found.
  • Summary: A description of the issue that includes text and possibly images, diagrams, and media.
  • Category: A key that can be used to group the item with similar items.
  • Type: The kind of item. (Similar to category?) (Create as needed for your unique situation.)
  • Date Identified: Date and time the issue was raised (and introduced into the system).
  • Identified By: Name (and contact information) of individual(s) who identified or raised the issue.
  • Impact: What happens if the item is not resolved. May include specified times for milestones and completion.
  • Priority: An indication of the item’s importance and especially time requirements.
  • Resolution Date: Times by which milestones must be reached or by which the item must be resolved.
  • Owner: Who is responsible for marshaling the item through to completion.
  • Resolver: Who is responsible for resolving the item.
  • Agreed Strategy: Method for approaching or resolving the item. The BABOK presents options akin to those used in risk analysis (e.g., accept, pursue, ignore, mitigate, avoid), but others are possible.
  • Status: The current state of the item. Items may have their own life cycles (e.g., opened, assigned, in work, in testing, resolved, canceled, rejected). See below for further discussion.
  • Resolution Updates: A log of activities and status updates detailing the item’s disposition.
  • Escalation Matrix: What to do and who should do it if the item is not resolved in the allotted time.
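A minimal tracked item carrying a subset of the attributes above can be sketched in Python. The field names, statuses, and the example identifier are illustrative, not a real Jira or Rally schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TrackedItem:
    """A work item with an identifier, summary, ownership, and status history."""
    item_id: str
    summary: str
    category: str
    priority: str
    owner: str
    status: str = "opened"
    date_identified: datetime = field(default_factory=datetime.now)
    resolution_updates: list[str] = field(default_factory=list)

    def update(self, new_status: str, note: str) -> None:
        """Change the status and log the change in the resolution history."""
        self.status = new_status
        self.resolution_updates.append(f"{new_status}: {note}")

item = TrackedItem("DR-101", "Pump model diverges at low flow", "model", "high", "pat")
item.update("assigned", "given to the hydraulics team")
print(item.status)  # → assigned
```

Even this bare-bones version makes most of the BABOK attributes visible: the ones not modeled here (impact, escalation matrix, and so on) would just be additional fields.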

Each organization, and even each engagement, may have its own standard procedures and vocabulary for handling items through their life cycle. When I wrote models for nuclear power plant simulators at Westinghouse we usually had three or four projects going at once, and all of them named some of their work items differently. We had DRs, TRs, and PRs, for deficiency reports, trouble reports, and problem reports at the very least, depending I think on the customer’s preferred language.

I’ve written about using systems like Jira for tracking items through the entire engagement life cycle (here), but a few years later I can see that the idea should be expanded to include an independent life cycle for items within each phase of my framework, and that may be different for different phases. For example, the life cycle for implementation items might be something like assigned, in work, in bench testing, completed (and forwarded to testing). The cycle for conceptual model items might be very different, since it involves performing discovery operations through tours, interviews, research, calculations, and data collection, and then documenting the many identified elements and circulating them for review and correction. I should do a specific write-up on this.
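The phase-specific life cycles suggested above can be enforced with a small transition table. The statuses here follow the hypothetical implementation-phase cycle just mentioned; a conceptual-model phase would define its own table:

```python
# Allowed status transitions for a hypothetical implementation-phase life cycle.
TRANSITIONS: dict[str, set[str]] = {
    "assigned": {"in work"},
    "in work": {"in bench testing", "assigned"},
    "in bench testing": {"completed", "in work"},  # can bounce back for rework
    "completed": set(),  # terminal state
}

def advance(current: str, new: str) -> str:
    """Move an item to a new status, refusing illegal jumps."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {new!r}")
    return new

status = "assigned"
status = advance(status, "in work")
status = advance(status, "in bench testing")
print(status)  # → in bench testing
```

Keeping the table as data rather than code is the useful part: each phase, or even each engagement, can plug in its own vocabulary without changing the checking logic.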

Statistics can be compiled on the processing and disposition of items, so the engagement teams and customers can understand and improve their working methods. Care should be taken to be aware of potential variances in the complexity and requirements of each item, so any resultant findings can be interpreted accurately and fairly.

As mentioned above, items can arise and be tracked and resolved in and through all phases in an engagement’s or product’s full life cycle. In my career I’ve seen individually tracked items mostly come from testing, customer concerns, and to do lists generated by the solution teams themselves. We often called them punch lists as projects were advancing toward completion and the number of outstanding items became small enough to be listed and attacked individually. But, depending on the maturity and complexity of your organization and your effort, you’ll want to carefully consider what system you impose on a working project. You want it to be complex enough to be powerful and clarifying for all participants, but not so overwhelming that interacting with it is almost a larger burden than the actual work. That is, it should enhance the working environment, and not impede it.

What systems have you seen for tracking items?

Posted in Tools and methods