Knowing What’s Under the Hood and How It Got There

The downside of having been involved in software for a long time is that it’s easy to fall into the trap of not knowing all of the latest languages and techniques. One drifts into analysis and management and discovery and so on, and away from the construction of the day-to-day code. This is true even if such a practitioner has a very deep understanding of the code’s internal logic, its outputs, and its data structures and can ably specify pseudocode for algorithms that need to be implemented.

Getting caught up with the latest developments in programming languages and APIs has been interesting, and the logic of why things have gone in this or that direction has been clear. The first thing to understand is that the constraints are different than they used to be. Memory, storage, and processing speed were the main limitations once upon a time. My professor emphasized these points during my freshman programming course, and they were still important some years later when I was working on thermo-hydraulic and thermodynamic simulations in real-time systems. While those constraints never go away entirely (one can always consume all available resources and cry for more, particularly in simulation work), the main constraints are now more likely to be complexity and developer productivity. Bigger and more complex systems have to be verified and maintained, and development teams need to be more productive.

The way to accomplish both goals is through greater abstraction. In a sense this is the same process an individual goes through when becoming an expert at something. The practitioner subsumes more and more basic concepts and techniques into “muscle memory”, which is really subconscious “brain memory”, so his or her mind is free to manipulate higher-level concepts. Over the time I’ve been observing it, the entire industry has followed the same path. Processors, compilers, and APIs do more of the optimizing (I always loved reading about the innovations brought by each generation of Intel CPUs back when I was devouring close to a dozen computing magazines every month). Languages are restructured to embody more of the concepts of functional programming, which is no more than a consistent method of replacing details with abstractions. Development environments include features as simple as syntax highlighting and as complex as various forms of autocompletion and automated refactoring. Development of user interfaces has been simplified and made more flexible and adaptive as well. The point is that developers may not write more lines of code than they used to, but each of those lines of code is designed to do a lot more.
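To make that last point concrete, here’s a minimal sketch of my own (written in C++; the variable names and the toy filtering task are invented for illustration) showing the same computation at two levels of abstraction. The second version hands the iteration and bookkeeping to the standard library, so a single line carries the weight of many:

    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> readings{3, 8, 2, 7, 5};

        // Low-level style: manage the loop, the index, and the accumulator by hand.
        int total_low = 0;
        for (std::size_t i = 0; i < readings.size(); ++i) {
            if (readings[i] > 4) {
                total_low += readings[i] * readings[i];
            }
        }

        // Higher-level style: declare the intent; the library handles the mechanics.
        int total_high = std::accumulate(readings.begin(), readings.end(), 0,
            [](int sum, int r) { return r > 4 ? sum + r * r : sum; });

        std::cout << total_low << " " << total_high << '\n';  // both print 138
        return 0;
    }

Whether the second form is “better” is partly a matter of taste, but it is the direction the industry has moved: state the intent and let the tooling handle the mechanics.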

That said, it’s not always a good idea to just accept every abstraction you’re bequeathed, even if they’re all good ones. There is something to be said for understanding what’s going on behind the scenes. Today’s “full stack” web developer is in some ways an analog to yesterday’s developer who programmed down to the hardware and managed the registers, interrupts, stack, and heap directly. It was extremely useful to know what was going on at that level, or close to it, with early IBM PCs and the high-level languages targeted to them. The earliest programmers likely had to know as much about electronic circuits as they did about logic. I always thought my brushes with assembly languages, special memory configurations, binary file structures, low-level networking, and several different high-level languages served me well when working in any particular area. The more background you have, the better off you are.

Consider the allocation of dynamic memory. Programmers used to have to do this entirely by hand, and even when compilers started automating certain heap operations it was still up to the programmer to explicitly free up memory that wasn’t going to be used any more. I encountered a language some time ago that forced the programmer to nullify or redirect all pointers to a block of heap memory before it could be deallocated. That was moderately annoying at the time because I was being “forced” by the language to “waste” CPU cycles on activities I felt I was fully in command of. Later I came to appreciate that this constraint forces the programmer to get the details right and, in the big scheme of things, doesn’t impose a meaningful overhead. I mean, how many cycles does it really take to go pointerToThisThingyOverHere = NULL; even if you have to do what feels like a lot of it? Modern languages are increasingly likely to automate the process altogether. The language designers figure that there will always be spare CPU cycles with which to sweep through the heap to figure out what isn’t being used any longer and get rid of it automatically. Most systems that interface with humans spend at least some of their time waiting for input, so that leaves plenty of space for background processes to work their magic. More dedicated or specialized systems might address the problems in different ways.
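For anyone who has only worked above this layer, here is a hedged sketch (C-style allocation written as a small C++ program; the names are mine, not from any particular system) of what manual heap management looks like, including the pointer-nulling discipline just described:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Ask the heap for room for 100 doubles; malloc can fail, so check the result.
        double* samples = static_cast<double*>(std::malloc(100 * sizeof(double)));
        if (samples == nullptr) {
            return 1;  // out of memory
        }

        samples[0] = 98.6;  // ...use the block...
        std::printf("%f\n", samples[0]);

        // The programmer, not the language, gives the memory back...
        std::free(samples);

        // ...and nulling the pointer afterward costs almost nothing while preventing
        // accidental use of the freed block (a dangling pointer).
        samples = nullptr;

        return 0;
    }

A garbage-collected language like Java or JavaScript performs the equivalent of the free, and the associated bookkeeping, behind the scenes; knowing that the work still happens somewhere is exactly the kind of appreciation the interview question in the next paragraph probes for.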

One of my classmates in the course I took to get my Scrum Developer certification is an active manager and Scrum Master in a C# shop (the class was in Java in the IntelliJ IDEA environment). He opined that it was perfectly reasonable to ask an interviewee if he or she understood the malloc function (or its equivalent in other languages) even though the interviewee might only have worked in languages like Java or JavaScript, which tend to hide such details. Given my own experiences I thought he had a point. Even if the interviewee didn’t know the answer to that particular question it would be good if they could show an appreciation of some other hidden details.

Memory allocation is just one example of the phenomenon of increasing abstraction, of which modern practitioners may not always be aware. The same thing is going on in graphics subsystems directly (the “full stack” of graphic operations is deep and specialized) and indirectly (offloading parallel operations to GPUs), communications (digital signal processors and other communications processors), parallel computing, and other areas. Every area is quite specialized at every level, so it’s clear that no one person can know all of it.

One can get a lot of work done without knowing what’s going on under the hood and how it got there but there are times when that’s important, if for no other reason than to know if some proposed innovation is actually a variation of a previous finding.


Project Management Environments I’ve Worked In – Part 2

Yesterday’s post described the project management environments I encountered earlier in my career. The unifying theme across those environments is that the formality and intensity of project management techniques are proportional to the scope and scale of each project. The longer a project runs, the more people who are involved, and the more steps there are, the more effort must be expended to plan, track, and communicate. The project management environments in my most recent three companies shared some of the characteristics of the earlier ones but differed in that the recent positions were usually less project-oriented and more product- and service-oriented, although all of my work involved developing and employing complex software systems.

American Auto-Matrix, Senior Software Developer, HVAC Industry, 2000-2002

AAM developed and sold a line of HVAC controls to third-party integrators. Its internal management structures were all product-based, though billing codes were sometimes issued for specific activities (e.g., specific R&D and troubleshooting efforts for individual products and customers). As such all management was based on continuous streams of income and expenses over the life of each product rather than on the activities associated with specific projects. This article provides a nice discussion of the differences–and the overlaps–between project and product management. Because of these differences I won’t describe the PM environment in terms of projects but of products.

Conceive: Each product was conceived in the context of providing additional capabilities to the product line as a whole, and in response to needs identified to address customer desires and the moves of market competitors.
Plan: Different managers oversaw the development of hardware, software, firmware, manufacturing, and marketing of each product. The company was small enough and the product line limited enough that management was conducted in matrix fashion by department, across all items in the product line. One of the things I looked at during my tenure was the matrix of features across most of the control components. I wanted to see how consistently they implemented different features and see if there was a way to standardize the firmware and software for each to make it modular across products. The hardware used in each product was sufficiently varied to preclude such an approach at that time, though I believe that with the advent of more powerful yet ridiculously inexpensive processors that can be programmed far more quickly using high-level languages, such an approach could now be effectively used. We could even have used a team approach that standardized sections of even custom-developed assembly language firmware in a way that enhanced the sharing of knowledge and standardization of techniques. As it was, each engineer worked more or less in isolation, maintaining their own firmware and software archives.
Develop: New products were developed in a project-like fashion and then maintained and upgraded over time. The company maintained backward compatibility to even its oldest products and even maintained the ability to manufacture its earlier products with minimal overhead. In terms of software, as mentioned above, the engineers largely worked in isolation. Only the PC-based HMI product called Auto-Pilot was routinely worked on by more than one person, and that team coordinated the sharing, modification, release, and archiving of versions on an ad hoc basis.
Qualify: Products were tested in-house, then at limited locations in the field, and then made available for general release.
Launch: New products were released with accompanying marketing support, training of the technical support staff, appropriate updates to the company’s website and printed materials, and so on.
Deliver: Control units were manufactured and tested in batches in response to order placements and to maintain an appropriate stock on hand. Most distributions were accomplished through third-party integrators.
Retire: All information and manufacturing equipment related to products that were phased out were archived. The appropriate modifications were made to the company’s website and printed materials.

Regal Decision Systems, Program Manager/Technical Manager, Software and Operations Research Industry, 2002-2010

Regal’s work was usually project-based, but one major effort had many characteristics of an ongoing product and one was clearly managed as a program. Rather than try to break them all down as I have previously, I’ll describe how each worked.

The program I ended up managing involved placing employees with various customer organizations in the Naval Aviation community. The day-to-day activities of the workers employed by our company and by three different subcontracting companies were all managed by the customer. We just tracked their hours, pay, and benefits. I had to handle monthly and yearly reporting, budgeting and invoicing, finding replacement employees, renewing task orders and partnering agreements, supporting and responding to customer evaluations, and so on. I also managed a satellite office along with its utilities, furniture, and amenities, the computers used by all employees under the NMCI program, raises and communications with the home office, transition of managers in the customer Program Management Activities and the NAVAIR contracts office, travel and expenses, security clearances and secured visits when applicable, and more. All told there were up to 35 employees working across four task orders.

The product/project hybrid I worked on was quite interesting. The company was charged with building a tool to simulate inspection activities at land border crossings, and the tool was continually enhanced over the course of multiple contract cycles. Individual analysis efforts were managed like projects (or task orders within contracts) and the simulation software was managed like a product whose cost was amortized over the individual analysis efforts within each contract period. Changes to the software were driven mostly by internally identified needs that arose from the analyses we performed but occasionally were requested directly by the customer (which was the GSA, or General Services Administration; DHS managed the inspection processes and GSA managed the properties themselves, sometimes in conjunction with private owners). The configuration management of the software was mainly done by the senior software manager, though the team may well have used a source code control system. Most analysis projects involved sending a team of data collectors to do discovery and collect data in the form of video, counts, drawings, pictures, and capture from automated reporting systems. Permissions and travel had to be arranged in those cases. I worked on the analysis of a new port built near Calais, Maine. That involved meetings with different architects and government offices, estimations of expected traffic volumes and composition, construction of multiple models based on alternative designs, reporting of results, and offering of recommendations. The projects varied in scope and scale, as did the planning and the nature of the deliverables. Over time the GSA worked with Regal to define a formal analytical procedure and report format. I thought they made the process needlessly burdensome and complex but everyone else seemed satisfied with it. Additional products/projects were developed for land ports of entry in Canada and Mexico as well, and I led the discovery process on many of those trips and wrote the various requirement and technical documents.

The rest of the company’s activities were project-based. They had defined beginnings, middles, and ends and rarely extended beyond a single contract phase. I managed or co-managed a number of those efforts and handled much of the reporting, communication, customer interfacing, and requirements analysis.

RTR Technologies (split from Regal Decision Systems), Senior Process Analyst/Senior Computer Scientist, 2010-2015

RTR had two main customers, an analytical arm of Naval Aviation and the Department of Homeland Security. The occasional work I did for DHS was usually project-based and of limited duration. The project management was usually lax from both our side and the customer’s, which proved to be a problem more than once. Other parts of the company maintained more of a product/program relationship with DHS. Most of the work I did for NAVAIR, and the work I was mainly hired to do, involved performing logistics analyses for different Navy and Marine Corps aircraft. Most of the analyses involved configuring and running simulations of inspection, maintenance, and operational activities. As was the case with the border simulation software written by Regal, RTR’s Aircraft Maintenance Model (AMM) was managed like a product while analytical efforts were managed like projects. Project and product management was somewhat ad hoc here as well, and RTR’s senior manager held all financial information close to the vest. None of the project or product managers had a clear enough picture of how the projects were running with respect to cost. The company’s owners also took a long time to recognize that they needed to bill more hours to cut down the burden of overhead across the activities of a relatively small company. Some of the company’s software products were managed using source code control systems but other software components were managed by individuals manually in designated directories on different servers. All supporting documentation was managed manually, though phased updates were stored in a SharePoint system dedicated to each aircraft modeled by the AMM. The AMM software and procedural framework was also managed by a formal Configuration Management Board. The Board held formal meetings and tracked requests for modifications over time. The company had always operated in an Agile, iterative format but began to formally adopt Scrum techniques piecemeal over time. I served formally as a Product Owner on one of the DHS projects. That was quite easy to adapt to because I had essentially been doing the same thing for years.

The takeaway is that I’ve seen most combinations and permutations of project, program, and product management methodologies over the years, and I have worked in many capacities within those frameworks.


Project Management Environments I’ve Worked In – Part 1

Most of the work I’ve done in my career has been accounted for on a project basis. I was never an operations guy; my job was always to build or fix something under a particular job code and then move on to the next one. There were times when different jobs involved making mods to, or performing analyses with, a particular piece of software over time, and there were other times when my job was to re-implement a variation of something I had done before. Working in so many different environments has given me a wide range of experience and allowed me to become (or is that forced me to become?) highly adaptable.

I will describe the project management modes of each of the companies I’ve worked for in terms of the five project phases: Initiating, Planning, Executing, Monitoring and Controlling, and Closing. Younger employees often just go for whatever ride their supervisors send them on; they may not always be conscious of what’s going on behind the scenes. That was somewhat true in my case but looking back it’s pretty easy to figure out what was going on.

Sprout Bauer (then part of Combustion Engineering, now owned by Andritz), Process Engineer, Pulp and Paper Industry, 1988-1989

I worked for what Sprout called the Technical Department, which analyzed the behavior and performance of integrated systems in terms of the quantity and quality of pulp produced. We did not worry about the design, manufacture, installation, or service of any individual pieces of equipment. Our efforts involved analyzing and designing new systems for sales or research and analyzing existing systems to identify potential improvements or to verify they were meeting contract terms. Our direct customers were external when we were recommending improvements to existing systems. In all other cases our direct customers were internal, and our efforts were tracked as components of larger Research, Sales, or Design & Build contracts.

Initiating: External customers sought our recommendations and established budgets or rates, scope of work, and so on. Internal customers received support based on established practice and the needs of each effort.
Planning: I know there were formal project managers so I assume they ran and tracked activities and costs using standard tools. My supervisor and his supervisor handled the details of our department’s activities. The terms of our contracts spelled out a lot of the details of how the different operational areas would be handled (Integration, Scope, Time, Cost, Quality, Human Resources, Communication, Risk, and Procurement) and our proposal budgets (which I saw on occasion since I used to print them for one of the salesmen) showed the cost breakdown. On larger projects and sales efforts there were numerous iterations internally and with the customers to work out the design and management details.
Executing: From my point of view this was handled in a straightforward way by my managers. Larger projects, particularly multi-million dollar turnkey installations, must have involved the full range of project management and budgeting tools.
Monitoring and Controlling: Same as executing.
Closing: We closed projects by issuing a final report which described our findings, or by completing the final deliverables in other cases (e.g., design Process and Instrumentation Drawings, Heat and Material Balances, or on-site commissioning assistance). Members of the department filed their work and discussed ways to improve our methods on an ad hoc basis and occasionally through formal meetings.

Westinghouse, Thermo-hydraulic Simulation Engineer, Nuclear Power Industry, 1989-1992

Westinghouse’s Nuclear Simulator Division did only one thing, which was to build full-scope nuclear power plant training simulators. The division reached a maximum size of 256 people and three or four projects in-house at one time, so the company had a very formal project management methodology in place. An independent audit of the division’s project management methodology was conducted by Hewlett-Packard while I was there and the company received good marks, scoring four out of a possible five, if I remember correctly. I was aware of and involved in ongoing formal processes in a way I wasn’t in my first job.

Initiating: I am guessing that projects were initiated by competitive bid according to terms laid out by the Requests for Proposal and the contract terms specified by and negotiated with the winning bidders. The majority of this activity was mandated by the federal government in the wake of Three Mile Island and Chernobyl, so many of the requirements were doubtless specified in the applicable statutes and regulations.
Planning: Each project had to be completed in multiple steps which had to be formally planned and executed using the Waterfall methodology. Beyond the basic steps of Identify Requirements, Design, Implement, Verify, and Maintain, the company had to work with each plant to build replicas of every panel in its main control room; determine the equipment and activities that needed to be modeled in each plant system; order, install, and configure the computing hardware; define the models; implement the models; test and verify all models in conjunction with the computing and control panel systems; transport the physical systems to each plant site; and perform final acceptance testing. Different specialists and managers were needed for many of the activities.
Executing: The execution of the above-listed steps proceeded in a straightforward manner, with managers overseeing various technical and managerial aspects of the process using standard project management and budgeting tools.
Monitoring and Controlling: The site maintained a library of copies of every relevant technical document describing the configuration and operation of each plant, so that had to be maintained, along with a shared database that allowed modelers to identify documents of interest. The numerous utility, handler, modeling, and instructor station programs and subroutines were managed using a formal source code control system that had to manage software written in multiple languages (FORTRAN, C, and Assembler at the minimum) for multiple types of computers (Gould/Encore, Sun, PC). Design reviews were held for every system and subsystem and a dedicated group of technical editors helped create the voluminous pieces of formal documentation. Systems and models were tested in isolation and later systemically as they matured, by dedicated teams of testers according to written procedures and monitored through formal defect tracking systems. This environment was the largest and most comprehensive I encountered in my career.
Closing: Closing involved turning over all deliverables and passing final acceptance testing. I know that there must have been some review and documentation of lessons learned, but the management situation was always somewhat fluid and the division contracted greatly during the run of my contract. Most of the contractors were released as their projects wound down and they would not have participated in many closeout activities.

Micro Control Systems, Control Systems Engineer, Steel Industry, 1993

This small modification of an existing control system involved comparatively little in the way of formal project management. A representative of the company showed me how the operating system (Concurrent DOS) and parts of the program worked, told me what changes needed to be made, introduced me to the people at the plant, and more or less turned me loose. When the work was completed and the updates were installed and tested in the plant I completed a minimum of documentation, turned copies of the work over to the company representative, and went on to my next position.

CIScorp, Project Coordinator/FileNet Imaging Programmer, Business Process Reengineering/Enterprise Software Industry, 1993-1994

The FileNet Group wrote software systems for various clients who were installing document imaging systems. The goal of these systems was to replace older, paper-based business processes with newer, more automated processes using a client-server model. As I understand it the hardware systems, at least the central image management and database portions, were installed and managed by other parties: the customers’ IT departments themselves, FileNet itself on a direct basis, or third-party consultants. CIScorp’s analysts, engineers, and programmers only performed the functions of process discovery (and thus requirements for software and work flow); design of the user interfaces, databases, and code; implementation of software systems; testing and acceptance of software systems; and maintenance of installed systems.

Initiating: Contracts for various customers were secured by reference and referral and through competitive bids. The size and scope of each project determined the formality and intensity of each project initiation cycle.
Planning: Planning on the projects I worked on took many forms. In one case two of us spent nine weeks in Manhattan doing informal development to no particular specification while a separate team did the same using a different enterprise management product. My partner and I provided much of the FileNet expertise to the team of three customer employees and one manager. Our planning was ad hoc on that project; we just made sure we demonstrated at least one of every feature the customer thought they might need. (Our efforts must have been at least somewhat successful since the customer chose to do their implementation in FileNet.) The formality and intensity of planning also appeared to be dependent on the scope of the implementation. I worked on smaller projects and larger ones and know that the larger ones must have required a lot more support. I got my first chances to act as site rep and project coordinator while I worked with CIScorp, though I mostly worked on the discovery, requirements, development, and documentation aspects of the projects. I was not involved with scheduling, personnel, or budgeting.
Executing: One of the projects I worked on began with a full discovery and analysis process. After that was complete and the findings were agreed to and the economic case to proceed sufficiently demonstrated, the results of the analysis were used to provide size and scale information that drove the acquisition and installation of server, imaging, and client hardware. I paid particular attention to what the customer’s managers said about introducing most of its staff to computers for the first time. Some of their viewpoints seemed counterintuitive to me but I knew they had done their homework and were probably making solid and well-informed decisions.
Monitoring and Controlling: All of our analysis and development efforts proceeded with frequent reviews, submission of reports, demonstrations of code, and go/no go decisions. I know of no case where we implemented formal systems of source code control, beyond individuals’ efforts to save versions of their work at intervals. Department and project managers handled the day-to-day and phase-to-phase management details.
Closing: Closing activities consisted of making it through final testing and acceptance, and delivery of the system and accompanying documentation, if any was specified. For smaller projects it often wasn’t specified in any detail.

Bricmont (bought first by Inductotherm and later sold to Andritz), Manager of Level 2 Projects/Control System Engineer, Steel and Metals Industry, 1994-2000

Bricmont primarily sold reheat furnaces, control systems, and thermodynamic analysis to customers in the steel and metals industry. The Level 1 engineers did the I/O and low-level controls and main HMIs (Human-Machine Interfaces) for each installed system. While I was there the Level 1 systems incorporated a wide range of control circuitry, cabinets, power supplies, sensors and actuators, PLCs (Programmable Logic Controllers), and PCs hosting the HMI software. The Level 2 engineers (including me) did much of the inter-system and inter-process communications and the supervisory, model-predictive control on DEC Vax and Alpha or PC hardware. The company sometimes provided troubleshooting, analysis, and training services as well. The company serviced existing furnace installations and built turnkey furnace systems running into many millions of dollars for larger furnaces (our largest installations might have been our $8M two-line tunnel furnaces that sat between the $150M twin casters and the $200M six-stand rolling mill). I was the sole designer of Level 2 models during my six years with the company, and I served as product manager, site representative, and project coordinator (and also designer, developer, commissioner, and servicer). I sometimes trained and mentored a second Level 2 modeler who just adapted my designs to other furnaces. A second Level 2 controls engineer handled the procurement and installation of DEC computers and their VMS operating and development environment, and wrote much of the inter-process communication code for DEC systems. I also mentored the team that did the redesign of Inductotherm’s MeltMinder system, which controlled their induction melting furnaces and power supplies. I did all of the modifications to the original MeltMinder software in response to requests from the parent company to address the needs of new customers.

Initiating: The projects were initiated by referral, direct sales, and competitive bid. The terms of each job were spelled out in the relevant contracts, the complexity of which were determined by the scope and scale of the deliverables.
Planning: Planning was ad hoc for smaller efforts. We worked out what we were going to do, scheduled a time and a place to do it, and went and got it done. Full scale control system retrofits or furnace design-and-build jobs obviously involved the procurement and fabrication of materials, on-site installation and construction, definition of milestones, project requirements, and so on. This required an on-site construction manager (they lived in places like Thailand, Korea, India, and Mexico for months at a time), a procurement officer, project managers, accountants, and people to handle international trade and labor and travel issues. Management was usually carried out using Waterfall methodologies. After the first few implementations most of my Level 2 systems were merely adapted and improved from previous designs. They were almost always different enough to require substantial rewrites but I reused and improved what I could.
Executing: All steps were completed in the obvious order. The budgets for Level 2 systems were usually so generous that there was no way we wouldn’t make money, so we rarely had to worry about time or resources. We worked pretty efficiently anyway so there was never a problem.
Monitoring and Controlling: Again, the intensity of monitoring and control was a function of the scope and scale of work. The full complement of tools was employed behind the scenes. I only worried about the technical aspects of each job though later in my tenure I started going on sales, design, and planning calls, especially those that involved identification of requirements. No one in the company ever used source code control systems but by this time I had gotten in the habit of storing each project’s work in a consistent directory structure and saving backups of each day’s work so I could always go back to a working system. I developed each system in the same order so I was able to quickly generate a minimum viable product that could be expanded to completion.
Closing: Closing activities for each project included creating system documentation (I wrote full user manuals for many of my systems and the company had a clever machine that could clamp the spine of a hardbacked cover onto documents printed on standard paper to create nice-looking books), passing final acceptance tests, providing the requisite training for the operators and maintenance engineers, and turning over all deliverables. Full-scale design-and-build projects were often problematic. The systems weren’t accepted until the entire production line produced acceptable outputs over the range of specifications. This often meant that we had to wait until the entire plant was in good working order and a fresh set of rolls was installed in the rolling mill. On a couple of occasions the wait for those conditions stretched out to six weeks, during which time we usually had very little to do. Sharing of lessons learned was done informally between colleagues and all software, documentation, and project files were archived.

To be continued…


Agile and Scrum Balance Needs of Different People

I first encountered the Myers-Briggs Personality Type Indicator in 19(cough, cough…) and have done quite a bit of reading about it since. I’m aware of its weaknesses but the primary interest for me is its role in making me aware of certain types of personality differences and how often they may be encountered in different situations. At the very least the tool is popular enough that lots of people will be able to comfortably discuss its concepts.

Every time you revisit a subject you tend to apply it to whatever happens to be on your mind at the time, and as I’m reading a couple of older books called Type Talk and Type Talk At Work the things I happen to be thinking about are Agile and Scrum practices. My last company worked in a consciously Agile fashion for a number of years but only began to introduce formal Scrum processes last year. Once I had been exposed to them and thought about them for a while I decided to pursue numerous certifications in Scrum methodologies this year. The certifications were straightforward against my long background of developing software and working on and managing projects and programs in many different environments.

Scrum, of course, is just a special case of Agile. Neither Scrum nor Agile replace all of the traditional development, problem-solving, and management concepts from previous methodologies. Rather, based on lessons learned over years of industry practice, they change the emphasis to enhance flexibility and improve the chances of producing a workable solution with minimum waste and fuss. The techniques are about managing the expectations of all participants in the process, from developers to analysts to managers to customers, and of prioritizing needs with respect to available resources.

A section of one of the books opined that the tension between Judgers, who tend to prefer order, clarity, and deadlines, and Perceivers, who tend to prefer alternatives, exploration, and… butterflies! (all of which they will make sound perfectly reasonable), potentially causes the most friction between people. This is understandable in a business environment where stuff has to get done and out the door at some point. One can easily imagine a manager getting peeved that a perceiver-type worker has identified and classified the problems in endless detail but hasn’t solved them yet. Agile, with its emphasis on continual feedback and iterative development, and Scrum, with its defined ceremonies and breakpoints, meet the needs of both judgers and perceivers. Judgers are happy because something has to be delivered and shown at regular intervals, and the intervals are short enough that plenty of hard data about who did what and what works and what doesn’t is always available. Continuous review, feedback, and prioritization help the judgers ensure that the most important things get addressed and, through specific techniques like test-driven development (TDD), ensure that effort isn’t expended on things that aren’t needed. Perceivers are happy because they are freer to use their creativity to break the problems down and apply interesting, efficient, and effective solutions in their own way. Continuous review, feedback, and refinement also help the perceivers relate current tasks to the bigger picture so they can continually assess and reevaluate the cohesion of the whole.
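As a minimal illustration of the TDD technique just mentioned (my own sketch, in C++; clamp_percent is a made-up example function, not anything from the books or the certification courses), the test is written first and pins down exactly what is needed; only then is just enough code written to pass it:

    #include <cassert>

    // Declared first so the test compiles; in TDD the test below is
    // written before the implementation exists.
    int clamp_percent(int value);

    void test_clamp_percent() {
        assert(clamp_percent(-5) == 0);     // values below range clamp to 0
        assert(clamp_percent(50) == 50);    // in-range values pass through
        assert(clamp_percent(120) == 100);  // values above range clamp to 100
    }

    // Written second: just enough code to make the test pass, and no more.
    int clamp_percent(int value) {
        if (value < 0) return 0;
        if (value > 100) return 100;
        return value;
    }

    int main() {
        test_clamp_percent();
        return 0;  // silence means the tests passed
    }

The judger gets a concrete, checkable deliverable; the perceiver keeps the freedom to satisfy it however ingenuity suggests.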

Both types are served because they get their preferred needs met and the project is served because it is considered and managed from a variety of perspectives. After-action reviews and project closeouts are useful for both types to capture lessons about what could be done better from the viewpoint of design, completeness, and unity as well as from the viewpoint of procedure, management, and feedback. The pithy observation about this phenomenon was that marketing always wants it yesterday so they can deliver to the customer, while engineering always wants more time to make it perfect. I don’t know whether this says more about marketers vs. engineers or judgers vs. perceivers (surely there are both judgers and perceivers among both marketers and engineers), but the techniques of Agile and Scrum work to build a minimum viable product that can be continually enhanced in terms of meeting requirements and embracing beauty in a way that everyone can appreciate.


Solving the Right Problem

The cover of the June 2010 issue of Mechanical Engineering magazine asked the question, “Can Visionary Engineers Revive Industry in America?” Here’s the question I would ask: Did engineers break industry in America?

To ask the question is to answer it. Of course they didn’t. Did the guys in the labs at steel mills in Cleveland forget how to analyze materials? Did the software teams in Silicon Valley forget how to write code? Did the process analysts in Texas forget how to make systems work better? Did the thermodynamicists in Pittsburgh forget how to balance heat flows? Did the Six Sigma Black Belts in Detroit forget how to eliminate waste and improve quality? No, no, no, no, and more no.

If anything there’s so much innovation going on that many people and groups go out of their way to stop it. What should happen is that the best ideas should be allowed to compete. The public should be allowed to vote with its own dollars for the products and services it wants. Providers that can’t keep up need to be allowed to fail. In the interest of protecting activities people can see today, they prop up enterprises and activities and jobs that would otherwise have disappeared because of poor performance. Those enterprises and activities and jobs that should be gone continue to consume resources that could have been used more efficiently by others, but nobody sees that.

Is it painful for the participants when companies fail? Sure. People lose money. They have to be retrained. They may have to move. On the other hand, what about the opportunities lost? The people who weren’t allowed to compete could have made more money, could have used their best ideas, and could have had better lives. Their customers could have paid lower prices for better products. Who’s looking out for them?

I can give you a long list of suggestions for who’s causing the problems, but I can guarantee you it isn’t the engineers, coders, designers, and analysts.  Just get out of their way and they’ll come up with amazing solutions like they always have.  You can’t solve a problem unless you ask the right question.


Artists and Technicians

A lot has been written about learning styles, but there are questions about how meaningful the concept is and I’m not an expert anyway. I have, however, always felt that there are two opposing approaches to learning, each of which ends where the other begins. Here I’m thinking not just of how people begin a learning process but of how they progress to become experts. I came to this through a combination of reading, having widely varying interests, and being in contact with many different kinds of people.

The most interestingly different people were the arts majors at Carnegie Mellon University. Most of my family were economists and both my grandfathers and my mother were very technically inclined so I was used to thinking about engineering and science. Economists and engineers think in very similar ways so the apple didn’t fall too far from the tree. Everyone meets a variety of personalities and dispositions going through school but most aren’t too highly differentiated when they’re young; grade school is where they start to get that way. In college people sign on to a goal and go for it. Being hip deep in sculptors, industrial designers, singers, athletes, musicians, stage designers, and actors (and to a lesser degree architects) was mind-expanding for someone like me. Not only did we get to know each other up close in my fraternity, we got to participate in major events that allowed each type of person to experience something of what makes the other tick.

When we weren’t studying we were often building things, painting things, wiring things, singing things, pushing things, planning things, and practicing things. Putting up crazy decorations for theme mixers was one thing but the big Spring Carnival competitions of Booth and Buggy (formally called Sweepstakes) took it up several levels. Booth was a mixture of art and engineering and Buggy a mix of athletics and engineering. The biggest crossover for me was Greek Sing, where all the fraternities and sororities put on seven-minute musical productions. My fraternity wasn’t very good at Buggy but we went all out in Booth and Greek Sing.

The winning acts in Greek Sing usually performed two or three numbers from a famous stage musical (we did How to Succeed in Business Without Really Trying and 1776), and we always got into full costume, sang in three- or four-part harmonies, and worked up some complicated choreography. I can’t sing to speak of but I can fake it respectably in one of the lower registers. Performing a big production in front of a thousand people or more was both scary and a rush. It was so much fun and no small relief to have three months of work pay off with a nice placement. It wasn’t the performance itself that was so moving, it was the experience of working together on something so physically engaging. It tended to be a more emotional experience than the technical efforts I was used to. It gave me a real appreciation for what the graduating theater majors felt each year when they got to paint their names on a different section of wall backstage.

In later years I grew to appreciate musical theater more, seeing many live performances and catching pop culture events that featured song and dance (I saw Baryshnikov and Hines in White Nights three times in the theater, when I wasn’t watching Rambo and Jean-Claude Van Damme movies, of course!). Around this time I also read about how some people learn by just doing, without fear and without thinking about it too much, while others start by taking things apart and trying to do the individual technical parts right. Those who get better and better through the joy of just doing eventually have to figure out the technical parts; their love and passion must carry them through the work of analysis, instruction, research, and endless practice. Baryshnikov may have been born with talent but he wouldn’t have been who he was if he didn’t put in the work. By contrast, those who learn by analyzing and experimenting can find the effort so engaging and interesting that they keep going until they can stop thinking and do more and more by feel. They ultimately get to where they can create without the hesitation of not being perfect, because they’ve perfected all the pieces.

Performers and artists can learn either by doing or by analysis, and engineers and programmers can do the same, but it’s tempting to use the example of dancers and engineers as a shorthand to observe, in the end, that the best artists have to be great technicians, while the best technicians raise their work to the level of art. That’s the goal for all of us, right?


How and Why I Got My Certifications

For years I never worried about getting certified in anything. I was steadily developing skills and picking up new things. I already had the jobs I wanted. I was able to learn what I needed from books or the occasional class. Many classes were provided by employers but on one occasion I sought out a week-long class in Oracle database management because we were going to start interfacing with Oracle products. Mostly I never availed myself of training allotments made available at places I worked. I just did my thing, studied and experimented on my own, and lived my life.

That was good enough until my company and a larger partner company were working together to bid on a contract for which I was envisioned as the project manager. I had served in this capacity on many occasions already, but this was going to be bigger and more formal than what I had previously done. It was going to involve dozens of people (hence the need for a larger partner) and a full Project Management Office (PMO), and as such the contract called for the PM to be formally certified as a Project Management Professional (PMP). The owner of the company said, essentially, “you will obtain that certification or you will find someplace else to work.” So, I went back to the office, read up on the requirements for the cert, filled out the application including documenting the thousands of hours of PM experience I already had, and sent it in. (My company did not win that project, btw.)

One of my co-workers was also interested in the cert so I signed up for the online prep course he recommended and blasted through it at home one Saturday. That’s it, I did the whole thing in one day, maybe 7-9 hours. The process was easier because I already had most of the background, but it seemed kind of incomplete to me. I therefore also signed up for a four-day class in a nearby hotel and finished that as well. It consumed a lot more hours and cost a lot more money but didn’t provide much extra background. The PMP exam was (and I believe still is) a closed-book proctored exam given at automated testing centers around the country. In order to prepare I made a table of five columns, one for each of the phases of a project (Initiating, Planning, Executing, Monitoring and Controlling, and Closing), and nine rows, one for each management subject area (Integration, Scope, Time, Cost, Quality, Human Resources, Communications, Risk, and Procurement), and then inserted all the summary information listed in the PMBOK Guide (Project Management Body of Knowledge), which was at that time in its fourth edition. The fifth edition was released recently and I reviewed it to ensure I hadn’t missed anything. I made the grid in Excel, printed it out, and carried it around and reviewed it periodically. I also made a shorthand grid that just had the subject matter initials in the row headers, the project phase initials in the column headers, and Xs where there was actually data in the grid. Not all subject matter areas have actions defined for every project phase. The pattern was pretty regular and I memorized it and the few exceptions quickly. When I walked into the testing session I was able to start by drawing out the shorthand grid from memory, and that gave me enough of a framework to hang the rest of the details on. I passed every section of the test easily enough, though my strongest section was clearly Project Execution, the area in which I’d had the most direct experience at that time.
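For the curious, here is that shorthand grid reconstructed from my memory of the fourth edition’s process matrix; treat it as a hedged sketch and check the PMBOK Guide itself before relying on it. An X marks each phase in which a knowledge area defines processes:

                         Init  Plan  Exec  M&C   Close
    Integration           X     X     X     X     X
    Scope                       X           X
    Time                        X           X
    Cost                        X           X
    Quality                     X     X     X
    Human Resources             X     X
    Communications        X     X     X     X
    Risk                        X           X
    Procurement                 X     X     X     X

The regularity is easy to see: Planning and Monitoring and Controlling are dense, Initiating and Closing are nearly empty, and Integration is the only area that spans all five phases.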

Project Management Professional (PMP) #1284485

That certification needs to be renewed every three years so when it came time for the first renewal I breezed through a couple of inexpensive online review courses (one evening after work, about three hours from end to end). When it came up for the most recent renewal I used the PDUs I earned by completing a series of Lean Six Sigma courses, which brings me to the next certification. I already have most of the PDUs I need for renewal beyond 2018 and will probably just apply those I earned from this year’s Scrum certification classes, unless the requirements have changed.

I had heard about Six Sigma programs and techniques for at least ten years and probably more, and had been actively involved in TQM (Total Quality Management) programs at two companies earlier in my career. I had also seen that two of my younger co-workers had used their yearly educational allotment to take Six Sigma Green Belt courses. That looked interesting enough but it took a specific event to motivate me to follow in their footsteps. I was looking for additional activities that my employers might pursue to open new lines of business. They were always talking to people and working with community development boards and so on. I therefore resolved to see if I could get a tour of a business or two to see what they were up to and how they did things.

The first business was run by a friend of mine. He let me come in and observe his dental practice for an afternoon, take some notes, and ask a bunch of questions. That was interesting to me because I had created a tool to simulate activities in dentists’ offices some years previously and already knew the basics. I watched him go through his paces and after closing he told me the secret that allowed him to keep up to nine chairs going with him as the only dentist in the practice. It made perfect sense when he described it. (No, I’m not telling you what it is here, but I would be pleased to do so for the appropriate fee!) The analysis involved was very similar to what I had been using to analyze operations for over ten years at that point. The innovations and improvements came both at the level of improving individual processes and at the level of rearranging the overall processes to achieve certain efficiencies. It later became obvious that the process rearrangements were developed using classic Lean techniques.

The second business I saw on a trip to Pittsburgh. My father owns a small interest in a local manufacturing company up there and got a couple of the guys to show me around one morning. The engineer described both what was going on in the operation and how he was improving and rearranging things based on his Six Sigma training. It looked fascinating to me. I therefore marched myself back home and signed up for a Green Belt course right away. Looking back on what I saw that day, plus my own years of experience, made all the information in the classes very easy to visualize and understand.

The Green Belt course was designed to be completed in eight weeks with five or six hours a week of effort. I finished a couple of weeks early after having devoted four full Saturdays to the process. I got my little printout certificate and stewed about it for a while. It had been too easy and I could get a quantity discount if I took all three courses together. I decided to go ahead and sign up for the other two. My company was good enough to pay for the second course in full even though that took me over my training allotment for the year. I paid the larger cost for the final course out of pocket because I was on a roll and wanted it done. I finished the Lean Six Sigma course in about six weeks and then started in on the Black Belt course, which was supposed to take sixteen weeks. I finished that a few weeks early as well. I did extra review in each section of each course by taking the test as many times as it took to get a 100, or until I had taken each one the maximum of three times. The few times I didn’t do better than 96 still annoy me.

I took the Six Sigma courses through Villanova University’s online program because they also offer a certification that accepts the single project you complete as part of the Black Belt course for the work requirement. I have done a bucketload of process analysis and process improvement work over the years but never while formally using the tools of Six Sigma, so this option allowed me to achieve formal certification in a way I would not have been able to had I gone the ASQ (American Society for Quality) route. The downside of the Villanova certification is that it needs to be renewed every three years, while the ASQ cert appears to be a permanent, one-time-only thing. When I get the chance to do a formal project or two in Six Sigma I’ll be sure to go back and take that test and get the permanent certification. I’ve already passed an online practice exam in the ASQ format.

The Villanova Lean Six Sigma Black Belt exam is online and therefore open book (and even open Internet, though time constraints make it impossible to spend too much time looking around), but even the proctored ASQ exam allows test-takers to consult their own notes. I reviewed for the Villanova exam by re-watching all of the course videos and recording the highlights on seven sides of unlined, B-size paper. That review took eight solid evenings and full weekend days. I finally took the test on a Sunday and got the highest score earned that month. The entire process had taken a year. When it was all done I ordered a number of highly regarded books for future reference.

Certified Lean Six Sigma Black Belt (CLSSBB) #VIL020684

My last company operated in an Agile fashion but employed formal Scrum concepts only loosely due to the dispersed and stop-and-go nature of small teams dividing their efforts across multiple projects. When they decided to begin adopting Scrum formally they brought in the son of one of the owners, who had done formal Scrum work and was formally certified as a ScrumMaster, Scrum Product Owner, and Scrum Practitioner. Achieving the last requires at least one of the former and two or three years of actual Scrum practice. I eventually decided that these would be good certifications to have and signed up for and completed the ScrumMaster and Scrum Product Owner certification classes earlier this year. They were fun to take and made perfect sense both in terms of what I saw at my last company and in all of my previous experience. Over the next few weeks I learned Java programming by working through a book on the subject I accessed through my subscription to SafariBooks.com. I also learned both the Eclipse and IntelliJ IDEA IDEs. When that was done I was able to complete a Certified Scrum Developer course that was given in Java in the IntelliJ IDEA environment and thus earn that certification as well. I could probably apply for and receive the Certified Scrum Practitioner badge too, but that seems cheesy until I’ve formally worked in the Scrum framework for a while longer.

The concepts from Agile and Scrum are an interesting and effective distillation of practices I’ve been following for years. I always operated in a relatively Agile way in the software development efforts I’d been part of as a team member, manager, or sole practitioner. It was always about figuring out what needed to be done, getting a minimum viable product up and running, and building it out while soliciting and incorporating continuous feedback from the customer. I’ve seen that done in various forms for 25 years, so the certifications were a nice way of crystallizing and formalizing that experience.

Certified Scrum Master (CSM), Certified Scrum Product Owner (CSPO), Certified Scrum Developer (CSD) (Scrum Alliance)


Getting “In The Zone”

I worked on nuclear power plant training simulators at Westinghouse with up to 250 colleagues. I obviously didn’t meet or get to know them all but I worked with and hung out with a bunch of them and some clearly stood out. One lady I was never on a project with but got to know at social events worked on what we called handler routines for a lot of small, common pieces of equipment in the plant and on the control panels: valves, pumps, gauges, alarm lights, and so on. This involved careful configuration of variables in memory and lower-level logic often down to individual bits. A lot of addressing and comparing was done directly in hexadecimal notation, which preserves the status of bits in the mind of the programmer. She was always impressive to talk to and something someone mentioned about her always stuck with me. It might have been her husband (who also worked with us and who co-wrote the Interactive Model Builder tool we often used there) who commented that “she worked on the stuff so much she could multiply in hex in her head”. That’s an odd thing to remember about someone but it illustrated something interesting. She had clearly gotten “in the zone” over time and had probably mastered quite a few related skills. That one stuck because it was rather esoteric.
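For anyone who hasn’t tried it, a small worked example (mine, not hers) shows what that skill involves. Multiplying 0x1C by 0xA, digit by digit, entirely in hex:

    C x A = 0x78   -> write down 8, carry 7
    1 x A = 0xA    -> 0xA + 7 (carry) = 0x11
    answer: 0x118  (check in decimal: 0x1C = 28, 28 x 10 = 280 = 0x118)

The mechanics are the same as decimal long multiplication; what she had internalized were the hex “times tables” that most of us never bother to learn.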

Being “in the zone” is about doing something so much that you don’t make mistakes, don’t have to look things up, get exceptional amounts of work done in a short period of time, and generally feel great about what you’re doing. Interruptions inhibit getting “in the zone”, and they can come in many forms. Phone calls, e-mails, text messages, meetings, extraneous noise, equipment failures, and people trying to talk to you are examples of short-term interruptions. Changing tools, projects, jobs, and venues are examples of longer-term interruptions. That said, doing the same sorts of analyses and creating the same kinds of outputs over long periods of time, even while using different tools on different projects at different jobs in different venues, can get you “in the zone” a different way. You can get there through intensity, doing the same thing over and over in a concentrated stretch, or through reflection, doing the same thing repeatedly over a longer span of time.

Over the short or long term, if you’re reasonably aware, you’ll eventually see patterns in what you’re doing, along with a bunch of different combinations and permutations of things that illustrate different facets of your subject. You’ll build up numerous touchpoints you can expand and connect. You start with a few plugs of grass and end up with a lawn.

I’ve gotten “in the zone” at times in my career and it’s always been fun. There were times at Westinghouse where I’d worked with values in the steam tables so often I started to memorize them. There were times at Bricmont when I was banging out so much code that I rarely had to look up anything new, I never made syntax errors, and I could grind through edit/compile/test cycles for days and weeks on end. At my last two jobs I approached so many new systems and talked to so many users, developers, and subject matter experts that I got to where I could just feel how things fit together and what questions to ask to get the information I and my team needed. I’ve also spent so much time wallowing around in huge spreadsheets and Word documents that I was able to make major edits, changes, and improvements because I felt like I had the whole picture in my head at once.

Remembering how great it is to get “in the zone”, you might ask whether there are ways to get there more quickly and stay there longer. Here are a few that I can think of.

  • Make a conscious effort to learn your subject up front. Being “in the zone” is like becoming an expert at something. It involves training, repetition, and correct performance, which push granular elements down to the subconscious so the conscious brain can process information in larger and larger chunks. Formally studying a subject through books, mentors, online research, and after-hours noodling around will get you up to speed more quickly. You can’t always predict what specific skills you’re going to need, but consulting multiple sources will expose you to things to watch out for.
  • Do after-action reviews, even if you do them by yourself on your own work. Write down a few thoughts on what you’d like to know, what questions you have, and where you get stuck. Since this information is of little use if you don’t do anything with it, be sure to review it at intervals and take action based on what you find. In groups you definitely want to have formal and informal reviews and to record and disseminate lessons learned.
  • If you see something you do start to become more automatic, make an effort to push the process a little bit. If you’ve memorized a bunch of hex multiplications or steam table values, take some time to expand your range. Consciously try to link up your touchpoints.
  • Deal with distractions quickly. If you can, group them into a concentrated block of time and deal with them all at once. If there are things you don’t particularly like to do, get them done first so they aren’t hanging over your head. You generally aren’t as good at things you don’t like to do so addressing them quickly gives you more time to get feedback and make sure they get done right.
  • Actively promote a culture that makes these ideas visible to people. Having an internal blog, bulletin board, or regular e-mail keeps people thinking about and implementing these things. Having the occasional break from normal duties to explicitly share experiences might build trust and connections that weren’t there before.
  • Ask people if they’re thinking about these ideas. Plant the seed if they aren’t; recommend resources and offer to help if they are.

What stories can you tell and what ideas would you add for getting and staying “in the zone”?

What Can and Cannot Be “Homebrewed”

Americans (and many independent-minded people elsewhere) have always enjoyed solving problems on their own. There is nothing they won’t experiment with if they think it’ll do them some good. People have always worked with metal, wood, stone, leather, and cloth to make every kind of contraption, tool, and product you can think of, and a lot of them did it at home, with friends, or as small businesses. A collection of metal implements displayed in a little museum in Troy, MT illustrates this in a powerful way (a caboose sits behind the museum; I apparently failed to take any pictures of the museum itself and couldn’t find anything online). The display must have included fifty or sixty different black, wrought-iron implements of various shapes and sizes, though in length they were mostly on the scale of a good-sized crowbar, maybe four feet long. The working end of each tool had a different shape, and I could only guess what problems these tools had been created to address. The point was that each individual, either alone or working with a blacksmith, came up with a unique solution to a knotty problem. Some might have been produced in nontrivial quantities, but most had the look of one-offs meant for specific purposes.

That was just one collection tucked away in the remote, northwestern corner of Montana. Multiply it out by the whole country and the whole world and think of the variety there must have been. Plenty of Americans and others were able to experiment with electronics as they came into vogue in the first decades of the 1900s. People could buy components, batteries, and tubes of every imaginable kind from almost innumerable suppliers, and some local drug stores still had tube-testing machines on site in the mid-70s. Radio was getting into many homes, and television was just a complicated radio with a twist; it has been called (incorrectly) the last major invention to be created in a garage. Car and racing culture in Southern California and elsewhere got a huge boost in the 1940s and 50s when hundreds of thousands of mechanics and tinkerers trained for WWII got to turn their skills loose on their own creations.

Computer hardware was made accessible to early adopters through kits and found a ready audience in those already predisposed to try things on their own. Software was especially approachable for the non-specialist. Anyone who could read a book and think for themselves could make a computer sit up and dance. There were tons of homebrew magazines and disk-sharing services from the early days. If you had the IBM Technical Reference Manual you had the same information as the Big Boys.

All of these efforts were possible because they weren’t too big, too small, or too expensive. They were at a human scale that could be accessed, if not by anyone, then at least by those with a certain amount of curiosity and initiative. I bring this up because one of the next great frontiers of innovation is in the biotech space, which does not operate at a human scale. It operates at a microscopic scale that is largely beyond the abilities and frame of reference of most people to work with comfortably. As much as people can do some clever chemistry (crystal meth, anyone?) and may someday produce substances on micro-scale 3D printers at home, it doesn’t seem to me that they’ll be tinkering with their own chemicals and DNA. If they do, it will be because they’re leveraging tools that operate at higher levels of abstraction. There will always be a few of the intensely curious and motivated and a few moonlighting professionals, but experimentation in this area does not seem destined for mass participation.

Then again, as Dennis Miller often noted, I could be wrong.

Right People, Right Analyses, Right Decisions

Some time ago I found myself in a gathering somewhere in the bowels of a large, very bureaucratic organization for which my company was doing some work. The room was full of forty or so senior managers and analysts, many of whom were not technologically inclined. A pair of spit-shined young employees from one of the Big Consulting companies ran an efficient, well-organized meeting intended to decide which of the big enterprise software packages should be adopted by the organization. There were rounds of questions to assess the most important capabilities the software had to have, there were selections made using multi-voting techniques, and there was a beautifully prepared list of candidate packages, with the strengths and weaknesses of each printed on both sides of B-sized handouts. The group narrowed the options to three, and senior managers chose the winner.

The quality of that decision: garbage.

Seriously, I was embarrassed to be in the room.

I liked that a smaller and hopefully better-informed group of people made the final decision, but everything else about the process was wrong. The meeting was the first time most of the people in the room had seen the options; there had been no time for preparation and research. As capable as many of the participants were in their own subject matter areas, a large fraction of them (including me) had little to no experience evaluating enterprise software systems, and many had very little knowledge of software systems of any kind. The elephant in the room was that the organization had already licensed fifteen or twenty thousand seats of one of the candidates and, surprise, surprise, that’s the one that was chosen.

Was the probably outrageous sum charged by the Big Consulting company a good expenditure? I guess that depends on whether the purpose was to make a good decision or to provide cover for one that had already been made. Who knows? Maybe the decision was exactly what those in charge wanted it to be.

Did that selection have any effect on what was actually going to get used within the organization? The individuals we were working with wanted no part of it. The selection of that package was in no way going to make management of any data more accurate or approachable. It was not going to address the fact that the organization did not have a strong, integrated picture of how its own data were acquired and used. The organization was also moving toward web-based delivery and access internally so I would question whether all those seat licenses were ever going to be used. For anything.

The improvements the organization was seeking were only going to come from implementations carried out by skilled practitioners who understood their data and their needs and knew how to create interfaces that gave the customer the control and ease of use they needed. Systems can be made simpler, but they cannot be made simpler than they need to be.

That said, over time that organization will probably stumble through a series of analyses and implementations that will slowly hammer its data and processes into some kind of shape. I see the same process happening at another organization of similar scope and scale. I just can’t help but think it could all be done more rationally if some groundwork were laid first. I propose that the following steps might be useful before any such undertaking:

  • Ensure any project has support from senior management.
  • Ensure that senior management understands the following ground rules so they have an appreciation of what their senior implementers are going to do.
  • Determine what business needs are to be represented and implemented.
  • Determine the business roles that need to be defined. Ensure that training and scope of actions are properly assigned and understood.
  • Determine whether data exist that support the business need. If the need for new data items is identified, plan how to acquire them. (As one of my college professors was fond of pointing out, the problem should be solved before you put the numbers in. If you turn your calculator on before you have the final equation, you’re not doing it right.)
  • Determine the needs of the meta-implementation.
  • Determine the meta-implementation roles that need to be defined. Ensure that training and scope of actions are properly assigned and understood. Ideally these will be closely related to the business roles.
  • Ensure the meta-implementation provides the right capabilities and controls to the right classes of users.
  • Determine the source of all data. Ensure that incoming data is validated at the source and is complete.
  • Ensure that all sources of data agree and are consistent; specifically, ensure that individual data records are consistent with the roll-ups and aggregations built from them. For example, reports of the number of items processed per day should be consistent with the number of records created for individual actions. If different systems collect these numbers differently, find out why they differ and rationalize them. Yes, this happens. (A sketch of this kind of reconciliation check follows this list.)
  • Ensure that downstream uses of data remain consistent.
  • Select the tools, practitioners, and hosts for the actual implementation.
  • Carry out the implementation, whether a custom build or a ready-made solution.
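
To make the roll-up consistency item concrete, here’s a minimal sketch in Java of the kind of reconciliation check I have in mind. The Action record and all the names here are hypothetical; a real check would run against your own detail tables and reports.

    import java.time.LocalDate;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class RollupCheck {
        // Hypothetical detail record: one row per individual action taken.
        record Action(LocalDate day) {}

        // Flags any day where the reported roll-up disagrees with a count
        // derived directly from the individual records.
        static void reconcile(List<Action> actions, Map<LocalDate, Long> reportedDaily) {
            Map<LocalDate, Long> derived = actions.stream()
                .collect(Collectors.groupingBy(Action::day, Collectors.counting()));
            reportedDaily.forEach((day, reported) -> {
                long actual = derived.getOrDefault(day, 0L);
                if (actual != reported) { // the discrepancy to investigate and rationalize
                    System.out.printf("%s: report says %d, records say %d%n",
                            day, reported, actual);
                }
            });
        }

        public static void main(String[] args) {
            reconcile(
                List.of(new Action(LocalDate.of(2015, 6, 1)),
                        new Action(LocalDate.of(2015, 6, 1))),
                Map.of(LocalDate.of(2015, 6, 1), 3L)); // report claims 3, records show 2
        }
    }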

Decomposing the problem in this way will help you choose the right tools for any job. Doing this in pure form might be extreme; there is a place for prototyping and internal experimentation to work out some of the ideas. You may also choose to go with a ready-made implementation, like an enterprise system with predefined, business-specific modules. If you’ve done the groundwork ahead of time you’ll have a better target against which to compare the features each solution offers and the costs it imposes. Just as importantly, thinking through the problem as completely as possible ahead of time will keep you from implementing features and buying capabilities you don’t need.
