Process Improvement on Process Improvement
Driving waste out of our own activities
Niranjan Deodhar
Published: Thursday, January 21, 2016 - 15:51

As process improvement practitioners, we get hired to drive waste and variation out of our clients’ businesses. But what if we hired ourselves, provided frank advice, and then listened to it to drive waste out of our own business or process? Could we then drive down the cost of organizational transformation and reduce the time to the realization of benefits? Can we increase the certainty of benefits actually being realized? Can we identify and realize more benefits than we otherwise would? Last but not least, can we ensure that the changes we implement are sustained for longer?

Enterprises engage lean Six Sigma experts on a regular basis. They are employed on a full-time basis, hired as consultants, or engaged via process outsourcing providers, who have a significant stake in ensuring the ongoing improvement of their clients’ processes. Regardless of who employs these experts, it’s important to track their effectiveness and efficiency. Annual business transformation expenditures worldwide run into the hundreds of billions of dollars, and as a profession we could be doing better at measuring the return on this investment. This gap is all the more intriguing because we are, in fact, efficiency experts, yet we don’t have clear standards and benchmarks for our own efficiency. Some leading organizations track “business impact per Black Belt” or an equivalent, but very few companies have the quality maturity to do this on a regular basis.

In this first of a series of articles, we will explore these questions and their implications. Our methodology tells us to start by looking at how to measure success and how to define value from the customer’s perspective. Accordingly, this first article focuses on what we would need to measure to discover how we are doing. The performance of the process improvement (PI) function can be measured across several dimensions, which we will consider individually.

Effectiveness

PI begins with a problem or an idea, so a good way of measuring the effectiveness of the PI function is to see how many problems or ideas were identified, and how many were solved or otherwise benefited from the solutions the PI function provided. One can visualize an ever-narrowing funnel of ideas, from generation at one end to benefits realization at the other. At each checkpoint (e.g., “idea validated” or “sponsor engaged”) one can measure yield, eventually giving an overall throughput yield for the funnel. A good funnel may have low yield at the beginning, where impractical suggestions and noise are filtered out, but it should have high yield toward the end, by which point each project has seen significant investment.
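To make the funnel view concrete, here is a minimal sketch in Python, using hypothetical checkpoint names and idea counts, of how the yield at each checkpoint and the rolled throughput yield of the funnel could be calculated:

    # Hypothetical idea-funnel checkpoints and the number of ideas still alive at each one
    funnel = [
        ("idea logged", 400),
        ("idea validated", 180),
        ("sponsor engaged", 120),
        ("project completed", 95),
        ("benefits realized", 80),
    ]

    # Yield at each checkpoint = ideas surviving the checkpoint / ideas entering it
    for (stage, count), (_, entering) in zip(funnel[1:], funnel[:-1]):
        print(f"{stage}: {count / entering:.0%}")

    # Rolled throughput yield of the whole funnel = benefits realized / ideas logged
    print(f"overall throughput yield: {funnel[-1][1] / funnel[0][1]:.0%}")

In this hypothetical funnel, the yield is deliberately low at validation (45 percent) and high at benefits realization (84 percent), the shape described above for a healthy funnel.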
Another perspective could be the effectiveness of planning and costing of projects. For example, how often do projects run on time? On budget? How early in the process do costs and benefits become clear, and how closely does the project actually deliver the agreed benefits? Such metrics can help ensure predictability and reliability in the project planning and resourcing function, leading to better utilization of resources and improved credibility with sponsors. A “first-time right” measure for PI projects can identify the proportion of projects that come in on time and within budget.

In this dimension, many organizations do a fair job of tracking their ideas engine or funnel, but many don’t. By and large, it’s not something that gets enough attention. The IT industry publishes on-time and on-budget figures for its projects, which often highlight a clear need for improvement. The PI profession does not track or publish such data beyond individual pockets within enterprises, but if we did gather it across the industry, the results would probably not be very flattering.

Efficiency

A key consideration should be the return on investment of individual projects, which applies the very definition of efficiency—output per unit of input. This gets a fair amount of attention from management, as it should, and that’s a good start. Another perspective is cost per project: a smaller number implies more granular projects, and hence less risk and more control. Practitioner resource utilization and benefits realized per practitioner can provide insight into resourcing as well as governance efficiency. Given the scarcity of expertise, one can also track span (i.e., the number of practitioners or Black Belts per unit of operations staff). Because of the cost differential between internal and external resources, one could add the cost of external expertise to the set of efficiency measures. Last but not least, the cost of the tools and platforms used by practitioners is also an important metric.

Speed to market

This can be measured as the cycle time to reach key milestones in the life cycle of an improvement opportunity: the time to identify, validate, and prioritize an idea; the time to generate actionable insights for improvement; the time to implement the change; the time to realize benefits; and the time to institutionalize the learning. This is an area that has not received the attention it deserves. Speed to market not only directly influences cost, but also influences returns by bringing the benefits forward. Yet there is very little evidence of systematic tracking of these cycle-time measures across PI projects.

As a result, one encounters projects with questionable benefits and return on investment. For example, the author recently reviewed a project that cost approximately $70,000 and delivered a 1.5 FTE reduction in operating costs, with each FTE costing about $90,000 per annum in that particular business. On the face of it, this appears to make good business sense; however, the project took a full year to execute. At that pace, one has to question how much nonvalue-adding activity was carried along with it, and whether the true cost was really only $70,000. In addition, the business went without the savings, roughly $135,000 per annum for 1.5 FTEs, for a whole year.
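To put numbers on this, here is a minimal sketch in Python of the arithmetic, using the figures from the example above together with a purely hypothetical three-month delivery time for comparison:

    # Figures from the example above
    project_cost = 70_000                     # cost of the improvement project, in dollars
    fte_saved = 1.5                           # full-time equivalents removed from the process
    cost_per_fte = 90_000                     # annual cost per FTE in this business, in dollars
    annual_saving = fte_saved * cost_per_fte  # $135,000 per year

    # Headline first-year return on investment, ignoring how long delivery took
    print(f"First-year ROI: {annual_saving / project_cost:.1f}x")  # about 1.9x

    # Savings forgone during execution, assuming benefits start only after implementation
    actual_duration_years = 1.0               # duration reported in the example
    faster_duration_years = 0.25              # hypothetical three-month delivery
    forgone = annual_saving * (actual_duration_years - faster_duration_years)
    print(f"Savings forgone by the slower delivery: ${forgone:,.0f}")  # $101,250

Even on these rough assumptions, the savings lost to slow delivery exceed the project’s entire budget.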
Reusability

This is a measure of how well insights, learning, and intellectual property from past projects are leveraged. It can be broken down further into reuse of data collected, reuse of insights generated, and reuse of solutions implemented. Some organizations have a good culture of encouraging reuse, with incentives to share what you know and to ask others about what you don’t. Likewise, some do a good job of capturing learning in corporate memory and making it accessible. But by and large this is an underdeveloped capability, and it leads to frequent reinvention of wheels. Apart from the obvious loss of efficiency, it also creates variation in outcomes, as the same situation is solved differently by different business units. Ideally, a business should harvest learning from projects in a structured way rather than rely on informal networks or reinterpretation of project deliverables. An example could be a database of symptoms, likely causes, and potential remedies. Such a database would be significantly better than a shared drive of large files containing project deliverables. It should be made available through effective search engines, and project methodologies should include an explicit step of checking for existing insights on similar problems.

Sustainability

Theoretically, an improvement is sustained forever, or at least until a new disruptive change makes the older improvement irrelevant. In practice, however, improvements decay over time, which is why we need continuous improvement. Applying a measurement lens here is difficult because the decay is so protracted: It is impossible to say at what point the improvement has decayed to the level it would have been at had the change never been made, and equally impossible to analyze where natural decay would have left the process had it never been improved.

We recommend a different approach to sustainability, which is to ask a single question: “Did we permanently change the way the operator of this process thinks about his or her job?” If we did, the effects of the improvement will continue indefinitely, or at least until the next disruptive wave. There are many tools and techniques for measuring whether lean thinking, for example, has been absorbed into daily practice in the operations unit that performs the process. In the end, benefits are sustained by people, so instead of measuring the process for sustainability of benefits, we can measure the people and their capabilities as a closer proxy. The existing body of knowledge offers quite a few frameworks for measuring the quality maturity of a team or business unit. They usually center on the effect delivered, executive buy-in, lean or Green Belt training penetration, ongoing idea generation, the ability of the operating team and their Green Belts to execute with reduced or no Black Belt supervision, 5S audits, and so on. All of these are valid ways to build and track sustainability, and they need to be adopted more widely.

Variation

Let us now turn to the last, but some would argue the most important, of all the dimensions—variation between PI projects. Most if not all of the measures above are transactional rather than aggregate (a transaction in this case being a PI project), so they also lend themselves to variation analysis. For example, we could analyze whether a given business unit, practitioner group, industry, or project manager is consistently better than its peers on the number of PI projects, ROI, cycle times, reuse of intellectual property, or sustainability of change. That can lead to actionable insights: learning from the best and applying those lessons to the rest.
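As a minimal sketch of what such a variation analysis might look like, the following Python snippet uses pandas and an entirely hypothetical project register (the column names and figures are invented for illustration) to compare business units on average ROI, the spread of ROI, and average cycle time:

    import pandas as pd

    # Hypothetical register of completed PI projects
    projects = pd.DataFrame({
        "business_unit": ["Claims", "Claims", "Billing", "Billing", "Billing", "Contact", "Contact"],
        "roi":           [2.1, 1.4, 3.0, 2.6, 2.8, 0.9, 1.1],
        "cycle_days":    [320, 280, 140, 160, 150, 410, 380],
    })

    # Compare units on both the level and the consistency of their results
    summary = projects.groupby("business_unit").agg(
        mean_roi=("roi", "mean"),
        roi_spread=("roi", "std"),
        mean_cycle_days=("cycle_days", "mean"),
    )
    print(summary.sort_values("mean_roi", ascending=False))

A unit that is consistently faster or more consistent than its peers becomes a candidate to learn from; one that consistently lags becomes a candidate for coaching.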
Measuring the performance of the PI function is just the first step toward improving the PI function as a whole. As with the advice we give our clients, we will also need to change some work practices, devise and use tools and techniques to enable that change, and put in place a culture of continuously improving ourselves over time. Subsequent articles in this series will expand on these ideas.
About The Author
Niranjan Deodhar is the founder of Open Orbit, developer of a SaaS platform (also called Open Orbit) for the lean Six Sigma professional. Deodhar is a lean Six Sigma practitioner with extensive experience in senior roles at companies including Genpact, IBM, PwC Consulting, and Siemens. He has worked on building new business, creating consulting capabilities, and implementing change programs in a variety of cultural and geographic contexts across Australia, India, and Silicon Valley in Northern California. Open Orbit is based in Sydney, Australia.