IT INSIGHTS

metrics

Metrics can make or break process improvement plans

By JP Batra

Companies launch process improvement initiatives for many reasons — to improve a specific area (like supply chain management), to improve an entire department, to increase IT efficiency or to enhance a host of different business practices.

With the launch of those initiatives, leaders enthusiastically design Key Performance Indicators (KPIs) and Critical Success Factors (CSFs) to measure program effectiveness. But if those measurements are not properly designed and managed, successful efforts can become casualties of misinterpreted results.

Metrics play an important role in measuring improvements. Yet many initiatives are canceled or fall short of expected benefits because little or no attention is paid to designing the metrics properly and creating an effective implementation plan.

Metrics provide useful information on performance: both the overall performance of the entire initiative and the performance of its individual sub-areas. Evaluating sub-area performance is particularly important while the initiative is in progress. Lack of attention at this stage can lead to an inaccurate picture of performance and impact. As a result, management may miss the initiative’s true value and choose to downgrade the project, move resources to other initiatives, or cancel it altogether.
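To see why sub-area visibility matters, consider a minimal sketch (in Python, with invented sub-areas and figures): a tolerable-looking overall average can conceal one sub-area that is far behind.

```python
# Hypothetical progress figures (fraction of target achieved) for the
# sub-areas of an improvement initiative -- all values are invented.
subareas = {
    "requirements management": 0.80,
    "release management": 0.75,
    "quality and testing": 0.15,  # badly lagging
}

overall = sum(subareas.values()) / len(subareas)
print(f"overall progress: {overall:.0%}")  # about 57% -- looks tolerable

# Per-area reporting surfaces what the roll-up hides.
for name, progress in sorted(subareas.items(), key=lambda kv: kv[1]):
    flag = "  <-- needs attention" if progress < 0.5 else ""
    print(f"  {name}: {progress:.0%}{flag}")
```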

In addition to proper metrics design, identification of data sources and an adequate training/communications plan are essential to the success of process improvement.

I am currently helping a local Fortune 500 company with a project to increase the efficiency of its IT department through continuous improvement. The initiative task force has provided a structure for managing improvements to IT functional areas. The structure includes metrics to measure progress, as well as controls to bring the project back on track if the metrics show it heading in the wrong direction.
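One simple way to express such a control, sketched here in Python with invented dates and figures, is to compare each reading against a straight-line trajectory from baseline to target and flag the metric when it falls materially behind:

```python
from datetime import date

def expected(baseline: float, target: float,
             start: date, finish: date, today: date) -> float:
    """Value a metric should have reached by today, assuming
    straight-line progress from baseline to target."""
    elapsed = (today - start).days / (finish - start).days
    return baseline + (target - baseline) * elapsed

# Hypothetical control: reduce open defects from 400 to 100 in a year.
start, finish = date(2005, 1, 1), date(2005, 12, 31)
today, actual = date(2005, 7, 1), 330.0

should_be = expected(400, 100, start, finish, today)
tolerance = 0.10 * abs(400 - 100)  # allow slack of 10% of the planned change

if actual - should_be > tolerance:  # for a decreasing metric, higher is worse
    print(f"off track: at {actual:.0f}, expected about {should_be:.0f} "
          "-- exercise the corrective controls")
```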

The company was diligent in identifying desired improvements and designing strategies for success. But the working teams were less attentive when designing the metrics themselves. With inaccurate measurements, the IT department was unable to report true progress or the program’s tangible impact on IT functions. Senior management began to question the value of the initiative.

Fortunately, the department was still in the early stages of metrics design and collection, so we were able to adjust the metrics lifecycle, including the design of the metrics and the plans for collecting them. The plan entails identifying data sources, choosing data collection approaches and deciding how specific targets will be measured. The final component is the proper design of reports that accurately illustrate progress … or lack thereof. If progress lags, the working group may exercise the controls developed to bring the initiative back in line.

As a working example, I selected a highly visible IT function — Quality and Testing Management — that was at the metrics development stage. IT working groups were actively involved in designing the metrics and developing an implementation plan.

Participants decided that standardizing tools across the enterprise was an opportunity for improvement. As part of the design process, the groups evaluated both the current state of the tools and the desired state after improvement.

“Current state” analysis included metrics to measure the total number of tools in the enterprise, the number of users of the tools and other relevant metrics to create a baseline from which progress could be measured. The “end state” goal was to standardize a limited number of tools, based on certain criteria, to satisfy the majority of users.

Once the baseline, the end goal and the target completion date were determined, other relevant metrics were added. A plan was developed to identify data sources and establish procedures for collecting information at regular intervals.
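As a sketch of how such a plan might be captured, the following Python fragment models a single metric together with its data source, collection interval, baseline and end-state target; the class, field names and figures are illustrative assumptions, not the company’s actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One metric in the improvement plan (illustrative only)."""
    name: str
    data_source: str  # where the raw numbers come from
    interval: str     # how often values are collected
    baseline: float   # current-state value, measured up front
    target: float     # desired end-state value
    readings: list = field(default_factory=list)

    def record(self, value: float) -> None:
        """Collect one reading at the agreed interval."""
        self.readings.append(value)

    def progress(self) -> float:
        """Fraction of the way from baseline to target."""
        if not self.readings:
            return 0.0
        return (self.readings[-1] - self.baseline) / (self.target - self.baseline)

# Hypothetical metric from the tool-standardization example.
tools = Metric("testing tools in use", data_source="asset inventory",
               interval="monthly", baseline=40, target=8)
tools.record(31)
print(f"{tools.name}: {tools.progress():.0%} of the way to target")  # 28%
```

Progress reports then reduce to printing each metric’s latest reading against its baseline and target.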

The working groups also devised a communication plan to inform IT personnel of their role in data collection and how specific areas were to be measured. We also made other parts of the organization aware of the program to ensure consistent reporting in all areas and to evangelize our team’s efforts.

Past performance data was used to generate historical reports, giving the team a record against which improvement could be compared as the initiative progressed.

Once the plan was reviewed and implemented, progress reports were generated. The disciplined process of designing metrics, implementing collection plans and measuring results has revealed new insights. Management teams are happy with the results.

The overall approach has since been refined, based on input from other groups, to make it applicable across the organization, and it is now being adopted by the entire IT department.

Properly designed metrics clearly reflect the effectiveness of any process improvement initiative. It is important to establish KPIs and CSFs early on. However, it is equally important to keep measuring specific sub-areas while the project is “in-flight.”

With good metrics design and ongoing measurements at regular intervals, project leaders can identify issues early and initiate corrective actions. Process improvement projects require constant adjustments to fine-tune performance. Without proper metrics to gauge progress, fine-tuning and corrective action are nearly impossible. Tight control can also decrease the project’s time-to-completion and improve the quality of the outcome.