Project Metrics – Do we count what really counts?

October 31, 2020

The Art and Science of Measurement – Lessons from Eratosthenes 200 BC

 

Management by Metrics

All of us, over long careers in the IT industry, have spent a lot of time collecting loads of metrics from our projects for a status report or a client presentation. A project manager may sometimes be very proud of producing a status report that is mathematically appealing, full of numbers, tables, charts, and graphs. In some cases, the sheer act of preparing and presenting a report of this nature commands respect and earns one the image of a proponent of the ‘Management by Metrics’ philosophy. If you happen to be a developer, tester, or analyst on the project, you might wonder about the relevance and utility of a metrics program that takes a good portion of productive mindshare away from writing and testing your software. Some developers go so far as to claim that the biggest purpose of metrics, and all the associated mumbo-jumbo in organizations, is to keep project managers in their jobs. Of course, we all know that is not the case, but this extreme sentiment reflects the disconnect many professionals on the ground feel between the metrics being collected and any practical interpretation or inference that helps them produce better products. Therein lies the biggest challenge for leaders: how to think about and design metrics that make sense, metrics that uncover hidden challenges, metrics that reveal latent trends, metrics that help drive improvement, and, most importantly, metrics with a purpose.

Vanity Metrics

The key question to ask before collecting data for a metric, and to reach unanimous clarity on with your team, is: What problem are we trying to solve? Or, what aspect of our activity do we want to improve? In today’s automated, tool-based ecosystem, it is rather easy to ‘measure’ every activity or action that was ever done and to extract metrics from it. In such an environment, one is tempted to collect everything that can be measured first, so that the data can be analyzed later to see whether something interesting can be found in it (an approach inspired by unsupervised learning models). This approach is highly counterproductive and produces what people refer to as Vanity Metrics. Their utility is limited to making one look good, a deceptively data-driven decision maker, with no relevance to the business outcomes or results that matter.

What to Measure?

We all know the famous quote attributed to Einstein on this topic: “Not everything that counts can be counted, and not everything that can be counted counts.” Although something of a tongue twister, this quote captures the core spirit behind every successful metrics program. A good metric is one that is directly connected to the significant drivers of business success for the organization or the team. Clearly articulating the connection between the point of measurement and the business goal is an important criterion in making the metric, and the need to measure it, credible. When the project manager can establish this connection as the primary justification for collecting data on that metric, they are far more likely to get the enthusiastic support of the team. Another key requirement for a good metric is that it be unambiguous and controllable; after all, we want to be able to move it, through our underlying actions, from the current state to a more desirable state that meets the business goals. In essence, a top-down approach to designing metrics (starting from strategic goals and working down to project-level measurements) is more appropriate than falling prey to the relatively easy and appealing bottom-up approach (collect whatever can be measured and then try to see how it can be made meaningful).
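As an illustration only (this sketch is not part of the original article): one way to operationalize the top-down approach is to record, for every metric, the question it answers and the business goal that question serves, loosely in the spirit of the Goal-Question-Metric idea. The goal, question, and metric names below are hypothetical examples.

# A minimal, hypothetical sketch of top-down metric design: start from a
# business goal, derive the question it raises, and only then choose metrics
# that answer that question. Names and numbers here are illustrative only.

goal = {
    "statement": "Reduce customer-reported defects over the next two releases",
    "questions": [
        {
            "text": "Where are defects escaping our testing?",
            "metrics": [
                {"name": "defect escape rate", "unit": "% found post-release", "owner": "QA lead"},
                {"name": "test coverage of changed code", "unit": "%", "owner": "dev team"},
            ],
        }
    ],
}

# Every metric can now be traced back to the goal that justifies collecting it.
for question in goal["questions"]:
    for metric in question["metrics"]:
        print(f'{metric["name"]} -> answers "{question["text"]}" '
              f'-> supports "{goal["statement"]}"')

If a proposed metric cannot be traced to a question and a goal in this way, that is usually a sign it is a vanity metric.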

How do I measure the Unmeasurable?

It has already been said that not everything that counts can be counted… so how do we resolve this catch-22? All too often, we come across situations where there is no direct, simple, and straightforward way to measure the things that interest us. Even when it is possible, the exercise can look too daunting, too expensive, and too time-consuming. Like many of you, I too have spent quite a bit of time thinking about this and experimenting with a few alternative ideas to try and measure the unmeasurable, or the not-directly-measurable. I want to share a story that inspired me on this theme.

This is the story of the Greek scholar Eratosthenes, from around 200 BC. He was the chief librarian at the Library of Alexandria and is famous for being the first person to calculate the circumference of the earth, with no sophisticated astronomical equipment yet with remarkable accuracy. Eratosthenes read in the library about a well in Syene (southern Egypt) with an interesting property: at noon on the summer solstice, the sun illuminated the entire bottom of the well without casting any shadow, indicating that the sun was directly overhead. He also observed that, at the same time, vertical objects in Alexandria, almost due north of Syene, did cast a shadow. He then measured the angle of the shadow cast by a stick at noon on the summer solstice in Alexandria and found it to be about 7.2 degrees, or about 1/50 of a full circle. Therefore, if the arc between Alexandria and Syene was one-fiftieth of the circle, the circumference of the earth had to be 50 times the distance between the two cities. In later years, using sophisticated equipment and technologies, modern scientists measured the circumference of the earth and found it within approximately 3% of Eratosthenes’ estimate.
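As a quick worked sketch of that arithmetic (the 5,000-stadia distance between Alexandria and Syene and the metre value of a stadion are commonly cited historical assumptions, not figures from this article): a 7.2-degree shadow angle means the Alexandria–Syene arc is 7.2/360 = 1/50 of the full circle, so the circumference is 50 times that distance.

# Hedged sketch of Eratosthenes' calculation. The distance of ~5,000 stadia
# and ~157.5 m per stadion are commonly cited historical assumptions, not
# figures stated in the article above.

shadow_angle_deg = 7.2                         # shadow angle measured in Alexandria
fraction_of_circle = shadow_angle_deg / 360.0  # = 1/50 of a full circle
distance_stadia = 5_000                        # assumed Alexandria-Syene distance

circumference_stadia = distance_stadia / fraction_of_circle  # 250,000 stadia
circumference_km = circumference_stadia * 157.5 / 1000       # roughly 39,375 km

print(f"Estimated circumference: {circumference_stadia:,.0f} stadia "
      f"(~{circumference_km:,.0f} km; today's measured value is ~40,075 km)")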

The takeaway from this story is that the design of measures and metrics is not all science but a combination of art and science. While we tend to look for direct indicators of the variables we are interested in, sometimes the answers lie in indirect indicators of the main effect. If we pay attention to capturing those clever indirect indicators, we can succeed in measuring things that are otherwise assumed to be unmeasurable.

Applications of metrics that do more harm than good

With metrics, creating them is one thing; interpreting or applying them for an appropriate purpose is another thing entirely. Many managers go down the wrong path of applying metrics that were collected for a certain purpose in a completely different and inappropriate context. Let me give you a few examples. In the context of Agile teams, velocity and story points delivered are common metrics that are tracked, and they are meaningful for project planning and scheduling. Because these metrics are available for the different scrum teams within the same program, managers can be misguided into using velocity to compare the productivity of two teams. This is a huge mistake, and it costs the teams dearly in morale when these planning metrics are used to measure the productivity or efficiency of teams or, even worse, of individuals.

Some organizations, with the best of intentions, run enterprise-wide metrics programs to collect operational metrics from all teams. These typically cover delivery efficiency, first-time-right rates, productivity, quality, and other process metrics. At the team level, these metrics carry a certain context, and their interpretation is relevant and meaningful when applied back within the context of that team. With the right questions asked, they can be valuable tools for driving continuous improvement in those teams. However, some leaders see the availability of these metrics as an opportunity to compare different teams and separate the high performers from the rest. Using these metrics to compare teams that come from different contexts, levels of process maturity, and technological environments is a classic pitfall that must be avoided. Inappropriate application of metrics for cross-comparison accounts for the lion’s share of why many enterprise-wide mega metrics programs fail. Instead of being helpful, they create a demotivating environment for people and result in a loss of credibility and trust in the process of measurement and metrics. It is the responsibility of project managers and senior leaders to pay close attention to defining the purpose, application, and context of the metrics right at the start of the metrics program.

 

Author: Prasad Vemuri, Sr. EVP, Broadridge Financial Solutions




