If you cannot measure it, you cannot improve it.

Peter Drucker said these words to emphasise that if you want something to count in your life, you have to find a way to COUNT it. He certainly didn't say it in the context of Agile project management, but to stress that continuous improvement is essential if you want to survive or to maintain a competitive advantage.

A metric moving in the positive direction may indicate enhancement, progress or improvement, but it is certainly not a guarantee. If we consolidate several measurements and metrics, all showing progress on interconnected ideas, we can have higher confidence that there is genuine growth or improvement.

Take a look at a sample case below:

Imagine, for instance, two teams X and Y, each with 5% unit-test code coverage. The company's leadership is pressing hard to increase the coverage in future iterations. The two teams approach the challenge differently. Team X opted to start writing unit tests for code that had already been written, which is in a sense the traditional approach. Team Y opted for a more modern approach by adopting the Extreme Programming practice of test-driven development, or TDD, which favours writing unit tests before writing the actual code.
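To make the contrast concrete, here is a minimal sketch of Team Y's TDD loop in Python, using the standard unittest module; the apply_discount function and its expected behaviour are purely illustrative assumptions, not anything from the scenario above. The test is written first and fails, and only then is the simplest implementation that makes it pass added.

```python
import unittest

# Step 1 (red): the tests are written before any implementation exists.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

# Step 2 (green): the simplest implementation that makes the tests pass.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

if __name__ == "__main__":
    unittest.main()
```

In this style, coverage grows as a side effect of the loop, because new code only exists when a test already exercises it.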

After a few months, the code coverage metrics for the two teams indicated 30% growth for Team X and only 15% for Team Y.

Does that mean Team X progressed more than Team Y? Measuring progress is a little more complicated than that. An increase in a particular metric does not by itself confirm progress. In this case, 30% growth does declare Team X the winner as far as code coverage is concerned, but it does not prove that the unit tests Team X wrote are of good quality. To explore this point a little further, let's bring in another metric: the number of defects found per iteration. Say both teams initially reported a defect count of 25, and after a few months Team X reported 14 while Team Y reported only 9.

Can we now compare the progress of the two teams?

Perhaps.

Fewer defects may seem proportional to better code quality, but it may simply be that the practice of TDD forced Team Y to focus on code behaviour rather than its implementation, and that is what led to better quality. TDD, however, did not increase coverage of the code that had already been written. Team X, on the other hand, concentrated on expanding code coverage, yet the adequacy of its unit tests was arguable, which produced higher coverage but not a corresponding increase in quality.

Even so, the above deduction may still be wrong for several reasons. Maybe Team Y had a few testers take some time off, so some stories were not tested at every level such as functional, manual and exploratory, resulting in fewer defects being found. The point to note is that the numbers produced by various metrics are important, but they are only indicators. Metrics should be regarded as conversation starters, prompts for a discussion to determine what is actually happening. They cannot replace the interactions between individuals or face-to-face conversations. Hence an improvement in the numbers alone does not confirm overall growth.

Progress Measurement Techniques

Having said that, keeping such indicators handy can prove advantageous for an Agile Coach. Say, for instance, that as a coach you have a feeling the team is making progress; these indicators will back up your intuition. They also help us decide which direction to move in: if the indicators show a negative trend, we can choose to investigate a particular approach to a solution further. What matters is deciding how often to collect such numbers, how to examine the outcome and what action items to work on.

Metrics

Baseline metrics can be used for continuous observation. When you are collecting a metric you should always start with a WHY; try not to collect a metric without a purpose. A metric should only be a means to the goal of enabling the team to do their best work. Also, try not to rely on a single metric. You can read more about Baseline Metrics here.

Release Confidence Indicator

Another technique that I have used very effectively in the past is Release Confidence. Put a flip chart on a clean wall with the names of all the team members. At the end of the daily standup, ask each team member to write a percentage beside their name to indicate how confident they are of achieving the objectives of the upcoming release. A percentage of 85 or above shows high confidence, whereas 30 indicates low confidence. Release confidence can effectively be used for just-in-time planning. A wide spread between the percentages will trigger a discussion among the team members.
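As an illustration, here is a minimal sketch of how one day's check-in could be summarised and a wide spread flagged; the names, scores and the 25-point spread threshold are my own assumptions, not part of the technique itself.

```python
# Sketch: summarise one day's Release Confidence scores from the flip chart.
SPREAD_THRESHOLD = 25  # assumed: percentage-point spread that warrants a conversation

def summarise_confidence(scores):
    """Summarise a dict of {team member: confidence %} and flag a wide spread."""
    lowest = min(scores.values())
    highest = max(scores.values())
    average = sum(scores.values()) / len(scores)
    summary = f"avg {average:.0f}%, range {lowest}%-{highest}%"
    if highest - lowest > SPREAD_THRESHOLD:
        summary += " -> wide spread, discuss as a team"
    return summary

if __name__ == "__main__":
    today = {"Asha": 90, "Ben": 85, "Chen": 40, "Dana": 80}
    print(summarise_confidence(today))
    # prints: avg 74%, range 40%-90% -> wide spread, discuss as a team
```

The point of the sketch is not the arithmetic but the trigger: a large gap between the most and least confident team members is the cue for a conversation, not a number to report upwards.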

Metrics are only a proxy for reality. They can help frame a discussion but cannot on their own be used to reach a conclusion. Metrics can also help you motivate the team, so try to make every metric you collect visible using information radiators spread across the walls of the team room.

Caution

The problem with metrics is that they tend to stabilize and yield no new information over time.

If a lot of emphasis is placed on a particular metric, the team may begin to target that metric in an attempt to hit the desired value. Take velocity, for example. Velocity tends to stabilize after about four to five sprints and hence becomes less valuable. If the team focuses on increasing velocity sprint after sprint, it will do so at the expense of other important measures.

When a measure becomes a target, it ceases to be a good measure. – Goodhart's law