Posts Tagged With: PMO metrics

Why project work doesn’t get completed early…

I usually advise my students to encourage their team members not to pad work estimates, but rather to make schedule contingencies visible and to tie those contingencies to milestones rather than to individual activities.

This is intended to counter the potential confluence of Parkinson’s Law (work expands so as to fill the time available for its completion), Student Syndrome (let’s wait till the very last possible moment to start an undesired activity) and Murphy’s Law (anything that can go wrong will go wrong).

While this behavior might occur in some circumstances, the advice is based on a rather Theory X view of the world. Student Syndrome hurts no one other than the student procrastinating on homework assignments or exam preparation. On projects, however, the impact of a delay by one team member ripples downstream to others, so with the exception of that small minority of Homer Simpson-like workers, most of us want to do a good day’s work.

So if the fault doesn’t lie with the individual contributors, where else could we look for answers?

Today’s Dilbert cartoon provides us with better root causes for this behavior – the system which people work in or the managers they work for.

How could the system affect work completion? Poorly thought out performance measures are one way:

  • If staff are measured based on utilization, then completing work in less time means they are either going to be reprimanded or handed more work, which they might not be able to complete while working at a sustainable pace.
  • If they are paid based on the hours they work, they could be tempted to work to plan to avoid being financially penalized for completing early.
  • If they are measured based on the accuracy of their predictions (i.e. they are penalized for being either early or late), then the work will tend to be completed exactly on time.
  • If they are forced to fill out weekly timesheets and the process to do so is onerous, it might be easier for them to just copy planned time over as actual time within their time recording system.

Managers can also inadvertently cause staff to complete work on time but rarely early, by:

  • Penalizing team members who complete work early on their future tasks by setting unrealistic target dates
  • Giving them additional projects or operational activities to do rather than letting them use that slack time productively
  • Breaking their focus by interrupting their work with urgent but unimportant tasks

If we recognize that project activity durations are more likely to follow a lognormal distribution than the nice symmetrical normal distribution which we hope for, then we should praise team members for completing work early (within quality, safety, health and other constraints) rather than introducing impediments which discourage them from doing so.
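The asymmetry a lognormal model implies can be seen in a small simulation. This is an illustrative sketch only (the parameters below are invented for the example, not drawn from any real project data): the median and the long right tail diverge in a way a symmetric normal model would hide.

```python
# Illustrative sketch with hypothetical parameters: activity durations
# modeled as lognormal, with a median of roughly 10 days.
import numpy as np

rng = np.random.default_rng(42)
durations = rng.lognormal(mean=np.log(10), sigma=0.4, size=100_000)

# The mean sits above the median because the right tail is long: a few
# badly delayed activities drag the average up, while early finishes
# can only claw back a bounded amount of time.
print(f"median:          {np.median(durations):.1f} days")
print(f"mean:            {durations.mean():.1f} days")
print(f"95th percentile: {np.percentile(durations, 95):.1f} days")
```

This asymmetry is precisely why early finishes are worth protecting: they are the only counterweight to a tail that runs much further late than early.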

Categories: Facilitating Organization Change, Process Peeves, Project Management | Tags: , , | 1 Comment

Are we marketing the right metrics?

Recently, I’ve been experiencing frequent, brief losses of Internet connectivity at home. I live in a major urban area, no internal or external home renovations have happened which would affect cabling, and my cable modem was recently swapped. Thankfully, the technician who swapped the modem gave me his mobile number and recommended that I call him if I had further issues within a few weeks.

We have all heard that the Internet is becoming a critical utility, and hence we should demand the same reliability from it as we do from power, water or a telephone dial tone. While this is a reasonable expectation, few Internet Service Providers (ISPs) have focused on it in their marketing campaigns to the residential market. Commercial customers are a different story – they enjoy real SLAs, but at a higher cost. Most ISPs who serve residential customers hype their transmission speed or capacity in their advertising. While those are important, guaranteed uptime would be a more welcome benefit in the long run, and would likely contribute to greater customer loyalty. ISPs are under pressure to scale their infrastructure to support greater speeds at lower costs, but the casualty of this “arms race” might be reliability.

This situation brought to mind the challenges we face when communicating delivery metrics as part of an agile transformation.

Many of the leaders I’ve worked with focus on schedule metrics: reducing time to market, lead time, time between releases, and so on. While these are important, an overemphasis on reducing lead time may unconsciously encourage delivery teams to kick quality concerns down the road. Having effective Definition of Done working agreements can help, but these can also be diluted to favor speed over quality. Defect reporting and customer satisfaction surveys provide opportunities to identify whether there is an unhealthy focus on delivering faster, but these are lagging indicators.

This is why it is so important that the communication campaign supporting the transformation, including the sound bites from top-level executives, reflect an equal footing for speed AND quality. And mid-level managers need to walk this talk in their daily interactions with their teams.

Don’t sacrifice quality at the altar of speed.


Categories: Agile, Facilitating Organization Change, Project Management | Tags: , , , , | Leave a comment

Can you prove that your PMO has improved project delivery?

Some project management offices (PMOs) are like Rodney Dangerfield – they don’t get no respect. While there are many reasons a PMO might be shut down, the inability to demonstrate its value proposition is one of the more common ones.

So how can a PMO prove that there has been an improvement in project delivery?

To answer this question, we need to identify one or more metrics which will be used to represent project delivery capability. A commonly used metric these days is time to market, which could be calculated as the duration from the start of project investment to the first delivery of customer-facing value.

You might think it would be a simple matter of calculating the average time to market for a sample of pre-PMO and post-PMO projects, but a bare comparison of averages is not statistically defensible. The sample sizes might not be sufficient to show that the difference is statistically significant. And even if the average time to market has dropped, portfolio-level outcomes won’t have improved if variation has remained the same or increased.

Time to call your friendly neighbourhood statistician!

It might not make sense to lump all projects together for these calculations. For example, one might reasonably expect that a $10,000 project will usually take less time to deliver value than a $1,000,000 one. Since project size and complexity influence timelines, you might wish to stratify your population into a few distinct project tiers.

The next step is to determine the minimum sample size needed to show that a difference is statistically significant. Statistical analysis packages such as Minitab enable you to calculate sample size based on the statistical test you will be running, the significance level and power you want, the difference you’d like to detect, and an estimate of the standard deviation of the population. For example, let’s say that we’d like to demonstrate a reduction of one month in time to market, and the estimate of the population standard deviation is also one month. Minitab calculates a minimum sample size of 18 projects. Unless we have at least 18 projects in both the before and after samples for each project tier, a difference in averages can’t be declared statistically significant.
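As a hedged sketch, the same kind of power analysis can be run in Python with statsmodels instead of Minitab. Note that the minimum sample size depends on the significance level and power you choose, so the figure below (at a conventional alpha of 0.05 and power of 0.8) will land near, but not necessarily exactly on, the 18 quoted above.

```python
# Sketch of a sample-size (power) calculation for a 2-sample t test:
# detect a 1-month reduction when the population standard deviation is
# also 1 month, i.e. a standardized effect size (Cohen's d) of 1.0.
import math
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(
    effect_size=1.0,        # 1-month difference / 1-month std dev
    alpha=0.05,             # significance level
    power=0.8,              # probability of detecting a real difference
    alternative='two-sided',
)
print(f"minimum projects per sample: {math.ceil(n)}")
```

Raising the desired power (say, to 0.9) increases the required sample size, which is one reason different tools and settings quote slightly different minimums.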

Assuming you have sufficient data to support statistical testing, the two statistical tests you can run for each project tier are a 2-sample t test and a 2 variances test. The first will help you decide if the difference between the averages for the before and after samples is statistically significant or not, and the second determines if there is a statistically significant difference in the variation of the two samples. Ideally, after running the tests you will see a reduction in both the average time to market and the variation in the post-PMO sample. This won’t prove causality – there could have been other factors which more directly caused the improvement, but barring any obvious alternative influencers, you can state with confidence that things have improved since your PMO was established.
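The two tests above can be sketched in Python with SciPy in place of Minitab. The before and after samples below are invented purely for illustration, and Levene's test is used for the comparison of variances (Minitab's 2 Variances test offers both an F test and a Levene-style test).

```python
# Hypothetical time-to-market data (months) for one project tier;
# the values are invented for this sketch.
import numpy as np
from scipy import stats

before_pmo = np.array([12.1, 10.5, 14.2, 11.8, 13.0, 9.9, 12.7, 11.4,
                       13.6, 10.8, 12.3, 11.1, 14.0, 12.9, 10.2, 13.3,
                       11.7, 12.5])
after_pmo = np.array([10.2, 9.4, 11.1, 9.8, 10.6, 8.9, 10.9, 9.5,
                      11.4, 9.1, 10.3, 9.7, 11.0, 10.5, 8.8, 10.8,
                      9.9, 10.1])

# 2-sample t test: is the difference in average time to market significant?
t_stat, t_p = stats.ttest_ind(before_pmo, after_pmo)

# Levene's test: is the difference in variation significant?
lev_stat, lev_p = stats.levene(before_pmo, after_pmo)

print(f"means: {before_pmo.mean():.2f} vs {after_pmo.mean():.2f} months")
print(f"t test p-value: {t_p:.4f}")
print(f"Levene's test p-value: {lev_p:.4f}")
```

A p-value below the chosen significance level (commonly 0.05) on the first test supports a real shift in the average; the second test answers the separate question about variation.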

A key assumption underlying these tests is that the before and after time-to-market samples are normally or nearly normally distributed – this can be checked with an Anderson-Darling normality test. If a sample turns out to be significantly non-normal, other (non-parametric) tests would need to be used to demonstrate an improvement statistically.
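The normality check can be sketched with SciPy's Anderson-Darling implementation. The sample below is generated for illustration; with real data you would pass in your actual time-to-market values.

```python
# Sketch of an Anderson-Darling normality check on a hypothetical sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
time_to_market = rng.normal(loc=12, scale=1.5, size=30)  # invented months

result = stats.anderson(time_to_market, dist='norm')

# scipy reports the test statistic alongside critical values at the
# 15%, 10%, 5%, 2.5% and 1% significance levels. If the statistic
# exceeds the critical value at your chosen level, reject normality
# and fall back to a non-parametric alternative (e.g. Mann-Whitney U
# in place of the 2-sample t test).
crit_at_5pct = result.critical_values[2]
print(f"A-D statistic: {result.statistic:.3f}")
print(f"5% critical value: {crit_at_5pct}")
```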

Disraeli is often credited with saying “There are three kinds of lies: lies, damned lies, and statistics”, and there is some truth to the quip. But used appropriately, statistical testing can support the case for a PMO’s continued existence.


Categories: Facilitating Organization Change, Project Management | Tags: , | 1 Comment
