On paper, the practice of scoring projects as a means of objective evaluation or prioritization is very compelling.
One envisions a sophisticated system that distills the wisdom of governance committees into a set of consistent business rules: project requests go in one end, and a legally defensible number for selection and ranking comes out the other.
Alas, this (like so many other PPM or PM best practices) remains as elusive as the Loch Ness Monster.
You can certainly create a scoring model based on multiple dimensions to achieve a balanced evaluation of each project – unfortunately, you can’t force a governance body to accept the results of that scoring. Beyond that, very few such dimensions are likely to be truly objective – I challenge you to objectify strategic alignment, risk profile, or even “business value” (especially for non-revenue-generating projects). Finally, what guarantee do you have that scoring will be consistent across project requests, especially if multiple analysts are involved in the process?
Does this mean that you should abandon any attempt to develop a scoring model for projects? Of course not! If nothing else, a good scoring model can provide a foundation for principled negotiation around project priorities. If someone insists on arguing that their (low-scoring) project is infinitely more critical than your (calculated) number-one project, you can defuse the debate by asking them to review your scoring model and help you understand what might be missing.
Here are a few tips to improve the quality of scoring models:
1. Once you have a first draft ready, score your current slate of active projects against it and confirm that the relative rankings make sense. If not, use that feedback to fine-tune the scoring model.
2. While it is useful to score risk and value/reward independently to support graphical evaluation of your portfolio’s balance, consider a single-score approach that incorporates risk, such that the higher a project’s risk profile, the lower its score. Given the lip service paid to risk management in many organizations, this ensures that risk scores are “baked in” and not cast aside.
3. Allow both negative and positive scores for each criterion. For example, many scoring models incorporate financial metrics as a key component; if you only permit a scoring range from 0 to N, how can you account for projects that reduce your organization’s profitability? By allowing negative values, you can effectively penalize projects that hurt a given criterion.
4. Remember that your work of art is just a model – it cannot replace good people and consistent practices. The effort expended in developing and using a scoring model needs to be weighed against the benefits gained through its use. Don’t develop a convoluted approach that requires analysts to have a PhD just to use it – and always remember “garbage in, garbage out”.
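To make the tips above concrete, here is a minimal sketch of a weighted scoring model in Python. The criteria names, weights, and example projects are entirely hypothetical assumptions for illustration – your governance body would define its own. It shows risk baked into a single composite score as a penalty (tip 2), a financial criterion that may go negative (tip 3), and a calibration pass that ranks a slate of existing projects as a sanity check (tip 1).

```python
# Hypothetical scoring model sketch. Criteria, weights, and sample
# projects are illustrative assumptions, not a prescribed standard.

CRITERIA_WEIGHTS = {
    "strategic_alignment": 0.35,
    "financial_impact": 0.40,   # may be scored negative (tip 3)
    "risk": 0.25,               # subtracted so higher risk lowers the score (tip 2)
}

def project_score(scores: dict) -> float:
    """Composite score: weighted sum of criteria, with risk applied
    as a penalty so a riskier project scores lower."""
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        value = scores[criterion]
        if criterion == "risk":
            total -= weight * value   # risk is "baked in" as a deduction
        else:
            total += weight * value   # financial_impact may be < 0
    return round(total, 2)

# Calibration check (tip 1): score a slate of active projects and
# confirm the resulting ranking matches intuition before rollout.
portfolio = {
    "ERP upgrade":    {"strategic_alignment": 8, "financial_impact": 6,  "risk": 7},
    "Compliance fix": {"strategic_alignment": 5, "financial_impact": -2, "risk": 2},
}
ranked = sorted(portfolio, key=lambda p: project_score(portfolio[p]), reverse=True)
```

If the calibration ranking surprises the committee, that disagreement is exactly the feedback to use when fine-tuning the weights.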
I’ll conclude by quoting the Architect from The Matrix Reloaded: “The first matrix I designed was quite naturally perfect, it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure. The inevitability of its doom is as apparent to me now as a consequence of the imperfection inherent in every human being.”