Consistent metrics for benchmarking infrastructure projects (IP5a)

Benchmarking should consider the whole life of an infrastructure asset

What is a ‘good’ project? The answer should take into account an asset’s performance over its entire lifecycle, yet no existing benchmarking framework does this for transport infrastructure projects. This project set out to establish what such a framework would look like.

Benchmarking experts used the four physical demonstration projects that form part of the TIES Living Lab programme to test which metrics could work across all of them. Although they took the Royal Institution of Chartered Surveyors’ (RICS’) International Cost Management Standard (ICMS) as a starting point, it had to be modified significantly to accommodate the needs of the various organisations in the Living Lab.

The team identified nine core performance areas to be benchmarked: cost; schedule; productivity; quality; carbon; circular economy; biodiversity and natural capital; climate resilience; and social value. In addition, an infrastructure asset’s life has multiple stages that will require benchmarking: pre-design; preliminary design; detailed design; construction (including manufacture, transport and installation); renewals; operation; maintenance; and decommissioning. Crossing the two yields no fewer than 80 metrics to be collected for each asset.
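
To make this concrete, here is a minimal sketch in Python of one way such a metrics matrix could be represented, with performance areas crossed against lifecycle stages. The Metric and AssetBenchmark classes, and the metric names and units shown, are illustrative assumptions; the report’s actual 80 metrics are not enumerated in this article.

```python
# A sketch of the benchmarking structure described above: nine
# performance areas crossed with the lifecycle stages. Metric names
# and units below are hypothetical placeholders.
from dataclasses import dataclass, field

PERFORMANCE_AREAS = [
    "cost", "schedule", "productivity", "quality", "carbon",
    "circular economy", "biodiversity and natural capital",
    "climate resilience", "social value",
]

LIFECYCLE_STAGES = [
    "pre-design", "preliminary design", "detailed design",
    "construction",  # including manufacture, transport and installation
    "renewals", "operation", "maintenance", "decommissioning",
]

@dataclass
class Metric:
    area: str                    # one of PERFORMANCE_AREAS
    stage: str                   # one of LIFECYCLE_STAGES
    name: str                    # e.g. "embodied carbon" (hypothetical)
    unit: str                    # e.g. "tCO2e"
    value: float | None = None   # populated per asset when benchmarked

@dataclass
class AssetBenchmark:
    asset_id: str
    metrics: list[Metric] = field(default_factory=list)

    def record(self, area: str, stage: str, name: str, unit: str, value: float) -> None:
        assert area in PERFORMANCE_AREAS, f"unknown area: {area}"
        assert stage in LIFECYCLE_STAGES, f"unknown stage: {stage}"
        self.metrics.append(Metric(area, stage, name, unit, value))

# Usage: record one hypothetical metric for one demonstration asset.
bench = AssetBenchmark("demo-asset-1")
bench.record("carbon", "construction", "embodied carbon", "tCO2e", 1250.0)
```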

In deciding what the metrics should be, the researchers had to balance contrasting requirements. The metrics had to be granular enough to allow meaningful analysis and comparison, whereas many metrics collected today are high-level aggregates. They also had to be straightforward to collect and, where possible, familiar, since practitioners are more likely to adopt metrics they already recognise.

The project also aimed to focus on the 20% of a project’s elements that tend to account for 80% of its cost, on the basis that this is where the cost of data collection delivers the greatest return.
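
As a rough illustration of that 80/20 selection, the following Python sketch ranks a project’s cost elements and keeps those that cumulatively account for 80% of total cost. The element names and figures are invented for illustration, not drawn from the report.

```python
# A sketch of the 80/20 selection: rank cost elements in descending
# order and keep those that cumulatively reach ~80% of total cost.
def pareto_elements(costs: dict[str, float], threshold: float = 0.80) -> list[str]:
    """Return the elements that together account for `threshold` of total cost."""
    total = sum(costs.values())
    selected: list[str] = []
    running = 0.0
    for name, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        running += cost
        if running / total >= threshold:
            break
    return selected

# Illustrative (made-up) cost breakdown for a transport asset, in £m.
example = {
    "structures": 9.0, "track and pavement": 7.0, "earthworks": 4.0,
    "drainage": 1.5, "landscaping": 1.0, "signage": 0.5,
}
print(pareto_elements(example))
# -> ['structures', 'track and pavement', 'earthworks']
```

In this example, three of six elements account for roughly 87% of the total cost, so benchmarking effort would concentrate on those three.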

The report on this project observes that the biggest challenge in creating this sort of benchmarking environment is cultural: getting multiple organisations to agree on how to share data, and then actually doing it. It acknowledges that this could be a slow process.