Performance monitoring in the public services is poorly conducted, according to a report published by the Royal Statistical Society.

It found that, despite some good examples - such as the enlarged British Crime Survey, Europe's informative BSE testing of cattle, and Scotland's reporting of confidence intervals - scientific standards, in particular statistical standards, have been largely ignored.

This criticism applies not just in target setting, but also in the design, analysis and reporting of performance indicators.

The RSS report offers practical solutions for resolving these critical issues, against which current and future performance monitoring of the public services should be judged.

A striking feature of UK public services over the past 10 years has been the rise of performance monitoring. Performance data can be used in establishing 'what works' among policy initiatives; to identify well-performing or under-performing institutions and public servants; and, equally important, to hold ministers to account for their stewardship of the public services.

Hence, government is both monitoring the public services, and being monitored, by performance indicators. Because of government's dual role, performance monitoring must be done with integrity and shielded from undue political influence, in the way that national statistics are shielded.

The RSS report calls for:

* performance monitoring protocols - to ensure that statistical standards are met. A protocol is an orderly record not only of decisions made (from design to analysis and reporting) but also of the reasoning and calculations that led to those decisions;

* independent scrutiny - to safeguard the wider-than-government public interest, the individuals and institutions being monitored, and methodological rigour;

* the reporting of measures of uncertainty whenever performance data are published, including as league tables or star ratings - to avoid over-interpretation and the false labelling of performance;

* research on strategies other than 'name and shame' for the public release of performance data, and better designs (including randomisation) for evaluating policy initiatives - the first to allay ethical and effectiveness concerns, the second for robust evidence about 'what works';

* much wider consideration of the ethics and cost-efficiency of performance monitoring.

Sheila Bird, chair of the RSS's working party on performance monitoring in the public services, appealed to journalists to champion better reporting.

She said: 'When league tables or star ratings are published, we'd like journalists to insist on access to (and reporting of) the measure of uncertainty that qualifies each ranking or rating. Without this qualifier, no-one can separate the wheat from the chaff, the good from the bad.'

The society said there is a precedent for this type of statistically savvy journalism - though one sometimes honoured in the breach. When reporting social or polling surveys, journalists know to cite the number surveyed, the response rate, and a 'margin of error' (accounting for random variation) in any headlined percentage.
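The 'margin of error' the society refers to is conventionally the half-width of an approximate 95% confidence interval for a proportion. As an illustration (the function name and figures below are hypothetical, not from the report), a minimal sketch of the standard normal-approximation calculation:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate margin of error for a survey percentage.

    p_hat: observed proportion (e.g. 0.52 for a headline 52%)
    n: number of respondents
    z: critical value of the normal distribution (1.96 for 95% confidence)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A hypothetical headline figure of 52% from 1,000 respondents:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe * 100:.1f} percentage points")
```

Note the margin shrinks only with the square root of the sample size - quadrupling the respondents halves the uncertainty - which is why the number surveyed matters as much as the headline percentage.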

Policy initiatives often aim to change performance indicators. But public money spent on inferior (usually non-randomised) study designs that yield poor-quality evidence about how well policies actually work amounts to an evaluation charade: costly, inefficient in its denial of scientific method, and a loss to public accountability.

The working party looks to both the Treasury and the Delivery Unit to review the resources spent on poorly designed policy initiatives, and to hasten the adoption of better designs (including randomised ones) for cost-efficient and robust policy evaluation.

Professor Bird said: 'I very much regret that randomised trials of policy initiatives are rare. Missed opportunities for proper evaluation were mandatory drugs testing of prisoners (rolled out from 1995) and drug treatment and testing orders (piloted from 1998) to name but two.

'Despite targets on recidivism and crime clear-up rates, UK judges prescribe sentences on lesser evidence (about 'what works') than doctors prescribe medicines.

'Performance monitoring done well is broadly productive for those concerned. Done badly, it can be very costly and not merely ineffective but harmful and indeed destructive - of morale, reputations and the public services,' she said.

RSS president, Andy Grieve, said: 'The Royal Statistical Society wants to promote well-informed public debate on performance monitoring in the public services.

'Therefore, we'll engage with journalists in pursuit of better reporting standards, and with government and parliament to foster good practices in performance monitoring by implementing them across government.

'The Royal Statistical Society will start this by hosting a workshop on performance monitoring protocols, at which a template can be worked through for a series of existing performance monitoring procedures and then disseminated with these as exemplars.'

Click here for more details of this report.
