Quantification, Administrative Capacity and Democracy: Database of Performance Indicators Used for the Steering of Hospitals, Universities and Prisons in England, 1985-2015
Creator
Mennicken, A, London School of Economics
Griffiths, A, PEP Health
Cabane, L, Leiden University
Study number / PID
855065 (UKDA)
10.5255/UKDA-SN-855065 (DOI)
Data access
Open
Series
Not available
Abstract
This database seeks to capture the rise and extent of quantification as a tool of government by tracing the development of performance indicators used for regulatory purposes between 1985 and 2015 across different public service sectors (health/hospitals; higher education/universities; criminal justice/prisons) in the UK (with a specific focus on England). The cross-sectoral comparison intends to capture the diffusion of indicators across domains and to compare temporal dynamics in the adoption of similar or different new public management instruments: Do indicators develop at a similar pace across sectors? What are their focus, audience and goals? And how did these change over time?

Numbers increasingly govern public services. Both policymaking activities and administrative control are increasingly structured around calculations such as cost-benefit analyses, estimates of social and financial returns, measurements of performance and risk, benchmarking, quantified impact assessments, and ratings and rankings, all of which provide information in the form of a numerical representation. Through quantification, public services have experienced a fundamental transformation from government by rules to governance by numbers, with fundamental implications not just for our understanding of the nature of public service itself, but also for wider debates about the nature of citizenship and democracy.

This project scrutinizes the relationships between quantification, administrative capacity and democracy across three policy sectors (health/hospitals, higher education/universities, criminal justice/prisons) and four countries (France, Germany, Netherlands, UK). It offers a cross-national and cross-sectoral study of how managerialist ideas and instruments of quantification have been adopted and how they have mattered. More specifically, it examines (i) how quantification has travelled across sectors and states; (ii) relations between quantification and administrative capacity; and...
Terminology used is generally based on DDI controlled vocabularies: Time Method, Analysis Unit, Sampling Procedure and Mode of Collection, available at CESSDA Vocabulary Service.
Methodology
Data collection period
04/04/2016 - 31/12/2019
Country
England
Time dimension
Not available
Analysis unit
Organization
Event/process
Universe
Not available
Sampling procedure
Not available
Kind of data
Numeric
Text
Data collection mode
The database aims to provide a comprehensive overview of the different indicators used by regulators to measure the performance of universities, hospitals and prisons in England at different points in time (1985, 1995, 2005, 2015). It is intended to be exhaustive; however, we cannot lay claim to completeness, given the growing extent and scope of indicators, which makes 100% completeness a difficult goal to attain.

The chosen sectors (higher education/universities, healthcare/hospitals and criminal justice/prisons) constitute three public sectors where performance measurement took hold and where issues of quality, economy and democracy have been publicly discussed. All three sectors have been particularly exposed to managerialist thinking over the past three decades and present ideal cases for exploring tensions between “government by rules” and “governance by numbers”.

The database starts in 1985 as it aims to assess the impact of the new public management reforms that began in the 1980s on the development of performance indicators. Indicators were then re-assessed every ten years (1995, 2005, 2015) to capture their development over time.

One challenge we faced was identifying empirically what counts as an indicator. Indicators are often based on a compilation of different data assessing performance; they frequently involve multiple sources of data and various ways of aggregating it, which complicates the task of identifying what counts as an indicator (or a sub-component of one). In addition, some indicators are more easily accessible than others.

For the purpose of compiling this database, we identified and selected indicators through two main methods. First, we reviewed primary sources where indicators are published (official reports, websites, etc.). Second, we complemented our search with a review of secondary sources (academic articles, books, etc.) to check for completeness and to gain information about indicators that was otherwise not available.
As highlighted above, the database aims to be as comprehensive as possible; however, it does not (and cannot) provide a complete overview of all indicators that may have existed at the time.
Funding information
Grant number
ES/N018869/1
Access
Publisher
UK Data Service
Publication year
2021
Terms of data access
The Data Collection is available to any user without the requirement for registration for download/access.