Calibration with confidence: A principled method for panel assessment
Creator
MacKay, R, University of Warwick
Study number / PID
852889 (UKDA)
10.5255/UKDA-SN-852889 (DOI)
Data access
Open
Series
Not available
Abstract
Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, ‘true’ values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options.

This network project brings together economists, psychologists, computer and complexity scientists from three leading centres for behavioural social science at Nottingham, Warwick and UEA. This group will lead a research programme with two broad objectives: to develop and test cross-disciplinary models of human behaviour and behaviour change; and to draw out their implications for the formulation and evaluation of public policy.
Foundational research will focus on three inter-related...
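To make the abstract's additive model concrete, the sketch below illustrates confidence-weighted calibration in Python: each score is modelled as item value plus assessor bias, declared confidences serve as least-squares weights, and the biases are constrained to sum to zero so the solution is unique. The toy data, the calibrate() name and all parameters are invented for illustration; this is a minimal sketch of the idea, not the paper's published implementation.

```python
import numpy as np

def calibrate(assessments, n_assessors, n_items):
    """Confidence-weighted least squares for the additive model
    score = item_value + assessor_bias, with biases summing to zero."""
    rows, rhs, w = [], [], []
    for a, i, score, conf in assessments:
        row = np.zeros(n_items + n_assessors)
        row[i] = 1.0               # coefficient of item value v_i
        row[n_items + a] = 1.0     # coefficient of assessor bias b_a
        rows.append(row); rhs.append(score); w.append(conf)
    # Gauge constraint: biases sum to zero, otherwise values and biases
    # are only determined up to a constant shift. Added as a heavily
    # weighted extra row.
    gauge = np.zeros(n_items + n_assessors)
    gauge[n_items:] = 1.0
    rows.append(gauge); rhs.append(0.0); w.append(1e6)
    A, b, sw = np.array(rows), np.array(rhs), np.sqrt(np.array(w))
    sol, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return sol[:n_items], sol[n_items:]   # (item values, assessor biases)

# Toy example: three assessors, three items, overlapping assessments so
# the assessor-item graph is connected (a requirement noted in the abstract).
toy = [(0, 0, 7.0, 1.0), (0, 1, 5.0, 0.5),
       (1, 1, 6.0, 1.0), (1, 2, 4.0, 1.0),
       (2, 0, 8.0, 0.5), (2, 2, 5.0, 1.0)]
values, biases = calibrate(toy, n_assessors=3, n_items=3)
print("calibrated item values:", values)
print("assessor biases:", biases)
```

Scaling each row by the square root of its confidence is the standard reduction of weighted least squares to ordinary least squares; the high-weight gauge row fixes the constant shift that the assessor-item graph alone cannot determine.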
Methodology
Data collection period
31/12/2012 - 30/09/2017
Country
United Kingdom
Time dimension
Not available
Analysis unit
Individual
Universe
Not available
Sampling procedure
Not available
Kind of data
Numeric
Data collection mode
Experimental data. We have tested the approach in three contexts, and report in detail on two case studies here. In the first case study, we use a computer-generated set of data containing true values of assessed items, assessor biases and confidences for the assessments, and the resulting scores. This has the advantage of allowing us to compare the values obtained by the new approach with the true underlying value of each item. The second case study is an evaluation of grant proposals using realistic data based on a university's internal competition. In this test there is, of course, no access to ‘true’ values, so instead we compare the evidence for the models using a Bayesian approach (appendix E) and compare their posterior uncertainties (appendix D). The third context in which we tested our method was the assessment of students; we report briefly on this at the end of the section.
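As a rough analogue of the first case study, the following sketch generates synthetic scores from known true values, assessor biases and confidences, then compares simple per-item averaging with the hypothetical calibrate() function sketched under the abstract above. All distributions and parameters are invented; the study's actual simulation design is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation only: sizes and distributions are invented,
# not taken from the study's simulation.
n_items, n_assessors, per_item = 20, 8, 3
true_values = rng.normal(5.0, 1.0, n_items)
true_biases = rng.normal(0.0, 0.8, n_assessors)

assessments = []
for i in range(n_items):
    for a in rng.choice(n_assessors, size=per_item, replace=False):
        conf = rng.uniform(0.5, 1.0)           # declared confidence
        noise = rng.normal(0.0, 0.3 / conf)    # less confident -> noisier
        assessments.append((a, i, true_values[i] + true_biases[a] + noise, conf))

# Baseline in widespread use: simple per-item averaging, which ignores
# assessor biases and confidences.
simple = np.array([np.mean([s for a, j, s, c in assessments if j == i])
                   for i in range(n_items)])

# calibrate() is the hypothetical weighted least-squares sketch shown
# under the abstract above.
calibrated, _ = calibrate(assessments, n_assessors, n_items)

print("mean abs error, simple average:", np.mean(np.abs(simple - true_values)))
print("mean abs error, calibrated:   ", np.mean(np.abs(calibrated - true_values)))
```

Because the synthetic data carry known true values, the two estimators can be compared directly, mirroring the advantage the study attributes to its simulated case study.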
Funding information
Grant number
ES/K002201/1
Access
Publisher
UK Data Service
Publication year
2017
Terms of data access
The Data Collection is held in an external repository; access is provided via the Related Resources links.