Performance Evaluation and The NHS: A Case Study In Conceptual Perplexity and Organizational Complexity

Rudolf Klein, Professor of Social Policy at the University of Bath.

First published in Public Administration Vol. 60 Winter 1982 (385-407)

This paper adopts a comparative perspective towards the analysis of performance evaluation in the National Health Service. The NHS, it is argued, is best seen as an organization which is not unique but which ranks high on a number of dimensions: uncertainty about the relationship between inputs and outputs; heterogeneity of activities and aims; the ambiguity of the available information. These factors help to explain why performance evaluation in the NHS is both conceptually and organizationally problematic, and fragmented and professionalized in practice. By looking at the same factors in other organizations, it may be possible to start constructing a framework for examining the problems of performance evaluation in different settings.


To evaluate, appraise or assess the performance of any organization is to engage in debate. It is a debate about how the objectives of the organization should be defined, about the criteria to be used to measure performance and about the interpretation of information. Objectives do not set themselves: they are the product of organizational processes. Criteria are not self-evident: they are selected and shaped by the values of the organizational actors. Information does not automatically yield conclusions: data acquire meaning only when there is already an agreed policy paradigm (Wildavsky and Tenenbaum 1981). Nor can the debate be resolved by an appeal to techniques; on the contrary, the choice of techniques to be used is part of the debate about how the performance of the organization should be perceived and analysed.

The starting point of this paper is therefore that performance evaluation can most usefully be seen, to adapt Anderson’s definition of policy analysis, as a process of argument (Anderson 1979). It is an argument which will be shaped both by the substantive nature of the policy arena and by the characteristics of the organization concerned. On the one hand, it will reflect the nature of the ‘goods’ produced. On the other hand, it will reflect the structure of the organization: the power, legitimacy and authority of the different actors involved in the argument.

If the evaluation of performance is problematic in any organization, it is particularly so in an organization whose final output is ‘health care’. For this is a policy arena distinguished by its complexity, heterogeneity, uncertainty and ambiguity — to introduce the themes which will shape the analysis of this paper. It is marked by a high degree of complexity in that the product of health care requires the co-operation of a wide mix of skills, ranging from doctors and nurses to laboratory technicians and ward orderlies (Klein 1980). Indeed, this complexity is reflected in the structure of the Department of Health and Social Security, which is remarkable for the number of ‘occupational groups’ represented in its administrative hierarchy: 141 against 25 in the Department of Education (Hood and Dunsire 1981, 181). It is distinguished by its heterogeneity in that it covers a variety of very different activities, ranging from the provision of curative services for the acutely ill to care for the mentally handicapped. It is marked by the large degree of uncertainty about the relationship between inputs and outputs, means and ends: while a shoe manufacturer may be reasonably certain that a given input of leather, labour and capital investment in machinery will yield boots as a final output, there can be no equivalent certainty that a given input of resources will produce a given quantity of ‘health’ at the end of the day. Finally, and linked to the uncertainty, the health care policy arena is remarkable for the ambiguity of the available information: thus a patient treated may be seen either as an indicator of success (if the objective of the organization is defined as being to provide treatment) or as an indicator of failure (if the objective is defined to be the prevention of ill-health).

Organizations providing health care have a further distinguishing characteristic. This is that they are not simply or primarily responding to demands.
To a large (if debatable) extent, they themselves create the demands. If the private market is ultimately controlled by the preference of the consumers, in theory at least, the health care market is ultimately controlled by the decisions of the producers (Abel-Smith 1976). That is, the professional providers determine what the patient ought to have; once the consumer has made the decision to enter the health care market, it will be the professionals who decide what kind of treatment he or she should have — what drugs should be prescribed, what tests should be carried out, whether an operation should be performed, how long the patient should stay in hospital, and so on. Health care organizations can therefore be seen as supplier-dominated services, shaped not by consumer preferences but by producer norms. Lastly, health care organizations define and select, to a large extent, their own clientele. The population of hospitals is not defined by statute (except in the case of a small minority of the mentally ill); it is not legislation which determines who visits the GP’s surgery.

All this would seem to suggest that the process of argument about performance evaluation will be particularly difficult to resolve in the health care policy area. Its complexity inevitably generates a variety of objectives; heterogeneity reinforces competition between criteria; uncertainty and ambiguity add to the difficulties of appealing to the ‘facts’ as a way of resolving the debate. The dominance of producers means that performance cannot be judged by its success in meeting demands, while the absence of statutory definitions of the clientele means that it cannot be assessed on the basis of legislative criteria.

To argue that health care is ‘different’ is not to assert that it is unique. The aim of the analysis so far has not only been to identify the specific factors calculated to make performance evaluation in the health care policy arena particularly problematic, but also to suggest some conceptual benchmarks to allow comparisons with other services. For example, the analysis might suggest that the problems of performance evaluation faced by health care organizations will be shared — if to a lesser degree — by other services which rank high on the complexity, heterogeneity, uncertainty and provider-dominance scale: the personal social services and education for example. In contrast, the problems will be less for those services at the opposite end of the scale which are relatively simple, have a clearly defined product and an unambiguous relationship between inputs and outputs, such as water authorities and the social security system.

Equally, the analysis so far has sought to identify those factors which are common to all health care organizations, irrespective of their institutional structure or their financial basis. In this, the aim has been to try to emphasise that the problems of performance evaluation faced by the National Health Service do not necessarily or exclusively reflect its own, special institutional characteristics. All health care organizations, it is argued, face the same conceptual perplexities when it comes to performance evaluation. It is, however, their special institutional characteristics which shape the response to these perplexities: the organizational politics of performance evaluation. And it is to two characteristics of the NHS, particularly relevant to the present discussion, that we now turn.

Among public services in Britain, the NHS is unique in that it tries to square two circles (Klein 1983). First, it is an attempt to reconcile central government responsibility for the financing of the service with delegation of responsibility for service delivery to peripheral authorities. Second, it is an attempt to combine the doctrine of public accountability with the doctrine of professional autonomy. Both points need brief elaboration.

From its launching in 1948, the NHS has been a case-study in unresolved conflict between centre and periphery. Given that it is financed overwhelmingly by Exchequer revenue, the constitutional position is quite clear: the Secretary of State for Social Services is responsible to Parliament for every penny spent in the NHS. Moreover, if the aim of creating a National Health Service is to provide national policies — such as equity in the distribution of resources and the achievement of specific standards for particular client groups — then, ineluctably, central government cannot avoid taking responsibility. Yet at the same time, given the complexity and heterogeneity of the service, central government has sought to push responsibility for day-to-day decisions about the delivery of services to the periphery: to stress the responsibility of authority members for the performance of those services under their control. The balance of power between centre and periphery has shifted from time to time, but the dilemma of how to reconcile central control and local autonomy has remained unsolved throughout the 35 years of the NHS’s history. In other words, the locus of responsibility for performance evaluation is itself problematic: the subject of an on-going debate.

But not only is the locus of responsibility for performance evaluation uncertain, so too is the scope of the responsibility. Health care is about the use of public resources at the point of service delivery: the way in which any given bundle of resources is used by the professional providers. It is they who — within the limits of their budgetary allocations — determine who gets what. But the institutionalization in the NHS of the doctrine of professional autonomy — that only doctors and other clinicians can make judgments about the way in which their peers use resources — inevitably means that professionals have a monopoly of legitimated authority when it comes to assessing or appraising what is done (or not done) for individual patients.

It is these two unresolved dilemmas — how to reconcile central and local responsibility, public accountability and professional autonomy — which provide the threads which will help to guide us through the labyrinth of the organizational politics of performance evaluation in the NHS. But first a general point needs to be made. There is a risk that, in discussing performance evaluation, its desirability will be taken for granted: that the self-evaluating organization will be assumed to be the norm (Wildavsky 1980). To smuggle this assumption into any analysis is to ignore the fact that performance evaluation not only raises problems — in the sense of generating conceptual perplexities — but is itself problematic in the specific sense that its use in any organization requires explanation. If it seems self-evidently rational for any organization to assess its performance, it does not follow that it will be self-evidently rational for organizational actors to engage in or support such activities. To assess someone’s performance is, potentially, threatening. Evaluation is an instrument of control. Therefore, any inquiry must analyse not only the organizational processes which define performance — who takes part in the argument, and what the currency of debate is — but also the factors that inhibit or promote the practice of evaluation.


Organizations not only evaluate themselves, they can also be evaluated from the outside. The evaluation of organizational performance thus complements evaluation in organizations. This section reviews some of the independent attempts to assess the performance of the NHS. By doing so we can help to extend our understanding of the conceptual perplexities involved in trying to appraise the NHS, before exploring the way in which those perplexities have been dealt with in the organizational context of the NHS.

The first attempt, albeit a restricted one, to assess the performance of the NHS was made by the Guillebaud Committee (Guillebaud 1956). Its remit was a highly specific one: to examine the costs of the NHS. In this, it reflected the circumstances of its birth — the post-1948 anxieties raised by what appeared to be the spiralling budget of the NHS. The Committee therefore defined performance in terms of a single criterion: value for money or efficiency.

Since this was to be a continual preoccupation in attempts to evaluate the performance of the NHS, it is worth looking at the way in which the Guillebaud Committee tackled the issue. Most notably it adopted a very limited definition of efficiency. It concentrated on the way in which resources were used, rather than on the relationship between inputs and outputs. This approach highlights the problem (not even discussed, however, in the report) of measuring the output of the NHS — a recurring theme in all subsequent attempts to evaluate its efficiency.

‘It is one of the problems of management’, the report argued, ‘to find the right indices for measuring efficiency’. And the criterion it adopted, in line with the Ministry of Health’s own strategy at the time and subsequently, was that of comparative performance. If the relationship between inputs and outputs could not be measured, at least the comparative performance of different health authorities could be. The efficiency or inefficiency of individual health authorities could thus be assessed in terms of their relative position in the performance league table, using such indicators as ‘average occupancy of beds, length of stay of patients, bed turn-over, turn-over interval, waiting time etc.’, as well as financial statistics.

It is difficult to avoid the conclusion that the Guillebaud Committee equated efficiency with economy: that it was more concerned with maximizing the intensity with which NHS resources were used than with the appropriateness of their use. Indeed, the report explicitly repudiated the suggestion that data about the use of NHS resources might be used to assess performance in terms of the impact on patients, and assured the medical profession that:

We should regard it as unfortunate if opposition to the compilation of departmental costs or similar data were to be based on the mistaken idea that any conclusions could be drawn from these figures as to the professional standards or competence of doctors in different hospitals.

If evaluating the performance of the NHS was difficult, the Guillebaud Report concluded, then the answer lay in more and better information. Once again they were anticipating future reactions to the perplexities of trying to evaluate the NHS’s performance. The Ministry of Health had appointed its first statistician as late as 1955, and the report urged the creation of a ‘Research and Statistics Department’, within the Ministry of Health, to generate the information needed. The department was created, took root and expanded. The number of statisticians and economists increased greatly. So did the flow of data. But when, towards the end of the seventies, the Royal Commission on the National Health Service (Merrison 1979) came to look at the NHS’s performance, it found itself almost as flummoxed as the Guillebaud Committee had been 23 years before. The questions put by the Royal Commission were a great deal more sophisticated and wide-ranging than those of the Guillebaud Committee. But the answers were scarcely more convincing.

The first question, and the most difficult, that the Royal Commission asked itself, was whether the performance of the NHS could be assessed in terms of its final output i.e., its impact on the population’s health. In so far as the NHS has any clearly stated objectives — which might be used as criteria of performance — they are ‘to secure improvement in the physical and mental health of the people’ and ‘in the prevention, diagnosis and treatment of illness’ in the words of the 1946 Act creating the Service. But the Royal Commission was forced to conclude that this question was strictly unanswerable. First, there was the puzzle set by conceptual complexity:

“Health is not a precise or simple concept. We are therefore dealing with many different concepts of health, and the functions of the NHS should reflect this.”

Second, there was the puzzle set by the uncertainty of the relationship between inputs and outputs:

“Measuring the health of a nation should not be confused with measuring the performance of its health service. Even the most cursory examination of the past shows clearly enough that improved nutrition, hygiene and drainage have had greater effects than many dramatic cures for specific ills.”

Thirdly, there was the puzzle set by the ambiguity of such statistics as were available: ‘The lack of a clear and commonly accepted definition of health creates problems for attempts to assess the efficiency of a health service by measuring the health of a population’.

These conceptual perplexities are central to any attempt to evaluate the performance of the NHS. Take the problem of trying to assess the total impact of the NHS on the nation’s health. Here there are two fundamental difficulties, as the Royal Commission pointed out. In addition to the problem of relating health service inputs to the health of the population, there is the further problem of finding appropriate indicators. Death is a reasonably robust statistical fact, so comparisons across time and between countries at any point in time are, in theory, feasible. Accordingly, the Royal Commission examined mortality rates and found that these had indeed fallen in the life-span of the NHS. There had, for example, been a drop of almost 56% in perinatal mortality rates between 1948 and 1977. But at the same time, Britain’s perinatal rate was higher than that of Sweden, France, the Netherlands and most other advanced industrial societies (with the exception of the United States). So one possible conclusion might, on the face of it, be that while the NHS’s performance had improved over time, it was still lagging on a comparative basis. In fact, however, such a conclusion has immediately to be qualified. There is no simple relationship between health care inputs and the outputs as measured by perinatal mortality rates (or indeed any other mortality rates). Not only are these influenced by social and economic factors, but a statistical analysis shows, paradoxically, that ‘a high ratio of doctors to patients is associated with a relatively high perinatal rate’. Furthermore, the Royal Commission pointed out:

“advances in medicine may improve perinatal figures by preserving the lives of severely handicapped babies who would otherwise have died at birth but whose prospects of survival for more than a few years, or of having anything like a normal life, are small.”

However, mortality rates measure only one dimension of health. They ignore the other dimension of health — the quality of life being led. Yet, as the Royal Commission argued:

“Quality of life becomes increasingly important as the possibilities develop of extending life for people who would in the past have died from their illnesses or injuries, and as people live longer and the chronic conditions of old age become more common.”

Measuring the performance of the NHS — or any health care system — would therefore require indicators of morbidity to supplement mortality data. Unfortunately, morbidity data is both scarce and ambiguous. Most of the information about morbidity is generated by the activities of the NHS itself, and is accordingly biased:

“Hospital statistics give information about the numbers of patients treated in hospital and about numbers waiting to be treated, but variations in these figures may be due as much to facilities available, either in hospitals themselves or in the community, and to the costs to patients of using the service, as to differences in the health of the population.”

If the supply of health care largely determines the demand for it, then it is clearly dangerous to draw any conclusions about the population’s health from data about activity.

Other sources of data also pose problems of interpretation. Sickness absence figures seem to be very sensitive to general social factors such as individual satisfaction with the job and the general rate of unemployment. And although the General Household Survey has asked questions about people’s own perception of their health since 1972, it is very difficult to draw any conclusions from changes over time since the information produced appears to vary with the precise wording of the questions being asked. For example, in 1977 the survey changed the wording of its questions about health, and there was an immediate — and very sizeable — increase in the number of people reporting themselves to be suffering from ill-health (Office of Population Censuses and Surveys 1979).

The conceptual problems involved in trying to measure the overall impact of the NHS on the population’s health also constrain the conclusions that can be drawn about the general performance of the NHS in terms of efficiency. ‘There can be no simple summary measure of the efficiency of the health service because of the fundamental difficulty of defining and measuring its outcome’, the Royal Commission concluded. It went on to argue that ‘crude indications of efficiency are the average length of stay in hospital and the number of patients treated per hospital bed, and these suggest steady improvements in NHS efficiency’. But these are very crude factors indeed: to compare the productivity of the NHS by using the throughput per bed across time in an attempt to assess its performance is to make a large number of assumptions. It assumes, above all, that there is no variation in the kind of patients being treated (e.g., the severity of their condition) or in the consequent improvement in their health. Moreover, it ignores the fact that using the ‘bed’ as the unit of analysis for measuring the efficiency of the NHS may be positively misleading. It is not beds, as such, which cost money, but the staff required to run them and the resource demands generated by the staff.

If, in fact, we look at productivity in terms of the ratio between staff and the number of patients treated, a very different conclusion seems to emerge. A study carried out by the DHSS, for the period 1971 to 1978, showed that the number of medical staff in the medical specialties increased by 25%, while in-patient admissions went up by 6.9% and out-patient attendances by 3.4% (DHSS 1981). During the same period, the number of nurses also rose much more sharply — by 17% — than the number of patients treated. So, overall, it would appear that there had been a drop in the efficiency — as measured by productivity — of the NHS. Similar conclusions have been drawn from the increase, during the same period, in the number of administrative and managerial staff (Confederation of British Industry 1981). In fact, the figures are once again flawed by ambiguity. Thus one possible explanation of the apparent drop in productivity might be, as the DHSS study pointed out, that ‘the quality of care may have improved as a result of the development of new techniques and treatments which often require more intensive use of medical (and other) manpower’. In short, unless we have some concept of the ‘value added’ by the NHS in terms of improvements in health — and an index for measuring it — we cannot measure its performance in terms of the relationship between inputs and outputs.

It is not surprising, therefore, that a considerable academic industry has sprung up — in this country and elsewhere — dedicated to developing such an index. Some advance has been made towards developing measurements of hospital outputs (Rosser and Watts 1972; Rosser 1976) and indicators designed to provide measurements of changes in the health of the population over time (Williams 1974). But there are some inherent conceptual problems in such an approach: an output index has to try to sum up what may be incommensurate health status indicators such as death and discomfort (Doll 1974). Equally, the theoretical arguments have so far proved more convincing than the attempts to translate them into practice. For the foreseeable future, at any rate, this approach does not seem likely to deliver the technical tools required for evaluating the performance of the NHS or any other health care system. And even if it succeeded in developing something equivalent to the gross national product for health — an index which would allow us to look at year by year variations in the population’s health and relate these to changes in the inputs of health care — this would not dispose of the underlying conceptual problems of evaluation, although it would certainly make the task easier by dispelling at least some of the present uncertainty and ambiguity.

The underlying conceptual problem involved in trying to assess the performance of the NHS arises, to return to the argument of the introduction, from the multiplicity of criteria that can be deployed. Performance is, itself, a contested notion — a black hole, as it were — which is defined by the criteria used. Efficiency is obviously one such criterion. But there are others and, once again, the Report of the Royal Commission usefully illustrates both their variety and some of the problems encountered when trying to apply them in practice.

Thus one objective of the NHS, as defined by its creators, is to ensure equality of access to health services. Accordingly, the Royal Commission used this as an evaluative criterion in appraising the geographical distribution of resources within the NHS, and examined the distribution of access in terms of the use made of NHS services by different social classes, allowing also for differences in the distribution of ill-health between social classes i.e., defining equality in terms of equal access for equal conditions. In both respects it found the performance of the NHS wanting. The geographical distribution of resources showed a bias towards the historically well-endowed parts of the country. And while the lower social classes suffer from more ill-health, it concluded that ‘the higher socio-economic groups receive relatively more of the expenditure on the NHS’.

But there are a number of problems about using this evaluative criterion. These are well illustrated by the report of a research working group set up by the DHSS (Black 1980) to examine inequalities in health. The report concluded that the NHS had failed to bring about equality in health care, whether measured by access or outcome. The analysis can be criticized on technical grounds. For example, it tends to brush aside the effects of selective social mobility i.e., the possibility that healthier people are also more mobile. But, more fundamentally, it raises the question of the extent to which it makes sense to assess the performance of the NHS using indicators which reflect factors outside its control. If equality of access is important because it is supposed to produce equality of outcome in terms of health status then, as the Black report makes clear, the NHS is set an impossible aim since health status itself is largely determined by socio-economic conditions. Furthermore, if equality of access is seen as a desirable policy objective in its own right, then it may still be impossible for the NHS to achieve it on its own since the resources required to make use of any given amount of health care (information, social confidence and skills) are unequally distributed in our society. Finally, it may be noted that using equality as the only criterion for evaluation (Fishkin 1978) may produce perverse policy conclusions. Not only may there be a trade-off between equality and efficiency (Okun 1975) but there may be a trade-off between the aims of maximizing equality and maximizing welfare. The former, for example, could be achieved by reducing those services (e.g., preventive care) used most intensively by the middle classes. Conversely, a welfare maximizing strategy might well increase inequalities, in so far as it might concentrate on those population groups which are most accessible to health care intervention.

To use the equality criterion is to make explicit the ideological basis of performance appraisal. It is derived from a social equity model of health care and the consequent definition of the NHS as an instrument of distributional justice. Starting from a different ideological position, and using a different model of health care, would suggest a different evaluative criterion. A market model of health care would suggest that, in appraising the performance of the NHS, the test should be the extent to which it meets consumer preferences (Harris and Seldon 1979). The attempt by the Royal Commission to combine the equality criterion with the consumer preference criterion shows, however, that there may not only be a trade-off between different criteria but also a direct conflict or incompatibility.

Thus, in assessing the performance of the NHS, the Royal Commission took into account the extent to which it satisfied ‘reasonable expectations’. It is a curious phrase whose question-begging ambiguity — what expectations are to count as ‘reasonable’? — reveals the tension between two ultimately irreconcilable views about who determines the legitimacy of the criteria to be used in appraising performance, providers or consumers. If performance evaluation is seen as a technical process, assessing progress towards agreed societal goals such as the achievement of equality or efficiency, then it follows that the service providers are likely to have the required expertise and know-how. If, however, performance evaluation is seen as an inter-active process, where the goals themselves are determined in the political or economic market, then it follows that the participation of consumers in the goal-setting activities is as important as that of the providers.

The Royal Commission, despite its ritual nod in the direction of the consumer, clearly saw performance evaluation as an essentially technical process. It argued that:

“It is misleading to pretend that the NHS can meet all expectations. Hard choices have to be made. It is a prime duty of those concerned in the provision of health care to make it clear to the rest of us what we can reasonably expect.”

This would seem to be saying that the criteria of performance can only be defined by the performers.

Moreover, the Royal Commission defined the domain in which consumer preferences are relevant in very restrictive terms. ‘Most patients lack the technical knowledge to make informed judgments about diagnosis and treatment’, it pointed out, and ‘ignorance may as easily be a reason for a patient being satisfied with his treatment as for his being dissatisfied’. It therefore concluded that the prime expertise of the consumer lay in knowing ‘whether he has been humanely treated’. And indeed the survey of patient views, sponsored by the Royal Commission (Gregory 1978), concentrated very much on attitudes towards the routines of hospital life, the extent to which patients felt they had been given adequate information for example. Although the study revealed considerable dissatisfaction with specific aspects of the NHS such as poor communications, it confirmed the findings of all previous surveys: there was overwhelming general satisfaction with the performance of the NHS.

To make this point is not to pick out the Royal Commission for criticism but rather to re-emphasize the fact that criteria of performance are inescapably linked to the models of health care being used. The NHS is, in effect, a monument to the social equity model of health care. Equal needs, it is implied by this model, should get equal treatment. But while consumers may have requirements or make demands, it is experts who define needs. Built into the social equity model of health care there is, therefore, a bias towards deriving criteria of performance from the definitions of need made by the service providers. This bias is accurately reflected in the Royal Commission’s definition of ‘reasonable’ expectations — a definition which is not arbitrary or accidental but follows logically from the technocratic paternalism implicit in the social equity model of health care. It is a bias which suggests that the performance of the NHS must be judged not by the extent to which it satisfies consumer demands, but by the extent to which it meets professionally defined need.

Given such a perspective, the number of people on the waiting lists for NHS treatment is seen as being a legitimate criterion. Their presence on the lists means that they have been defined to be ‘in need’ by the professional providers. But there are serious problems in interpreting even these statistics since doctors tend to define ‘need’ elastically in the light of the available resources (which may help to explain the otherwise puzzling fact that throughout the history of the NHS, the numbers on the waiting lists have tended to hover around the 600,000 mark). Thus the annual reports of the DHSS regularly give the numbers on the NHS waiting lists, but provide no information about the numbers opting for the private sector (although the latter could, logically, be considered just as much an indicator of unsatisfied demand as the former).

The central role of the professional providers in defining the parameters of performance also emerges from the Royal Commission’s discussion of how to assess the NHS in terms of the quality or standards of service provided. Information about the quality of care, the Royal Commission pointed out, is of three types:

  1. Information About Inputs. Here the assumption is that the greater the inputs — as measured in, for example, the number of medical and nursing staff — the higher the quality.
  2. Information About Outcome. Here the critical element is the measurement of efficacy, not efficiency: the ability to demonstrate that certain forms of care or intervention improve the patient’s health.
  3. Information About Process. Here the central requirement is a professional consensus about what ought to be done, i.e. standards of treatment laying down the pattern of care and procedures (for example, X-raying patients suspected of having a fracture).

The disadvantage of deriving conclusions about quality from the quantity of inputs is its implication that the only way of raising quality is to increase resources. The difficulty about basing conclusions on outcome is that, however desirable and feasible this may be in principle, there is in fact a remarkable shortage of the kind of studies required to measure the efficacy of medical intervention (Cochrane 1972). ‘Medicine is still an inexact science, and many of the procedures used by doctors, nurses and the remedial professions have never been tested for effectiveness’, the Royal Commission pointed out. So, inevitably, assessments of standards or quality tend to fall back on process indicators: what is thought to be desirable by the professional providers. ‘Standards of cure and care within a given level of resources are in practice largely in the hands of the health professions’, the Royal Commission concluded. Performance evaluation, from this perspective, becomes mainly a matter of self-evaluation by the professional providers, since they alone have the kind of experiential knowledge (as distinct from scientific techniques) required in the management of uncertainty.


The Secretary of State will be accountable for the performance of the NHS and must maintain control of the performance of functions delegated to the Regional Health Authorities. The DHSS will therefore monitor Regional performance in relation to agreed objectives…

….. The Regional Health Authority must control the performance of its Area Health Authorities and its Regional officers. To do so it will receive reports on AHA performance from each AHA, ensure that progress is according to plan and that services are being provided throughout the region with efficiency and economy, challenge the performance of AHAs if necessary and ensure that appropriate remedial action is taken….

….. The (Area Health) Authority must control the performance of its officers at Area Headquarters and in District Management Teams. To do so, it will receive reports on performance from the Area Team of Officers and from each DMT, ensure that progress is according to agreed objectives, targets and budgets, and that services are provided with efficiency and economy, challenge DMTs on their performance and ensure that appropriate action is taken to correct unsatisfactory performance (DHSS 1972).

As the above quotation shows, the concept of performance evaluation is embedded in the structure of the NHS. The extract comes from the ‘Grey Book’, published by the DHSS in the run-up to the 1974 reorganization (DHSS 1972), which set out the managerial philosophy that helped to shape the new model NHS. Moreover, as we shall see, the emphasis on performance evaluation survived the 1982 reorganization, even though this was originally intended to diminish the role of central government and diffuse responsibility to the periphery (Klein 1981).

Indeed, in many respects the 1974 model NHS represented an institutionalized monument to the faith in rational planning that dominated the first half of the seventies. The new technology of planning was to provide the tools, while the new institutional structure of the NHS was to provide the appropriate organizational setting for encouraging their use. Already in 1971 the DHSS had started work on developing a programme budget designed to relate expenditure figures to policies by breaking down its total budget into the spending allocations for specific client groups (Banks 1979). Subsequent to 1974, the DHSS introduced a new planning machinery which laid down guidelines intended to shape the decisions made by health authorities (Butts, Irving and Whitt 1981). In 1976, the DHSS published its first priorities document, setting out growth patterns and objectives for different parts of the service (DHSS 1976). The same year the Department adopted the recommendations of a working party which devised a formula for allocating funds to health authorities according to criteria derived from epidemiological data, and intended to bring about a rational fit between objectively measured need and resources (Buxton and Klein 1978). Thus the insistence on the importance of assessing performance — at all levels of the NHS’s organizational hierarchy — sprang logically from the insistence on setting objectives, deciding on priorities and monitoring projects.
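The logic of such a needs-weighted formula can be sketched in outline. What follows is a purely illustrative sketch, not the actual RAWP formula (which worked from age/sex structure and standardized mortality ratios): all region names, populations and weights are invented.

```python
# Illustrative sketch of allocation by need-weighted population, in the
# spirit of (but not reproducing) the RAWP formula. The need weights
# here are invented; the real formula derived them from epidemiological
# data such as standardized mortality ratios and age/sex mix.
def allocate(budget, regions):
    total_weighted = sum(r["population"] * r["need_weight"] for r in regions)
    return {
        r["name"]: budget * r["population"] * r["need_weight"] / total_weighted
        for r in regions
    }

regions = [
    {"name": "North", "population": 3_000_000, "need_weight": 1.1},
    {"name": "South", "population": 3_000_000, "need_weight": 0.9},
]
shares = allocate(100.0, regions)
# Equal populations, unequal measured need: North receives 55.0, South 45.0
```

The point of any such formula is that two authorities of equal population receive different allocations once objectively measured need is weighted in; the aspiration to a ‘rational fit’ between need and resources is built into the arithmetic.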

Implicit in such an approach, however, is a very different definition of performance evaluation from that discussed in the previous section. It involves defining performance in organizational terms: assessing the organization’s performance in the currency of its own criteria. The question being asked is not primarily whether the NHS is improving the population’s health, equalizing access or improving the quality of the services being provided. It is whether the NHS is moving in the direction laid down by the central policy makers.

This, of course, is to over-simplify. If we look at the intentions of the central policy makers — as expressed in the planning guidelines, the priorities documents and the new (RAWP) formula for distributing resources — these were designed to promote equity and quality. For example, they were intended to bring about geographical equity in access to health services by distributing resources according to need, and to improve the quality of services for certain client groups such as the elderly and the mentally ill and handicapped by increasing the funding of the services concerned.

But the way in which the intentions of the central policy makers were set out reflected the limitations on their ability to influence what happened at the periphery (Hunter 1980; Haywood and Alaszewski 1980). The policies — particularly as set out in the planning guidelines and priorities documents — were expressed in a series of norms of desirable levels of provision or desirable levels of funding for a given population. The assumption was that a given pattern and level of inputs would produce the desired outputs. Using the agreed objectives, targets and budgets as the currency of performance evaluation therefore constrained the conclusions that could be drawn from the exercise, because of the uncertain relationship between inputs and outputs and the ambiguity of the statistical information about activities. Such an approach could assess the extent to which the objectives of the central policy makers were being carried out in terms of the distribution of resources. It could not, however, evaluate the extent to which the intended effects were being produced in terms of the final outputs: the way in which any given bundle of resources was being used by the service providers at the periphery.

For information about the periphery, the DHSS can turn to another source: what is, in all but name, its inspectorate. This is the Health Advisory Service: originally entitled the Hospital Advisory Service, it was set up in 1969 in the wake of the Ely scandal to report to the Secretary of State on conditions in hospitals for geriatric patients and the mentally handicapped and ill (DHSS 1971). In 1976 responsibility for the mentally handicapped was hived off to a separate national development team for the mentally handicapped, and the remit of the HAS was extended to cover community services and long-stay services for children (DHSS 1977). The HAS sends out peripatetic, multi-disciplinary teams of health service professionals to inspect facilities and services and, apart from publishing an annual report, also makes a confidential report on each visit to the relevant health authority. It therefore provides an instrument for assessing the standards, or quality, of care — for feeding to the centre information about the performance of the services with which it is concerned.

The performance of the HAS has not, itself, been independently evaluated and it is therefore difficult to come to any conclusions about its effectiveness. But there are two aspects of its role which require noting. First, its criteria of evaluation are essentially taken from professional ‘best practices’. Second, the remit of the HAS is limited to services for the most deprived groups: it provides no information about the performance of the NHS as a whole. In part, this may reflect the anxiety of central policy makers to have information about the most scandal-prone sectors of the NHS where clients tend to be exceptionally vulnerable and therefore in need of protection. However, it also reflects the fact that these are the sectors of the NHS which come near the bottom of the medical profession’s own hierarchy of prestige, and where indeed doctors tend to play a less dominant role than in the acute services. The profession as a whole, therefore, did not see the introduction of the HAS as a direct attack on medical autonomy, even though its title — and the careful avoidance of the word ‘inspectorate’ — betrays the need to manage medical suspicions. Significantly, however, the introduction of the HAS has not led to the subsequent adoption of this model for evaluating the performance of the rest of the NHS.

There is, then, a complex and comprehensive system for evaluating performance in the NHS. But the problems of evaluating the performance of the NHS remain. Perhaps they can be best illuminated by the dialogue between the DHSS and parliamentary committees, searching for a way of relating changes in public expenditure (inputs) to the performance (outputs) of the NHS. (The author must declare an interest: he was a specialist adviser to the Social Services and Employment Sub-Committee of the Expenditure Committee and to its successor, the Social Services Committee, from 1976 to 1981.)

In 1977, for example, the Expenditure Committee addressed itself to the question of how ‘increasing expenditure has, over the years, been reflected in the services concerned’ (Expenditure Committee 1977). It argued that the statistics of activity provided by the DHSS as a matter of routine in the Expenditure White Paper were inadequate: ‘The number of patients treated, or of prescriptions issued, tell us little about the adequacy or otherwise of the service provided in terms either of the availability of facilities for treatment or of standards of care’. It therefore recommended that:

The Department should make a start now on developing, for the longer term, indicators of performance. In particular, the Committee recommend that the DHSS should give priority to developing two kinds of measures. First, measures of access are required to show to what extent people in different parts of the country have the same chance of obtaining treatment or care for particular conditions and needs, and whether access is improving over time. Second, measures of quality of provision are needed to show improvements (or deteriorations) in the physical environment, amenities and patient satisfaction.

The Committee’s recommendations reflected its dissatisfaction with the ambiguity of much of the information provided by the DHSS in its evidence. This was most conspicuous in the case of one of the performance indicators used by the Department: unit costs for patients in different hospitals. It turned out that rising unit costs in the services for the deprived groups were seen by the Department as a sign of improving standards, while falling unit costs in the acute services were seen as a sign of increasing efficiency. But, as the Committee pointed out, increasing unit costs ‘can reflect either higher standards or falling efficiency’. Conversely, of course, falling unit costs can reflect either higher efficiency or lower standards.

The Committee was to make the same point four years later in its new incarnation (Social Services Committee 1981). In evidence, the DHSS presented the Social Services Committee with its programme budget. This showed that policy appeared to be succeeding in achieving its objectives; that priority, in the allocation of resources, was being given to the deprived client groups and that costs per case in the acute sector of the NHS were falling throughout the second half of the seventies, while costs per bed for the mentally ill and handicapped were rising. But the Committee remained worried about the ‘basic ambiguity’ of the evidence:

In the case of the acute services, the presumption is that a fall in costs is a sign of improved efficiency. In the case of the services for the chronically ill, the presumption is that a rise in costs is a sign of improved quality. There is no necessary contradiction here. But the contrast does indicate a need for caution. One way of increasing efficiency, in terms of shortening lengths of stay and cutting costs per case, might be to reduce quality; another way might be to transfer some of the costs to other services, such as the domiciliary services of the Personal Social Services.

In short, the available instrument of performance evaluation — the programme budget — appeared to be better at measuring the performance of the organization (in terms of being able to achieve its immediate objectives) than at assessing the performance of the services (in terms of their delivery of care to the population).
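The ambiguity the Committee identified can be illustrated with a trivial arithmetic sketch (all figures invented): the hospital’s unit costs fall, but partly because costs have been transferred to other services.

```python
# Invented figures illustrating why falling unit costs are ambiguous:
# cost per case on the hospital's own books falls, but part of the
# apparent saving is a transfer of costs to other services (here,
# hypothetically, domiciliary care).
def unit_cost(total_cost, cases):
    return total_cost / cases

# Year 1: 1,000 in-patient cases at a total cost of 2,000,000
year1 = unit_cost(2_000_000, 1_000)                   # 2,000 per case

# Year 2: shorter stays allow 1,250 cases for 2,125,000 ...
year2_hospital = unit_cost(2_125_000, 1_250)          # 1,700 per case

# ... but 300,000 of care costs now fall on domiciliary services
year2_system = unit_cost(2_125_000 + 300_000, 1_250)  # 1,940 per case
```

On the hospital’s own books unit costs have fallen by 15 per cent; counted across the system as a whole the fall is only 3 per cent, and nothing in the figures themselves distinguishes greater efficiency from lower standards or cost-shifting.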

The Social Services Committee was also much preoccupied, over the years, with the more general problem of measuring the performance of the NHS in terms of ‘efficiency and economy’, to go back to the words of the 1972 ‘Grey Book’ quoted at the beginning of this section. It was a concern which it shared with the Public Accounts Committee, and which was reinforced by the expenditure cutting strategy of the Conservative Government. Although the Conservative administration did not cut the budget of the NHS, it reduced its rate of growth. Moreover, the budgetary calculations included a provision for ‘efficiency savings’. The planned-for growth in the budget of the NHS was to be contingent on the achievement of savings (Chancellor of the Exchequer 1982). It was these savings which, in part at least, were to finance the growth. But how would Parliament know that the savings had actually been achieved through greater efficiency, as distinct from cutting services? The Social Services Committee asked the question in successive years, but found it difficult to get a satisfactory answer from the DHSS. In its latest report it concluded that: ‘We fear that there is a danger that health authorities will achieve the savings simply by cutting back on maintenance programmes and deferring well-planned developments for which a firm need has been shown to exist’ (Social Services Committee 1982). In other words, efficiency is not to be confused with economy in the sense of cutting back on spending.

Starting from a somewhat different perspective, with the emphasis chiefly on value for money, the Public Accounts Committee came to equally critical conclusions about the DHSS’s capacity to evaluate performance in terms of efficiency. Its 1981 enquiry into the NHS, which concentrated largely but not exclusively on manpower (accounting for 70% of the service’s total cost), is particularly revealing for its illumination of DHSS attitudes towards its own role in performance evaluation (Committee of Public Accounts 1981). Asked by a Committee member to give some specific examples of how the degree of efficiency of a particular health authority could be measured Sir Patrick Nairne, the then Permanent Secretary, replied:

I think that you can only do it by identifying a range of what the jargon would describe as performance indicators. You can look at the throughput in the hospitals, to see how the general throughput of patients in an acute hospital in the district compares with the general average existing in the region as a whole, and that can be compared with the national average. One can look at the costs in greater detail…. one can look at the catering costs. You can see what the degree of waste is in the hospital. You can look at the degree of mark-up in staff catering. You can, I think, look at the way in which the ancillary staffs are employed.

But when asked whether it was the responsibility of the DHSS itself to carry out such performance reviews, Sir Patrick was quite clear that this was not its role:

I do not see it as the Department’s job to be looking at district health authority by district health authority, all 190 of them, wherever they are, and looking at the sort of performance indicators that I have been describing. That I would see as a job that belongs to the regions, and in particular, of course, those performance indicators which apply to a hospital at unit level, I see those very much as a task that belongs to the district health authority itself.

There are a number of significant aspects to this exchange which require noting. First, there is the diffusion of responsibility for performance monitoring, which is essentially seen as consisting of bringing local perspectives to bear on statistical information. The assumption here is that, given the ambiguity of the available data, it is only local knowledge which can give meaning to the figures. Interpretation has to be left to the lowest possible administrative tier since it is at this level that the required knowledge is concentrated. Second, there is the emphasis, echoing the Guillebaud Committee more than 20 years earlier, on seeing performance in comparative terms. Lacking absolute standards — adequate ways of measuring efficiency in the full sense of looking at the relationship between inputs and outputs — the fall-back position is relative performance.
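The comparative approach Sir Patrick describes amounts, in practice, to little more than ratio comparisons against regional and national averages. A minimal sketch, with all district figures and averages invented:

```python
# Hypothetical sketch of comparative performance review: district
# throughput (in-patient cases per bed per year) set against regional
# and national averages, with no absolute standard involved.
districts = {
    "District A": {"cases": 9_500, "beds": 250},
    "District B": {"cases": 7_200, "beds": 240},
}
REGIONAL_AVG = 34.0  # cases per bed per year (invented)
NATIONAL_AVG = 36.5  # invented

for name, d in districts.items():
    throughput = d["cases"] / d["beds"]
    print(f"{name}: {throughput:.1f} cases per bed "
          f"({throughput / REGIONAL_AVG:.0%} of regional average, "
          f"{throughput / NATIONAL_AVG:.0%} of national average)")
```

Such figures can prompt a question about a below-average district; they cannot, by themselves, say whether its lower throughput reflects inefficiency, a different case mix or higher standards of care, which is precisely the ambiguity discussed above.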

The Public Accounts Committee concluded that this was an unsatisfactory state of affairs. It stressed the need for an effective information system which would permit the DHSS ‘to monitor key indicators of performance by the regions’. The requirements of parliamentary accountability could only be satisfied, the Committee argued, if there was a ‘flow of information about the activities of the districts which will enable the regions, and in turn DHSS, to monitor performance effectively, and to take necessary action to remedy any serious deficiencies, or inefficiencies, which may develop’.

The Committee returned to the charge the following year (Committee of Public Accounts 1982). This time the DHSS submitted a pre-emptive memorandum stressing that it was strengthening its machinery for assessing the performance of the NHS. This stated that: ‘the Department has been studying the feasibility and value of introducing a standard set of indicators which would be used in the process of reviewing performances’. But in giving evidence Sir Kenneth Stowe, the new Permanent Secretary, sounded a note of caution:

We are addressing a service, the end product of which is patients better or cured and that is the supreme performance indicator. The very real difficulty — it is both conceptually and technically difficult — is to bring into a direct relationship the outputs of the Health Service in that sense and the inputs in terms of money and manpower. What we are addressing are performance indicators which will be of a very broad nature and will in the early stages enable us to ask questions.

The reply brings the argument back to the central point made above: if the only available performance indicators are of ‘a very broad nature’, and depend on local information to give them interpretative significance, then the role of the central department is inevitably constrained. Broad indicators can start a process of argument; they cannot clinch it. The point was clearly made by Sir Kenneth Stowe when, in reply to a question in a somewhat different context, he said:

Our position is that first of all we would not want to embark upon a process of detailed comparisons between regions because we would not necessarily know what the relevant variations were. We have not hitherto been in a condition to compile at the centre the data and the yardsticks which enable us to make those comparisons…. If, however, we are able to bring to a successful conclusion — and I think we shall — the work now being done on performance indicators and manpower standards we shall be in a position to challenge the regional health authorities in a way that we have not done hitherto, to explain why there are differences in performance standards as between relevant districts in their region and as between regions.

In short, performance evaluation is seen as dialogue; indicators are a tool for generating questions. But, it would seem to follow, the less ambiguous the indicators are — in the sense of having an agreed meaning — the more the burden of proof shifts onto the peripheral health authorities to justify themselves. And the more background information the central department has, the more it can narrow the area of dispute about the significance of the statistics. This would suggest that the locus of information in the organizational hierarchy, as well as the degree of its ambiguity (i.e., the extent to which its interpretation is consensual as distinct from contested) is of critical importance when it comes to performance evaluation.

The performance indicators, referred to by Sir Kenneth Stowe, are also of interest in their own right:

  • average total cost per in-patient case (data standardized to take account of specialty mix);
  • average cost of direct treatment services and supplies per in-patient case;
  • average cost of medical and paramedical support services per in-patient case;
  • average cost of general (non-clinical) services per in-patient day;
  • proportion of all admissions classified as immediate admissions;
  • proportion of all admissions classified as urgent involving a delay of more than one month before admission;
  • proportion of all admissions classified as non-urgent involving a delay of more than one year before admission;
  • average length of stay for hospital in-patients (data standardized to take account of specialty mix);
  • average in-patient cases per bed over the year (data standardized to take account of specialty mix);
  • proportion of all in-patients and day patients treated as day cases (data standardized to take account of specialty mix);
  • average number of out-patients seen in each clinic session (data standardized to take account of specialty mix);
  • ratio between new and returning out-patients (data standardized to take account of specialty mix);
  • number of health visitors and district nurses per head of population (using population figures weighted to take account of factors affecting morbidity, such as the age mix of the population);
  • number of NHS administrative and clerical staff per head of population (using population figures weighted to take account of factors affecting morbidity, such as the age mix of the population).

(Social Services Committee 1982).

They suggest that the DHSS is trying to apply a variety of different criteria to the evaluation of performance. One predictable concern is with costs, another is with the intensity of use. Again, other indicators are designed to assess the extent to which health authorities are using least-cost forms of treatment, e.g. the proportion of all patients treated as day cases. Yet another set of indicators represents an attempt to measure the adequacy of the services being offered to patients in terms of the waiting times for treatment. Finally, the input indicators are derived from current policy concerns: the decision to concentrate on indicators for the numbers of health visitors and district nurses, on the one hand, and on those of administrative and clerical staff, on the other, reflects the policy emphasis on improving community care while cutting management costs.

The list of indicators is significant for a number of reasons. It is significant for what it leaves out. There are no attempts to translate the figures of NHS activities into services provided for any given population, although it would be technically feasible to produce figures of the quantity and kind of treatment (e.g., number of specific types of operations) delivered. The indicators are thus essentially service rather than population orientated; inward rather than outward looking. The list is also remarkable for the fact that most, if not all, of the indicators have been available for the last decade at least. The only innovative aspect of the whole exercise is the decision to use information that has been available in the NHS for years as indicators. It is not the data that has changed, but the DHSS’s attitude towards its use.

This is part of a wider puzzle. The 1982 reorganization of the NHS, as already pointed out, was intended to diminish the degree of central intervention. The main change was to abolish the area authority tier and to emphasize the devolution of decision making to the new district authorities. Yet this was accompanied by increasing emphasis on the role of the DHSS in monitoring performance. The introduction of performance indicators was only one aspect of this new strategy. At the same time, the Secretary of State introduced a new system of annual reviews: ‘each year Ministers will lead a Department review of the long-term plans, objectives and effectiveness of each Region with the Chairmen of the Regional Authorities and Chief Regional Officers’ while, in turn, ‘the Regional Health Authorities will hold their constituent District Health Authorities to account’ (DHSS 1982). The intention was to use the performance indicators as part of this review. The DHSS launched a series of experiments in regional performance evaluation, with a view to establishing whether a national management advisory service should be established to promote efficiency, quality and effectiveness in the NHS (Social Services Committee 1982).

The paradox of decentralization being accompanied by a greater emphasis on central scrutiny over performance is more apparent than real. For, as the Social Services Committee argued, the easier it is to compare the overall performance of individual health authorities, the less need is there to scrutinize their decisions in detail (Social Services Committee 1981). Moreover, the interest in performance indicators also reflects a loss of faith in the traditional input indicators for measuring progress towards the achievement of the DHSS’s policies and priorities. Thus the Department’s most recent priorities document (DHSS 1981b), in contrast to its predecessors in the seventies, no longer expressed its policy objectives in terms of resource norms — the desirable level of service provision. In this it reflects the economic climate. Using progress towards the achievement of resource norms as a measure of performance assumes the availability of extra funds. If budgetary growth can no longer be taken for granted, then the use of norms is likely to lead to disillusion and to demonstrate failure (Klein 1981). So the search for alternative methods of assessing performance is not surprising; again a reminder that definitions of performance are not set in concrete for all time.

The enthusiasm for devising new tools of performance evaluation in the eighties also indicates the importance of another factor: the role of political markets. The supply of performance indicators has followed the demand for them. The DHSS’s development of new indicators and new machinery can be seen, in part at least, as a response to the pressure of parliamentary committees. It would thus seem to support the argument of this paper that performance evaluation should not be analysed exclusively as a spontaneous organizational development — the production of information being seen as a kind of statistical virgin birth — but has to be seen as a response to external pressures.


So far this discussion has concentrated on the evaluation of performance as a routinized and institutionalized administrative activity within the bureaucratic hierarchy of the NHS. This ignores, however, a major dimension of performance evaluation within the NHS: the amount of spontaneous, fragmented and, to a large extent, professionalized self-assessment that is characteristic of the organization.

The extent to which individual health authorities formally evaluate their own activities is not known. We can note specific examples such as the experiment in performance evaluation carried out by the Wessex Regional Health Authority in the late seventies (Wessex RHA 1977). This set out to examine performance ‘in the light of agreed standards’ in a variety of service areas e.g., catering. But we can make no quantitative judgment, in the absence of a national survey, of how many such attempts at self-evaluation have been carried out, or what their outcome has been.

What does seem reasonably clear (and here the author has to draw on his own experience as a member of a health authority) is that self-assessment in the NHS is both a diffuse and fragmented activity. It is fragmented in two different senses. First, it is local in character. Second, it addresses the problem of complexity by unpackaging the notion of ‘performance’ into its component parts and examining the way in which particular parts of the service operate. Moreover, it appears to be carried out not so much as a standard organizational practice but as a response to perceived problems and local political markets. Performance appraisal, in other words, is seen not as a desirable activity to be pursued routinely in its own right, but as a tool for coping with specific problems (e.g., over-spending under a particular budget head) or as a way of dealing with political pressures (e.g., protests about the length of waiting lists). Again, we come back to viewing performance evaluation as being dependent on the existence of incentives. Its organizational costs are obvious, and there have to be perceived benefits before there is willingness to embark on the enterprise. Indeed one of the assumptions implicit in the 1982 reorganization was that, by delegating more responsibility to the district health authorities, it would also strengthen the local visibility of the NHS and thus strengthen the local political market for performance indicators.

The other area of self-assessment, so far neglected, is that of professional self-evaluation. The performance indicators discussed in the previous sections ignore one crucial aspect of the NHS’s activities: the quality of the clinical care being offered to patients. To return to the argument of the introduction, this partly reflects the conceptual problems of assessing quality in health care caused by the difficulties of measuring outcomes. But it also reflects the medical profession’s insistence on upholding the doctrine of clinical autonomy i.e., that only professional experts can judge the work of other professional experts. The two points are, of course, linked. For it is the conceptual perplexities — the problem of actually measuring the quality of care — which legitimate the doctrine of clinical autonomy. If the relationship between inputs and outcome is uncertain, if any information is therefore likely to be ambiguous, then professional judgments — embodying experiential knowledge — are likely to prevail over other criteria.


It is not surprising therefore that much of the self-assessment takes the form of doctors examining their own performance in the light of standards which reflect a professional consensus. Given that there is no equivalent to the HAS in the acute services of the NHS, as already noted, assessment takes the form of self-audit by doctors: clinical teams reviewing their own performance. Again, there is no evidence as to the extent to which this takes place. It may well be, for example, that the effect of medical audit is to widen differences in standards of performance in so far as consultants in prestigious institutions have fewer inhibitions about examining their own performance than those in other hospitals (where standards may be lower and where, therefore, there may be a positive disincentive to engage in an exercise likely to reveal shortcomings). But, most important, it is clear that this kind of activity — whatever its scale — provides no kind of systematic performance evaluation, since it is dependent on the initiative of the doctors concerned. Nor does it feed into the mainstream of bureaucratic evaluation, since any information generated about standards of performance is restricted to the medical domain.

To stress the role of the medical profession in restricting the scope of performance assessment is to risk inviting the conclusion that the problems of developing a systematic and comprehensive machinery of evaluation highlight the power of the doctors: that this is yet a further illustration of medical domination (Freidson 1971). But the evidence reviewed here can yield a rather different conclusion. It may be precisely because the health care policy arena is characterized by complexity, uncertainty and ambiguity that it is difficult to devise the bureaucratic rules, standards and norms which would permit the activities of the medical profession (and other health service providers) to be evaluated and therefore controlled. If that hypothesis is correct then, indeed, we would expect the problems of performance evaluation — and the ability of service providers to claim immunity for their own activities — to be similar in those policy areas which share at least some of the characteristics of health care, such as the police and education, even though they are not characterized by the domination of powerful professional groups. From this perspective, it is conceptual perplexities which help to explain the power not only of specific professions but of other provider groups.

The evidence also suggests a further conclusion, specific to the NHS. This is that the halting progress towards developing performance evaluation may reflect the success of the NHS. Whatever its shortcomings may be, the NHS, regarded internationally, is an outstanding success in limiting the claims of health care on national resources (Maxwell 1981). Compared to the United States and other countries, where the proportion of the national income devoted to health care is 50 per cent or more higher than in Britain, there have consequently been far weaker incentives to develop performance evaluation as an instrument of cost control. But financial stringency, the experience of the eighties so far suggests, provides such incentives. And if that is indeed so, it would seem safe to predict a continuing growth of interest in developing the machinery of performance evaluation in the NHS.


Abel-Smith, Brian. 1976. Value for money in health services. London: Heinemann.

Anderson, Charles W. 1979. The place of principles in policy analysis. American Political Science Review 73, 711-724.

Banks, G.T. 1979. Programme budgeting in the DHSS, in Timothy A. Booth (ed.), Planning for welfare. Oxford: Basil Blackwell.

Black, Douglas (Chairman). 1980. Inequalities in health: report of a research working group. London: DHSS.

Butts, Michael, Doreen Irving and Christopher Whitt. 1981. From principles to practice. London: Nuffield Provincial Hospitals Trust.

Buxton, M.J. and Rudolf Klein. 1978. Allocating health resources. Royal Commission on the NHS, research paper no. 3. London: HMSO.

Chancellor of the Exchequer. 1982. The government’s expenditure plans, vol. II. Cmnd. 8494-II. London: HMSO.

Cochrane, A.L. 1972. Effectiveness and efficiency. London: Nuffield Provincial Hospitals Trust.

Committee of Public Accounts. 1981. Financial control and accountability in the national health service. Session 1980-81, seventeenth report, H.C. 255. London: HMSO.

_ 1982. Financial control and accountability in the national health service. Session 1981-82, seventeenth report, H.C. 375. London: HMSO.

Confederation of British Industry. 1981. Report of the CBI working party on government expenditure. London: CBI.

Department of Health and Social Security. 1971. National health service hospital advisory service: annual report for 1969-70. London: HMSO.

_ 1972. Management arrangements for the reorganized national health service. London: HMSO.

_ 1976. Priorities for health and personal social services in England. London: HMSO.

_ 1977. Annual report of the health advisory service for the year 1976. London: HMSO.

_ 1981a. Report of a study of the acute hospital sector. London: DHSS.

_ 1981b. Care in action. London: HMSO.

_ 1982. NHS to be asked to improve accountability. Press release. London: DHSS.

Doll, Richard. 1974. Surveillance and monitoring. International Journal of Epidemiology 3, 305-314.

Expenditure Committee. 1978. Selected public expenditure programmes: chapter V. Session 1977-78, eighth report, H.C. 600. London: HMSO.

Fishkin, James S. 1978. Tyranny and legitimacy. Baltimore: Johns Hopkins University Press.

Freidson, Eliot. 1971. Profession of medicine. New York: Dodd, Mead & Co.

Guillebaud, C.W. (Chairman). 1956. Report of the committee of enquiry into the cost of the national health service. Cmnd. 9663. London: HMSO.

Gregory, Janet. 1978. Patients’ attitudes to the hospital service. Royal Commission on the National Health Service, No. 5. London: HMSO.

Harris, R. and Arthur Seldon. 1979. Over-ruled on welfare. London: Institute of Economic Affairs.

Haywood, S. and Andy Alaszewski. 1980. Crisis in the health service. London: Croom Helm.

Hood, Christopher and Andrew Dunsire. 1981. Bureaumetrics. Farnborough: Gower.

Hunter, David. 1980. Coping with uncertainty. Chichester: Research Studies Press.

Klein, Rudolf. 1980. Costs and benefits of complexity, in Richard Rose (ed.), Challenge to governance. London: Sage.

_ 1981. The strategy behind the Jenkin non-strategy. British Medical Journal 282, 1089-1091.

_ 1983. The politics of the national health service. London: Longman.

Maxwell, Robert J. 1981. Health and wealth. Lexington Mass: Lexington Books.

Merrison, Alec (Chairman). 1979. Report of the royal commission on the national health service. Cmnd. 7615. London: HMSO.

Office of Population Censuses and Surveys. 1979. General household survey, 1977. London: HMSO.

Okun, Arthur M. 1975. Equality and efficiency. Washington: The Brookings Institution.

Rosser, R.M. and V.C. Watts. 1972. The measurement of hospital output. International Journal of Epidemiology 1, 361-367.

Rosser,  R.M.  Recent studies using a global approach to measuring illness.  Medical Care Supplement 14, 138-147.

Social Services Committee. 1981. Public expenditure on the social services. Session 1980-81, third report, H.C. 324. London: HMSO.

_ 1982. Public expenditure on the social services. Session 1981-82, second report, H.C. 306. London: HMSO.

Wessex Regional Health Authority. 1977. Monitoring in Wessex. Mimeo. Winchester: Wessex RHA.

Wildavsky, Aaron. 1980. The art and craft of policy analysis. London: Macmillan.

Wildavsky, Aaron and Ellen Tenenbaum. 1981. The politics of mistrust. London: Sage.

Williams, Alan. 1974. Measuring the effectiveness of health care systems. British Journal of Preventive and Social Medicine 28, 196-202.