During the post-war period, state universities were generally financed through a single block grant. Over time, however, they have gained additional external income from research councils, other government agencies and industry, so the block grant is a declining proportion of their income. Curiously, despite wide variations in the share of total income provided by the part of the block grant dedicated to research (for example, Danish universities received 72% of their funding as block grant in 2009, the UK 48% and Ireland 31%, according to Eurostat), there is no clear relationship between the share of institutional funding and research performance.
Since 1986, education ministries have increasingly used performance-based research funding systems (PRFS) to award institutional funding. In general, these aim to stimulate performance by reallocating resources to the strongest universities and research groups. The UK Research Assessment Exercise (RAE, recently renamed the REF) was the first of these, and like other first-generation PRFS it relies heavily on peer review of submissions from the universities. From roughly 2000, more countries adopted a second generation of PRFS, mostly using bibliometric indicators rather than peer review. By that time the cost and difficulty of bibliometric analysis had fallen enough to make it affordable – indeed, much cheaper than peer review. Up to this point, PRFS focused on scientific quality, typically viewed through the lens of scientific publication. A third generation of PRFS now also aims to capture the influence of research on innovation and society more widely. The evidence used ranges from patents, through counts of innovation outputs such as prototypes, to the text descriptions of research impacts submitted to the REF.
Key design decisions for PRFS include:
- Whether also to reward external project-based income (many PRFS do), in which case the system will tend to reinforce the themes and types of research that already attract external funding
- Whether to create a list of ‘approved’ national-language publications of quality, in addition to relying on the international journals indexed in the bibliometric databases
- Whether and how to address impact
- Whether to extend the PRFS beyond the universities to research institutes (only Norway and the Czech Republic do this)
- How much of the institutional funding to make contestable each time the PRFS is run. There is a need to balance stability against change. Many systems seem to effect a lot of change while moving only small amounts of money about – it looks as if esteem and career prospects matter more to individual researchers than how much money their institution gets
- Whether to put different research fields in competition with each other, or to make a policy decision about how much funding to give to each field and then arrange for competition within each field
Despite their widespread adoption, there is little evidence about whether and how PRFS work. What we know is mostly based on evidence from the UK, Norway and the Czech Republic, which suggests that effects depend both on policy purposes and on the effectiveness of implementation.
Norway introduced a PRFS in 2004 as part of the university ‘Quality Reform’ and subsequently set up a similar but separate system for the university hospitals and research institutes. Both act on only a very small fraction of total institutional funding. The PRFS distributes money in a linear way and in practice has rewarded the newer and weaker universities for their growing research efforts, building national capacity rather than reallocating resources to the existing winners. It drove up the number of publications but not their average quality. (An early Australian PRFS had done the same a few years before.) The PRFS for the institutes had similar effects but failed to increase either the amount of institute-university collaboration or international income – perhaps because both were already high.
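The text says only that the Norwegian PRFS distributes money "in a linear way"; as a purely illustrative sketch, linear allocation simply divides a fixed pool in proportion to each institution's indicator score (for instance publication points). The pool size, institution names and point totals below are invented for illustration, not actual Norwegian parameters.

```python
# Illustrative sketch of "linear" PRFS allocation: a fixed pool is shared
# in proportion to each institution's indicator score (e.g. publication
# points). All figures below are hypothetical, not real Norwegian data.

def allocate_linear(pool: float, points: dict[str, float]) -> dict[str, float]:
    """Return each institution's share: pool * (its points / total points)."""
    total = sum(points.values())
    return {name: pool * p / total for name, p in points.items()}

if __name__ == "__main__":
    # Hypothetical publication-point totals for three institutions
    points = {"University A": 1200.0, "University B": 800.0, "College C": 400.0}
    for name, share in allocate_linear(pool=10_000_000, points=points).items():
        print(f"{name}: {share:,.0f}")
```

Because the payout is strictly proportional, every additional publication point earns the same marginal amount regardless of who produces it, which is consistent with the observation that such a scheme rewards growing output at newer institutions rather than concentrating funds on established winners.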
In response to dissatisfaction with peer-review-based approaches, the Czech Republic introduced a metrics-based PRFS in 2008, which ran annually and was intended, over a short period, to become the sole mechanism for allocating institutional research funding. Universities doubled their academic paper production in three years, and the production of innovation-related outputs grew even faster. Allocations to individual organisations and fields became very unstable. Gaming was widespread and, despite repeated attempts to refine the formulae used, the system was abandoned in 2012 and is currently undergoing radical redesign.