The ‘core’ or ‘institutional’ funding universities receive to do research of their own choosing is being subjected to quality control and competition in a growing number of countries, apparently with significant impact on their behaviour and productivity – but also with unanticipated negative effects.

During the post-war period, state universities were generally financed through a single block grant. Over time, however, they have gained a growing share of external income from research councils, other government agencies and industry, so the block grant is a declining proportion of their income. Curiously, despite wide variations in the share of total income provided by the research component of the block grant (for example, in 2009 it was 72% for Danish universities, 48% in the UK and 31% in Ireland, according to Eurostat), there is no clear relationship between the share of institutional funding and research performance.

Since 1986, education ministries have begun to use performance-based research funding systems (PRFS) in awarding institutional funding. In general, these aim to stimulate performance by reallocating resources to the strongest universities and research groups. The UK Research Assessment Exercise (RAE – recently renamed the REF) was the first of these, and like other first-generation PRFS it relies heavily on peer review of submissions from the universities. From roughly 2000, more countries adopted a second generation of PRFS, mostly using bibliometric indicators rather than peer review. By that time the cost and difficulty of bibliometric analysis had fallen enough to make it affordable – and, indeed, much cheaper than peer review. Up to this point, PRFS focused on scientific quality, typically viewed through the lens of scientific publication. A third generation of PRFS now aims to incorporate the influence of research on innovation and society more widely. The evidence used ranges from patents, through counts of innovation outputs such as prototypes, to textual descriptions of research impacts in the REF.


Despite their widespread adoption, there is little evidence about whether and how PRFS work. What we know is mostly based on evidence from the UK, Norway and the Czech Republic, which suggests that effects depend both on policy purposes and on the effectiveness of implementation.

Norway introduced a PRFS in 2004 as part of the university ‘Quality Reform’ and subsequently set up a similar but separate system for the university hospitals and research institutes. Both act on a very small fraction of total institutional funding. The PRFS distributes money in a linear way and in practice has rewarded the newer and weaker universities for their increasing research efforts, building national capacity rather than reallocating resources to the existing winners. It drove up the number of publications but not their average quality. (An early Australian PRFS had done the same a few years before.) The PRFS for the institutes had similar effects but failed to increase either institute-university collaboration or international income – perhaps because both were already high.
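To make ‘linear’ concrete: each publication point earns the same share of the pot whoever produces it, so a newer university that raises its output gains funding at the same marginal rate as an established one. The Python sketch below illustrates this under simplified assumptions; the university names, point totals and pot size are hypothetical, and the real Norwegian model weights different output types.

```python
# A minimal sketch of a "linear" PRFS allocation: a fixed pot divided
# pro rata by publication points. All names and figures are hypothetical.

def allocate_linear(points, pot):
    """Give each university the share of the pot equal to its share of points."""
    total = sum(points.values())
    return {uni: pot * p / total for uni, p in points.items()}

pot = 1_000_000  # fixed pool of performance-based institutional funding

# Year 1: hypothetical publication points.
year1 = {"Established U": 500, "Newer U": 100}
# Year 2: the newer university raises its output by half; the established one is flat.
year2 = {"Established U": 500, "Newer U": 150}

print(allocate_linear(year1, pot))  # Newer U receives ~16.7% of the pot
print(allocate_linear(year2, pot))  # Newer U receives ~23.1% of the pot
```

Because the formula has no threshold or quality gate, extra volume is always rewarded at the margin – consistent with the observed growth in publication numbers without a rise in average quality.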

In response to dissatisfaction with peer-review-based approaches, the Czech Republic introduced a metrics-based PRFS in 2008, which ran annually and was intended, over a short period, to become the only mechanism for allocating institutional research funding. Universities doubled their academic paper production in three years, and the production of innovation-related outputs grew even faster. Allocations to individual organisations and fields became very unstable. Gaming was widespread, and despite repeated attempts to refine the formulae used, the system was abandoned in 2012 and is currently undergoing radical redesign.
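The mechanics behind such instability and gaming are easy to see in a sketch. Assume, purely for illustration, a tariff that assigns points per counted output and pays each institution its share of total points; the categories and weights below are invented, not the actual Czech tariff.

```python
# Illustrative sketch of a points-per-output PRFS. The output categories and
# point weights are invented for illustration, not the actual Czech tariff.

POINT_WEIGHTS = {
    "journal_article": 10,
    "conference_paper": 4,
    "prototype": 40,  # innovation-related outputs also earned points
}

def score(outputs):
    """Total points for one institution's counted outputs."""
    return sum(POINT_WEIGHTS[kind] * n for kind, n in outputs.items())

def allocate(all_outputs, pot):
    """Money tracks each institution's share of total points, so funding
    follows raw weighted volume rather than assessed quality."""
    scores = {inst: score(o) for inst, o in all_outputs.items()}
    total = sum(scores.values())
    return {inst: pot * s / total for inst, s in scores.items()}

# Hypothetical counts: Institute B shifts effort into the highest-paying category.
outputs = {
    "Institute A": {"journal_article": 50, "conference_paper": 20, "prototype": 2},
    "Institute B": {"journal_article": 20, "conference_paper": 5, "prototype": 15},
}
print(allocate(outputs, 1_000_000))
```

Because each year's shares are recomputed against everyone else's counts, an institution's allocation can swing even when its own output is stable, and the dominant strategy is to maximise points per unit of effort in whichever category pays best – the pattern of instability and gaming reported above.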
