The Norwegian Paradox

A Revealing Case Study About Research Policy and Numbers

Per Koch, Research Council of Norway | Jan 01, 2009

Held in Helsinki in November 2008, the 6th Annual Library Connect Nordic Librarian Forum covered topics including research assessment and policy. During that event, Per Koch delivered a presentation which he’s kindly agreed to reshape as this article for Library Connect.
 

How can research and innovation assessment be useful for librarians? Universities, colleges and research institutes are policy organizations. Consequently, university administrators, politicians, civil servants, industrialists and NGO staff ask for assessment information to evaluate the performance of their institutions, including their libraries. These data are also used to ascertain the institutions’ role in the national innovation system, i.e., to what extent they contribute to learning and innovation in society.


By learning how to assess and interpret data, librarians can better explain the development and performance of their libraries and their institutions to policy makers — including administrators at their own institutions. Also, academic librarians can more effectively participate in crafting funding proposals directed at university administrators, politicians, civil servants, industrialists and NGO staff.

Although policy makers are often aware of the complexity of social systems and the weaknesses of indicators, they operate in an environment where a great deal of information has to be digested in a short time. Moreover, they have to communicate complex problems to politicians with even less time at their disposal.

Therefore, it helps to supply policy makers with:

  1. A simple story a reasonably informed person can understand.
  2. Statistics! Policy makers are number fetishists.

Policy makers make decisions based on less-than-perfect information and need good reasons for their budget allocations. Quantitative data can be utilized as a persuasive device for these reasons:

  1. Numbers make things seem true: Since Newton, mathematics has been understood, rightly or wrongly, as the best tool for describing the world. Moreover, numbers confer legitimacy among economists, and economists have a lot of power in these systems.
  2. Cynics choose the numbers that prove their point and disregard the rest. You can even use a lack of numbers or hard indicators to undercut a competitor: “It has not been documented that this policy has an effect ...”

I would not recommend attempts at misleading policy makers, though. That is a seriously risky strategy. If I — as a policy maker — disagree with you, I may use your interpretation of the statistics against you. In other words: Do not mistake my longing for clarity for stupidity. What I am saying is that you should make sure that your document includes a good story that communicates well to others in addition to more detailed and complex analysis.

A case study: The Norwegian paradox

One particular example of the use of statistics in research and innovation assessment is especially instructive. You will often see policy makers benchmark the innovative capability of nations on the basis of a single indicator: research and development expenditure as a percentage of GDP.


The use of this indicator was originally based on the idea that innovation was born in research institutions and company labs and then transferred to the market by companies. The EU Commission's objective of increasing the average national research investment level to 3% of GDP by 2010 was also originally based on this kind of thinking. Note that this is a political objective. There is no scientific reason for saying that 3% is a better level than, let’s say, 2.8% or 3.5%.

In Norway we faced the interesting problem that the country produces extreme wealth despite seemingly low R&D investments. In 2005, R&D/GDP for Norway was 1.5%. However, GDP per capita stood at 172 (with the EU average indexed at 100), making Norway one of the richest countries in the world. The Swedes, on the other hand, have been struggling with a Swedish paradox: high R&D investments combined with lower wealth creation than Norway’s. Sweden’s R&D/GDP was 3.9%, with a GDP per capita index of 115. Hence it seemed there was no correlation between national R&D investments and economic growth.
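
To make the mismatch concrete, here is a minimal illustrative sketch in Python (not part of the original analysis). It simply juxtaposes the two indicators cited above for 2005, using the figures from the text, to show why ranking countries on the input indicator alone says little about wealth creation.

    # Illustrative sketch only: juxtapose the two 2005 indicators cited in the text.
    # The figures are taken from the article; this is a toy comparison, not an analysis.
    indicators = {
        # country: (R&D expenditure as % of GDP, GDP per capita index with EU average = 100)
        "Norway": (1.5, 172),
        "Sweden": (3.9, 115),
    }

    for country, (rd_share, gdp_index) in indicators.items():
        print(f"{country}: R&D/GDP = {rd_share}%, GDP per capita index = {gdp_index}")

    # Norway scores lowest on the input indicator yet highest on wealth creation,
    # which is why R&D/GDP alone cannot stand in for innovative capability.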

So, although the indicator says something about how much of a nation’s wealth is spent on R&D (and certain types of knowledge), it cannot really be used to measure a country’s innovative capabilities. It is one of many input indicators, and not a relevant output indicator.


The Research Council of Norway and Innovation Norway agreed to develop a new “story” that made more sense for a country like Norway. We took a critical look at the indicators used by the Organisation for Economic Co-operation and Development (OECD) and the European Innovation Scoreboard, and came to the conclusion that there is no “best practice” in this area. Innovation systems do — and should — differ. Moreover, there is no single indicator that captures the complexity of the system. You have to compare several of them in order to get a grasp of what is going on.

Norway is dominated by traditional resource-based industries, which are not research-intensive in the statistical sense of the word (i.e., R&D investments as a percentage of turnover). We do not have a huge research-intensive company like Nokia (which alone accounts for two-thirds of Finnish industrial R&D spending). Resource-based industries like aquaculture and petroleum do, however, make use of advanced technology and knowledge. Moreover, their tools and their competencies are often directly or indirectly based on research. Hence in our new “story,” we argue that not all companies have to invest large sums in R&D in order to thrive and innovate, but the system as a whole must secure a strong research-based knowledge base.

We are also convinced there are important social factors that are not captured by existing indicators. Such social factors include social equality and the autonomy of workers, risk-reducing social security and trustworthy legal systems, and good knowledge flows between companies and between the private and public sectors.

The OECD now agrees with us. Taking into consideration the heterogeneity of national innovation systems, the 2008 OECD report on Norwegian innovation says that there is no Norwegian paradox, and that Norwegian success can be explained by factors such as high levels of education and good learning capabilities in companies and public institutions.

Take-away for librarians

The Norwegian paradox case study provides a good example of the importance of thorough research and analysis when assessing performance and presenting your story or proposal to policy makers in terms they can understand.
