Benchmarking with SciVal in Scholarly Communication and Research Services

By Rebecca Reznik-Zellen, University of Massachusetts Medical School | Apr 25, 2016


Benchmarking is the process of evaluating the performance of one entity in relation to other similar entities using standard measures.1 In the academic sphere, when we benchmark, we are evaluating an individual researcher’s or a group of researchers’ scholarly performance using bibliometric measures. 

 

Bibliometric measures are traditionally citation-based, measuring the circulation of an idea through formal communication outlets — journals — by tracking how often, where, and by whom a work is referenced. Though anchored in print, citation-based metrics are an established and familiar way to determine how well a work is received within its discipline. These metrics can be applied at varying levels of granularity (article, author or publication venue2) and measure different dimensions of scholarly performance (productivity, impact and collaboration3). 

 

Large indexing databases do the work of collecting bibliographic metadata and make it easy to do basic reporting, such as identifying total citation counts for an article or the h-index4 for authors. But benchmarking can be more complex. For example, librarians might hear questions like these: 

 

  • Is there a way I can evaluate my department’s research performance against departments at other institutions?
  • What is the best way to quantify the impact of a department or division, in terms of its collective publication record?
  • How many articles did we publish in this particular journal last year compared with our competitor institutions? 
  • How collaborative is my research group?
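
Before turning to dedicated tools, it helps to make the simplest of the basic measures concrete. The h-index mentioned above reduces to a few lines of code; the following is a minimal sketch in Python, assuming nothing more than a plain list of per-paper citation counts for one author (the example data are illustrative):

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts.

    An author has index h if h of their papers have been cited
    at least h times each.
    """
    # Sort citation counts from highest to lowest.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # remaining papers are cited fewer times than their rank
    return h


# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Benchmarking questions like the ones above, however, require the same kind of calculation to be repeated across many authors, departments, and institutions, which is where dedicated tools come in.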

 

Research intelligence tools like InCites by Thomson Reuters and SciVal by Elsevier greatly facilitate the evaluation of research performance using citation-based measures. SciVal, for example, is built on the Scopus database and calculates up to 31 metrics that can be run independently or in groups for a given individual or group of individuals. Several of these metrics are Snowball Metrics, vetted global standards for institutional benchmarking.5 The power of SciVal comes from its ability to evaluate multiple entities (individuals, groups, or research areas) at once. Such tools enable libraries to provide more powerful, citation-based metrics and to tell better stories about research performance to faculty and administrators. 

 

For example, in the first scenario listed above, University of Massachusetts Medical School (UMMS) librarians used SciVal to assess the research performance of an academic department that wanted to compare itself with similar departments at other institutions. The results would enable the department to, among other things, rank itself and its peers by the strength of their scholarly output. 

 

 

Selecting metrics and groups to compare

 

Given the range of metrics that SciVal provides, the first task in undertaking this project was to select appropriate metrics for the comparison. After reviewing the merits and uses of all of the metrics SciVal offers, the project team settled on simple but powerful measures (a brief sketch after the list shows the arithmetic behind them): 

 

  • Productivity as measured by scholarly output over time
  • Impact as measured by citation counts, cited publications and citations per publication
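
As a rough illustration of what these measures reduce to, the sketch below computes them for a single group from a hypothetical list of publication records. The field names and data are illustrative only; SciVal calculates these metrics from Scopus data, and this is not its output format:

```python
from collections import Counter

# Hypothetical publication records for one departmental group;
# real data would come from Scopus/SciVal.
publications = [
    {"year": 2011, "citations": 12},
    {"year": 2011, "citations": 0},
    {"year": 2012, "citations": 7},
    {"year": 2013, "citations": 3},
]

# Productivity: scholarly output (number of articles) per year.
output_per_year = Counter(pub["year"] for pub in publications)

# Impact: citation counts, cited publications, citations per publication.
total_citations = sum(pub["citations"] for pub in publications)
cited_publications = sum(1 for pub in publications if pub["citations"] > 0)
citations_per_publication = total_citations / len(publications)

print(dict(output_per_year))                 # {2011: 2, 2012: 1, 2013: 1}
print(total_citations)                       # 22
print(cited_publications)                    # 3
print(round(citations_per_publication, 2))   # 5.5
```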

 

The next step was to identify the groups being compared. To do this, the project team randomly selected 10 institutions nationally and, from each, 10 departmental faculty. Finally, an entity was created in SciVal for each group of 10 individuals and used to calculate the benchmarks.  
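
The selection step itself is straightforward. A minimal sketch, assuming hypothetical lists of candidate institutions and their departmental faculty rosters (all names here are placeholders, not the institutions actually studied), might look like this:

```python
import random

random.seed(42)  # make the selection reproducible

# Hypothetical inputs: candidate institutions and, for each,
# its departmental faculty roster.
candidate_institutions = [f"Institution {i}" for i in range(1, 41)]
faculty_rosters = {
    inst: [f"{inst} faculty member {j}" for j in range(1, 31)]
    for inst in candidate_institutions
}

# Randomly select 10 institutions, then 10 departmental faculty from each.
selected_institutions = random.sample(candidate_institutions, 10)
comparison_groups = {
    inst: random.sample(faculty_rosters[inst], 10)
    for inst in selected_institutions
}

# Each group of 10 names would then be defined as an entity in SciVal
# and used to calculate the benchmark metrics.
for inst, group in comparison_groups.items():
    print(inst, len(group))
```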

 

 

Results indicate leadership and areas for improvement

 

The results were immediately revealing. The following figures show how each institutional department performed in terms of productivity over time (Figure 1) and impact (Figure 2) for articles published between 2003 and 2013, with no subject filters applied. Although our institutional department (in brown) leads its peer group in productivity for the period, a peer department (in coral) is more impactful as measured by total citations, cited publications, and citations per publication. Together, these data paint a much broader picture than productivity metrics alone would and point to areas for improvement. 

 

 

Figure 1: Scholarly output per year for academic department, 2003-2013, articles only, no filters (Source: SciVal, Scopus data)

 

Figure 2: Total citations, cited publications, citations per publication for academic department, 2003-2013, articles only, no filters, including self-citations (Source: SciVal, Scopus data)
 

 

This is just one example of using research intelligence tools within our institutions for large-scale benchmarking projects. At UMMS, the results of these projects have been positive: faculty have been very pleased with the results and have integrated this information into decision-making processes.

 

As librarians, we provide a service to our institutions by introducing and applying research intelligence tools. We understand the features and uses of these tools and can match them to the evaluation needs of our communities. We can explain and help select meaningful metrics; we can develop methods and train staff, especially for involved collaborative projects like the one described above; and we can generate reports and use the data gathered with these tools in other programs to meet the reporting and evaluation needs of our institutions.

 

At UMMS, we have now had SciVal for over a year, and we have been able to successfully engage academic departments, the Office of Research, and the Development Office on the topic of research impact and benchmarking.

 

 

A broader basket of metrics

 

Of course, citation-based metrics are not the only metrics that can be used to measure scholarly performance. Where citations measure the circulation of an idea through formal publishing venues, web-based, article-level metrics such as download counts, page views, social media mentions, and media coverage (“altmetrics”6) measure the penetration of an idea outside of formal channels and into the public sphere. (SciVal’s roadmap shows the addition of alternative metrics to the product throughout 2016 to complement the traditional citation indicators.) Taken together, citation-based and alternative metrics create a broad view of scholarly performance and present a comprehensive set of measures to use when benchmarking the scholarly performance of researchers. 

 


  1. https://en.wiktionary.org/wiki/benchmark
  2. Roemer, R. C., & Borchardt, R. (2015). Meaningful Metrics: A 21st Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact. Chicago, Illinois: Association of College and Research Libraries. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/booksanddigitalresources/digital/9780838987568_metrics_OA.pdf
  3. Colledge, L., & Verlinde, R. (2014). SciVal Metrics Guidebook.
  4. https://en.wikipedia.org/wiki/h-index
  5. Snowball Metrics website
  6. http://altmetrics.org/manifesto/
