Eureka points on the Research Assessment Metrics Timeline

By Marianne Parkhill, Research Metrics Community & Adoption, Elsevier | June 01, 2017

Librarians play a critical role in helping faculty members and researchers understand the impact of their work. With the pace of change in technology, products and metrics, this job can feel overwhelming. The Research Assessment Metrics Timeline shows that the feeling is justified. But it also shows how we got to where we are, and that understanding can serve to “decomplicate” things and help you move forward.

Spanning seven decades, the Research Assessment Metrics Timeline looks at pivotal events, from Eugene Garfield’s introduction of citation indexing to the creation of 88+ million DOIs. Following are some key takeaways we had while working as a team to develop the timeline.1

Download the timeline.

Sharing research

At Plum Analytics,2 we had been working on item- or article-level metrics for five years. It became obvious that advances in technology were enabling the powerful work of gathering and analyzing metrics. That made us curious about how research metrics, analytics and products aligned with technology changes over time.

Mosaic (an early web browser) was introduced in 1993. From there, important changes started to happen at an accelerating rate, in particular new ways to share research. Once research is shared, you can start to capture and analyze metrics about how people are sharing and talking about it: whether the research was mentioned in a news report or blog post, for example, or whether it was downloaded or saved. If you understand and analyze article-level activity, you can start to tell interesting stories about the research.
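
As a toy illustration of what captured article-level activity might look like, here is a short Python sketch that tallies a handful of sharing events for a single article. The event names and counts are invented for illustration only, not drawn from any real metrics service.

    # Toy sketch: tally hypothetical article-level events for one article.
    # Event names and counts are invented for illustration only.
    from collections import Counter

    events = [
        "news_mention", "blog_post", "download", "download",
        "save", "download", "blog_post", "save",
    ]

    summary = Counter(events)
    for event, count in sorted(summary.items()):
        print(f"{event}: {count}")
    # Output: blog_post: 2, download: 3, news_mention: 1, save: 2

Even a simple tally like this begins to tell a story: this hypothetical article is being downloaded and saved, and it is being talked about outside the scholarly literature.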

Librarians have an opportunity to be on the front lines of helping faculty and researchers tell these stories in new ways. They are starting to use article-level metrics to help researchers, especially early-career researchers, with their promotion and tenure files, grant applications and researcher profiles.

In another sharing example, we were surprised to learn that preprint repositories such as arXiv (1991) and SSRN (1994) were established before the internet became ubiquitous, a sign that researchers have long wanted to share their findings. Today, dozens of such services are in production or planned.

Identifying research

Another insight came when we noticed that advances in measuring research went hand in hand with the ability to identify that research. That is why we included the introduction of identifiers such as PubMed IDs and DOIs. More recently, ORCID iDs have been championed through the library as a means of identifying researchers and connecting them to their research. (Learn more about introducing ORCID iDs and other research metrics topics at the June 8 Library Connect webinar, Researcher Profiles and Metrics That Matter.)
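
Because a DOI uniquely identifies a work, it can be used to fetch that work’s metadata programmatically. Here is a minimal Python sketch, assuming the public Crossref REST API (api.crossref.org) and the third-party requests package; the DOI below is a placeholder, to be replaced with a real one.

    # Minimal sketch: look up a work's metadata by DOI via the public
    # Crossref REST API. Assumes `requests` is installed; the DOI is a
    # placeholder, so replace it with a real one before running.
    import requests

    doi = "10.1000/xyz123"  # placeholder DOI
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]

    print("Title:", work.get("title", ["(untitled)"])[0])
    print("Citation count (Crossref):", work.get("is-referenced-by-count"))

This kind of lookup is only possible because the identifier travels with the work wherever it is cited, shared or saved.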

Looking ahead 

Our primary goal in creating the timeline was to look back at how research and metrics have evolved over the last 50+ years, but we also wanted to take a peek into the future. Funders are starting to demand more evidence that the research they are funding is having an impact. 

Additionally, some research, such as translational or clinical research, is at a disadvantage when measured by traditional citations. Translational research translates basic scientific discoveries into clinical or policy recommendations; clinical research is written by clinicians in medical practice about what works in that arena. With today’s technology and products, it is possible to discover non-traditional places where this research is referenced or cited. For example, an article on the most successful treatments for heart disease may be referenced in a PubMed Clinical Guideline. That citation is not picked up by traditional citation indexes, but it helps tell the story of that research’s impact.

This applies to policy as well. Many governmental and non-governmental organizations publish public policy documents on topics ranging from agriculture and farming to urban issues. When these policy documents reference research, they demonstrate a deep level of societal impact. At Elsevier, we believe measuring these clinical and policy citations is an important part of the future of measuring research impact.

I’m eager to see what further advances technology, combined with human ingenuity or perhaps even machine learning, will bring!

1. A former colleague, Tom Crosby, worked with me to develop much of the content. Other contributors to the timeline include Andrea Michalek, Talia Arthur, Christopher James, Rachel McCullough and Taylor Stang.

2. In early 2012, Plum Analytics was founded with the vision of bringing modern ways of measuring research impact to individuals and organizations that use and analyze research. In 2017, Plum Analytics joined Elsevier. 
