This page provides guidance on ways of reporting and locating evidence of "impact" based on "units of analysis."
What or who is being measured?
Article metrics are measures based on the usage of a scholarly work (or of components of a work such as figures, or of non-article works such as software or slides) and its subsequent application or use. A peer-reviewed journal article is one example of a scholarly work.
An example of a traditional article metric is a citation to a work in the scholarly literature, which allows in-context understanding of the nature, purpose, and motivation of the citing author(s). See the Citations tab for more information.
With the advent of sophisticated digital applications, publishers and vendors developed other types of article metrics based on the usage of a work in its digital format, such as the number of times a work is read, viewed, or downloaded. These are also referred to as altmetrics or alternative metrics.
Other altmetrics capture, in near real time, how a work is shared, disseminated further, or commented upon across various social media platforms. These metrics are generated by a variety of audiences, including non-academic audiences, and are considered representative of the level of "public or social engagement" with a work.
Examples:
Non-citation metrics are useful because they supplement traditional measures, allowing authors to quantify the influence or impact of works beyond the peer-reviewed journal article, including datasets, software, slides, figures, and unpublished works such as policy briefs. Some metrics, such as online views, comments, or recommendations, are early-stage engagement indicators of how and by whom a work is being shared, used, commented on, and disseminated further. Who is reading the new work? Who is tweeting about it, and from where? Is the work being discussed in a blog post, and by whom: a scientist, a policy-maker, or a layperson? Are users bookmarking the work in Mendeley? Is the work the topic of an article in the press? Are users viewing the slides in SlideShare or the figures in Figshare?
The idea behind non-citation metrics is to gauge the nascent influence or attention a work is garnering on various online platforms. Such metrics are evidence of a work's outreach, complement traditional citations, and allow authors to highlight multiple types of scholarly output.
Alternative Metric Tools:
One commonly used impact metric for journals is the Journal Impact Factor score from Journal Citation Reports (JCR). The Impact Factor is calculated by dividing the number of citations in the JCR year to items published in the two previous years by the total number of articles published in those two years. A Journal Impact Factor score of 1.0 means that, on average, articles published one or two years ago have been cited one time. The idea of a Journal Impact Factor score was introduced by Eugene Garfield in the early 1960s as a method for comparing journals regardless of size or citation frequency and as a library acquisition decision-making tool. Over time the Impact Factor score evolved into a proxy for author impact. Dr. Garfield himself stated:
“It is one thing to use impact factors to compare journals and quite another to use them to compare authors.”
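To make the calculation concrete, here is a minimal sketch in Python. The citation and publication counts are hypothetical and do not describe any real journal; they simply illustrate the ratio behind the score.

```python
# Hypothetical counts for an imaginary journal; JCR year = 2023.
citations_in_2023_to_recent_items = 350  # citations received in 2023 by items published in 2021-2022
citable_items_2021_2022 = 175            # articles published in 2021 and 2022

# Impact Factor = citations to the two previous years / items published in those years
impact_factor = citations_in_2023_to_recent_items / citable_items_2021_2022
print(impact_factor)  # 2.0 -> on average, each recent article was cited twice
```

A score of 2.0 here means recent articles in this imaginary journal were cited, on average, twice during the JCR year.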
Other journal ranking tools include:
Interested in ways to demonstrate impact of books and book chapters?
See Book Metrics.
There are various metrics that can be used to assess the impact of an author. A basic metric is the number of publications. Another is the number of citations to the publications. See Publication Metrics for a list of metrics based on publication data and Citations for information and resources for citations.
In 2005, J.E. Hirsch proposed the h index, a quantitative metric based on an author's publications and the citations to them, intended to provide “an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions.”
Several databases, such as Elsevier's Scopus, Clarivate Analytics' Web of Science, and Google Scholar Citations, provide h index values for authors. See What is the h Index? for more information.
Please contact Amy Suiter for an h index report.
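For readers who want to see Hirsch's definition in concrete terms, here is a minimal Python sketch of the h index calculation. The citation counts in the example are hypothetical.

```python
def h_index(citation_counts):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Example: an author with five papers cited 10, 8, 5, 3, and 1 times
# has three papers with at least three citations each.
print(h_index([10, 8, 5, 3, 1]))  # 3
```

Databases such as Scopus and Web of Science compute this same value automatically from their own citation data, so reported h index values can differ between sources depending on coverage.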
Download the free Altmetric bookmarklet to receive instant metrics for your recently published articles, or check with the journal publisher about publisher platform metrics such as downloads, views, or reads.
Research groups are another example of a "unit of analysis." The research activities and outputs of a research group can be used as a basis for measuring outcomes and impact. A basic metric is the number of publications generated by the group; another is the cumulative citation count to those publications.
If the research group is externally funded, for example by NIH, databases such as Clarivate Analytics Web of Science (since 2008) and Elsevier Scopus (since late 2013) allow searching for publications that acknowledge a specific grant award number.
Ranking organizations use different methodologies and data sources to rank institutions, drawing on indicators such as the number of publications, the number of citations, normalized citation rankings, the number of webpages within the university URL, and the number of inbound links to the university domain, to name a few.
Scopus and SciVal from Elsevier, and Web of Science and Essential Science Indicators from Clarivate Analytics, are databases offered by the library that can be used for institutional benchmarking of universities and research organizations.
Another source for ranking information is the Nature Index; see Washington University's information in the Nature Index.
“Do not use journal based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
IEEE Statement on Appropriate use of Bibliometric Indicators
The use of multiple complementary bibliometric indicators is fundamentally important to offer an appropriate, comprehensive, and balanced view of each journal in the space of scholarly publications.
Any journal-based metric is not designed to capture qualities of individual papers, and must therefore not be used as a proxy for single-article quality or to evaluate individual scientists.
The primary manner for assessment of either the scientific quality of a research project or of an individual scientist should be peer review, which will consider the scientific content as the most important aspect in addition to the publication expectations in the area, as well as the size and practice of the research community.