This page provides guidance on ways of reporting and locating evidence of "impact" based on "units of analysis."
What or who is being measured?
Article metrics are metrics based on the usage and subsequent application of a scholarly work, of components of a work such as figures, or of a non-article work such as software or slides. A peer-reviewed journal article is one example of a scholarly work.
An example of a traditional article metric is a citation to a work in the scholarly literature, which allows for in-context understanding of the nature, purpose, and motivation of the citing author(s). See the Citations tab for more information.
With the advent of sophisticated digital applications, publishers and vendors developed other types of article metrics based on usage of the work in its digital format such as the number of times a work is read, viewed or downloaded. These are also referred to as altmetrics or alternative metrics.
The Public Library of Science (PLoS), the first publisher to offer quantitative usage counts (in 2009), provides perhaps the most highly developed publisher platform for reporting the number of reads, views, or downloads of a work.
Other altmetrics capture, almost immediately, how a work is shared, disseminated further, or commented upon across various social media platforms. These metrics are generated by a variety of audiences, including non-academic audiences, and are considered representative of the level of "public or social engagement" activity around a work.
Non-citation metrics can be useful supplemental metrics for authors to quantify the influence or impact of their works, and they extend beyond the traditional peer-reviewed journal article to other scholarly outputs such as datasets, software, slides, figures, and unpublished works such as policy briefs. Some metrics, such as online views, comments, or recommendations, are early-stage engagement indicators of how, and by whom, a work is being shared, used, commented on, and disseminated further. Who is reading the new work? Who is tweeting about it, and from where? Is the work being commented on in a blog post? By whom: a scientist, a policy-maker, or a layperson? Are users bookmarking the work in Mendeley? Is the work the topic of an article in the press? Are users viewing the slides in Slideshare or the figures in Figshare?
The idea behind non-citation metrics is to gauge the nascent influence or attention a work is garnering on various online platforms. Such metrics are evidence of a work's outreach, serve as a complement to traditional citations, and allow authors to highlight multiple types of scholarly output.
Alternative Metric Tools:
One commonly used impact metric for journals is the Journal Impact Factor score from Journal Citation Reports (JCR). The Impact Factor is calculated by dividing the number of citations in the JCR year to items published in the two previous years by the total number of citable items published in those two years. A Journal Impact Factor score of 1.0 means that, on average, articles published one or two years ago have been cited one time. The idea of a journal impact factor was introduced by Eugene Garfield in 1955 as a method for comparing journals regardless of size or citation frequency and as a library acquisition decision-making tool. Over time the Impact Factor score evolved into a proxy for author impact. Dr. Garfield himself stated:
“It is one thing to use impact factors to compare journals and quite another to use them to compare authors.”
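As a worked illustration of the two-year calculation described above, a minimal sketch in Python (the journal figures are hypothetical, chosen only to show the arithmetic):

```python
def impact_factor(citations_in_jcr_year, citable_items_prev_two_years):
    """Journal Impact Factor: citations received in the JCR year to items
    published in the previous two years, divided by the number of citable
    items published in those two years."""
    return citations_in_jcr_year / citable_items_prev_two_years

# Hypothetical journal: 200 citations in the JCR year to the
# 100 citable items it published over the previous two years.
print(impact_factor(200, 100))  # → 2.0
```

A score of 2.0 means that, on average, the journal's articles from the previous two years were each cited twice in the JCR year.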
Other journal ranking tools include:
There are various metrics that can be used to assess the impact of an author. A basic metric is the number of publications. Another is the number of citations to the publications. See Publication Metrics for a list of metrics based on publication data and Citations for information and resources for citations.
In 2005, J.E. Hirsch proposed the h index, a quantitative metric based on an author's publications and their citations, intended to provide “an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions.”
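The h index can be computed directly from a list of citation counts: it is the largest h such that the author has at least h papers with h or more citations each. A minimal sketch in Python (the citation counts are hypothetical):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers with these citation counts yield an h index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # → 3
```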
Please contact Amy Suiter for an h index report.
Some databases, such as Clarivate Analytics Essential Science Indicators, provide rankings of authors based on citation data over a ten-year period: "Citation counts are a form of peer recognition and generally reflect the dependence of the scientific community on the work of individual scientists. It could be argued that highly cited scientists form the essential core of scientific community. Many highly cited scientists have also received peer recognition in the form of honorific awards." (Overview of Scientists)
Clarivate Analytics also issues annual reports on Highly Cited Authors, "Influential Minds: Presenting Highly Cited Researchers."
Download the free Altmetric bookmarklet to receive instant metrics for your recently published articles, or check with the journal publisher about publisher platform metrics such as downloads, views, or reads.
Research groups are another example of a "unit of analysis." Research activities and outputs of research groups can be used as a basis for measuring outcomes and impact. A basic metric is the number of publications generated by the research group. Another is the cumulative citation count for those publications.
If the research group is funded by an external source such as the NIH, databases such as Clarivate Analytics Web of Science (since 2008) and Elsevier Scopus (since late 2013) allow searching for publications that acknowledge a specific grant award number.
Ranking organizations utilize different methodologies and data sources to rank institutions, using indicators such as the number of publications, the number of citations, normalized citation rankings, the number of webpages under the university URL, and the number of inbound links to the university domain, to name a few.
Elsevier Scopus, Clarivate Analytics Web of Science, and Essential Science Indicators are three databases offered by the library that can be used for benchmarking institutions such as universities or research organizations. Please contact Cathy Sarli for more information.
Some organizations produce specialized rankings, such as rankings of university-industry research collaboration. One example is the UIRC 2014, a report produced by CWTS on university-industry collaboration based on publication data (2009-2012) from Clarivate Analytics Web of Science.
Another example of a ranking is the beta Nature Index.
The Nature Index is a database of author affiliation information collated from research articles published in an independently selected group of 68 high-quality science journals. The database is compiled by Nature Publishing Group (NPG) in collaboration with Digital Science. The Nature Index provides a close to real-time proxy for high-quality research output at the institutional, national and regional level.
The Nature Index is updated monthly, and a 12-month rolling window of data is openly available at www.natureindex.com under a Creative Commons non-commercial license.
“Do not use journal based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
IEEE Statement on Appropriate use of Bibliometric Indicators
The use of multiple complementary bibliometric indicators is fundamentally important to offer an appropriate, comprehensive, and balanced view of each journal in the space of scholarly publications.
Any journal-based metric is not designed to capture qualities of individual papers, and must therefore not be used as a proxy for single-article quality or to evaluate individual scientists.
The primary manner for assessment of either the scientific quality of a research project or of an individual scientist should be peer review, which will consider the scientific content as the most important aspect in addition to the publication expectations in the area, as well as the size and practice of the research community.