We are strong proponents of Open Science, making scientific research and data more transparent and accessible to all. The entire ASReview project is therefore Open Code: all our code is available on GitHub. To cite the software, please refer to the specific release of the ASReview software on Zenodo. The menu on the right can be used to find the citation format of your preference.

ASReview encourages its users to be transparent about their review process by facilitating an open science workflow. All decisions made by the reviewer during the process, as well as the technical information, are stored in a log file. This log file can (and should) be published alongside the paper. Some journals offer Open Science Badges to signal and reward when underlying data, materials, or pre-registrations are made available. Please visit cos.io/badges for a list of journals that issue Open Science Badges.

Scientific Papers

Paper introducing the ASReview project

For many tasks, the scientific literature needs to be checked systematically. The current practice is that scholars and practitioners screen thousands of studies by hand to determine which studies to include in their review, which is error-prone and inefficient. We therefore developed an open-source machine learning (ML)-aided pipeline: Active learning for Systematic Reviews (ASReview). We show that by using active learning, ASReview enables far more efficient reviewing than manual screening, while exhibiting adequate quality. Furthermore, the presented software is fully transparent and open source.

Rens van de Schoot, Jonathan de Bruin, Raoul Schram, Parisa Zahedi, Jan de Boer, Felix Weijdema, Bianca Kramer, Martijn Huijts, Maarten Hoogerwerf, Gerbrich Ferdinands, Albert Harkema, Joukje Willemsen, Yongchao Ma, Qixiang Fang, Lars Tummers, Daniel Oberski (2020). ASReview: Open Source Software for Efficient and Transparent Active Learning for Systematic Reviews. arXiv preprint arXiv:2006.12166.

Simulation study

Active learning models were evaluated across four classification techniques (naive Bayes, logistic regression, support vector machines, and random forest) and two feature extraction strategies (TF-IDF and doc2vec). To assess how well active learning generalizes across research contexts, performance was measured by conducting simulations on six systematic review datasets from various research areas. The models reduced the number of publications needed to screen by 63.9% to 91.7%. Overall, the naive Bayes + TF-IDF model performed best.
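The core idea evaluated in the study can be illustrated with a minimal active learning loop. The sketch below is not ASReview's actual implementation; it is an illustration using scikit-learn, assuming a certainty-based query strategy (always screen the record the model deems most likely relevant) with the best-performing model combination from the study, naive Bayes + TF-IDF. The toy corpus and labels are invented for demonstration.

```python
# Illustrative sketch (not ASReview's implementation): an active learning
# loop with certainty-based sampling, using naive Bayes + TF-IDF,
# the best-performing combination in the simulation study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus of abstracts with hidden relevance labels (1 = relevant).
abstracts = [
    "active learning reduces screening workload in systematic reviews",
    "machine learning for text classification of scientific abstracts",
    "a clinical trial of a new blood pressure medication",
    "deep learning approaches to image segmentation",
    "screening prioritization with learning models in reviews",
    "economic effects of monetary policy on inflation",
]
true_labels = [1, 1, 0, 0, 1, 0]

X = TfidfVectorizer().fit_transform(abstracts)

# Start with one known relevant and one known irrelevant record.
labeled = {0: 1, 2: 0}

while len(labeled) < len(abstracts):
    # Retrain the classifier on everything screened so far.
    model = MultinomialNB()
    idx = list(labeled)
    model.fit(X[idx], [labeled[i] for i in idx])

    # Certainty-based sampling: screen the unlabeled record the model
    # considers most likely to be relevant.
    unlabeled = [i for i in range(len(abstracts)) if i not in labeled]
    probs = model.predict_proba(X[unlabeled])[:, 1]
    pick = unlabeled[max(range(len(unlabeled)), key=lambda j: probs[j])]

    # The human reviewer supplies the label (simulated here from the
    # hidden ground truth).
    labeled[pick] = true_labels[pick]
```

Because relevant records are proposed first, the reviewer typically finds all relevant publications long before exhausting the dataset, which is the source of the workload reductions reported above.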

Ferdinands, G., Schram, R., De Bruin, J., Bagheri, A., Oberski, D. L., Tummers, L., & Van de Schoot, R. (2020, September 16). Active learning for screening prioritization in systematic reviews – A simulation study. https://doi.org/10.31219/osf.io/w6qbg.


CORD19 database with ASReview mentioned

The COVID-19 Open Research Dataset (CORD-19) is a growing resource of scientific papers on COVID-19 and related historical coronavirus research. CORD-19 is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers.

Wang, Lucy Lu, et al. (2020). CORD-19: The COVID-19 Open Research Dataset. arXiv preprint arXiv:2004.10706.

Studies citing ASReview

Below you can find a list of studies that have used ASReview. Want to be added to this list? Let us know via asreview@uu.nl.

Odintsova, V. V., Roetman, P. J., Ip, H. F., Pool, R., Van der Laan, C. M., Tona, K.-D., Vermeiren, R. R. J. M., & Boomsma, D. I. (2019). Genomics of human aggression: Current state of genome-wide studies and an automated systematic review tool. Psychiatric Genetics, 29(5), 170–190. https://doi.org/10.1097/YPG.0000000000000239