You decide
You’re bossing the AI and managing your own data.
Distraction free
Clean interface
A simple and intuitive design for effortless navigation.
Open-source AI as your screening companion
Your AI-powered assistant for systematic reviews that enhances your screening process with full transparency and control.
“Having personally used ASReview, I can say it works beyond my wildest dreams, even after starting as a skeptic.”
– Mohammed Madeh Hawas on LinkedIn
Ready? Set. Go!
Explore ASReview LAB on your own or study open learning materials to gain an in-depth understanding of its full capabilities.
Just start, really!
Forget the sweaty palms and just dive in. Using this software is easier than convincing a dog to chase a ball. And dogs really do love chasing balls.
Open learning materials
Once you’re up and running, we have a hub of open teaching materials designed to help you gain confidence and master your skills in AI-aided systematic reviews.
Diverse model setups and in-depth learning
Explore a wide range of research papers and resources that offer a deep understanding of ASReview LAB’s process and potential.
See what other screeners have published with ASReview
Access the largest online database of publications referencing ASReview and read up on work in your field on Zotero
Abstract
It is of utmost importance to provide an overview of the strength of evidence for predictive factors and to investigate the current state of the evidence for all published and hypothesized factors that contribute to the onset, relapse, and maintenance of anxiety, substance use, and depressive disorders (common mental disorders; CMDs). Thousands of articles have been published on potential factors of CMDs, yet a clear overview of all preceding factors and the interactions between factors is missing. Therefore, the main aim of the current project was to create a database of potentially relevant papers obtained via a systematic search. The current paper describes every step of the process of constructing the database, from search query to final database. After a broad search and cleaning of the data, we applied active learning with a shallow classifier and labeled the first set of papers. In a second screening phase, we switched to a different active learning model (i.e., a neural net) to identify papers that are difficult to find due to concept ambiguity. In the third round of screening, we checked for incorrectly included/excluded papers in a quality assessment procedure, resulting in the final database. All scripts, data files, and output files of the software are available via Zenodo (for GitHub code), the Open Science Framework (for protocols and output), and DANS (for the datasets) and are referred to in the specific sections, thereby making the project fully reproducible.
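The phased workflow this abstract describes, starting with a shallow, fast classifier and later switching to a neural network to surface hard-to-find records, can be illustrated with a generic pool-based active-learning loop. The sketch below is not ASReview’s internal code; it is a scikit-learn toy version in which the function name, the prior-knowledge seeding, and the switch point are illustrative assumptions only.

```python
# Minimal sketch of a two-phase active-learning screening loop.
# NOT ASReview's implementation: a generic scikit-learn illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB          # the "shallow" classifier
from sklearn.neural_network import MLPClassifier       # the "neural net" phase

def screen(texts, oracle_labels, switch_after, budget):
    X = TfidfVectorizer(max_features=5000).fit_transform(texts)
    y = np.asarray(oracle_labels)
    # Seed with one known relevant and one known irrelevant record.
    labeled = [int(np.argmax(y)), int(np.argmin(y))]
    pool = [i for i in range(len(texts)) if i not in labeled]
    model = MultinomialNB()
    for step in range(budget):
        if step == switch_after:
            model = MLPClassifier(max_iter=500)        # phase 2: swap models
        model.fit(X[labeled], y[labeled])
        probs = model.predict_proba(X[pool])[:, 1]     # P(relevant) per pool record
        pick = pool.pop(int(np.argmax(probs)))         # show most promising next
        labeled.append(pick)                           # the screener labels it
    return labeled                                     # records in screening order
```

At each iteration the model is retrained on all decisions so far and the highest-probability record is shown next (certainty-based sampling); this is the mechanism that lets relevant records surface early in the screening order.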
Abstract
Objectives
In a time of exponential growth of new evidence supporting clinical decision-making, combined with a labor-intensive process of selecting this evidence, methods are needed to speed up current processes to keep medical guidelines up-to-date. This study evaluated the performance and feasibility of active learning to support the selection of relevant publications within medical guideline development and to study the role of noisy labels.
Design
We used a mixed-methods design. The manual literature-selection process of two independent clinicians was evaluated across 14 searches. This was followed by a series of simulations comparing random reading with screening prioritization based on active learning. We identified hard-to-find papers and checked their labels in a reflective dialogue.
Main outcome measures
Inter-rater reliability was assessed using Cohen’s Kappa (κ). To evaluate the performance of active learning, we used the Work Saved over Sampling at 95% recall (WSS@95) and the percentage of Relevant Records Found after reading only 10% of the total number of records (RRF@10). We used the average time to discovery (ATD) to detect records with potentially noisy labels. Finally, the accuracy of labeling was discussed in a reflective dialogue with guideline developers.
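To make these outcome measures concrete, the sketch below shows common formulations in Python: WSS@R is usually computed as (TN + FN)/N − (1 − R), RRF@10 as the share of relevant records found after reading 10% of all records, and Cohen’s κ is available in scikit-learn. The function names and the single-run ATD approximation are our own illustrative assumptions, not the study’s scripts.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def wss_at(labels_in_screening_order, recall=0.95):
    """Work Saved over Sampling at a recall level (WSS@95 when recall=0.95).
    Input: 0/1 relevance labels in the order the model presented the records."""
    y = np.asarray(labels_in_screening_order)
    target = int(np.ceil(recall * y.sum()))              # relevant records to find
    n_read = int(np.argmax(np.cumsum(y) >= target)) + 1  # records read to get there
    return (len(y) - n_read) / len(y) - (1 - recall)

def rrf_at(labels_in_screening_order, fraction=0.10):
    """Relevant Records Found after reading `fraction` of all records (RRF@10)."""
    y = np.asarray(labels_in_screening_order)
    return y[: int(np.ceil(fraction * len(y)))].sum() / y.sum()

def atd(labels_in_screening_order):
    """Single-run approximation of Average Time to Discovery: mean position
    (as a fraction of the dataset) at which relevant records are found.
    High values flag hard-to-find records and potentially noisy labels."""
    y = np.asarray(labels_in_screening_order)
    return (np.flatnonzero(y) + 1).mean() / len(y)

# Inter-rater reliability between two screeners' title-abstract decisions:
# kappa = cohen_kappa_score(rater_a_labels, rater_b_labels)
```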
Results
Mean κ for manual title-abstract selection by clinicians was 0.50 and varied between −0.01 and 0.87, based on 5,021 abstracts. WSS@95 ranged from 50.15% (SD = 17.7) based on the selection by clinicians to 69.24% (SD = 11.5) based on the selection by a research methodologist, up to 75.76% (SD = 12.2) based on the final full-text inclusion. A similar pattern was seen for RRF@10, ranging from 48.31% (SD = 23.3) to 62.8% (SD = 21.20) and 65.58% (SD = 23.25). The performance of active learning deteriorates with higher noise. Compared with the final full-text selection, the selection made by clinicians or research methodologists deteriorated WSS@95 by 25.61% and 6.25%, respectively.
Conclusion
While active machine learning tools can accelerate the process of literature screening within guideline development, they can only work as well as the input given by human raters. Noisy labels make noisy machine learning.
Abstract
Governments use nudges to stimulate citizens to exercise, save money, and eat healthily. However, nudging is controversial. How the media frames nudging impacts decisions on whether to use this policy instrument. We therefore analyzed 443 newspaper articles about nudging. Overall, the media was positive about nudges. Nudging was viewed as an effective and efficient way to change behavior and received considerable support across the political spectrum. The media also noted that nudges were easy to implement. The controversy about nudges concerns themes like paternalism, fear of manipulation, small effect sizes, and unintended consequences. Academic proponents of nudging were actively involved in media debates, while critical voices were heard less often. There were some reports criticizing how the government used nudges; however, these were exceptions, and the media often highlighted the benefits of nudging. In conclusion, we show how nudging by governments was discussed in a critical institution: the news media.
Abstract
Context
Predictive maintenance is a technique for creating a more sustainable, safe, and profitable industry. One of the key challenges for creating predictive maintenance systems is the lack of failure data, as the machine is frequently repaired before failure. Digital Twins provide a real-time representation of the physical machine and generate data, such as asset degradation, which the predictive maintenance algorithm can use. Since 2018, scientific literature on the utilization of Digital Twins for predictive maintenance has accelerated, indicating the need for a thorough review.
Method
Results
Conclusion
This study is the first systematic literature review (SLR) on predictive maintenance using Digital Twins. We answer key questions for designing a successful predictive maintenance model leveraging Digital Twins. We found that, to this day, computational burden, data variety, and the complexity of models, assets, or components are the key challenges in designing these models.
Experience ASReview LAB
Find out how our AI can make up to 95% of your screening workload do a disappearing act.
Questions & Answers
How does ASReview improve the efficiency of the screening process compared to traditional methods?
ASReview uses an AI-driven approach to show you the records most likely to be relevant first. As a result, you spend less time sorting through irrelevant material and can either complete your screening more quickly without sacrificing quality or handle larger volumes of text data.
What strategies can be used to ensure high-quality and unbiased screening results?
ASReview displays only the essential content of each record (e.g., title, abstract) and avoids showing journal names or author details by default, helping reduce bias. Moreover, we store all human decisions and ensure that the AI process is transparent and reproducible. In this article, the authors explain how logging each screening step helps maintain a clear audit trail and why it’s crucial for transparency. They also show how documenting the entire workflow can reduce the risk of bias.
How can it be free to use?
ASReview is an open-source project firmly rooted in academic research, developed by a global community of researchers and institutions. We believe that making systematic reviews more efficient and transparent will accelerate scientific discovery, which is why there are no subscription fees or hidden costs. All you need is a computer with Python installed or a server-based solution provided by your organization. While it’s free in a monetary sense, many contributors devote their own time and resources—including research grants—to continuously improve the software.
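As a concrete illustration of the “just Python” claim: on a machine with Python installed, the documented setup is a single install command followed by a launch command (see the official documentation for platform specifics).

```
pip install asreview
asreview lab
```

The second command starts ASReview LAB locally so you can open the interface in your browser.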
What level of machine learning knowledge is required to use ASReview successfully?
You don’t need any specialized machine learning experience. The software helps you step by step.
How does ASReview handle missed relevant studies, and what are the best practices to minimize this risk?
ASReview updates its suggestions based on your feedback, becoming more accurate as you screen more records. To reduce the chance of missing important studies, try including known relevant records at the start so the software knows what you’re looking for. You can follow the SAFE procedure for systematic reviews with multiple screening steps, helping minimize the impact of certain decisions made when setting up the project. However, even the best AI can’t retrieve studies that aren’t already in your original dataset, as noted in “The hunt for the last relevant paper”.
Subscribe to our newsletter!
Stay on top of ASReview’s developments by subscribing to the newsletter.