Conduct your thesis on ASReview independently

From algorithm optimization to UX design – make ASReview the core of your thesis or PhD across disciplines.

>> For students <<

Test multiple models

Compare the performance of different models according to your hypotheses.

Experimental studies

Conduct experiments and uncover new insights with advanced tools.

Out-of-the-box

Surprise us with your own ideas – no idea is too far-fetched.

Students’ stories

Hear directly from students about their experiences and contributions at ASReview.


Meet Laurens

Born in Utrecht but raised in the southern province of Brabant, Laurens quickly realized it wasn’t the place for him—mainly due to a deep-rooted aversion to …

Continue reading >> 


Meet Rutger

A student who proves that being a dedicated academic doesn’t mean you have to be all work and no play. With just a year left before he wraps up his studies …

Continue reading >> 


Meet Giulia

Originally from Padova, a charming city near Venice, Giulia made the bold move to settle in the Netherlands after an internship in Utrecht stole her heart …

Continue reading >> 

Understanding the core of ASReview: foundations and insights

Explore its key models, features, and tools to discover potential improvements in functionality and user experience.

Open learning materials

A hub of open teaching materials designed to help you master ASReview. Explore these resources to gain confidence and skill in AI-aided systematic reviews.

Start online learning >>

ASReview TV

Write your thesis while learning new concepts by watching our YouTube channel, featuring lectures on models, datasets, documentation, and more.

Watch TV >> 

From theses to published works

Discover how fellow students’ research on ASReview—exploring new features, tools, and user experiences—led to publication. Want to join the list? Send us your published thesis.

Abstract

It is of utmost importance to provide an overview and strength of evidence of predictive factors and to investigate the current state of affairs on evidence for all published and hypothesized factors that contribute to the onset, relapse, and maintenance of anxiety-, substance use-, and depressive disorders. Thousands of such articles have been published on potential factors of CMDs, yet a clear overview of all preceding factors and interaction between factors is missing. Therefore, the main aim of the current project was to create a database with potentially relevant papers obtained via a systematic search. The current paper describes every step of the process of constructing the database, from search query to database. After a broad search and cleaning of the data, we used active learning with a shallow classifier and labeled the first set of papers. Then, we applied a second screening phase in which we switched to a different active learning model (i.e., a neural net) to identify difficult-to-find papers due to concept ambiguity. In the third round of screening, we checked for incorrectly included/excluded papers in a quality assessment procedure resulting in the final database. All scripts, data files, and output files of the software are available via Zenodo (for GitHub code), the Open Science Framework (for protocols, output), and DANS (for the datasets) and are referred to in the specific sections, thereby making the project fully reproducible.

Read full publication >>

Abstract

Objectives

In a time of exponential growth of new evidence supporting clinical decision-making, combined with a labor-intensive process of selecting this evidence, methods are needed to speed up current processes to keep medical guidelines up-to-date. This study evaluated the performance and feasibility of active learning to support the selection of relevant publications within medical guideline development and to study the role of noisy labels.

Design

We used a mixed-methods design. Two independent clinicians’ manual process of literature selection was evaluated for 14 searches. This was followed by a series of simulations investigating the performance of random reading versus using screening prioritization based on active learning. We identified hard-to-find papers and checked the labels in a reflective dialogue.

Main outcome measures

Inter-rater reliability was assessed using Cohen’s Kappa (κ). To evaluate the performance of active learning, we used the Work Saved over Sampling at 95% recall (WSS@95) and percentage Relevant Records Found at reading only 10% of the total number of records (RRF@10). We used the average time to discovery (ATD) to detect records with potentially noisy labels. Finally, the accuracy of labeling was discussed in a reflective dialogue with guideline developers.

Results

Mean κ for manual title-abstract selection by clinicians was 0.50 and varied between −0.01 and 0.87 based on 5,021 abstracts. WSS@95 ranged from 50.15% (SD = 17.7) based on selection by clinicians to 69.24% (SD = 11.5) based on the selection by research methodologists up to 75.76% (SD = 12.2) based on the final full-text inclusion. A similar pattern was seen for RRF@10, ranging from 48.31% (SD = 23.3) to 62.8% (SD = 21.20) and 65.58% (SD = 23.25). The performance of active learning deteriorates with higher noise. Compared with the final full-text selection, the selection made by clinicians or research methodologists deteriorated WSS@95 by 25.61% and 6.25%, respectively.

Conclusion

While active machine learning tools can accelerate the process of literature screening within guideline development, they can only work as well as the input given by human raters. Noisy labels make noisy machine learning.

Read full publication >>
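The WSS@95 and RRF@10 measures described in the abstract above follow directly from the order in which records are presented for screening, so they are straightforward to recompute for your own simulations. The Python sketch below is only an illustration based on those definitions; the function names and the toy label sequence are made up for the example and are not part of ASReview.

import math

def wss(labels_in_order, recall=0.95):
    # Work Saved over Sampling at the given recall level.
    # `labels_in_order` holds the 0/1 relevance labels in the order the
    # model presented the records (1 = relevant).
    n_total = len(labels_in_order)
    target = math.ceil(recall * sum(labels_in_order))
    found = 0
    for n_read, label in enumerate(labels_in_order, start=1):
        found += label
        if found >= target:
            # Fraction of records left unread, corrected for the (1 - recall)
            # share that a random reader would also skip.
            return (n_total - n_read) / n_total - (1 - recall)
    return 0.0

def rrf(labels_in_order, fraction=0.10):
    # Percentage of relevant records found after reading `fraction` of all records.
    n_read = math.ceil(fraction * len(labels_in_order))
    return 100 * sum(labels_in_order[:n_read]) / sum(labels_in_order)

# Toy example: 3 relevant records among 10, surfaced early by the model.
order = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(f"WSS@95 = {wss(order):.2f}")   # 0.55
print(f"RRF@10 = {rrf(order):.0f}%")  # 33%

In the toy example the third relevant record turns up after reading four of ten records, giving WSS@95 = (10 − 4)/10 − 0.05 = 0.55, and one of the three relevant records sits in the first 10% of reads, giving RRF@10 ≈ 33%.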

Abstract

Governments use nudges to stimulate citizens to exercise, save money and eat healthily. However, nudging is controversial. How the media frames nudging impacts decisions on whether to use this policy instrument. We, therefore, analyzed 443 newspaper articles about nudging. Overall, the media was positive about nudges. Nudging was viewed as an effective and efficient way to change behavior and received considerable support across the political spectrum. The media also noted that nudges were easy to implement. The controversy about nudges concerns themes like paternalism, fear of manipulation, small effect sizes, and unintended consequences. Academic proponents of nudging were actively involved in media debates, while critical voices were less often heard. There were some reports criticizing how the government used nudges. However, these were exceptions; the media often highlighted the benefits of nudging. Concluding, we show how nudging by governments was discussed in a critical institution: the news media.

Read full publication >> 

Abstract

Context

Predictive maintenance is a technique for creating a more sustainable, safe, and profitable industry. One of the key challenges for creating predictive maintenance systems is the lack of failure data, as the machine is frequently repaired before failure. Digital Twins provide a real-time representation of the physical machine and generate data, such as asset degradation, which the predictive maintenance algorithm can use. Since 2018, scientific literature on the utilization of Digital Twins for predictive maintenance has accelerated, indicating the need for a thorough review.

Objective

This research aims to gather and synthesize the studies that focus on predictive maintenance using Digital Twins to pave the way for further research.

Method

A systematic literature review (SLR) using an active learning tool is conducted on published primary studies on predictive maintenance using Digital Twins, in which 42 primary studies have been analyzed.

Results

This SLR identifies several aspects of predictive maintenance using Digital Twins, including the objectives, application domains, Digital Twin platforms, Digital Twin representation types, approaches, abstraction levels, design patterns, communication protocols, twinning parameters, and challenges and solution directions. These results contribute to a Software Engineering approach for developing predictive maintenance using Digital Twins in academics and the industry.

Conclusion

This study is the first SLR in predictive maintenance using Digital Twins. We answer key questions for designing a successful predictive maintenance model leveraging Digital Twins. We found that to this day, computational burden, data variety, and complexity of models, assets, or components are the key challenges in designing these models.

Read full publication >>

Abstract

Risk assessment of chemicals is a time-consuming process and needs to be optimized to ensure all chemicals are timely evaluated and regulated. This transition could be stimulated by valuable applications of in silico Artificial Intelligence (AI)/Machine Learning (ML) models. However, implementation of AI/ML models in risk assessment is lagging behind. Most AI/ML models are considered ‘black boxes’ that lack mechanistical explainability, causing risk assessors to have insufficient trust in their predictions.
Here, we explore ‘trust’ as an essential factor towards regulatory acceptance of AI/ML models. We provide an overview of the elements of trust, including technical and beyond-technical aspects, and highlight elements that are considered most important to build trust by risk assessors. The results provide recommendations for risk assessors and computational modelers for future development of AI/ML models, including: 1) Keep models simple and interpretable; 2) Offer transparency in the data and data curation; 3) Clearly define and communicate the scope/intended purpose; 4) Define adoption criteria; 5) Make models accessible and user-friendly; 6) Demonstrate the added value in practical settings; and 7) Engage in interdisciplinary settings. These recommendations should ideally be acknowledged in future developments to stimulate trust and acceptance of AI/ML models for regulatory purposes.

Read full publication >>

“Working with ASReview not only introduced me to the power of active learning in systematic reviews, but also deepened my passion for research and academic innovation.” 

– Bjarne van der Mark

Questions & Answers

Is ASReview free for students to use?

Yes! ASReview is completely open‑source and carries no licence fees or hidden costs. The project is maintained through community contributions and academic research grants, so anyone—including students—can install and run it at no charge.

Can I use ASReview for my thesis without prior coding experience?

Yes. Open the installation guide at https://asreview.readthedocs.io/install and follow the three‑line quick start. Then, open the web interface and start working in the simple dashboard: upload your file, press screen, and label records with two buttons (“relevant” or “not”). No coding is needed unless you want to automate things later.

How do I cite ASReview in my methodology section?

Add a sentence such as: “Records were screened with ASReview LAB v2 (ASReview Lab Developers, 2025) using the model [FILL IN THE MODEL USED] with a stopping rule of [ADD YOUR STOPPING RULE].”

If you wish to cite the underlying methodology of the ASReview software, please use this publication; to cite the software itself, please refer to the specific release of the ASReview software on Zenodo.

How can I collaborate with a supervisor on the same review project?

You have three practical options:

Work in parallel: you and your supervisor each create your own ASReview project and screen independently, then export the two project files and compare decisions (the Insights extension can highlight disagreements; a minimal comparison sketch follows this list).

Quality check on exclusions: screen the set yourself first, export the list of records you marked “not relevant,” and let your supervisor re‑label just those to confirm none were wrongly excluded.

Real‑time collaboration: run ASReview on the server stack (Docker) and invite your supervisor as another oracle; the system hands out top‑ranked records to each of you and logs every label for full transparency.
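For the “work in parallel” option, the comparison step can be done with a few lines of Python once both of you have exported your decisions. The sketch below is only an illustration: the file names and the record_id / included column names are assumptions, so adapt them to whatever your export actually contains.

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical export files and column names; adjust to your own exports.
student = pd.read_csv("student_labels.csv")        # columns: record_id, included
supervisor = pd.read_csv("supervisor_labels.csv")  # columns: record_id, included

merged = student.merge(supervisor, on="record_id",
                       suffixes=("_student", "_supervisor"))

# Records the two screeners labeled differently.
disagreements = merged[merged["included_student"] != merged["included_supervisor"]]
print(f"{len(disagreements)} disagreements out of {len(merged)} shared records")

# Inter-rater agreement, the same statistic reported in screening studies.
kappa = cohen_kappa_score(merged["included_student"],
                          merged["included_supervisor"])
print(f"Cohen's kappa: {kappa:.2f}")

Reviewing only the disagreements is usually enough for a supervisor meeting, and Cohen’s kappa gives you a single agreement figure comparable to the values reported in the publications above.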

I’ve heard there’s a hidden memory game—where can I find it?

We could spill the beans, but where’s the fun in that? 
Here’s the deal: fire up ASReview LAB in your browser and start poking, clicking, resizing, and keyboard‑mashing until something unexpected flips over. If curiosity still keeps you up at night, search the GitHub repository.

Subscribe to our newsletter!

Stay on top of ASReview’s developments by subscribing to the newsletter.