The Case of Systematic Reviewing

Anyone who has screened large numbers of texts, such as scientific abstracts for a systematic review, knows how labor-intensive this can be. With the rapidly evolving field of Artificial Intelligence (AI), much of this manual work can be reduced, or even replaced entirely, by software using active learning. ASReview enables you to screen more texts in the same amount of time than the traditional way of screening allows, which means you can achieve higher quality than with the traditional approach. Before Elas* was there to help you, systematic reviewing was an exhausting and often tedious task, but not any longer!

*Your Electronic Learning Assistant, who comes with ASReview. Read more about The principles of Elas.

Systematic Reviewing in the pre-Elas era

Keyword search

Traditionally, the pipeline of a classical systematic review starts with the reviewer doing a keyword search to retrieve all potentially relevant references. See also this video.

Abstract Screening

The reviewer can then start screening the titles and abstracts to assess their potential relevance to the particular research question.

Months later

For an experienced reviewer, classifying a single abstract takes between 30 seconds and a couple of minutes, which easily adds up to hundreds of hours spent on abstract screening alone.
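A quick back-of-the-envelope calculation shows how fast this adds up. The numbers below are illustrative assumptions, not figures from a specific review:

```python
# Hypothetical screening-time estimate (all numbers are assumptions):
# a mid-sized search result, screened at the lower-middle of the
# 30 seconds to a couple of minutes range quoted above.
n_abstracts = 8_000          # records returned by the keyword search
seconds_per_abstract = 60    # one minute per title/abstract decision

hours = n_abstracts * seconds_per_abstract / 3600
print(f"{hours:.0f} hours of screening")  # -> 133 hours of screening
```

Even with these conservative assumptions, a single reviewer spends well over a hundred working hours on abstract screening alone.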

Result

After all of the abstracts have been screened, the result is a subset of the initial search containing all potentially relevant references. The reviewer reads the full-text versions of these and writes their awesome paper.

Systematic Reviewing with Elas

Process of ASReview systematic reviews

Keyword search

In a machine-aided systematic review, the reviewer similarly starts with a keyword search to retrieve all potentially relevant references, downloads them, and imports them into a reference manager.

AI-aided abstract screening

The reviewer then selects a few relevant target papers. A machine learning model is trained on these papers to predict which reference to present next, and the reviewer enters the active learning cycle. See also this video.
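The active learning cycle above can be sketched in a few lines. This is a minimal illustration using scikit-learn with a hypothetical toy corpus, not ASReview's internal code: the model is retrained after each decision and always presents the record it predicts as most likely relevant next (certainty-based sampling).

```python
# Illustrative active-learning loop (a sketch, not ASReview internals):
# rank the unlabeled abstracts by predicted relevance and present the
# most likely relevant one to the reviewer next.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy corpus; labels: 1 = relevant, 0 = irrelevant.
abstracts = [
    "systematic review of depression treatment",
    "meta analysis of anxiety interventions",
    "bridge construction materials survey",
    "concrete strength under load",
    "review of psychotherapy outcomes",
    "steel beam fatigue testing",
]
labels = [1, 1, 0, 0, 1, 0]

X = TfidfVectorizer().fit_transform(abstracts)

# The reviewer seeds the model with one relevant and one irrelevant record.
labeled = {0: 1, 2: 0}
unlabeled = [i for i in range(len(abstracts)) if i not in labeled]

model = MultinomialNB()
while unlabeled:
    idx = list(labeled)
    model.fit(X[idx], [labeled[i] for i in idx])
    # Rank the remaining records by predicted probability of relevance.
    relevant_col = list(model.classes_).index(1)
    probs = model.predict_proba(X[unlabeled])[:, relevant_col]
    nxt = unlabeled[max(range(len(unlabeled)), key=lambda k: probs[k])]
    labeled[nxt] = labels[nxt]  # the reviewer screens this abstract
    unlabeled.remove(nxt)
```

In practice ASReview offers a choice of feature extractors, classifiers, and query strategies; the TF-IDF plus Naive Bayes combination here is just one simple, commonly used pairing.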

Only hours later

Since the abstracts are presented from most to least likely relevant, the reviewer can stop screening after having seen all relevant abstracts. This form of active learning can save hundreds of hours compared with going through all references.

Reproducible Results

Since we embrace Open Science, all decisions made by the reviewer, as well as all technical information, are stored in a log file, which can, and ideally should, be published alongside the paper.
