AI Tools for Literature Review

AI has become the rising star of research. It’s not just transforming how projects are conducted—it’s also creeping into tasks like writing papers and conducting literature reviews. A quick search for “AI tools for literature review” will flood you with tutorials, articles, and a never-ending list of tools promising to revolutionize how we sift through academic papers.

But here’s the thing: amidst all this hype, I’ve found little critical discussion about how these tools compare to traditional approaches—or how to use them effectively. So, let’s talk about it. What do AI tools for literature review actually do? Are they worth the buzz? And how can we, as researchers, make the most of them?

The Traditional Approach: Still Relevant?

The traditional way of conducting a literature review usually involves database searches (Google Scholar, Web of Science), following citation trails, keeping an eye on researchers’ profiles, and subscribing to keyword alerts or new publications by specific authors.

When I started my PhD, this process felt overwhelming. Back then, I was building foundational knowledge, and every paper felt like it could hold the key to understanding my field. Now, as a final-year PhD candidate, my literature searches are more targeted. I mostly look for cross-disciplinary insights or new developments in my niche area.

For the most part, I’ve found the traditional approach works well. It’s systematic and dependable. But there are those moments when I think, How did I miss this paper? or What if I’ve overlooked something critical? This is where the idea of speeding up the process with AI tools becomes tempting.

Enter AI Tools: What Do They Actually Do?

When it comes to AI tools for literature review, I’ve noticed they generally fall into two categories:

1. Citation-Based Literature Mapping Tools

Examples: Connected Papers, Litmaps, Research Rabbit, VOSviewer, Open Knowledge Maps

These tools visualize relationships between papers based on citation networks (they are not truly AI-driven but are often mislabeled as such). Start with one or more “seed” papers, and they generate a web of connected research. This makes it easy to see academic links between authors, ideas, and keywords.

If you’ve ever tried to mentally map out “who’s citing whom” or how key ideas connect, these tools do the heavy lifting for you. A particularly helpful guide from Princeton University Library explains how to approach literature mapping manually and highlights tools that automate the process.

Among these tools, Litmaps stands out for its intuitive interface and ability to provide detailed connections, with Connected Papers coming in a close second. This ranking is supported by a comprehensive comparison of literature mapping tools, which also outlines their unique features and use cases.

In my experience, these tools are invaluable, particularly when working in a relatively new or interdisciplinary research area. They provide a clear, visual network of papers that helps uncover references you might have otherwise missed. Whether you’re just starting a literature review or diving into adjacent fields for inspiration, citation-based mapping tools can be a real game-changer.
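If you're curious what sits underneath these maps, here's a minimal sketch of the citation-network idea using the public Semantic Scholar Graph API and networkx. To be clear, this is not how Connected Papers or Litmaps actually build their maps; it's just a rough illustration of "seed paper in, citation graph out", and the example seed ID is a placeholder.

```python
# Rough sketch of the citation-network idea behind literature mapping tools
# (not any tool's actual method): take one seed paper, pull its reference
# list from the Semantic Scholar Graph API, and build a small graph that
# could then be visualised or grown further.
import requests
import networkx as nx

API = "https://api.semanticscholar.org/graph/v1/paper"

def reference_graph(seed_id: str) -> nx.DiGraph:
    """Build a seed -> reference graph for one paper."""
    resp = requests.get(
        f"{API}/{seed_id}/references",
        params={"fields": "title,year", "limit": 100},
        timeout=30,
    )
    resp.raise_for_status()
    graph = nx.DiGraph()
    graph.add_node(seed_id)
    for item in resp.json().get("data", []):
        ref = item.get("citedPaper") or {}
        if ref.get("paperId"):
            graph.add_node(ref["paperId"], title=ref.get("title"), year=ref.get("year"))
            graph.add_edge(seed_id, ref["paperId"])
    return graph

# Example usage (the DOI here is a placeholder; swap in your own seed paper):
# g = reference_graraph = reference_graph("DOI:10.1038/s41586-020-2649-2")
# print(g.number_of_nodes(), "papers,", g.number_of_edges(), "citation links")
```

Repeating this for each new paper in the graph is essentially what the mapping tools automate, along with the layout and interactive exploration that make them pleasant to use.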

2. Semantics-Based Literature Searching Tools

Examples: Elicit, Consensus, Scite, Perplexity

These are the genuinely AI-driven tools, powered by large language models (LLMs) such as the ones behind ChatGPT. They promise to find relevant papers and even summarize them to answer your research questions.

It sounds great in theory, but my experience with them has been less than stellar. Take Elicit as an example: when I tested it on questions in my niche research area—a field I know inside and out—it returned references that were either irrelevant or of poor quality (think low-impact journals). Worse, the summaries generated from these papers were fluffy and, at times, flat-out wrong.

Interestingly, opinions about these tools are highly polarized. Some researchers find them helpful, while others share my skepticism. A recent post in Nature highlights this divide, capturing both optimism and frustration. For a more critical take, a comparison of these tools concluded that while they might be useful for general topics, they lack the critical analysis needed for niche or cutting-edge research.
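For the curious, the core idea behind "semantic" search is ranking papers by embedding similarity to your question rather than by keyword overlap. Below is a toy sketch using the sentence-transformers library; it is not how Elicit, Consensus, or Scite work internally, and the model name and abstracts are placeholders.

```python
# Toy illustration of semantic (embedding-based) search: rank abstracts by
# similarity to a natural-language question instead of keyword matching.
# Not any tool's actual pipeline; model and abstracts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

abstracts = {
    "paper_A": "Pore-scale simulation of convective mixing in porous media.",
    "paper_B": "A survey of transformer architectures for text summarisation.",
    "paper_C": "Laboratory measurements of solute transport in fractured rock.",
}

question = "How does convective mixing behave in porous media?"

# Embed the question and all abstracts, then score by cosine similarity.
q_vec = model.encode(question, convert_to_tensor=True)
doc_vecs = model.encode(list(abstracts.values()), convert_to_tensor=True)
scores = util.cos_sim(q_vec, doc_vecs)[0].tolist()

# Print papers from most to least semantically relevant.
for (paper_id, _), score in sorted(zip(abstracts.items(), scores), key=lambda x: -x[1]):
    print(f"{paper_id}: {score:.2f}")
```

The catch, as my Elicit experience suggests, is that similarity is not quality: a fluently matched abstract from a weak paper ranks just as highly as one from a rigorous study, which is exactly where human judgement has to step back in.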

The Limitations of AI Tools for Literature Reviews

AI tools shine in areas where there’s an abundance of data and established patterns. But niche research–where the real excitement often lies–is a different story. These fields have fewer papers, making it harder for AI to distinguish high-quality research from noise. Without human judgement, the risk of relying on irrelevant or flawed references is high.

And then there’s the bigger issue: creativity. AI might be able to summarize, sort, or visualize data, but it doesn’t think critically or creatively. It can’t recognize a groundbreaking idea hiding in an unconventional paper–or see how two seemingly unrelated studies might combine to spark something new. That’s the realm of human expertise.

Combining AI and Human Effort: A Practical Approach

The key to using AI tools effectively is knowing when and how to combine them with traditional methods. Here's what I've found works best:

  • Exploratory Research: If you’re new to a research field, tools like Litmaps or Connected Papers can help you quickly grasp the breadth of the field and identify key papers. Semantics-based tools like Scite might also be worth exploring, but don’t rely on them exclusively.
  • In-Depth Research: Once you’ve established a solid understanding of your field, shift your focus to depth. This is where human effort becomes critical. Carefully analyze high-quality papers and delve into nuanced insights. AI tools can still help with exploratory searches in adjacent fields, but they should supplement—not replace—your own judgment.

Final Thoughts: Why Human Expertise Still Matters

At the end of the day, creativity and critical thinking remain the heart of impactful research. AI tools can assist by streamlining parts of the process, but they can’t replace the deep, thoughtful engagement required to truly understand and innovate.

If you’re a researcher, the best approach might be to treat AI tools as assistants, not experts. Use them to speed up repetitive tasks, but let your judgement guide the process. After all, the most groundbreaking ideas come from human curiosity and creativity, not algorithms.
