
Evolution of RAG: Baseline RAG, GraphRAG, and KAG

Enhancing LLM accuracy with structured knowledge, inverted indexes, and dynamic retrieval

5 min read · Mar 17, 2025


This article focuses on the evolution of RAG, so deep technical details are skipped to keep things from getting too geeky.

Using an LLM to power a chatbot is the most common application nowadays, but an LLM's answers are too generic to serve specialized application scenarios.

Therefore, Retrieval-Augmented Generation, aka RAG, was developed.

Simply put, the idea is to store the domain-specific data the LLM needs, so that the LLM can refer to it when answering.

Baseline RAG

A traditional RAG process is as follows.

Baseline RAG

Indexing, on the left side, means processing the data the LLM needs to refer to in advance: the data is first divided into chunks, and the chunks are then vectorized and stored in a vector database.
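As a rough sketch, the indexing flow above might look like the following. Note that `chunk`, `embed`, and the chunk size are illustrative names, not from the article, and a toy word-count vector stands in for a real embedding model and vector database:

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Step 1: divide the source data into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> dict[str, int]:
    """Step 2: 'vectorize' each chunk. A real system would call an
    embedding model; a word-count vector stands in here."""
    vec: dict[str, int] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

# Step 3: store each chunk alongside its vector -- a stand-in
# for a real vector database.
document = "RAG stores the domain data an LLM needs so it can refer to it when answering."
vector_db = [(c, embed(c)) for c in chunk(document)]
```

In practice the chunking strategy (size, overlap, sentence boundaries) and the embedding model are the main tuning knobs at this stage.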

When the user asks a question, the query also needs to be vectorized, and take…
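The query side described above can be sketched in the same toy setup: vectorize the query, compare it against the stored chunk vectors, and return the closest matches. Cosine similarity is a common choice here, though the article does not prescribe a specific metric; `retrieve` and the sample chunks are illustrative:

```python
import math

def embed(text: str) -> dict[str, int]:
    """Toy word-count 'embedding' (stand-in for a real model)."""
    vec: dict[str, int] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(n * b.get(w, 0) for w, n in a.items())
    na = math.sqrt(sum(n * n for n in a.values()))
    nb = math.sqrt(sum(n * n for n in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, vector_db: list[tuple[str, dict]], k: int = 2) -> list[str]:
    """Vectorize the query, then return the k most similar chunks."""
    qv = embed(query)
    ranked = sorted(vector_db, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy vector database: chunks paired with their vectors.
vector_db = [(c, embed(c)) for c in [
    "RAG stores domain data",
    "the cat sat on the mat",
    "LLMs answer questions",
]]
print(retrieve("how does RAG store data", vector_db, k=1))
# → ['RAG stores domain data']
```

The retrieved chunks are then placed into the LLM's prompt as reference context, which is the "augmented generation" half of RAG.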



Written by Chunting Wu

Architect at SHOPLINE. Experienced in system design, backend development, and data engineering.
