Researchers from Moore Threads AI introduce TurboRAG, an approach that optimizes the inference paradigm of retrieval-augmented generation (RAG) systems by pre-computing and storing the key-value (KV) caches of documents offline. Instead of recomputing these KV caches at every inference, TurboRAG retrieves the pre-computed caches for an efficient prefill, eliminating repeated online computation. This reduces computational overhead and speeds up responses without sacrificing accuracy. TurboRAG also addresses the attention mask matrices and positional embeddings so that the pre-computed KV caches work with most existing large language models (LLMs) without modifications to the model architecture.
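A minimal sketch of the offline step might look like the following, assuming a Hugging Face causal LM; the model name, the cache-file layout, and the `precompute_kv_cache` helper are illustrative choices, not the authors' implementation.

```python
# Sketch of TurboRAG's offline phase: run one prefill pass per document
# chunk and persist the resulting KV cache to disk. Model choice and
# file layout are assumptions made for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def precompute_kv_cache(chunk_text: str, cache_path: str) -> int:
    """Prefill one chunk in isolation and save its per-layer KV tensors."""
    ids = tokenizer(chunk_text, return_tensors="pt").input_ids
    past = model(ids, use_cache=True).past_key_values
    if hasattr(past, "to_legacy_cache"):  # newer transformers return a Cache object
        past = past.to_legacy_cache()     # tuple of per-layer (key, value) tensors
    torch.save({"input_ids": ids, "past_key_values": past}, cache_path)
    return ids.shape[1]  # chunk length in tokens, needed later for position IDs
```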
TurboRAG is structured around a two-phase approach. In the offline phase, the KV caches for document chunks are computed once and stored, removing that computation from the online inference path. In the online phase, when a query arrives, TurboRAG retrieves the pre-computed KV caches of the relevant chunks and combines them with the user query to generate a response. This hybrid paradigm relies on independent attention masks, which prevent unnecessary cross-document attention, and relative position embeddings, which preserve positional relationships within each document. TurboRAG is designed to drop into standard RAG pipelines, allowing adoption without major infrastructure changes.
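The online side can then be sketched as below. The concatenation helper and the position-ID handling reflect one reading of the paper's description (the merged cache is effectively block-diagonal because chunks never attended to one another, and the query's positions continue after the combined prefix); they are not the authors' code.

```python
# Sketch of the online phase: merge the retrieved chunks' cached KVs into
# one prefix and prefill only the user query against it. Reuses `model`
# and `tokenizer` from the offline sketch above.
import torch

def concat_kv_caches(caches):
    """Concatenate per-layer (key, value) tensors along the sequence axis.

    Each cache is a tuple of layers, each layer a (key, value) pair shaped
    (batch, heads, seq_len, head_dim). Because every chunk was prefilled in
    isolation, the merged cache contains no cross-chunk attention -- the
    effect of TurboRAG's independent attention masks.
    """
    merged = []
    for layer in zip(*caches):
        merged.append((
            torch.cat([k for k, _ in layer], dim=2),
            torch.cat([v for _, v in layer], dim=2),
        ))
    return tuple(merged)

@torch.no_grad()
def prefill_query(query: str, chunk_caches, chunk_lens):
    """Prefill the query on top of the merged document cache.

    `chunk_caches` are the legacy (key, value) tuples loaded from disk;
    `chunk_lens` are the token counts returned by precompute_kv_cache.
    """
    prefix_len = sum(chunk_lens)
    q_ids = tokenizer(query, return_tensors="pt").input_ids
    # The query's position IDs continue after the combined prefix, as if
    # the chunks had been laid out back to back in a single context.
    pos = torch.arange(prefix_len, prefix_len + q_ids.shape[1]).unsqueeze(0)
    return model(q_ids,
                 past_key_values=concat_kv_caches(chunk_caches),
                 position_ids=pos,
                 use_cache=True)
```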
The experimental results demonstrate TurboRAG's effectiveness, reducing time to first token (TTFT) by up to 9.4x over conventional RAG systems, with an average speedup of 8.6x. Importantly, TurboRAG's accuracy remained comparable to that of traditional RAG approaches across multiple benchmarks. TurboRAG also cuts the cost of KV cache computation by over 98%, freeing resources for larger batch sizes and higher throughput. Fine-tuning experiments confirmed that TurboRAG maintains model accuracy even under challenging conditions, such as noisy retrieval environments. Both variants of TurboRAG, one using composite and one using reordered positional embeddings, proved effective, with the reordered variant performing slightly better.
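The difference between the two positional schemes can be made concrete with a toy example; the lists below reconstruct the idea from the paper's description for two cached chunks and a short query, with names chosen here for illustration.

```python
# Toy illustration of the two position-ID schemes, for two cached chunks
# of lengths 5 and 3 followed by a 4-token query (a reconstruction of the
# paper's description, not its code).
chunk_lens, query_len = [5, 3], 4
prefix_len = sum(chunk_lens)

# Composite: each chunk keeps the 0-based positions it was cached with;
# only the query is offset past the combined prefix.
composite = [list(range(n)) for n in chunk_lens]
composite.append(list(range(prefix_len, prefix_len + query_len)))
# -> [[0, 1, 2, 3, 4], [0, 1, 2], [8, 9, 10, 11]]

# Reordered: chunks receive consecutive, non-overlapping ranges, as if
# concatenated into a single document, and the query continues from there.
reordered, offset = [], 0
for n in chunk_lens + [query_len]:
    reordered.append(list(range(offset, offset + n)))
    offset += n
# -> [[0, 1, 2, 3, 4], [5, 6, 7], [8, 9, 10, 11]]
```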
In conclusion, TurboRAG offers a practical solution to the latency inherent in RAG systems by decoupling the computationally expensive KV cache generation from online inference. By leveraging pre-computed KV caches and adjusted attention mechanisms, TurboRAG substantially improves response speed and efficiency while preserving accuracy. These improvements make TurboRAG a compelling option for latency-sensitive deployments and could broaden RAG's use in real-time and large-scale scenarios.