Latest Research Papers
2024-09-12
arXiv
Enhancing Q&A Text Retrieval with Ranking Models: Benchmarking, fine-tuning and deploying Rerankers for RAG
This paper benchmarks publicly available ranking models for text retrieval in question-answering tasks and introduces a new model, NV-RerankQA-Mistral-4B-v3, which improves pipeline accuracy by roughly 14%. It also discusses the trade-offs between model size, accuracy, and system requirements in real-world applications.
Ranking models play a crucial role in enhancing the overall accuracy of text
retrieval systems. These multi-stage systems typically use either dense
embedding models or sparse lexical indices to retrieve relevant passages for
a given query, followed by ranking models that refine the ordering of the
candidate passages by their relevance to the query.
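To make the two-stage flow concrete, here is a minimal Python sketch; the random embeddings and the word-overlap scorer are stand-ins for a real dense embedding model (or sparse index such as BM25) and a cross-encoder reranker like the one the paper introduces, not the paper's actual implementation:

```python
import numpy as np

# Toy corpus; stage-1 embeddings and the stage-2 scorer below are stubs.
passages = [
    "Rerankers refine the ordering of candidate passages.",
    "Dense embedding models map text into a shared vector space.",
    "Sparse lexical indices match queries by term overlap.",
]
query = "how do rerankers refine passage ordering"

rng = np.random.default_rng(0)
passage_vecs = rng.normal(size=(len(passages), 8))  # stand-in embeddings
query_vec = rng.normal(size=8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stage 1: retrieve top-k candidates by embedding similarity.
k = 2
sims = [cosine(query_vec, v) for v in passage_vecs]
candidates = sorted(range(len(passages)), key=lambda i: sims[i], reverse=True)[:k]

# Stage 2: rerank candidates with a stubbed pairwise scorer. A real
# cross-encoder runs a transformer jointly over each (query, passage) pair.
def rerank_score(q: str, p: str) -> float:
    return float(len(set(q.lower().split()) & set(p.lower().split())))

reranked = sorted(candidates, key=lambda i: rerank_score(query, passages[i]),
                  reverse=True)
for i in reranked:
    print(passages[i])
```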
This paper benchmarks various publicly available ranking models and examines
their impact on ranking accuracy. We focus on text retrieval for
question-answering tasks, a common use case for Retrieval-Augmented Generation
systems. Our evaluation benchmarks include models, some of which are
commercially viable for industrial applications.
We introduce a state-of-the-art ranking model, NV-RerankQA-Mistral-4B-v3,
which achieves a significant accuracy increase of ~14% compared to pipelines
with other rerankers. We also provide an ablation study comparing the
fine-tuning of ranking models with different sizes, losses, and
self-attention mechanisms.
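As a rough illustration of the kinds of objectives such a loss ablation compares (an assumption for exposition; the paper's exact losses are not listed here), the following PyTorch sketch contrasts a pointwise binary cross-entropy objective with a listwise contrastive objective over one query's candidate passages:

```python
import torch
import torch.nn.functional as F

# Hypothetical cross-encoder logits for one query scored against four
# candidate passages; by construction, index 0 is the relevant (positive) one.
logits = torch.tensor([2.1, 0.3, -0.5, 0.8])

# Pointwise objective: score each (query, passage) pair independently
# with binary cross-entropy against 0/1 relevance labels.
labels = torch.tensor([1.0, 0.0, 0.0, 0.0])
pointwise = F.binary_cross_entropy_with_logits(logits, labels)

# Listwise/contrastive objective: softmax over the candidate list and
# maximize the likelihood of the positive passage (class index 0).
listwise = F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

print(f"pointwise BCE: {pointwise.item():.4f}")
print(f"listwise CE:   {listwise.item():.4f}")
```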
Finally, we discuss the challenges of deploying text retrieval pipelines with
ranking models in real-world industry applications, in particular the
trade-offs among model size, ranking accuracy, and system requirements such as
indexing and serving latency/throughput.