This project implements a Retrieval-Augmented Generation (RAG) architecture using the Model Context Protocol (MCP). Follow the quick-start steps below.
- Install the required packages:

```
pip install -r requirements.txt
```

- Run the MCP server:

```
python mcp_rag.py
```

The server then starts and prints the URL it is listening on.

- Use the MCP Inspector in VS Code to connect to the server URL and view the responses.
- When you run the index_pdf_file MCP tool, the given PDF is split into a number of chunks and an embedding is computed for each chunk.
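The chunking step can be sketched as follows. This is a minimal illustration, not the actual implementation in mcp_rag.py; the chunk size and overlap values are assumptions chosen for the example.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks.

    Overlap keeps context that straddles a chunk boundary retrievable.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk would then be passed to an embedding model and stored
# alongside its vector in the index.
chunks = chunk_text("some long extracted PDF text " * 100)
```

In the real tool the text would first be extracted from the PDF (e.g. page by page) before chunking.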

- When you run the second MCP tool, rag_query, with your question, it returns an answer grounded in the indexed PDF.
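Under the hood, a RAG query typically embeds the question and retrieves the most similar chunks before asking the LLM to answer from them. A minimal sketch of that retrieval step, assuming cosine similarity over precomputed chunk embeddings (toy vectors here, not real model output):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Return indices of the k chunks most similar to the query."""
    ranked = sorted(
        range(len(chunk_vecs)),
        key=lambda i: cosine(query_vec, chunk_vecs[i]),
        reverse=True,
    )
    return ranked[:k]

# Toy embeddings: chunk 0 and chunk 2 point roughly the same way as the query.
chunk_vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
best = top_k([1.0, 0.0], chunk_vecs)
```

The retrieved chunks are then placed in the prompt so the model answers from the PDF rather than from its general knowledge.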

A sample response from the rag_query MCP tool:
"success": true,
"result": {
"content": [
{
"type": "text",
"text": "The document is about the challenges and advancements in generating question-answer (QA) pairs from text, specifically in the context of Natural Language Processing (NLP) and machine learning models. It discusses the difficulties of data collection, text length, and multilingual handling, but also highlights the achievements of a model that can work on different domains and handle multilingual text with an accuracy of 72%. The document also mentions the use of Prompt Engineering based on pipelining to improve the accuracy and performance of the model, and the evaluation of its performance using the BLEU score. \n\nIn summary, the document is about the development and evaluation of a model for generating QA pairs from text using NLP and machine learning techniques. \n\nAnswer to the user question: The document is about the development and evaluation of a model for generating QA pairs from text using NLP and machine learning techniques."
}
],
"isError": false,
"structuredContent": {
"result": "The document is about the challenges and advancements in generating question-answer (QA) pairs from text, specifically in the context of Natural Language Processing (NLP) and machine learning models. It discusses the difficulties of data collection, text length, and multilingual handling, but also highlights the achievements of a model that can work on different domains and handle multilingual text with an accuracy of 72%. The document also mentions the use of Prompt Engineering based on pipelining to improve the accuracy and performance of the model, and the evaluation of its performance using the BLEU score. \n\nIn summary, the document is about the development and evaluation of a model for generating QA pairs from text using NLP and machine learning techniques. \n\nAnswer to the user question: The document is about the development and evaluation of a model for generating QA pairs from text using NLP and machine learning techniques."
}
}
}
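A client can pull the answer out of a response shaped like the one above with a few lines of standard-library Python. The `raw` payload below is a shortened stand-in for the real response, used only to show the access path:

```python
import json

# Shortened stand-in for a rag_query response (same shape as above).
raw = """{
  "success": true,
  "result": {
    "content": [{"type": "text", "text": "answer..."}],
    "isError": false,
    "structuredContent": {"result": "answer..."}
  }
}"""

resp = json.loads(raw)
answer = None
# Only read the answer when the call succeeded and the tool reported no error.
if resp["success"] and not resp["result"]["isError"]:
    answer = resp["result"]["structuredContent"]["result"]
```

`structuredContent.result` and `content[0].text` carry the same answer text, so either field can be used.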