Learning GenAI RAG with LlamaIndex, Ollama and Elasticsearch


Free Download GenAI RAG with LlamaIndex, Ollama and Elasticsearch
Released 10/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 21 Lessons (1h 49m) | Size: 707 MB
Build a Generative AI platform with Retrieval-Augmented Generation using Elasticsearch, LlamaIndex, Ollama, and Mistral

Description
Retrieval-Augmented Generation (RAG) is the next practical step after semantic indexing and search. In this course, you'll build a complete, local-first RAG pipeline that ingests PDFs, stores chunked vectors in Elasticsearch, retrieves the right context, and generates grounded answers with the Mistral LLM running locally via Ollama.
We'll work end-to-end on a concrete scenario: searching student CVs to answer questions like "Who worked in Ireland?" or "Who has Spark experience?". You'll set up a Dockerized stack (FastAPI, Elasticsearch, Kibana, Streamlit, Ollama) and wire it together with LlamaIndex so you can focus on the logic, not boilerplate. Along the way, you'll learn where RAG shines, where it struggles (precision/recall, hallucinations), and how to design for production.
By the end, you'll have a working app: upload PDFs → extract text → produce clean JSON → chunk & embed → index into Elasticsearch → query via Streamlit → generate answers with Mistral, fully reproducible on your machine.
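To make that flow concrete, here is a minimal end-to-end sketch using current LlamaIndex packages. The index name, the embedding model (BAAI/bge-small-en-v1.5), and the local URLs are illustrative assumptions, not the course's exact choices.

Code:
# Minimal RAG wiring: index CV text into Elasticsearch, answer with
# Mistral via Ollama. Names and URLs below are assumptions, not the
# course's settings.
from llama_index.core import Document, Settings, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.elasticsearch import ElasticsearchStore

Settings.llm = Ollama(model="mistral", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

vector_store = ElasticsearchStore(index_name="cv_chunks", es_url="http://localhost:9200")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

docs = [Document(text="Jane Doe. Data engineer, Dublin, Ireland. Spark, Kafka.",
                 metadata={"person": "Jane Doe"})]
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

print(index.as_query_engine(similarity_top_k=5).query("Who has Spark experience?"))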
What will I learn?
From Search to RAG
Revisit semantic search and extend it to RAG: retrieve relevant chunks first, then generate grounded answers. See how LlamaIndex connects your data to the LLM and why chunk size and overlap matter for recall and precision.
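As a small illustration of those knobs, here is how chunking is typically configured through a LlamaIndex node parser; the sizes are starting points to experiment with, not the course's values.

Code:
# Split a document into overlapping chunks before embedding. Smaller
# chunks tend to help recall; larger ones preserve context. Values
# here are illustrative.
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

doc = Document(text="Jane Doe. Data engineer, Dublin, Ireland. Spark, Kafka.",
               metadata={"person": "Jane Doe"})
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=64)
nodes = splitter.get_nodes_from_documents([doc])
print(len(nodes), "chunk(s):", nodes[0].text[:80])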
Building the Pipeline
Use FastAPI to accept PDF uploads and trigger the ingestion flow: text extraction, JSON shaping, chunking, embeddings, and indexing into Elasticsearch, all orchestrated with LlamaIndex to minimize boilerplate.
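A hypothetical endpoint for that step might look like this; the /upload route and the ingest_pdf hook are illustrative names, not the course's actual API.

Code:
# Hypothetical upload endpoint: accept a PDF and hand it to the
# ingestion flow (extract -> JSON -> chunk -> embed -> index).
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def ingest_pdf(pdf_bytes: bytes, filename: str) -> None:
    """Hypothetical hook for extraction, chunking, embedding, indexing."""
    ...

@app.post("/upload")
async def upload_cv(file: UploadFile = File(...)):
    pdf_bytes = await file.read()
    ingest_pdf(pdf_bytes, filename=file.filename)
    return {"status": "indexed", "filename": file.filename}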
Working with Elasticsearch
Create an index for CV chunks with vectors and metadata. Understand similarity search vs. keyword queries, how vector fields are stored, and how to explore documents and scores with Kibana.
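For a sense of what sits underneath (LlamaIndex's ElasticsearchStore creates and manages its own mapping, so this hand-rolled version is only illustrative; dims=384 matches the example embedding model above, and field names are assumptions):

Code:
# Hand-rolled view of a CV-chunk index: text, keyword metadata, and a
# dense_vector field, plus a kNN query (Elasticsearch 8.x syntax).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.indices.create(index="cv_chunks", mappings={
    "properties": {
        "content":   {"type": "text"},
        "person":    {"type": "keyword"},
        "embedding": {"type": "dense_vector", "dims": 384,
                      "index": True, "similarity": "cosine"},
    },
})

hits = es.search(index="cv_chunks", knn={
    "field": "embedding",
    "query_vector": [0.1] * 384,  # stand-in for a real query embedding
    "k": 5,
    "num_candidates": 50,
})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("person"))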
Streamlit Chat Interface
Build a simple Streamlit UI to ask questions in natural language. Toggle debug mode to inspect which chunks supported the answer, and use metadata filters (e.g., by person) to boost precision for targeted queries.
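As a rough shape for that UI (the widget layout is illustrative, and the my_rag module holding the query engine is a hypothetical stand-in for the pipeline built above):

Code:
# Chat UI with a debug toggle that shows which chunks supported the answer.
import streamlit as st

from my_rag import query_engine  # hypothetical module exposing the engine built above

st.title("CV Search")
debug = st.sidebar.checkbox("Debug: show supporting chunks")

if question := st.chat_input("Ask about the CVs, e.g. 'Who worked in Ireland?'"):
    st.chat_message("user").write(question)
    response = query_engine.query(question)
    st.chat_message("assistant").write(str(response))
    if debug:
        for node in response.source_nodes:  # NodeWithScore objects
            with st.expander(f"chunk (score: {node.score})"):
                st.write(node.node.get_content())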
Ingestion & JSON Shaping
Extract raw text from PDFs using PyMuPDF, then have Ollama (Mistral) produce lossless JSON (escaped characters, preserved structure). Handle occasional JSON formatting issues with retries and strict prompting.
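A sketch of that extraction-and-shaping loop (the prompt wording, JSON keys, and retry count are assumptions; the PyMuPDF and ollama client calls are real APIs):

Code:
# Pull raw text from a PDF with PyMuPDF, then ask Mistral (via Ollama's
# JSON mode) to reshape it, retrying when the output is not valid JSON.
import json

import fitz  # PyMuPDF
import ollama

def pdf_to_json(pdf_bytes: bytes, retries: int = 3) -> dict:
    doc = fitz.open(stream=pdf_bytes, filetype="pdf")
    raw = "\n".join(page.get_text() for page in doc)
    prompt = ("Convert this CV to JSON with keys 'name', 'locations', 'skills'. "
              "Return only valid JSON with special characters escaped:\n\n" + raw)
    for _ in range(retries):
        reply = ollama.chat(model="mistral", format="json",
                            messages=[{"role": "user", "content": prompt}])
        try:
            return json.loads(reply["message"]["content"])
        except json.JSONDecodeError:
            continue  # strict prompt plus retry handles occasional bad JSON
    raise ValueError("No valid JSON after retries")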
Improving Answer Quality
Learn practical levers to improve results (two of them are sketched in code after this list):
Tune chunk size/overlap and top-K retrieval
Add richer metadata (roles, skills, locations) for hybrid filtering
Experiment with embedding models and stricter prompts
Consider structured outputs (e.g., JSON lists) for list-style questions
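Two of those levers in LlamaIndex terms (the 'person' key is an assumed metadata field, and index is the VectorStoreIndex from the end-to-end sketch above):

Code:
# Widen retrieval and pre-filter by metadata so only one person's
# chunks are considered. 'person' is an assumed metadata key.
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

query_engine = index.as_query_engine(
    similarity_top_k=8,  # retrieve more candidate chunks
    filters=MetadataFilters(filters=[ExactMatchFilter(key="person", value="Jane Doe")]),
)
print(query_engine.query("Does this person have Spark experience?"))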
Dockerized Setup
Use Docker Compose to spin up the full stack (FastAPI, Elasticsearch, Kibana, Streamlit, and Ollama with Mistral) so you can run the entire system locally with consistent configuration.
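An illustrative docker-compose.yml skeleton for that stack; service names, image versions, and ports are assumptions, not the course's exact file.

Code:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # fine for local dev only
    ports: ["9200:9200"]
  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    ports: ["5601:5601"]
    depends_on: [elasticsearch]
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
  api:
    build: ./api        # FastAPI ingestion service
    ports: ["8000:8000"]
    depends_on: [elasticsearch, ollama]
  ui:
    build: ./ui         # Streamlit chat front end
    ports: ["8501:8501"]
    depends_on: [api]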
Bonus: Production Patterns
Explore how this prototype maps to a scalable design (the backend-swap idea is sketched after this list):
Store uploads in a data lake (e.g., S3) and trigger processing with a message queue (Kafka/SQS)
Autoscale workers for chunking and embeddings
Swap LLM backends (e.g., Bedrock/OpenAI) behind clean APIs
Persist chat history in MongoDB/Postgres and replace Streamlit with a React/Next.js UI
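One way to read "clean APIs" here, as a sketch rather than the course's code: hide the model behind a small interface so the local Ollama backend can be swapped for a hosted one without touching callers.

Code:
# Hypothetical backend seam: callers depend on generate(), not on any
# single provider. A Bedrock or OpenAI class would implement the same
# Protocol and drop in unchanged.
from typing import Protocol

import ollama

class LLMBackend(Protocol):
    def generate(self, prompt: str) -> str: ...

class OllamaBackend:
    def __init__(self, model: str = "mistral"):
        self.model = model

    def generate(self, prompt: str) -> str:
        reply = ollama.chat(model=self.model,
                            messages=[{"role": "user", "content": prompt}])
        return reply["message"]["content"]

def answer(question: str, context: str, llm: LLMBackend) -> str:
    return llm.generate(f"Context:\n{context}\n\nQuestion: {question}")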
Homepage
Code:
https://learndataengineering.com/p/genai-rag-with-llamaindex-ollama


Code:
RapidGator
http://peeplink.in/48fdc2d16127
Fikper
https://fikper.com/bHBwGOTCpU/kgxum.GenAI.RAG.with.LlamaIndex.Ollama.and.Elasticsearch.rar.html
No Password - Links are Interchangeable
 