Releases: Bessouat40/RAGLight
v3.2.0
What's Changed
- Improve/UI by @Bessouat40 in #131
- add reformulation by @Bessouat40 in #133
- improve history management by @Bessouat40 in #134
Full Changelog: 3.1.1...3.2.0
v3.1.1
Fix CLI bug
v3.1.0
Added
- `raglight serve --ui` — launches a Streamlit chat interface alongside the REST API
- `--ui-port` option to configure the Streamlit port (default: 8501)
- Chat tab with full conversation history and markdown rendering
- Upload tab to ingest files or a server-side directory directly from the browser
- Both processes share the same configuration and shut down together on Ctrl+C
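For reference, launching both processes from the notes above looks like this; 8501 is the documented default port, shown explicitly only for clarity.

```shell
# Start the REST API and the Streamlit chat UI together.
# Both shut down on Ctrl+C.
raglight serve --ui --ui-port 8501
```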
v3.0.0
New Features
REST API Server (#126)
- New raglight serve CLI command to deploy a RAG pipeline as a REST API without writing any Python code
- Built on FastAPI with automatic Swagger UI at /docs
- Fully configurable via RAGLIGHT_* environment variables (LLM provider, embeddings, vector store, etc.)
- Endpoints: GET /health, POST /generate, POST /ingest, POST /ingest/upload, GET /collections
- File upload support via multipart form-data
- Docker Compose example for one-command deployment
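A minimal sketch of calling the documented endpoints once the server is running. The server port (8000) and the JSON field name in the `/generate` body are assumptions, not confirmed by the release notes; check the Swagger UI at `/docs` for the actual schema.

```shell
# Health check
curl http://localhost:8000/health

# Ask a question (request body shape is an assumption)
curl -X POST http://localhost:8000/generate \
     -H "Content-Type: application/json" \
     -d '{"question": "What is RAGLight?"}'

# Upload a file via multipart form-data
curl -X POST http://localhost:8000/ingest/upload \
     -F "file=@./docs/manual.pdf"

# List collections
curl http://localhost:8000/collections
```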
Hybrid Search (#127)
- New hybrid retrieval combining BM25 (keyword search) and semantic (vector) search
- Improves retrieval quality on sparse or technical documents where pure vector search falls short
- Configurable via the Builder API and RAGConfig
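To illustrate the idea behind hybrid retrieval, here is a toy sketch of fusing a BM25 ranking with a vector-search ranking via reciprocal rank fusion (RRF). The fusion method, the constant `k=60`, and the document ids are illustrative assumptions; RAGLight's actual combination strategy may differ.

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    Each document scores 1 / (k + rank + 1) per list it appears in;
    documents ranked well by both keyword and semantic search rise
    to the top of the fused list.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["doc3", "doc1", "doc2"]    # keyword (BM25) order
vector_ranking = ["doc1", "doc2", "doc3"]  # semantic (embedding) order
fused = rrf_fuse([bm25_ranking, vector_ranking])
```

Here `doc1` wins overall: it is near the top of both lists, even though neither search method ranked it first in isolation.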
v2.10.1
Update dependencies
Remove smolagents dependency.
v2.10.0
What's Changed
- [BREAKING] Remove the old RAT feature
- [FEATURE] You can now add one or more GitHub repositories as knowledge sources when using RAGLight's CLI, letting you chat with your favorite repos through the framework.
v2.9.0
Dependencies update
Update dependencies to avoid vulnerabilities.
Reranking on classic RAG
Improve the reranking feature for the classic RAG pipeline.
Memory
Add memory support for the RAG feature.
New PDF parser
Added the option to use a new PDF parser based on a VLM that can handle images inside PDF files when you index documents.
Refactor Agentic RAG Feature
Refactor of the Agentic RAG feature: it now uses the LangChain framework instead of smolagents.
v2.8.1
Fix: modify the RAG class for better context understanding.
v2.8.0
Custom Processors & VLM-Powered PDF Parsing
New Feature: Override Vector Store Processors
You can now override RAGLight's default document processors when indexing your data. This makes the ingestion pipeline fully customizable.
```python
custom_processors = {
    "pdf": VlmPDFProcessor(vlm),
    "py": MyCustomPythonProcessor(),
}
```

This allows you to plug in your own logic for any file extension and adapt RAGLight to advanced or domain-specific workflows.
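As a sketch of what such a custom processor might look like, here is a hypothetical `MyCustomPythonProcessor` that chunks a Python source file by top-level definitions. The method name `process` and the list-of-strings return type are assumptions about the processor interface, not RAGLight's actual contract.

```python
class MyCustomPythonProcessor:
    """Hypothetical processor: one chunk per top-level def/class."""

    def process(self, source: str):
        chunks, current = [], []
        for line in source.splitlines():
            # Start a new chunk at each top-level definition.
            if line.startswith(("def ", "class ")) and current:
                chunks.append("\n".join(current))
                current = []
            current.append(line)
        if current:
            chunks.append("\n".join(current))
        return chunks

chunks = MyCustomPythonProcessor().process(
    "def a():\n    pass\ndef b():\n    pass"
)
```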
New VLM-Enhanced PDF Processor
A new VlmPDFProcessor has been introduced. It:
- Extracts text and images from PDFs
- Converts images to base64
- Sends them to your Vision-Language Model (OpenAI, Mistral, Ollama…)
- Generates captions and embeds them as searchable content
This keeps visual information (diagrams, figures, schematics) inside the vector store and improves retrieval for technical documents.
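The image-to-base64 step above can be sketched as follows. The OpenAI-style vision message shape (`image_url` with a `data:` URI) is a common chat-completions convention and an assumption here; the exact payload RAGLight sends to each provider may differ.

```python
import base64

def image_to_vlm_message(image_bytes: bytes,
                         prompt: str = "Describe this figure."):
    """Wrap raw image bytes in a base64 data URI inside a vision message."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

msg = image_to_vlm_message(b"\x89PNG\r\n")
```

The caption returned by the VLM would then be indexed as searchable text alongside the PDF's extracted prose.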
v2.7.2
- Allow users to pass additional Ollama parameters (see the `options` parameter)
- Fix an issue with Mistral usage