KerdosInfrasoft
Building Tomorrow

Available on PyPI · v0.2.1

LLM Training & RAG in One Package.

pip install kerdosai — the official Python SDK for building RAG pipelines, fine-tuning HuggingFace LLMs, and deploying enterprise chat UIs without writing frontend code.

Python ≥ 3.8 · MIT License · v0.2.1 · HuggingFace · FAISS · Gradio
Open-source & free · CPU-friendly, no GPU required · Enterprise-ready
pip install kerdosai
pip install "kerdosai[all]"
Capabilities

Everything in One Package

From document ingestion to LLM fine-tuning to production chat UIs — all from a single pip install.

kerdosai.rag
Full RAG Pipeline

End-to-end Retrieval-Augmented Generation: document loading, FAISS indexing, and LLM answering out of the box.

PDF · DOCX · CSV
Multi-format Ingestion

Index PDF, DOCX, TXT, Markdown, and CSV files via the KnowledgeBase API — all parsed and chunked automatically.

HuggingFace
LLM Fine-Tuning

Fine-tune any HuggingFace model on your domain data with a clean, high-level KerdosAgent API.

Streaming
Streaming Chat

RAGAgent supports both streaming and blocking chat with full conversation history maintained across turns.
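The streaming-versus-blocking pattern can be sketched in plain Python. Note this is an illustrative sketch only: `ChatSession`, `chat_stream`, and the echo reply are hypothetical stand-ins, not kerdosai's actual API.

```python
# Hypothetical sketch of streaming vs. blocking chat with shared history.
# ChatSession is illustrative only -- it is not kerdosai's RAGAgent.

class ChatSession:
    def __init__(self):
        self.history = []  # conversation history kept across turns

    def chat_stream(self, message):
        """Streaming mode: yield the reply piece by piece."""
        self.history.append(("user", message))
        reply = f"Echo: {message}"  # stand-in for a real LLM call
        for token in reply.split():
            yield token + " "
        self.history.append(("assistant", reply))

    def chat(self, message):
        """Blocking mode: consume the same stream into one string."""
        return "".join(self.chat_stream(message)).strip()

session = ChatSession()
print(session.chat("hello"))   # blocking call returns the full reply
print(len(session.history))    # history grows with every turn
```

The design point is that blocking chat is just the streaming generator fully consumed, so both modes share one code path and one history list.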

One-line deploy
Gradio Integration

Deploy an enterprise-grade Chat UI with a single line: deployment_type="gradio-rag". Zero frontend code required.

kerdosai rag-chat
CLI Support

Launch an on-premise RAG UI from the terminal with `kerdosai rag-chat`. No Python code needed.

Code Examples

Start in 3 Lines

Real code — no boilerplate, no configuration files.

rag_example.py
from kerdosai.rag import KnowledgeBase, RAGAgent

# 1. Index your documents
kb = KnowledgeBase().index_documents([
    "report.pdf",
    "policy.docx",
    "notes.txt",
])

# 2. Create the agent
agent = RAGAgent(knowledge_base=kb)

# 3. Stream answers grounded in your docs
for chunk in agent.chat("What are the key findings?"):
    print(chunk, end="", flush=True)

Answers are grounded strictly in your uploaded documents — no hallucinated content pulled from the open web.

How It Works

The RAG Pipeline Under the Hood

kerdosai wires up the full pipeline automatically — you just provide the documents and ask questions.

Load Docs
PDF / DOCX / TXT
Parse & Chunk
512 chars, 64 overlap
Embed
Sentence Transformers
FAISS Index
In-memory vector store
Top-K Retrieval
Cosine similarity
LLM Answer
Grounded response
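The chunking and retrieval stages above can be sketched in pure Python. This is a minimal illustration of the idea, not kerdosai's implementation: the real pipeline uses Sentence Transformers embeddings and a FAISS index, while here toy vectors stand in. The 512-character / 64-overlap defaults mirror the parameters listed above; all function names are hypothetical.

```python
import math

def chunk_text(text, size=512, overlap=64):
    """Split text into fixed-size chunks where each chunk repeats the
    last `overlap` characters of the previous one (512/64 as above)."""
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, chunk_vecs, k=3):
    """Top-K retrieval: indices of chunks most similar to the query."""
    scored = sorted(enumerate(chunk_vecs),
                    key=lambda iv: cosine(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]
```

In the real pipeline the retrieved chunks are then passed to the LLM as context, which is what grounds the final answer in your documents.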
Dependencies

Battle-Tested Open-Source Stack

FAISS · Sentence Transformers · PyMuPDF · python-docx · Gradio · HuggingFace Hub · Tenacity · PyTorch ≥ 2.0 · Python ≥ 3.8
Use Cases

Built for Enterprise

kerdosai is purpose-built for teams that handle large volumes of internal documents and need private, auditable AI answers.

Healthcare

Clinical docs, protocols, patient FAQs

Financial Services

Compliance docs, annual reports, policy Q&A

Legal

Contract analysis, case law search

Enterprise IT

Internal wikis, runbooks, knowledge bases

Open-source · MIT Licensed

Start Building With kerdosai Today

Install the package, index your documents, and have a production-grade RAG pipeline running in minutes. Need a private deployment or custom fine-tuning? Talk to our team.