Explore Scripture
with AI
RAG-powered Bible Q&A, knowledge graphs, 3D verse visualization, and interactive maps — all running locally on your machine.
What BibleLLM Offers
Six powerful tools working together — from AI-powered Q&A to 3D visualization. All running locally on your machine.
AI Chat
ChatGPT-style interface with streaming responses. Every answer is grounded in scripture with verse citations — no hallucinated theology.
3D Bible Universe
~31,000 verses rendered as navigable stars in 3D space using Three.js. Explore scripture like never before — zoom, rotate, and discover connections.
Knowledge Graph
Obsidian-style interactive graph of people, places, themes, and events. See how biblical concepts interconnect across the entire text.
Bible Search
Combined keyword + semantic search across themes, people, and locations. Find passages by meaning, not just exact words.
Bible Map
Interactive map of biblical locations with events and verse references. Trace journeys, battles, and the spread of the early church.
Timeline Explorer
Chronological journey from Creation through the Early Church. Navigate biblical history with events, dates, and scripture links.
See It in Action
Ask a question and see how BibleLLM responds with scripture-grounded answers and verse citations. This is a preview — the real app runs locally with a live LLM.
Ask a question about the Bible
Try one of the suggestions below
Ready to run this yourself? See setup below or view the full source.
How It Works
BibleLLM uses Retrieval-Augmented Generation (RAG) to ground every answer in actual scripture — the model is prompted to answer only from the retrieved text, keeping theology tied to the verses it was given.
Ask a Question
Type any question about the Bible in natural language.
Embed Query
Your question is converted into a vector embedding using BAAI/bge-small-en.
Vector Search
ChromaDB searches ~93k verse embeddings to find the most relevant passages.
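What the vector-search step does can be illustrated with a toy in-memory index — cosine similarity over a few hand-made vectors. This is a sketch of the idea, not ChromaDB's actual implementation: the verse IDs and 3-dimensional vectors below are made up stand-ins for the real ~93k-verse, 384-dimensional bge-small-en index.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, index, k=2):
    # Rank every stored verse vector against the query, best match first.
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [verse_id for verse_id, _ in scored[:k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
index = {
    "John 3:16":   [0.9, 0.1, 0.0],
    "Genesis 1:1": [0.1, 0.9, 0.1],
    "Psalm 23:1":  [0.2, 0.2, 0.9],
}

print(top_k([0.8, 0.2, 0.1], index))  # the two verses nearest the query vector
```

ChromaDB performs the same nearest-neighbor ranking, just with an optimized index instead of a linear scan.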
Build Prompt
Retrieved verses are injected into a prompt template with your question.
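A minimal sketch of the prompt-building step. The `build_prompt` helper and its template wording are illustrative — the app's real template may differ — but the shape is the same: retrieved verses become a context block, and the model is instructed to stay inside it.

```python
def build_prompt(question, verses):
    # Each retrieved verse arrives as a (reference, text) pair; the template
    # tells the model to answer only from the context it is given.
    context = "\n".join(f"[{ref}] {text}" for ref, text in verses)
    return (
        "Answer the question using ONLY the scripture below. "
        "Cite verse references for every claim.\n\n"
        f"Scripture:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

verses = [("John 3:16", "For God so loved the world...")]
print(build_prompt("What does God promise?", verses))
```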
Stream via Ollama
The local LLM (llama3:8b) generates an answer grounded in the retrieved text.
Answer + Citations
You receive a streaming response with specific verse citations for every claim.
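The last two steps can be sketched together. The generator below fakes the token stream (the real app streams from Ollama) and appends a citation line built from the retrieved verse references — the `stream_answer` helper is hypothetical, not the app's actual code.

```python
def stream_answer(token_stream, references):
    # Accumulate streamed tokens into the final answer, then append
    # one citation per retrieved verse so every claim is traceable.
    answer = "".join(token_stream)
    citations = "; ".join(references)
    return f"{answer}\n\nSources: {citations}"

# A plain iterator stands in for the LLM's streamed token chunks.
fake_stream = iter(["God ", "so ", "loved ", "the ", "world."])
print(stream_answer(fake_stream, ["John 3:16", "Romans 5:8"]))
```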
# The RAG pipeline in 5 lines
query_embedding = embed(user_question)
relevant_verses = chromadb.query(query_embedding, n=10)
prompt = build_prompt(user_question, relevant_verses)
response = ollama.generate(model="llama3:8b", prompt=prompt)
stream(response + citations)

Tech Stack
Built with modern, battle-tested technologies. ~93k verses across KJV, WEB, and ASV translations from HuggingFace.
Frontend
Backend
AI Runtime
Data & Search
Visualization
State
Get It Running
BibleLLM runs entirely on your machine. Choose your setup path and be exploring scripture with AI in minutes.
System Requirements
OS: macOS, Linux, or Windows (WSL2)
RAM: 8GB+ recommended (for llama3:8b)
Disk: ~10GB (model + data)
Quick Start — 3 Commands
git clone https://github.com/blakeschafer/biblellm.git
cd biblellm
docker compose up

First-Time Data Pipeline
After containers are running, initialize the Bible dataset and vector embeddings. This runs once and takes approximately 60 minutes.
docker exec -it biblellm-backend python -m scripts.pipeline

The data pipeline downloads ~93k verses (KJV, WEB, ASV) from HuggingFace, generates embeddings, and loads them into ChromaDB. This is a one-time operation.