Open Source · Local-First · Free Forever

Explore Scripture
with AI

RAG-powered Bible Q&A, knowledge graphs, 3D verse visualization, and interactive maps — all running locally on your machine.

Knowledge Graph
AI Chat

What BibleLLM Offers

Six powerful tools working together — from AI-powered Q&A to 3D visualization. All running locally on your machine.

AI Chat

ChatGPT-style interface with streaming responses. Every answer is grounded in scripture with verse citations — no hallucinated theology.

3D Bible Universe

~31,000 verses rendered as navigable stars in 3D space using Three.js. Explore scripture like never before — zoom, rotate, and discover connections.

Knowledge Graph

Obsidian-style interactive graph of people, places, themes, and events. See how biblical concepts interconnect across the entire text.

Bible Search

Combined keyword + semantic search across themes, people, and locations. Find passages by meaning, not just exact words.
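A common way to combine keyword and semantic signals is a weighted blend of the two scores. Here is a toy sketch of that idea (the `alpha` weight and `hybrid_score` helper are illustrative assumptions, not BibleLLM's actual ranking code):

```python
# Toy hybrid ranking: blend keyword overlap with a semantic similarity score.
# `alpha` is a hypothetical weight, not a real BibleLLM parameter.
def hybrid_score(query_terms: set, doc_terms: set,
                 semantic_score: float, alpha: float = 0.5) -> float:
    # Jaccard overlap captures exact-word matches...
    overlap = len(query_terms & doc_terms) / max(len(query_terms | doc_terms), 1)
    # ...while the semantic score lets meaning-only matches still rank well.
    return alpha * overlap + (1 - alpha) * semantic_score

# No shared keywords, but a strong semantic match still scores:
print(hybrid_score({"shepherd"}, {"pasture", "flock"}, semantic_score=0.9))
```

This is why a query like "shepherd" can surface Psalm 23 even when the passage uses different words.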

Bible Map

Interactive map of biblical locations with events and verse references. Trace journeys, battles, and the spread of the early church.

Timeline Explorer

Chronological journey from Creation through the Early Church. Navigate biblical history with events, dates, and scripture links.

Simulated Demo

See It in Action

Ask a question and see how BibleLLM responds with scripture-grounded answers and verse citations. This is a preview — the real app runs locally with a live LLM.

BibleLLM Chat · llama3:8b


Ready to run this yourself? See setup below or view the full source.

How It Works

BibleLLM uses Retrieval-Augmented Generation (RAG) so that every answer is grounded in actual scripture: the model is constrained to the retrieved verses, which sharply limits hallucination.

01

Ask a Question

Type any question about the Bible in natural language.

02

Embed Query

Your question is converted into a vector embedding using BAAI/bge-small-en.

03

Vector Search

ChromaDB searches ~93k verse embeddings to find the most relevant passages.
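Conceptually, this step is a nearest-neighbor lookup by cosine similarity. A minimal pure-Python sketch with a toy three-verse index (the vectors below are made up for illustration; the real index holds bge-small-en embeddings stored in ChromaDB):

```python
import math

# Toy in-memory index; real embeddings come from BAAI/bge-small-en via ChromaDB.
index = {
    "Genesis 1:1": [0.9, 0.1, 0.0],
    "John 3:16":   [0.1, 0.9, 0.2],
    "Psalm 23:1":  [0.2, 0.8, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_embedding, k=2):
    # Rank every verse by cosine similarity to the query embedding.
    return sorted(index, key=lambda ref: cosine(query_embedding, index[ref]),
                  reverse=True)[:k]

print(top_k([0.15, 0.85, 0.15]))
```

ChromaDB does the same ranking at scale, over ~93k vectors instead of three.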

04

Build Prompt

Retrieved verses are injected into a prompt template with your question.
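This step amounts to string templating: retrieved verses are concatenated into a context block above the question. A sketch with a hypothetical `build_prompt` helper (the actual template lives in the backend source):

```python
# Hypothetical prompt template; the real one is defined in the backend.
def build_prompt(question, verses):
    # verses: list of (reference, text) pairs retrieved from the vector store
    context = "\n".join(f"[{ref}] {text}" for ref, text in verses)
    return (
        "Answer using ONLY the verses below, and cite a reference for every claim.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("Who built the ark?",
                   [("Genesis 6:14", "Make thee an ark of gopher wood...")]))
```

Because the verses appear verbatim in the prompt, the model can quote and cite them directly.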

05

Stream via Ollama

The local LLM (llama3:8b) generates an answer grounded in the retrieved text.
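Ollama exposes a local HTTP API that streams one JSON object per line. A stdlib-only sketch (the endpoint, port, and `response`/`done` fields follow Ollama's `/api/generate` API; the helper names are assumptions, not BibleLLM's code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt, model="llama3:8b"):
    body = json.dumps({"model": model, "prompt": prompt, "stream": True}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def stream_answer(prompt, model="llama3:8b"):
    """Yield answer fragments from Ollama's newline-delimited JSON stream."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        for line in resp:                    # one JSON object per line
            chunk = json.loads(line)
            yield chunk.get("response", "")  # incremental text fragment
            if chunk.get("done"):            # final chunk signals completion
                break
```

Streaming the fragments as they arrive is what gives the chat UI its ChatGPT-style typing effect.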

06

Answer + Citations

You receive a streaming response with specific verse citations for every claim.

pipeline.py
# The RAG pipeline, simplified to five steps
query_embedding = embed(user_question)
relevant_verses = chromadb.query(query_embedding, n=10)
prompt = build_prompt(user_question, relevant_verses)
response = ollama.generate(model="llama3:8b", prompt=prompt)
stream(response, citations=relevant_verses)  # citations come from the retrieved verses

Tech Stack

Built with modern, battle-tested technologies. ~93k verses across KJV, WEB, and ASV translations from HuggingFace.

Frontend

Next.js 15 · React 19 · Tailwind CSS v4 · React Three Fiber · Three.js

Backend

FastAPI · Python 3.11+

AI Runtime

Ollama · llama3:8b · mistral:7b

Data & Search

ChromaDB · BAAI/bge-small-en · HuggingFace Datasets

Visualization

Sigma.js · Graphology · Leaflet

State

Zustand

Get It Running

BibleLLM runs entirely on your machine. Choose your setup path and be exploring scripture with AI in minutes.

System Requirements

OS

macOS, Linux, or Windows (WSL2)

RAM

8GB+ recommended (for llama3:8b)

Storage

~10GB (model + data)

Quick Start — 3 Commands

terminal
git clone https://github.com/blakeschafer/biblellm.git
cd biblellm
docker compose up

First-Time Data Pipeline

After containers are running, initialize the Bible dataset and vector embeddings. This runs once and takes approximately 60 minutes.

terminal
docker exec -it biblellm-backend python -m scripts.pipeline

The data pipeline downloads ~93k verses (KJV, WEB, ASV) from HuggingFace, generates embeddings, and loads them into ChromaDB. This is a one-time operation.