BibleVerse

BibleVerse was created to make the Bible more accessible and to equip the spread of Christianity with a modern interface.

It is entirely free and asks for nothing in return. If you are motivated to donate, please give to your local church instead.

Currently in beta. Actively under development.

BibleVerse — The Modern Bible Interface

Efficient exploration of humanity's most consequential text

verses · books · cross-references
BibleVerse (Beta) — The Modern Bible Interface
Chat History
BibleVerse Chat
What would you like to explore?

Loading search model
Loading model
Loading your own personal model locally — which takes a few seconds — so that your conversation is secure.

About the sphere

Sphere visualization

The 3D sphere displays 31,102 vertices (one per verse) and cross-reference edges.

Vertices: ◆ Old Testament (39 books) · ✚ New Testament (27 books). Color encodes canonical book order along a gradient from blue (Genesis) through red to gold (Revelation).

Edges: intra-testament (OT↔OT, NT↔NT) and inter-testament (OT↔NT) links, each drawn in its own color. Cross-references sourced from openbible.info.

Sentence embedding

Let \(\mathcal{V} = \{v_1, \ldots, v_{31102}\}\) denote the KJV verse corpus. Each verse is encoded via a sentence transformer fine-tuned specifically on biblical text (odunola/sentence-transformers-bible-reference-final, 109M parameters, 12 transformer layers, MPNet architecture). The model was contrastively trained on 929K biblical sentence pairs using cosine similarity loss, learning to place theologically related passages close together in the embedding space:

$$\phi : \mathcal{V} \to \mathbb{R}^{768}, \qquad \phi(v_i) = \mathrm{MeanPool}\!\bigl(\mathrm{MPNet}(\mathrm{tokenize}(v_i))\bigr)$$

Unlike general-purpose encoders trained on web text, this model's latent space separates verses by biblical meaning — prophecy, typology, thematic parallel, and doctrinal correspondence — rather than surface vocabulary overlap.
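The mean-pooling step in \(\phi\) can be sketched in a few lines. This is an illustrative numpy stand-in for the pooling applied to the MPNet hidden states, not the model's actual implementation; the toy token states below are placeholders.

```python
import numpy as np

def mean_pool(token_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token hidden states, ignoring padding positions.

    token_states: (seq_len, 768) hidden states from the encoder.
    attention_mask: (seq_len,) 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, None].astype(token_states.dtype)
    return (token_states * mask).sum(axis=0) / mask.sum()

# Toy illustration: 4 tokens (last one is padding), 768-dim states.
states = np.ones((4, 768))
states[1] *= 3.0
mask = np.array([1, 1, 1, 0])
vec = mean_pool(states, mask)  # averages only the 3 real tokens
```

Masked pooling matters because padded positions would otherwise drag every verse embedding toward the padding token's representation.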

Dimensionality reduction

The embedding matrix \(\Phi \in \mathbb{R}^{31102 \times 768}\) is reduced to 3 dimensions via UMAP, which minimizes the fuzzy set cross-entropy between high-dimensional affinities \(w_{ij}\) and low-dimensional affinities \(q_{ij}\):

$$\mathcal{L}_{\text{UMAP}} = \sum_{i \neq j} \left[ w_{ij} \ln \frac{w_{ij}}{q_{ij}} + (1 - w_{ij}) \ln \frac{1 - w_{ij}}{1 - q_{ij}} \right]$$

where \(w_{ij} = \exp\!\bigl(-({d(\phi_i, \phi_j) - \rho_i})/{\sigma_i}\bigr)\) captures local metric structure in the ambient space, with \(\texttt{n\_neighbors}=15\), \(\texttt{min\_dist}=0.1\), \(\texttt{metric}=\text{cosine}\).
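The fuzzy-set cross-entropy above can be evaluated directly for given affinity arrays. A minimal numpy sketch (the real optimization is performed by the UMAP library over sampled pairs, not by this dense loop):

```python
import numpy as np

def umap_cross_entropy(w: np.ndarray, q: np.ndarray) -> float:
    """Fuzzy-set cross-entropy between high-dim affinities w and low-dim q.

    Both arrays hold pairwise membership strengths in (0, 1); the sum runs
    over all i != j pairs, matching the loss in the text.
    """
    eps = 1e-12  # guard against log(0)
    w = np.clip(w, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return float(np.sum(w * np.log(w / q) + (1 - w) * np.log((1 - w) / (1 - q))))
```

The loss is zero exactly when the low-dimensional affinities reproduce the high-dimensional ones, and grows as the embedding distorts local structure.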

Spherical projection

The 3D UMAP coordinates \(\mathbf{x}_i \in \mathbb{R}^3\) are centered and normalized onto the unit 2-sphere:

$$\hat{\mathbf{x}}_i = \frac{\mathbf{x}_i - \bar{\mathbf{x}}}{\|\mathbf{x}_i - \bar{\mathbf{x}}\|_2} \;\in\; S^2$$

This preserves angular relationships: semantically similar verses cluster into regions on the sphere, while distant passages occupy opposite hemispheres.
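The centering-and-normalization step is straightforward to reproduce. A minimal numpy sketch (assumes no point coincides exactly with the centroid, which would make its norm zero):

```python
import numpy as np

def project_to_sphere(X: np.ndarray) -> np.ndarray:
    """Center 3D UMAP coordinates and normalize each row onto the unit 2-sphere."""
    centered = X - X.mean(axis=0)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / norms

# Three toy 3D coordinates; after projection, every row has unit length.
X = np.array([[2.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 6.0]])
S = project_to_sphere(X)
```

Only the direction from the centroid survives; radial distance is discarded, which is what makes angular separation the sole carrier of semantic distance on the sphere.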

Cross-references

Arcs represent scholarly cross-references from the openbible.info dataset (Christoph Römhild, public domain). Each entry connects two verses identified by biblical scholars as linked via quotation, allusion, typological parallel, or prophecy–fulfillment, weighted by community vote count \(n_{\text{votes}}\). The homepage slider filters arcs by \(n_{\text{votes}} \geq \tau\), where \(\tau\) is user-selected.
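The slider's vote-count filter reduces to a simple threshold over the arc list. A sketch under an assumed record layout — the `(src, dst, n_votes)` tuple shape is illustrative, not the app's actual schema:

```python
def filter_arcs(arcs, tau):
    """Keep cross-reference arcs whose community vote count is >= tau.

    Each arc is assumed to be a (src_verse, dst_verse, n_votes) tuple.
    """
    return [a for a in arcs if a[2] >= tau]

arcs = [
    ("Gen.1.1", "John.1.1", 120),
    ("Ps.23.1", "John.10.11", 45),
    ("Isa.7.14", "Matt.1.23", 8),
]
visible = filter_arcs(arcs, 40)  # keeps the two higher-voted arcs
```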

Arc geometry

For two sphere points \(\hat{\mathbf{x}}_i, \hat{\mathbf{x}}_j \in S^2\), the arc midpoint is computed via normalized averaging and bowed outward by a height proportional to angular separation:

$$\mathbf{m}_{ij} = \frac{\hat{\mathbf{x}}_i + \hat{\mathbf{x}}_j}{\|\hat{\mathbf{x}}_i + \hat{\mathbf{x}}_j\|} \cdot (1 + h), \qquad h = h_{\min} + (h_{\max} - h_{\min}) \cdot \frac{\theta_{ij}}{\pi}$$

where \(\theta_{ij} = \arccos(\hat{\mathbf{x}}_i \cdot \hat{\mathbf{x}}_j)\) is the geodesic angular distance. This ensures short-range references hug the surface while long-range references arc prominently outward.

About search

The search bar performs real-time semantic retrieval using the same Bible-trained MPNet transformer that generates the sphere layout. The user's query \(q\) is encoded to a 768-dimensional embedding \(\phi(q)\) via ONNX Runtime Web, running the full 109M-parameter model directly in the browser (WASM backend).

Verse rankings are determined by cosine similarity in the embedding space:

$$\text{sim}(q, v_i) = \frac{\phi(q) \cdot \phi(v_i)}{\|\phi(q)\| \, \|\phi(v_i)\|} = \phi(q)^{\!\top} \hat\phi(v_i)$$

where \(\hat\phi(v_i)\) denotes the L2-normalised verse embedding (pre-computed and shipped as a 23.9 MB uint8-quantised binary). Since both query and verse vectors are normalised, the dot product is equivalent to cosine similarity. This is not keyword matching — it captures theological and semantic proximity: a query like "I feel lost" surfaces verses about wandering, despair, and seeking guidance, even when no query terms appear verbatim.
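Retrieval over the pre-computed matrix reduces to a dequantize-normalize-dot pipeline. A numpy sketch: the per-matrix affine `scale`/`offset` quantization scheme below is an assumption for illustration, not necessarily how the shipped 23.9 MB binary is encoded.

```python
import numpy as np

def top_k(query_vec: np.ndarray, quantized: np.ndarray,
          scale: float, offset: float, k: int = 5) -> np.ndarray:
    """Rank verses by cosine similarity against dequantized embeddings.

    quantized: (n_verses, 768) uint8 matrix; scale/offset reverse an
    assumed affine quantization. Both sides are L2-normalized, so the
    dot product equals cosine similarity.
    """
    verses = quantized.astype(np.float32) * scale + offset
    verses /= np.linalg.norm(verses, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = verses @ q
    return np.argsort(-sims)[:k]  # indices of the k most similar verses

# Demo: quantize a random embedding matrix, then query with one of its rows.
rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 768)).astype(np.float32)
offset = float(raw.min())
scale = float(raw.max() - raw.min()) / 255.0
quant = np.round((raw - offset) / scale).astype(np.uint8)
best = top_k(raw[7], quant, scale, offset, k=3)
```

Quantization perturbs each coordinate by at most half a step, which barely moves a 768-dimensional unit vector, so rankings survive the 4x storage reduction.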

About chat

Client-side LLM inference

Chat runs a large language model entirely in your browser — no server, no API calls, no data leaving your device. This is achieved via MLC Web-LLM, which compiles transformer models to WebGPU shaders using Apache TVM's machine learning compilation stack. The model weights are downloaded once and cached in your browser's storage.

Available models

All models are 4-bit quantised (q4f16_1) to fit within browser VRAM constraints while preserving output quality. The available models are:

  • Logos Standard — Based on Google Gemma 2 2B. 2.6B parameters, 2.0 GB VRAM. Strong quality-to-size ratio, recommended default.
  • Logos Max — Based on Microsoft Phi 3.5 Mini. 3.8B parameters, 3.7 GB VRAM. Strongest reasoning capability, best for complex questions.
  • Max (Genius Mode) — Same model as Logos Max with a distinctive philosophical voice inspired by Dostoevsky and Nietzsche. Blunt, piercing, high-IQ responses.

Hardware requirements

WebGPU is required for GPU-accelerated matrix operations. This is supported in Chrome 113+, Edge 113+, and Safari 18+ (macOS Sequoia). Firefox does not yet ship WebGPU by default. On unsupported browsers, you can provide an OpenAI API key in the reader settings to use server-side inference instead.

Privacy

When using local inference, the full forward pass — embedding lookup, multi-head self-attention, feed-forward layers, and autoregressive token sampling — executes on your GPU via WebGPU compute shaders. No tokens, prompts, or completions are transmitted over the network. Your conversation exists solely in browser memory and is discarded when you leave the page.

Holy Bible
Translation
English translation
Choose your model
Your conversation is private — AI runs locally in your browser.

Add Note

Filter by book
About Logos AI
Both models — Logos (Standard) and Logos (Genius Mode) — are powered by GPT-oss 120B, a 120-billion-parameter open-source language model. All inference is routed through a private server. No data is stored or logged.

  • Standard — Conversational Bible companion.
  • Genius Mode — Adversarial debate opponent.