
vectlite

Embedded vector store for local-first AI applications

A single-file vector database written in Rust. Dense + sparse hybrid search, HNSW indexing, transactions, and crash-safe persistence — all in a portable .vdb file.

pip install vectlite
npm install vectlite

Everything you need, nothing you don't

Hybrid Search

Dense vectors via HNSW plus sparse BM25 keyword retrieval, fused by linear score combination or reciprocal rank fusion (RRF).
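To make the fusion step concrete, here is a plain-Python sketch of reciprocal rank fusion; it is an illustration of the technique, not vectlite's internal code. Each document's fused score is the sum of 1 / (k + rank) over every ranked list it appears in.

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked ID lists via reciprocal rank fusion (RRF).

    Each ranking is a list of document IDs ordered best-first; a
    document scores 1 / (k + rank) in each list (rank is 1-based),
    and its fused score is the sum across lists. k=60 is the
    conventional default.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

dense = ["doc2", "doc1", "doc3"]   # e.g. HNSW nearest-neighbour order
sparse = ["doc2", "doc1", "doc4"]  # e.g. BM25 keyword order
fused = rrf_fuse([dense, sparse])  # doc2 ranks first: it tops both lists
```

Documents that appear high in both lists dominate, which is why RRF is a robust default when dense and sparse scores are on incompatible scales.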

Single-File Storage

Everything in one portable .vdb file. Crash-safe WAL, file locking, snapshots, and backups.

Bulk Ingestion

Import large datasets efficiently with deferred index rebuilds via bulk_ingest() and bulkIngest().

Rich Metadata Filters

MongoDB-style operators: $eq, $gt, $in, $contains, $exists, with nested dot-path access.
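The matching semantics can be sketched in a few lines of plain Python; this is a simplified illustration of how such operators behave, not vectlite's filter engine.

```python
def get_path(doc, path):
    """Resolve a dot-path like "stats.views" against nested dicts."""
    cur = doc
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None, False
        cur = cur[part]
    return cur, True

def matches(doc, flt):
    """Check a metadata dict against a MongoDB-style filter."""
    for path, cond in flt.items():
        value, exists = get_path(doc, path)
        if not isinstance(cond, dict):
            cond = {"$eq": cond}  # bare value is shorthand for $eq
        for op, arg in cond.items():
            if op == "$eq" and not (exists and value == arg):
                return False
            if op == "$gt" and not (exists and value > arg):
                return False
            if op == "$in" and not (exists and value in arg):
                return False
            if op == "$contains" and not (exists and arg in value):
                return False
            if op == "$exists" and exists != arg:
                return False
    return True

doc = {"source": "blog", "stats": {"views": 10}, "tags": ["auth", "api"]}
assert matches(doc, {"source": "blog"})
assert matches(doc, {"stats.views": {"$gt": 5}})
assert matches(doc, {"tags": {"$contains": "auth"}})
assert not matches(doc, {"draft": {"$exists": True}})
```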

Transactions

Atomic batched writes with rollback. Context manager support in Python, try/catch in Node.js.
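The commit/rollback contract can be illustrated with a toy in-memory transaction over a dict; this is a sketch of the semantics only, not vectlite's transaction API or implementation.

```python
class Txn:
    """Toy transaction: buffer writes, apply all-or-nothing on exit."""

    def __init__(self, store):
        self.store = store
        self.pending = {}  # writes staged until commit

    def upsert(self, key, value):
        self.pending[key] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.store.update(self.pending)  # commit: apply atomically
        # On exception: pending writes are simply discarded (rollback),
        # and returning False lets the exception propagate.
        return False

store = {}
with Txn(store) as txn:          # normal exit commits
    txn.upsert("doc1", "v1")

try:
    with Txn(store) as txn:      # exception rolls back
        txn.upsert("doc2", "v2")
        raise ValueError("boom")
except ValueError:
    pass
```

After this runs, "doc1" is committed and "doc2" never reaches the store, which is the behavior the Python context manager (and the try/catch pattern in Node.js) guarantees.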

Multi-Language

Native Rust core with Python (PyO3) and Node.js (napi-rs) bindings.

Quick start

import vectlite

db = vectlite.open("knowledge.vdb", dimension=384)

# `embedding`, `embedding2`, and `query_embedding` are 384-dim vectors
# produced by your embedding model of choice
db.upsert("doc1", embedding, {"source": "blog", "title": "Auth Guide"})
db.upsert("doc2", embedding2, {"source": "notes", "title": "Billing"})

results = db.search(query_embedding, k=5, filter={"source": "blog"})
for r in results:
    print(r["id"], r["score"])

# For large imports, prefer bulk_ingest(): it defers index rebuilds
records = [
    {"id": f"doc{i}", "vector": embeddings[i], "metadata": {"source": "corpus"}}
    for i in range(len(embeddings))
]
db.bulk_ingest(records, batch_size=5000)

# Compact the .vdb file to reclaim space
db.compact()