
voyage-4-nano

Model Overview

voyage-4-nano is a state-of-the-art text embedding model from the Voyage 4 series, designed for high-performance semantic search and retrieval tasks. This model features:

  • Developed by: Voyage AI
  • Supported Language(s): Multilingual
  • Context Length: 32000
  • Parameters: 180M [Non-embedding] + 160M [Embedding]
  • License: Apache 2.0

For detailed performance metrics and benchmarks, please refer to:

Key Features

Shared Embedding Space with voyage-4 series

The shared embedding space introduced in the Voyage 4 model series eliminates the need to re-index your data when switching between models in the series. Embeddings generated by different Voyage 4 models (voyage-4-large, voyage-4, voyage-4-lite, and voyage-4-nano) can be directly compared and used interchangeably. For example, use voyage-4-large for high-fidelity indexing, voyage-4-lite for high-throughput queries, and voyage-4-nano for local development.
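
As a minimal sketch, the snippet below indexes documents with voyage-4-large and queries them with voyage-4-nano. It assumes the voyageai Python client serves the Voyage 4 models under these names, and that the returned embeddings are L2-normalized as with earlier Voyage models.

import numpy as np
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

documents = [
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Index once with the highest-fidelity model in the series.
doc_embeddings = np.array(
    vo.embed(documents, model="voyage-4-large", input_type="document").embeddings
)

# Query with the smallest model; no re-indexing is needed because the
# Voyage 4 models share an embedding space.
query_embedding = np.array(
    vo.embed(
        ["Which planet is known as the Red Planet?"],
        model="voyage-4-nano",
        input_type="query",
    ).embeddings[0]
)

# With unit-length vectors, the dot product equals cosine similarity.
scores = doc_embeddings @ query_embedding
print(documents[int(np.argmax(scores))])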

Frontier Retrieval Quality at Low Cost

voyage-4-nano outperforms much larger existing embedding models, including voyage-3.5-lite.

Matryoshka Representation Learning (MRL)

voyage-4-nano is trained with Matryoshka Representation Learning to enable flexible embedding dimensions with minimal loss of retrieval quality. It supports 2048-, 1024-, 512-, and 256-dimensional embeddings.
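
Concretely, an MRL embedding is shortened by keeping its leading coordinates and renormalizing to unit length. A minimal sketch (truncate_embedding is a hypothetical helper; the random tensor stands in for real model output):

import torch
import torch.nn.functional as F


def truncate_embedding(embedding: torch.Tensor, dim: int) -> torch.Tensor:
    # Keep the first `dim` coordinates and renormalize to unit length.
    assert dim in (2048, 1024, 512, 256), "unsupported dimension"
    return F.normalize(embedding[..., :dim], p=2, dim=-1)


full = F.normalize(torch.randn(4, 2048), p=2, dim=-1)  # stand-in for model output
small = truncate_embedding(full, 512)
print(small.shape)  # torch.Size([4, 512])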

Quantization-Aware Training

voyage-4-nano uses quantization-aware training to enable flexible output data types with minimal loss of retrieval quality. It supports 32-bit floating point, signed and unsigned 8-bit integer, and binary precision outputs.
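
Lower-precision outputs can be obtained by post-processing float embeddings. The scaling scheme below is illustrative only; production pipelines (e.g., sentence_transformers.quantization.quantize_embeddings) typically calibrate ranges from data.

import numpy as np


def quantize_int8(embeddings: np.ndarray) -> np.ndarray:
    # Map unit-norm float coordinates in [-1, 1] to signed 8-bit integers.
    return np.clip(np.round(embeddings * 127), -128, 127).astype(np.int8)


def quantize_binary(embeddings: np.ndarray) -> np.ndarray:
    # One bit per dimension (the sign), packed 8 bits per byte.
    return np.packbits(embeddings > 0, axis=-1)


emb = np.random.randn(4, 2048)
emb /= np.linalg.norm(emb, axis=-1, keepdims=True)
print(quantize_int8(emb).dtype)    # int8
print(quantize_binary(emb).shape)  # (4, 256): 2048 bits packed into bytes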

Usage

Via Transformers

import torch
from transformers import AutoModel, AutoTokenizer


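# Mean-pool token embeddings into a single vector, using the attention mask
# so that padding tokens do not contribute to the average.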
def mean_pool(
    last_hidden_states: torch.Tensor, attention_mask: torch.Tensor
) -> torch.Tensor:
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(last_hidden_states.size()).float()
    )
    sum_embeddings = torch.sum(last_hidden_states * input_mask_expanded, 1)
    sum_mask = input_mask_expanded.sum(1)
    sum_mask = torch.clamp(sum_mask, min=1e-9)
    output_vectors = sum_embeddings / sum_mask
    return output_vectors


# On Nvidia GPUs, flash_attention_2 is recommended. attn_implementation="eager" or "sdpa" also works, but minor differences in the embeddings are expected.

device = "cuda"
model = AutoModel.from_pretrained(
    "voyageai/voyage-4-nano",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
    dtype=torch.bfloat16,
).to(device)
tokenizer = AutoTokenizer.from_pretrained("voyageai/voyage-4-nano")

# Embed queries with prompts
query = "What is the fastest route to 88 Kearny?"
prompt = "Represent the query for retrieving supporting documents: "
inputs = tokenizer(
    prompt + query, return_tensors="pt", padding=True, truncation=True, max_length=32768
)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model(**inputs)
embeddings = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
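
To complete the retrieval loop, documents can be embedded in the same session with the document prompt (listed in the Sentence Transformers notes below) and scored against the query. A minimal sketch with hypothetical example documents:

# Embed documents with the document prompt, then rank them against the query.
doc_prompt = "Represent the document for retrieval: "
documents = [
    "Take Market Street north, then turn right onto Kearny Street.",
    "The Red Planet is the fourth planet from the Sun.",
]
doc_inputs = tokenizer(
    [doc_prompt + d for d in documents],
    return_tensors="pt", padding=True, truncation=True, max_length=32768,
)
doc_inputs = {k: v.to(device) for k, v in doc_inputs.items()}
with torch.no_grad():
    doc_outputs = model(**doc_inputs)
doc_embeddings = mean_pool(doc_outputs.last_hidden_state, doc_inputs["attention_mask"])
doc_embeddings = torch.nn.functional.normalize(doc_embeddings, p=2, dim=1)

# Both sides are unit vectors, so the matrix product gives cosine similarities.
scores = embeddings @ doc_embeddings.T  # shape (1, 2)
print(scores)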

Via Sentence Transformers

from sentence_transformers import SentenceTransformer
import torch

# Standard loading, assuming no GPU access
model = SentenceTransformer(
    "voyageai/voyage-4-nano", 
    trust_remote_code=True, 
    truncate_dim=2048
)

# OPTIONAL: Loading for high-performance inference with GPUs
# Use 'flash_attention_2' and 'bfloat16' if your GPU supports it (e.g., A100, H100, RTX 30/40 series)
# model = SentenceTransformer(
#     "voyageai/voyage-4-nano", 
#     trust_remote_code=True, 
#     truncate_dim=2048, 
#     model_kwargs={
#         "attn_implementation": "flash_attention_2",
#         "dtype": torch.bfloat16
#     }
# )

query = "Which planet is known as the Red Planet?"
documents = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]

# Encode via encode_query and encode_document to automatically use the right prompts
query_embedding = model.encode_query(query)
document_embeddings = model.encode_document(documents)

# Inspect the output shapes
print(f"Query Shape: {query_embedding.shape}")      # Expected: (2048,)
print(f"Document Shape: {document_embeddings.shape}") # Expected: (4, 2048)

  • The encode_query and encode_document methods automatically prepend the "Represent the query for retrieving supporting documents: " and "Represent the document for retrieval: " prompts as defined in config_sentence_transformers.json, respectively.
  • The default embedding dimension is 2048. To obtain lower-dimensional embeddings, use the truncate_dim argument in the encode_query and encode_document methods, or pass truncate_dim when initializing the model. For example, model.encode_query(query, truncate_dim=512) yields 512-dimensional embeddings. The model supports 2048-, 1024-, 512-, and 256-dimensional embeddings.
  • You can post-process the embeddings to lower quantization levels using the precision argument in the encode_query and encode_document methods. For example, model.encode_query(query, precision='int8') yields signed 8-bit integer embeddings. The supported precisions are 'float32', 'int8', 'uint8', 'binary', and 'ubinary'. Both options are combined in the sketch after this list.
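
A minimal sketch combining these options, continuing the session above (exact scores will vary):

# Rank the documents with the built-in similarity helper (cosine by default).
scores = model.similarity(query_embedding, document_embeddings)  # shape (1, 4)
print(documents[scores.argmax().item()])  # the Mars sentence should rank highest

# Lower-dimensional and quantized variants of the same calls.
q_small = model.encode_query(query, truncate_dim=512)
d_small = model.encode_document(documents, truncate_dim=512, precision="int8")
print(q_small.shape, d_small.dtype)  # (512,) int8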

Acknowledgments

This model builds upon foundational work by the Qwen Team at Alibaba. We are grateful for their contributions to the open-source community, which have informed the development of this specialized embedding model for the Voyage 4 series.

We'd like to thank Tom Aarsen for adding Sentence Transformers support and improving the Transformers integration.
