BGE-Multilingual-Gemma2 is an LLM-based multilingual embedding model trained on a diverse range of languages and tasks. It primarily demonstrates the following advancements:
Diverse training data: the model's training data spans a broad range of languages, including English, Chinese, Japanese, Korean, French, and more. Additionally, the data covers a variety of task types, such as retrieval, classification, and clustering.
Outstanding performance: the model achieves state-of-the-art (SOTA) results on multilingual benchmarks such as MIRACL, MTEB-pl, and MTEB-fr, and also delivers excellent performance on other major evaluations, including MTEB, C-MTEB, and AIR-Bench.
The Embeddings Converter API provides a single unified endpoint that handles both single-sentence and batch requests.
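As an illustrative sketch of the two request shapes (the field names match the request examples later in this guide; these dicts are only illustrations, not an exhaustive schema):

```python
# Hypothetical sketch of the two payload shapes the endpoint accepts.
single = {
    "model": "bge-multilingual-gemma2",
    "input": "Paris is the capital of France",  # one sentence as a string
}
batch = {
    "model": "bge-multilingual-gemma2",
    "input": [  # several sentences embedded in a single request
        "Paris is the capital of France",
        "Berlin is the capital of Germany",
    ],
}
```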
In NLP, word embeddings are often used to represent individual words as vectors in a high-dimensional space, where the vectors capture semantic and syntactic relationships between words. Embedding models allow you to perform tasks such as word analogy or sentence similarity; to do this, differences between vectors in the embedding space are computed to identify relationships between words.
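As a toy illustration of the word-analogy idea, the offset between related word vectors is roughly constant, so "king - man + woman" lands near "queen". The 3-dimensional vectors below are made up purely for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

# Made-up toy vectors, NOT real embeddings; chosen so the analogy works out.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.8]),
    "queen": np.array([0.9, 0.8, 0.8]),
}

# If the "gender offset" is consistent, king - man + woman ≈ queen
candidate = vectors["king"] - vectors["man"] + vectors["woman"]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Find the vocabulary word closest to the candidate vector
best = max(vectors, key=lambda w: cosine(vectors[w], candidate))
print(best)  # with these toy vectors: queen
```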
AI Endpoints makes it easy, with ready-to-use inference APIs. Discover how to use them:
The embeddings API endpoint gives you access to state-of-the-art open-source models that transform raw text input into high-dimensional embedding vectors suitable for a variety of multilingual and monolingual applications:
The request payload must contain a model field and an input field; input accepts either a single sentence or a list of sentences.
First install the requests library:
pip install requests
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
If you do not have an access token yet, follow the instructions in the AI Endpoints – Getting Started guide.
Finally, run the following Python code:
import os
import requests
MODEL = "bge-multilingual-gemma2"
texts = [
    "Paris is the capital of France",
    "Paris is the capital of France",
    "Berlin is the capital of Germany",
    "This endpoint converts input sentences into vector embeddings"
]
url = "https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/embeddings"
headers = {
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}",
    "Content-Type": "application/json"
}
payload = {
    "model": MODEL,
    "input": texts
}
response = requests.post(url, headers=headers, json=payload)
if response.status_code == 200:
    data = response.json()
    print(f"✅ Received {len(data['data'])} embeddings")
    # Show a preview of the first embedding (first 5 values)
    print("First embedding preview:", data['data'][0]['embedding'][:5])
else:
    print("❌ Error:", response.status_code, response.text)
The bge-multilingual-gemma2 API is compatible with the OpenAI specification.
First install the openai library:
pip install openai
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
Finally, run the following Python code:
import os
from openai import OpenAI
client = OpenAI(
    base_url="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    api_key=os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')
)
MODEL = "bge-multilingual-gemma2"
texts = [
    "Paris is the capital of France",
    "Paris is the capital of France",
    "Berlin is the capital of Germany",
    "This endpoint converts input sentences into vector embeddings"
]
response = client.embeddings.create(
    model=MODEL,
    input=texts
)
print(f"✅ Received {len(response.data)} embeddings")
# Show a preview of the first embedding (first 5 values)
print("First embedding preview:", response.data[0].embedding[:5])
curl -X POST "https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/embeddings" \
-H "Authorization: Bearer $OVH_AI_ENDPOINTS_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "model": "bge-multilingual-gemma2",
  "input": [
    "Paris is the capital of France",
    "Paris is the capital of France",
    "Berlin is the capital of Germany",
    "This endpoint converts input sentences into vector embeddings"
  ]
}'
After obtaining the embeddings, you can assess how semantically similar the input sentences are. The following example shows how to compute cosine similarity between the first embedding and the others.
Install the numpy library:
pip install numpy
Then run the following code:
import numpy as np
def cosine_similarity(vec_a, vec_b):
    """Return cosine similarity between two vectors."""
    return np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
# Here, `response` is the result of the `requests` call in the example above
data = response.json()
embeddings = [item["embedding"] for item in data["data"]]
base = embeddings[0]
# Compute similarities to the first embedding
similarities = {
    f"Similarity with sentence {i}": f"{cosine_similarity(base, vec):.3f}"
    for i, vec in enumerate(embeddings[1:], start=1)
}
print("Sentence similarities:", similarities)
Requests to AI Endpoints are rate-limited. If you exceed the limit, a 429 (Too Many Requests) error code will be returned.
If you require higher usage, please get in touch with us to discuss increasing your rate limits.
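A common client-side mitigation for 429 responses is to retry with exponential backoff. Below is a minimal generic sketch; the helper name and retry parameters are arbitrary choices for illustration, not service recommendations:

```python
import time

def with_backoff(send, max_retries=5, base_delay=1.0):
    """Call `send()` and return its response, retrying on HTTP 429
    with exponentially increasing waits (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))
    return response  # last response, still 429 after all retries
```

With the `requests` setup from the first example, you would call it as `with_backoff(lambda: requests.post(url, headers=headers, json=payload))`.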
For a broader overview of AI Endpoints, explore the full AI Endpoints Documentation.
Reach out to our support team or join the OVHcloud Discord #ai-endpoints channel to share your questions, feedback, and suggestions for improving the service with the team and the community.
New to AI Endpoints? This guide walks you through everything you need to get an access token, call AI models, and integrate AI APIs into your apps with ease.
Start Tutorial
Explore what AI Endpoints can do. This guide breaks down current features, future roadmap items, and the platform's core capabilities so you know exactly what to expect.
Start Tutorial
Running into issues? This guide helps you solve common problems on AI Endpoints, from error codes to unexpected responses. Get quick answers, clear fixes, and helpful tips to keep your projects running smoothly.
Start Tutorial