Qwen2.5-VL is a powerful vision-language model designed for advanced image understanding. It can generate detailed image captions, analyze documents, perform OCR, detect objects, and answer questions about visuals, making it useful for AI assistants, RAG pipelines, and agents.
The following examples walk you through using a VLM (Vision Language Model), which takes image and text prompts and generates text. To send multimodal input to the model, use a content list containing the text prompt and the base64-encoded image.
A VLM encodes the image as embeddings and represents it with tokens that are processed alongside the usual text tokens, so an image consumes part of the model's input context length.
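For reference, here is a minimal sketch of what such a multimodal user message looks like (the base64 data URL is truncated for illustration; the full examples below show how to build it from a local file):

# Minimal sketch of a multimodal user message: one text part and one image part.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}},  # base64 data URL, truncated here
    ],
}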
These Vision Language Model APIs are based on open-source models.
Please ensure that you choose the appropriate model for your specific use case.
First, install the requests library:
pip install requests
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
If you do not have an access token yet, follow the instructions in the AI Endpoints – Getting Started guide.
If you don't have any images available for testing, save the following image locally as sample.jpg:

Then run the following example Python code to ask the VLM to describe this image (or perform another task by adapting the content field of the messages dictionary):
import base64
import mimetypes
import os

import requests

image_filepath = "sample.jpg"

with open(image_filepath, "rb") as img_file:
    image_data = img_file.read()

# Detect MIME type (default to jpeg if unknown)
mime_type, _ = mimetypes.guess_type(image_filepath)
if mime_type is None:
    mime_type = "image/jpeg"

encoded_image = f"data:{mime_type};base64,{base64.b64encode(image_data).decode('utf-8')}"

# You can use the model dedicated URL
url = "https://qwen-2-5-vl-72b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1/chat/completions"
# Or our unified endpoint for easy model switching with optimal OpenAI compatibility
url = "https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/chat/completions"

payload = {
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this image."
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": encoded_image
                    }
                }
            ]
        }
    ],
    "model": "Qwen2.5-VL-72B-Instruct",
    "temperature": 0.2,
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}",
}

response = requests.post(url, json=payload, headers=headers)

if response.status_code == 200:
    # Parse the JSON response
    response_data = response.json()
    choices = response_data["choices"]
    for choice in choices:
        # Process text and finish_reason
        text = choice["message"]["content"]
        print(text)
else:
    print("Error:", response.status_code, response.text)
The Qwen2.5-VL-72B-Instruct API is compatible with the OpenAI API specification.
First, install the openai library:
pip install openai
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
If you do not have an access token yet, follow the instructions in the AI Endpoints – Getting Started guide.
Finally, run the following Python code:
import base64
import mimetypes
import os

from openai import OpenAI

# You can use the model dedicated URL
url = "https://qwen-2-5-vl-72b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1"
# Or our unified endpoint for easy model switching with optimal OpenAI compatibility
url = "https://oai.endpoints.kepler.ai.cloud.ovh.net/v1"

client = OpenAI(
    base_url=url,
    api_key=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN")
)

def multimodal_chat_completion(new_message: str, image_filepath: str = None) -> str:
    new_user_message = {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": new_message
            }
        ]
    }
    if image_filepath is not None:
        with open(image_filepath, "rb") as img_file:
            image_data = img_file.read()
        # Detect MIME type (default to jpeg if unknown)
        mime_type, _ = mimetypes.guess_type(image_filepath)
        if mime_type is None:
            mime_type = "image/jpeg"
        encoded_image = f"data:{mime_type};base64,{base64.b64encode(image_data).decode('utf-8')}"
        image_content = {
            "type": "image_url",
            "image_url": {
                "url": encoded_image
            }
        }
        new_user_message["content"].append(image_content)
    history_openai_format = [new_user_message]
    return client.chat.completions.create(
        model="Qwen2.5-VL-72B-Instruct",
        messages=history_openai_format,
        temperature=0.2,
        max_tokens=1024
    ).choices[0].message.content

if __name__ == '__main__':
    print(multimodal_chat_completion("Describe this image.", "sample.jpg"))
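Note that the image_filepath parameter is optional: if you omit it, the helper sends a text-only prompt, for example:

# The image is optional; without a file path the helper sends a plain text prompt.
print(multimodal_chat_completion("What is a vision-language model?"))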
When using AI Endpoints, the following rate limits apply:
If you exceed these limits, a 429 error code will be returned.
If you require higher usage, please get in touch with us to discuss increasing your rate limits.
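If your client occasionally hits these limits, a simple retry with exponential backoff on 429 responses is usually enough. Here is a minimal sketch based on the requests example above (the retry count and delays are arbitrary illustrative choices, not platform recommendations):

import time

import requests

# Sketch: retry a chat completion request with exponential backoff on HTTP 429.
# `url`, `payload` and `headers` are the same variables as in the requests example above.
def post_with_retries(url, payload, headers, max_retries=3):
    delay = 1.0
    for attempt in range(max_retries + 1):
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code != 429:
            return response
        # Rate limited: wait, then try again with a longer delay.
        time.sleep(delay)
        delay *= 2
    return response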
Go further:
- New to AI Endpoints? This guide walks you through everything you need to get an access token, call AI models, and integrate AI APIs into your apps with ease.
- Explore what AI Endpoints can do. This guide breaks down current features, future roadmap items, and the platform's core capabilities so you know exactly what to expect.
- Running into issues? This guide helps you solve common problems on AI Endpoints, from error codes to unexpected responses. Get quick answers, clear fixes, and helpful tips to keep your projects running smoothly.
- Learn how to use Structured Output with OVHcloud AI Endpoints.
- Learn how to use Function Calling with OVHcloud AI Endpoints.
- Learn how to use OVHcloud AI Endpoints Virtual Models.