This YOLO model, developed by Ultralytics, excels in real-time instance segmentation, enabling precise identification and delineation of objects within images.
The Image Segmentation API endpoint allows you to recognize instances of objects of a certain class in images.
Image Segmentation is a process in Computer Vision, a field of Artificial Intelligence, that involves partitioning an image into multiple segments or regions, simplifying its representation into something more meaningful and easier to analyze. The resulting image has the recognized objects outlined and covered with a mask in the color of their class.
AI Endpoints makes it easy, with ready-to-use inference APIs. Discover how to use them:
This Image Segmentation API is based on the open-source YOLOv11 model (AGPL-3.0 license): Source YOLOv11. It takes an image file as input (.jpg or .png) and returns, for each detected object, a segmentation mask, the coordinates of its position in the image, and a confidence score.
Model configuration
The Image Segmentation endpoint offers you an optimized way to detect and segment various objects in an image. Learn how to use it with the following example:
First install the requests library:
pip install requests
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
If you do not have an access token yet, follow the instructions in the AI Endpoints – Getting Started guide.
If you don't have any images available for testing, save the following image locally as sample.jpg:

And then run the following Python code to segment objects in the sample.jpg image:
import os
import requests

url = "https://yolov11x-image-segmentation.endpoints.kepler.ai.cloud.ovh.net/api/image2segment"

files = {
    'img': open('sample.jpg', 'rb')
}
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"
}

response = requests.post(url, headers=headers, files=files)

if response.status_code == 200:
    # Handle the JSON response
    response_data = response.json()
    print(response_data)
else:
    print("Error:", response.status_code, response.text)
This code returns a list of detected objects with their bounding boxes and segmentation masks. Each element in the list contains the bounding box coordinates (x1, y1, x2, y2), the class identifier (class_id) and its label, a confidence score, and the segmentation mask as a binary array.
Here is an example of the output:
[
    {
        "x1": 0.7798194885253906,
        "y1": 315.55170822143555,
        "x2": 474.6342658996582,
        "y2": 704.6021547317505,
        "class_id": 15,
        "label": "cat",
        "score": 0.9323908686637878,
        "mask": [
            [
                0,
                ...
            ]
        ]
    },
    {
        "x1": 644.2183494567871,
        "y1": 69.76975107192993,
        "x2": 903.4891128540039,
        "y2": 700.8093109130859,
        "class_id": 16,
        "label": "dog",
        "score": 0.9269720911979676,
        "mask": [
            [
                0,
                ...
            ]
        ]
    }
]
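Before working with the masks, you can post-process this list in plain Python, for example to filter out low-confidence detections and measure each bounding box. A minimal sketch (the sample values and the 0.8 threshold are illustrative, not part of the API):

```python
# Illustrative detections mimicking the response shape shown above
detections = [
    {"x1": 0.78, "y1": 315.55, "x2": 474.63, "y2": 704.60,
     "class_id": 15, "label": "cat", "score": 0.93},
    {"x1": 644.22, "y1": 69.77, "x2": 903.49, "y2": 700.81,
     "class_id": 16, "label": "dog", "score": 0.55},
]

# Keep only confident detections and compute each bounding box's size
confident = [d for d in detections if d["score"] >= 0.8]
for d in confident:
    width = d["x2"] - d["x1"]
    height = d["y2"] - d["y1"]
    print(f"{d['label']}: score {d['score']:.2f}, box {width:.0f}x{height:.0f} px")
```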
To go further and extract masks from the image, you can use the following code:
import os
import requests
import numpy as np
from PIL import Image

url = "https://yolov11x-image-segmentation.endpoints.kepler.ai.cloud.ovh.net/api/image2segment"

files = {
    'img': open('sample.jpg', 'rb')
}
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"
}

response = requests.post(url, headers=headers, files=files)

for detected_object in response.json():
    # Convert the nested mask list into a binary 8-bit array (values 0 or 1)
    mask_toarray = np.array(detected_object['mask'], dtype=np.uint8)
    # Scale to 0/255 and display the mask as a grayscale image
    object_into_mask = Image.fromarray(mask_toarray * 255).convert('L')
    object_into_mask.show()
This will return masks as follows:

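If you prefer to save the masks to disk instead of displaying them, the same conversion applies. A minimal, self-contained sketch, where a synthetic 4x4 mask stands in for one object's mask field (the values and the output filename are illustrative):

```python
import numpy as np
from PIL import Image

# Synthetic 4x4 binary mask standing in for one object's 'mask' field
mask = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

# Convert the nested list to an 8-bit array and scale 1 -> 255
mask_array = np.array(mask, dtype=np.uint8) * 255

# Build a grayscale image and save it to disk
mask_image = Image.fromarray(mask_array).convert("L")
mask_image.save("mask_0.png")
```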
To get the final segmented image, you can generate contours and polygons with this code sample:
import os
import random
import requests
import numpy as np
import cv2
from PIL import Image, ImageDraw

url = "https://yolov11x-image-segmentation.endpoints.kepler.ai.cloud.ovh.net/api/image2segment"

files = {
    'img': open('sample.jpg', 'rb')
}
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"
}

response = requests.post(url, headers=headers, files=files)
yolo_classes = [
"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
"traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
"sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie",
"suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",
"skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
"bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut",
"cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse",
"remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book",
"clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"
]
# Generate one random hex color per class (80 COCO classes)
number_of_colors = 80
obj_colors = ["#" + ''.join(random.choice('0123456789ABCDEF') for j in range(6))
              for i in range(number_of_colors)]
img = Image.open("sample.jpg")
draw = ImageDraw.Draw(img, "RGBA")

detected_objects = response.json()
for detected_object in detected_objects:
    # Convert the nested mask list into a binary 8-bit array
    mask_toarray = np.array(detected_object['mask'], dtype=np.uint8)
    color = obj_colors[yolo_classes.index(detected_object['label'])]
    # Extract the contours of the binary mask
    contours, _ = cv2.findContours(mask_toarray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Build a polygon from the first contour, shifting each point by the bounding box origin
    polygon = [(int(detected_object['x1'] + point[0][0]),
                int(detected_object['y1'] + point[0][1]))
               for point in contours[0]]
    # Draw the semi-transparent polygon ("95" is the hex alpha value)
    draw.polygon(
        polygon,
        fill=f"{color}95",
        outline=f"{color}95",
        width=1
    )
img.show()
The resulting segmented image is as follows:

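If you want to reason about the polygons themselves, for instance to discard very small segments before drawing them, you can compute each polygon's area with the shoelace formula. A minimal, self-contained sketch (the sample polygons are illustrative):

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given its (x, y) vertices."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A unit square and a right triangle as illustrative polygons
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (4, 0), (4, 3)]
print(polygon_area(square))    # 1.0
print(polygon_area(triangle))  # 6.0
```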
When using AI Endpoints, rate limits apply to the number of requests you can send.
If you exceed these limits, a 429 error code will be returned.
If you require higher usage, please get in touch with us to discuss increasing your rate limits.
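One common way to cope with 429 responses is to retry with exponential backoff. A minimal sketch (the retry counts and delays are illustrative assumptions, not official recommendations):

```python
import time
import requests

def backoff_delays(max_retries, base=1.0):
    """Exponential delays between retries: base, 2*base, 4*base, ..."""
    return [base * (2 ** attempt) for attempt in range(max_retries)]

def post_with_retry(url, max_retries=3, **kwargs):
    """POST to an endpoint, retrying on HTTP 429 with exponential backoff."""
    response = requests.post(url, **kwargs)
    for delay in backoff_delays(max_retries):
        if response.status_code != 429:
            break
        time.sleep(delay)
        response = requests.post(url, **kwargs)
    return response
```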
For more information about the Image Segmentation model, please refer to the following documentation.
For a broader overview of AI Endpoints, explore the full AI Endpoints Documentation.
Reach out to our support team or join the OVHcloud Discord #ai-endpoints channel to share your questions, feedback, and suggestions for improving the service with the team and the community.
New to AI Endpoints? This guide walks you through everything you need to get an access token, call AI models, and integrate AI APIs into your apps with ease.
Start Tutorial

Explore what AI Endpoints can do. This guide breaks down current features, future roadmap items, and the platform's core capabilities so you know exactly what to expect.

Start Tutorial

Running into issues? This guide helps you solve common problems on AI Endpoints, from error codes to unexpected responses. Get quick answers, clear fixes, and helpful tips to keep your projects running smoothly.
Start Tutorial