OVHcloud Public Cloud: ML Serving

Machine learning model deployment directly usable via the API

From predicting user preferences to validating marketing strategies, anticipating restocking needs and much more, machine learning is gaining popularity across all sectors, and is no longer the preserve of data scientists. But models still need to be easy to put into service. Whether you use your own model or a pre-trained OVHcloud model, ML Serving lets you deploy it in just a few clicks and use it through a dedicated API, to run analyses or process high volumes of data.

Save time

Moving a machine learning model from prototype to production deployment is often a time-consuming process. With ML Serving, it takes just a few minutes.

Compatible with market standards

Whether you export models yourself (from Scikit-Learn, Pandas or Keras) or use software such as Dataiku or H2O.ai, ML Serving accepts models in the standard data science formats: ONNX, PMML and TensorFlow.

Able to handle the load

Your models are deployed on a scalable infrastructure. Whether you receive 10 requests per day or 10,000 per minute, we scale the resources to match, giving you an optimal experience.

Uses for our ML Serving solution

Real-time forecasting

By deploying a model on ML Serving that incorporates your sales criteria, you can keep your forecasts up to date by querying its API regularly.
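A deployed model is queried over HTTP. The sketch below shows the general shape of such a call using only the Python standard library; the endpoint URL, the `inputs` payload format and the bearer-token header are illustrative assumptions, not OVHcloud's actual API, so replace them with the details shown for your own deployment:

```python
import json
import urllib.request

# Hypothetical endpoint -- use the URL of your own deployed model.
ENDPOINT = "https://example-serving.ovh.net/v1/models/sales-forecast:predict"

def build_payload(features: dict) -> bytes:
    """Encode one prediction request as a JSON body."""
    return json.dumps({"inputs": [features]}).encode("utf-8")

def predict(features: dict, token: str) -> dict:
    """POST the features to the deployed model and decode its response."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(features),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Scheduling a call like `predict({"month": 7, "promo": 1}, token)` at a regular interval is all it takes to keep forecasts current.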

Sentiment analysis

Every day, large numbers of comments about your products and brand are posted on social networks. In just a few clicks, you can deploy our sentiment analysis model and find out whether those comments are positive or negative.

Fraud detection

Design a model that predicts behaviour and detects suspicious purchases on your website, then deploy it via ML Serving. For each new order, you can then easily obtain a trust score.
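Before deployment, such a model is trained like any other classifier. Here is a minimal local sketch with Scikit-Learn; the two features (order amount, orders in the last 24 hours) and the toy data are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [order amount in euros, orders in the last 24h],
# label 1 = fraudulent. Values are purely illustrative.
X = np.array([[20, 1], [35, 2], [50, 1], [900, 8], [1200, 10], [700, 9]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

def trust_score(order: list) -> float:
    """Probability that the order is legitimate (class 0)."""
    return float(model.predict_proba([order])[0][0])

score = trust_score([30, 1])  # a small, one-off order
```

Once exported to a supported format and deployed, the same score would be returned by the model's API for each new order.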

Usage

Once a machine learning model (a kind of intelligent algorithm that often reproduces cognitive functions) has been trained, its analyses and predictions become reliable enough to be used on a daily basis to improve a situation. "Serving" means offering this model for large-scale use, backed by computing capacity scaled to handle all of the requests. ML Serving automates this usage, putting the model to work.

1

Deploy your model

Choose from a number of pre-trained OVHcloud models, or use your own model.

2

Get predictions from your model

We monitor the deployment and manage its elasticity, to ensure the quickest possible response.

3

Update your model

Each deployment is versioned and rolled out without any downtime.

Features

Version management

Each model deployment is versioned. This means you can keep several versions of a model and revert to an earlier one if needed (available soon).

Transparent upgrades

ML Serving uses a rolling upgrade mechanism, so deployment upgrades are performed without any downtime. You can keep refining your models and keep the production versions up to date.

Auto-scaling

Whether your model receives a high volume of requests or you use it at specific times of day, we automatically scale its deployment so it adapts in record time.

Public Cloud pricing

ML Serving billing