Machine learning model deployment directly usable via the API
From predicting user preferences to validating marketing strategies and forecasting restocking needs, machine learning is becoming popular across all sectors and is no longer the preserve of data scientists. But models still need to be easy to set up. Whether you use your own model or a pre-trained model from OVHcloud, ML Serving lets you deploy it and query it in just a few clicks via a dedicated API. You can then run analyses or process high volumes of data.
Switching from a machine learning prototype to deploying a model into production is often a time-consuming process. With ML Serving, you can do it easily in just a few minutes.
Compatible with market standards
Whether you are exporting models directly (via Scikit-Learn, Pandas or Keras) or using software (like Dataiku or H2O.ai), ML Serving accepts your models in formats that are standard in data science: ONNX, PMML and TensorFlow.
Able to handle the load
Your models are deployed on a scalable infrastructure. Whether you have 10 requests per day or 10,000 per minute, we scale the resources to match, to give you an optimal experience.
ISO/IEC 27001, 27701 and health data hosting compliance
Our cloud infrastructures and services are ISO/IEC 27001, 27017, 27018 and 27701 certified. Thanks to our compliance, you can host healthcare data securely.
Uses for our ML Serving solution
Once a machine learning model has been trained (a kind of intelligent algorithm that often reproduces cognitive functions), its analyses and predictions become reliable enough to use day-to-day, to improve a given situation. "Serving" means making this model available for use at scale, backed by computing capacity sized to handle all incoming requests. ML Serving automates this step, putting the model to work.
Deploy your model
Choose from a number of pre-trained OVHcloud models, or use your own model.
Get predictions from your model
We monitor the deployment and manage its elasticity, to get the quickest response possible.
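A deployed model is typically queried over HTTPS with an authenticated JSON request. The sketch below builds such a request with only the Python standard library; the endpoint URL, token, and payload shape ("instances") are hypothetical placeholders, not OVHcloud's actual API contract — substitute the values from your own deployment.

```python
import json
import urllib.request

# Hypothetical values: the real endpoint URL and token come from
# your deployment; the payload shape is illustrative only.
ENDPOINT = "https://my-deployment.example.com/v1/predict"
TOKEN = "your-deployment-token"

def build_prediction_request(features):
    """Build an authenticated JSON POST request for a model endpoint."""
    payload = json.dumps({"instances": [features]}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

req = build_prediction_request([5.1, 3.5, 1.4, 0.2])
# Actually sending the request is commented out so the sketch
# stays self-contained:
# with urllib.request.urlopen(req) as resp:
#     predictions = json.load(resp)
```

Keeping request construction in a small helper like this makes it easy to swap in a real HTTP client later without changing calling code.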
Update your model
Each deployment is versioned, and updates happen without any downtime.
Each model deployment is versioned. This means you can keep several versions of a model, and revert to an earlier version if you need to do so (available soon).
ML Serving uses a rolling upgrade mechanism, so that deployment upgrades are performed without any downtime. You can then work regularly on your modelling, and keep production versions up-to-date.
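The principle behind a zero-downtime rolling upgrade can be sketched in a few lines: the current version keeps serving traffic until the new version passes a health check, and only then does the switch happen atomically. This toy simulation is illustrative of the idea, not OVHcloud's implementation.

```python
class Deployment:
    """A model version paired with its prediction function (toy stand-in)."""
    def __init__(self, version, predict_fn):
        self.version = version
        self.predict_fn = predict_fn

    def predict(self, x):
        return self.predict_fn(x)

class Router:
    """Routes all traffic to a single live deployment."""
    def __init__(self, deployment):
        self.live = deployment

    def rolling_upgrade(self, candidate, healthy):
        # The old version keeps serving while the candidate warms up;
        # switch only once the candidate passes its health check.
        if healthy(candidate):
            self.live = candidate   # atomic switch: no request sees downtime
            return True
        return False                # candidate rejected: old version stays live

router = Router(Deployment("v1", lambda x: x * 2))
upgraded = router.rolling_upgrade(
    Deployment("v2", lambda x: x * 3),
    healthy=lambda d: d.predict(1) == 3,
)
```

Because the switch is a single reference assignment, there is never a moment when no version is live — which is the property the rolling-upgrade mechanism guarantees.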
Whether your model receives a high volume of requests or you use it at specific times of day, we automatically scale its deployment so it adapts in record time.
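The scaling behavior described above boils down to a simple rule: provision enough replicas to absorb the current request rate, clamped between a floor and a ceiling. The sketch below shows that rule; the capacity and replica limits are illustrative assumptions, not OVHcloud's actual figures.

```python
import math

def target_replicas(requests_per_min, capacity_per_replica=600,
                    min_replicas=1, max_replicas=20):
    """Illustrative autoscaling rule: enough replicas to absorb the
    current request rate, clamped to [min_replicas, max_replicas].
    All capacity figures here are assumptions for the example."""
    needed = math.ceil(requests_per_min / capacity_per_replica)
    return max(min_replicas, min(needed, max_replicas))

target_replicas(10)       # a quiet day: stays at the floor of 1 replica
target_replicas(10_000)   # a burst of traffic: scales up accordingly
```

Real autoscalers add smoothing (cooldown windows, averaged metrics) on top of a rule like this, so replica counts do not flap on momentary spikes.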