
Manage variable traffic loads on your application
Access reserved for users of the Managed Kubernetes® Service.
As your business grows and your application experiences more varied traffic, it is vital to maintain the same level of service. This is why cloud applications are usually built on distributed architectures: they are more robust and absorb peak loads more easily. With our Load Balancer, you can securely and automatically balance your application’s load in real time across several nodes.
99.99% availability
The OVHcloud Load Balancer is designed to deliver a high level of availability and resilience, and is itself built on a distributed architecture.
Automated node management
If a node stops working properly, it is automatically removed from the pool of nodes available for balancing. This also makes maintenance operations easy to manage without causing downtime.
Directly integrated into Kubernetes
The Load Balancer exposes an interface that is directly compatible with Kubernetes, so you can control it with native tools such as kubectl.
ISO/IEC 27001, 27701 and health data hosting compliance*
Our cloud infrastructures and services are ISO/IEC 27001, 27017, 27018 and 27701 certified. Thanks to our compliance*, you can host healthcare data securely.
* Coming soon
Specifications
Our Load Balancer solution is under constant development. The service currently operates within the following limits.
Resource | Limit
TCP | 10,000 connections
HTTP | 2,000 req/s
Bandwidth | 200 Mbit/s
We will soon offer more flexibility, with resources to suit greater requirements.
Usage
For Kubernetes:
Create a Load Balancer
kubectl apply -f load_balancer.yaml
Delete a Load Balancer
kubectl delete service load-balancer
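As a minimal sketch, the load_balancer.yaml manifest referenced above could declare a standard Kubernetes Service of type LoadBalancer; the selector and ports below are illustrative placeholders, not values required by the service.

apiVersion: v1
kind: Service
metadata:
  name: load-balancer            # matches the name used by "kubectl delete service load-balancer"
spec:
  type: LoadBalancer             # asks the cloud controller to provision an OVHcloud Load Balancer
  selector:
    app: my-app                  # illustrative label; must match the pods to balance
  ports:
    - name: http
      port: 80                   # port exposed publicly by the Load Balancer
      targetPort: 8080           # illustrative port on which the pods listen
      protocol: TCP

Once applied, kubectl get service load-balancer shows the external IP address assigned to the Load Balancer.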
Features
Immediate interaction
Create a Load Balancer in under a minute, and update it almost instantly, so you are well prepared to manage traffic spikes.
Kubernetes interface
Create and manage your Load Balancer directly via Kubernetes.
Multiple health check protocols
Define the conditions under which a node is excluded, to fit your own criteria. You can choose from standard TCP verification (already available), an application return code, or an HTTP status code (available soon).
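For illustration only, and separate from the Load Balancer's own TCP/HTTP checks described above: at the Kubernetes level, a comparable exclusion behaviour can be expressed with a readinessProbe, which withdraws a pod from the Service's endpoints while the check fails. The image, port and health path below are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app                  # matches the Service selector shown earlier
spec:
  containers:
    - name: my-app
      image: my-app:1.0          # illustrative image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /healthz         # illustrative health endpoint
          port: 8080
        periodSeconds: 10        # check every 10 seconds
        failureThreshold: 3      # withdraw the pod after 3 consecutive failures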
Proxy protocol
To preserve the client’s original IP address, the Load Balancer supports the PROXY protocol. This means you can perform essential actions on the nodes, such as IP address filtering, generating statistics, and analysing logs.
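The nodes behind the Load Balancer must be able to parse the PROXY protocol header in order to use the original address. As a hedged sketch assuming an NGINX backend, the ConfigMap below shows the relevant NGINX directives; the name and trusted address range are placeholders.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-protocol     # illustrative name
data:
  default.conf: |
    # Accept the PROXY protocol header added by the Load Balancer so that the
    # client's original IP address is available for filtering, statistics and logs.
    server {
      listen 8080 proxy_protocol;
      set_real_ip_from 10.0.0.0/8;       # placeholder: trust only the Load Balancer's address range
      real_ip_header proxy_protocol;
      location / {
        return 200 "client: $remote_addr\n";
      }
    }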
IP address filtering
You can apply a restrictive access policy by default, and provide a list of IP addresses that are allowed to connect to your solution.
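With a Kubernetes Service of type LoadBalancer, this kind of restriction is typically expressed with the standard loadBalancerSourceRanges field, where the cloud controller honours it; the CIDR block below is an illustrative placeholder.

apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  type: LoadBalancer
  # Only clients whose addresses fall within the listed CIDR blocks may connect.
  loadBalancerSourceRanges:
    - 203.0.113.0/24             # illustrative range; replace with your allowed addresses
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080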
TLS encryption (coming soon)
Most applications communicate via a TLS encryption layer. Our Load Balancer integrates this layer using a certificate provided by the user, or managed by the service.
Private network connections
To keep your application nodes isolated on a private network, the Load Balancer can act as a gateway between public addressing and your private networks via the OVHcloud vRack.
Compatibility with instances (available soon)
The Load Balancer can manage several node types, such as containers orchestrated by Kubernetes and standard instances run on OpenStack.

Your questions answered
What is load balancing in the cloud?
Load balancing is the operation of distributing a workload across several elements that are each capable of performing the required task. In the cloud, the load being balanced most often consists of network connections, also known as service requests.
How does a load balancer work?
Load balancing follows rules set up by the operator. When dealing with network connections alone, a flat or weighted distribution is most often used. For application-level distribution, traffic can instead be routed according to rules based on the content being served or on user identification.
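In Kubernetes terms, a simple illustration of content-based distribution is path-based routing with an Ingress resource; the host, paths and service names below are illustrative placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: content-routing
spec:
  rules:
    - host: app.example.com      # illustrative host
      http:
        paths:
          - path: /api           # requests under /api are routed to one backend...
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /              # ...all other requests go to another
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80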