Dynamically distribute your traffic to increase the scalability of your application
Load Balancer makes it easier to ensure the scalability, high availability, and resilience of your applications by dynamically balancing the traffic load across multiple instances, in multiple regions. Deliver a great experience to your application's users by automatically absorbing variable traffic and peak loads, while keeping costs under control. By combining Load Balancer with Gateway and Floating IP, you can set up a solution that acts as a single entry point to your application, secures the exposure of your private resources, and supports failover scenarios.
Built for high availability
Load Balancer is built upon a distributed architecture and is backed by an SLA providing 99.99% availability. Leveraging its health check capability, Load Balancer distributes the load to available instances.
Designed for automated deployment
Choose the Load Balancer size that fits your needs. Configure and automate with the OpenStack API, UI, CLI, or with the OVHcloud API. Load Balancer can be deployed with Terraform to automate load balancing at scale.
To ensure data security and confidentiality, Load Balancer comes with free SSL/TLS encryption and benefits from our Anti-DDoS Infrastructure protection—real-time protection from network attacks.
Discover our Load Balancer range
| Load Balancer Size | Size S | Size M | Size L |
|---|---|---|---|
| Bandwidth | 200 Mbit/s (up/down) | 500 Mbit/s (up/down) | 2 Gbit/s (up/down) |
| Maximum requests per second* | | | |
| SSL connections per second* | 250 new SSL cps | 500 new SSL cps | 1,000 new SSL cps |
| Repartition type (network or application load balancing) | | | |
| Load balancing algorithms: least-conns, round-robin, source-ip, or source-ip-port with session persistence (cookie or source IP) | Yes | Yes | Yes |
| Support for HTTP/HTTPS/PROXY/PROXY2/SCTP/TCP protocols | Yes | Yes | Yes |
| OVHcloud API support | Yes | Yes | Yes |
| OpenStack API support (Octavia) | Yes | Yes | Yes |
| UI support through OpenStack Horizon | Yes | Yes | Yes |
| Create a Let's Encrypt certificate for TLS encryption | Yes | Yes | Yes |
| Upload your own certificate file for TLS encryption | Yes | Yes | Yes |
| Public IP support through Floating IPs | Yes | Yes | Yes |
| Integration with Public Gateway | Yes | Yes | Yes |
| Private network (vRack) support | Yes | Yes | Yes |
| Health check support with HTTP/TLS/TCP/UDP/SCTP and PING | Yes | Yes | Yes |

*Informational values, provided to help you choose the plan best suited to your needs.
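For instance, the session persistence listed in the table above can be sketched with the OpenStack CLI by creating a pool with a session-persistence policy (the listener and pool names below are purely illustrative):

```shell
# SOURCE_IP algorithm with cookie-based session persistence:
# a client keeps hitting the same backend as long as it presents the cookie
openstack loadbalancer pool create --name sticky_pool \
  --lb-algorithm SOURCE_IP --listener listener1 --protocol HTTP \
  --session-persistence type=HTTP_COOKIE
```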
Manage high volumes of traffic and seasonal activity
With the Load Balancer, you can manage traffic growth seamlessly by adding new instances to your configuration in just a few clicks. Should your traffic be variable, whether it is increasing or decreasing, the Load Balancer will adapt how it distributes traffic.
Blue-Green deployment and testing scenarios
Support for the OpenStack API when using Load Balancer, Gateway, and Floating IPs enables customers to spin up and test staging environments before promoting them to production. Production and staging environments can then be swapped, which facilitates a continuous deployment model.
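A blue-green swap can be sketched as moving the public floating IP from one load balancer's VIP to another's. This assumes two pre-built load balancers, `lb-blue` (production) and `lb-green` (staging), and a floating IP already attached to the blue VIP port; all names and addresses here are hypothetical:

```shell
# Look up the VIP port of the staging (green) load balancer
GREEN_PORT=$(openstack loadbalancer show lb-green -f value -c vip_port_id)

# Detach the public floating IP from the current production (blue) VIP...
openstack floating ip unset --port 203.0.113.10

# ...and point it at the green environment: public traffic now hits green
openstack floating ip set --port "$GREEN_PORT" 203.0.113.10
```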
Use our Load Balancer as a single SSL entry point for your web application. Coupled with Public Gateway and Floating IPs, you can ensure data confidentiality, secure the exposure of private resources and prepare for failover scenarios.
Load Balancer scenario
Floating IP, Gateway, and Load Balancer can work together to set up the appropriate network accessibility rules in your design and provide the security you require.
Exposing services behind Load Balancer
A Load Balancer can be reached through the Floating IP, and distributes the incoming traffic to several instances. The instances behind the Load Balancer have no public IP, which ensures they remain completely private and not directly accessible from outside. The Load Balancer brings higher security, supports SSL encryption, and can be updated transparently as the Floating IP is hosted at the Gateway level.
Our Load Balancer can be used with the OpenStack API or CLI, and will be available later through the Control Panel.
Below are the basic commands to get started.
Create a Load Balancer:
openstack loadbalancer create --name my_loadbalancer --flavor small --vip-network-id my_private_network
Configure an entry point (listener) and a target (pool):
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 my_loadbalancer
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.1 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.2 --protocol-port 80 pool1
Configure the network (note that you need to be inside a vRack for this to work properly; check our guide to deploy a vRack):
# configure the network Gateway
openstack subnet set --gateway 10.0.0.254 my_subnet
# add a vrouter
openstack router create myrouter
openstack router set --external-gateway Ext-Net myrouter
openstack router add subnet myrouter my_subnet
# add the floating IP
openstack floating ip create Ext-Net
# The following IDs should be visible in the output of the previous commands
openstack floating ip set --port <port_id> <floating_ip_id>
Create and expose your Load Balancer service as close as possible to your customers, and adopt a geographical approach when building your infrastructure.
Choose the tool that suits you for administration of your Load Balancer: OpenStack Horizon UI or API.
Integrated with the Public Cloud ecosystem
Deploy and manage your Load Balancer directly from your Public Cloud environment, thanks to Octavia API support and all compatible tools (Terraform, Ansible, Salt, etc.).
Load Balancer supports SSL/TLS encryption to ensure data confidentiality. You can quickly create a Let's Encrypt DV SSL certificate, included at no additional charge with any of our Load Balancer service plans, or upload your own certificate if you work with a specific Certificate Authority.
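With the OpenStack tooling, uploading your own certificate typically goes through the key manager (Barbican), which a TLS-terminating listener then references. A minimal sketch, assuming a PKCS#12 bundle `server.p12` and a load balancer named `my_loadbalancer` (both hypothetical):

```shell
# Store the PKCS#12 bundle (certificate + private key) in the key manager
openstack secret store --name tls_secret \
  --payload-content-type 'application/octet-stream' \
  --payload-content-encoding base64 \
  --payload "$(base64 < server.p12)"

# Create an HTTPS-terminating listener that references the stored secret
SECRET_REF=$(openstack secret list --name tls_secret -f value -c 'Secret href')
openstack loadbalancer listener create --name https_listener \
  --protocol TERMINATED_HTTPS --protocol-port 443 \
  --default-tls-container-ref "$SECRET_REF" my_loadbalancer
```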
Connection to private networks
To keep your application nodes isolated on the private network, the Load Balancer can be used as a pathway between public addressing and your private networks, with the OVHcloud vRack.
If you want to use the Load Balancer privately, making it reachable only from your private network with backend instances inside, that is also possible!
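In practice, a fully private load balancer is simply one whose VIP sits on a private subnet and is never associated with a floating IP (the names below are illustrative):

```shell
# The VIP is allocated on the private subnet; with no floating IP attached
# to the VIP port, the load balancer is reachable only from inside the
# private network (vRack)
openstack loadbalancer create --name internal_lb \
  --flavor small --vip-subnet-id my_private_subnet
```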
Multiple health check protocols
Define the conditions for excluding an unhealthy instance or node according to your own criteria. You can choose from standard TCP verification, HTTP status codes, and many other options listed in the official OpenStack Load Balancer documentation.
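For example, an HTTP health monitor that only keeps members answering with a 2xx status on a given path might look like this (the pool name and URL path are illustrative):

```shell
# Probe /status every 10 s with a 5 s timeout; a member is ejected after
# 3 consecutive failures unless it returns one of the expected HTTP codes
openstack loadbalancer healthmonitor create \
  --type HTTP --url-path /status --expected-codes 200-299 \
  --delay 10 --timeout 5 --max-retries 3 web_pool
```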
Support for any Public Cloud instance
The Load Balancer can manage several node types, such as standard instances operated by OpenStack and containers provided by Kubernetes. Through the private network, you can also use Hosted Private Cloud virtual machines and Bare Metal servers as backends.
Load Balancer billing
Load Balancer is billed upon usage, on an hourly basis. The service is available in three plans, depending on your traffic profile: Small, Medium, and Large.
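As a back-of-the-envelope illustration of hourly billing (the rate below is purely hypothetical; see the OVHcloud price list for the actual per-size rates):

```shell
# Hypothetical rate of 2 cents/hour, service left running for 30 days
rate_cents_per_hour=2
hours=$((24 * 30))
monthly_cents=$((rate_cents_per_hour * hours))
# prints "Estimated monthly cost: $14.40"
printf 'Estimated monthly cost: $%d.%02d\n' \
  $((monthly_cents / 100)) $((monthly_cents % 100))
```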
Deploy private networks, supported by the OVHcloud vRack, to connect your instances across the globe
Securely access the Internet from fully private instances, while retaining the flexibility to expose services by combining with Floating IP and Load Balancer.
What is Layer-7 HTTP(S) load balancing?
Layer-7 load balancing transports application-layer traffic (i.e. web traffic) from a source to backend servers through a load balancing component that can apply advanced traffic routing policies. These policies include the use of HTTP cookies, PROXY protocol support, different methods of load distribution between the backends, and HTTPS support with SSL offloading.
Why is my Load Balancer spawned per-region?
The availability of Public Cloud solutions depends on OpenStack regions. Each region has its own OpenStack platform, which provides it with its own computing, storage, network resources, etc. You can find out more about regional availability here.
What protocols can I use with my Load Balancer?
The supported protocols at the launch of the product are: TCP, HTTP, HTTPS, TERMINATED_HTTPS, UDP, SCTP, and HTTP/2.
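As an illustration, a UDP service (say, DNS on port 53) can be balanced with a dedicated listener and pool; the load balancer and resource names here are hypothetical:

```shell
# UDP listener on port 53 in front of a round-robin UDP pool
openstack loadbalancer listener create --name dns_listener \
  --protocol UDP --protocol-port 53 my_loadbalancer
openstack loadbalancer pool create --name dns_pool \
  --lb-algorithm ROUND_ROBIN --listener dns_listener --protocol UDP
```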
How does Load Balancer verify which hosts are healthy?
Load Balancer uses health monitors to check whether backend services are alive. You can configure a number of protocols for that purpose, including (but not limited to) HTTP, TLS, TCP, UDP, SCTP, and PING.
I have my own SSL certificate, can I use it?
Yes, of course. You can either use the OVHcloud Customer Control Panel to upload your own SSL certificate to be used with Load Balancer, or perform this operation through the OVHcloud API if you require this action to be automated.
I don't know how to generate an SSL certificate, how can I use HTTPS LBaaS?
That's not an issue! Through the OVHcloud Customer Control Panel, you can create and generate your own Let's Encrypt SSL DV certificate and use it with your Load Balancer, making your deployment easy. The Let's Encrypt SSL DV certificate is included in the price of the Load Balancer at no additional charge.
What is a load balancer in the cloud?
A cloud Load Balancer is a load balancing system that is fully managed in the cloud, which can be quickly instantiated, configured via API and has very high availability. A typical feature of a cloud Load Balancer is pay-per-use billing. This means that you only pay for what you use.
What is the difference between Load Balancer for Kubernetes and Load Balancer?
Load Balancer for Kubernetes works for our Managed Kubernetes offer only. It delivers an interface that is directly compatible with Kubernetes. This means you can easily control your Load Balancer for Kubernetes, with native tools.
Load Balancer is built upon OpenStack Octavia and can be deployed within your Public Cloud project, leveraging the OpenStack API and enabling automation through tools like Terraform, Ansible, or Salt. Load Balancer is planned to support Kubernetes, and we will keep you updated about its availability.