Easily handle traffic spikes using containers
A modular architecture built from containerised microservices is usually adopted for its agility, resilience and scalability. But containers are also a suitable technology for making your infrastructure more flexible, so that it can absorb traffic spikes through auto-scaling. The appeal of containers lies in their elasticity – their ability to be easily started, multiplied or stopped. But you still need the right tools.

Elasticity: a concept that maximises performance and cost savings
Elasticity is an important concept when your application experiences unexpected peaks in activity, regular peaks at certain times of day, or seasonal peaks. It is the ability to automatically increase or decrease the number and/or capacity of the containers hosting your microservices, so that your application remains available and high-performing regardless of the number of connected users. The advantage is that you save money by consuming only the resources you really need. You no longer have to over-provision your infrastructure, or anxiously watch the peak load curve and then add resources in a hurry.
Don’t forget: microservices and containerisation are the underlying technologies needed to implement auto-scaling, but they are not sufficient on their own. You also need an orchestrator such as Kubernetes, along with on-demand cloud resources managed by OVHcloud.

Understanding container auto-scaling mechanisms
There are two ways to scale your containerised architecture: vertically, by allocating more computing power (CPU/RAM) to a container as needed, and horizontally, by multiplying containers where requests are concentrated (creating bottlenecks). Here are the three scaling mechanisms that OVHcloud’s Managed Kubernetes Service® brings you:

Pod autoscaling
This is the auto-scaling mechanism offered natively by the Kubernetes orchestrator, which adjusts the number of containers in real time based on the workload, within upper and lower limits that you set in advance. For example, for a given microservice in your architecture, Kubernetes may be authorised to create up to 10 containers, and must not go below 3. Like magic? Perhaps – but what happens if Kubernetes cannot find enough resources in your cluster to create the necessary containers? It will, for example, create only 7 of the 10 authorised.
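As an illustration, here is a minimal sketch of the corresponding Kubernetes HorizontalPodAutoscaler for the example above (3 to 10 containers); the deployment name “checkout” and the 70% CPU target are hypothetical values to adapt:

    # Sketch: scale a hypothetical "checkout" deployment between 3 and
    # 10 pods when average CPU utilisation passes 70%.
    kubectl apply -f - <<EOF
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: checkout-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: checkout
      minReplicas: 3
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70
    EOF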

Node auto-scaling
In its implementation of Kubernetes as a service, OVHcloud has complemented this native Kubernetes mechanism with “node auto-scaling”: you specify a minimum and maximum number of deployable instances in your cluster, and OVHcloud automatically adds the necessary resources when required, then turns them off once the peak load has passed – so that you are only billed for what you really use.
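As a sketch, node auto-scaling is configured at node pool level, assuming the NodePool custom resource exposed by Managed Kubernetes Service; the pool name, instance flavour and node counts below are illustrative:

    # Sketch, assuming OVHcloud's NodePool custom resource: let the
    # platform grow the pool from 3 up to 10 nodes as demand requires.
    kubectl apply -f - <<EOF
    apiVersion: kube.cloud.ovh.net/v1alpha1
    kind: NodePool
    metadata:
      name: autoscaled-pool   # illustrative name
    spec:
      flavor: b2-7            # example instance flavour
      autoscale: true
      desiredNodes: 3
      minNodes: 3
      maxNodes: 10
    EOF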

Load Balancer for Managed Kubernetes Service
Upstream from your Kubernetes cluster, an optional load balancing service fully managed by OVHcloud distributes traffic between your different resources. This automatically and securely balances the load between your containers, and across several nodes if required. The Load Balancer delivers an interface that is directly compatible with Kubernetes.
It is also this load balancing service that makes rolling upgrades possible, by temporarily removing containers that are under maintenance from the pool of target services.
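In practice, you consume it through the standard Kubernetes interface: declaring a Service of type LoadBalancer is enough for the load balancer to be provisioned. A minimal sketch, with a hypothetical “frontend” application:

    # Sketch: expose a hypothetical "frontend" deployment through the
    # managed Load Balancer via a standard Kubernetes Service.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend-lb
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
        - port: 80
          targetPort: 8080
    EOF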

Secure your image management with Managed Private Registries (Harbor)
To make your container projects more reliable, OVHcloud has developed a private registry service, hosted and managed by our teams, to easily store, manage and access (via an API) images of your containers and Helm charts (your packages).
This service is based on the Cloud Native Computing Foundation’s Harbor project, which provides secure role-based access to your teams (RBAC) and relies on a Content Trust mechanism to ensure the integrity of your image sources.
This avoids the risk of attackers identifying vulnerabilities by reverse-engineering code or images left publicly accessible on platforms such as GitHub or GitLab.
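Day to day, the service behaves like any OCI-compatible registry; here is a sketch of pushing an image, where the registry URL, project and tag are placeholders:

    # Sketch: authenticate against the private Harbor registry, then
    # tag and push an image. URL, project and tag are placeholders.
    docker login registry.example.ovh.net
    docker tag my-app:1.0 registry.example.ovh.net/my-project/my-app:1.0
    docker push registry.example.ovh.net/my-project/my-app:1.0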

Which OVHcloud services can make your microservices application more elastic?
You can host your microservices yourself on bare metal servers or Public Cloud instances, and manage peak loads by allocating the necessary resources yourself. However, you can make your life easier by delegating the most critical services, such as databases or storage, to OVHcloud. OVHcloud will then be responsible for managing the scaling of these services.

Managed Kubernetes Service
Kubernetes is the leading orchestrator, making it easy to deploy and manage groups of Docker containers while also handling self-healing and auto-scaling. It’s an indispensable tool, but not the easiest to manage. Fortunately, you can delegate the administration of your Kubernetes cluster to your cloud provider: Managed Kubernetes Service® is powered by OVHcloud Public Cloud instances. What’s more, this service is free – you only pay for the on-demand instances and storage that you use within your Kubernetes cluster.
As an option: Load Balancer for Managed Kubernetes Service

Database as a service
Most database technologies are now offered in a fully managed database-as-a-service model. You can switch in just a few minutes: simply import your database into its new environment, test it, and decommission the old database if all goes well.
Need more power for your database? You can update your solution in just a few clicks, and add a node to your cluster, while OVHcloud handles the data resynchronisation.
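As an illustration of such a migration for a MySQL database, here is a hedged sketch where the hostnames, user and database names are placeholders:

    # Sketch: export the existing database, then import the dump into
    # the managed instance. Hosts and credentials are placeholders.
    mysqldump -h old-db.internal -u app_user -p app_db > app_db.sql
    mysql -h managed-db.example.ovh.net -u app_user -p app_db < app_db.sql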

Object Storage
With OVHcloud Object Storage, not only do you outsource any issues with content availability, but you also reduce the load on your web servers.
Content requests (images, videos, sounds, etc.) are no longer your concern. OVHcloud manages these peak loads for you.
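Object Storage exposes an S3-compatible API, so standard tooling works out of the box; here is a sketch of publishing an asset with the aws CLI, where the endpoint, bucket and file names are illustrative assumptions:

    # Sketch: upload a static asset to an S3-compatible Object Storage
    # bucket. Endpoint, region and bucket name are illustrative.
    aws --endpoint-url https://s3.gra.io.cloud.ovh.net \
        s3 cp ./assets/video.mp4 s3://my-static-assets/video.mp4
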
Ready to get started?
Create an account and launch your services in minutes