Load balancer with Kubernetes

Before we look at this feature, let’s recap the benefits Kubernetes offers your business. It is the most popular container orchestration tool on the market. You can use it to automate the deployment of applications within a cluster, regardless of whether the servers are physical or virtual.

With Kubernetes, companies can stay focused on developing their software. It also simplifies several tasks.

 


Automating the lifecycle of containerised applications

These services need to be scaled to keep up with requests and optimise resource usage. With Kubernetes, you can automate this step, supporting continuous integration and deployment (CI/CD) of new versions and drastically reducing maintenance.


Multi-container application deployment

Some applications use several containers at once (database, front end, back end, cache, etc.), sometimes across several instances. During the deployment process, Kubernetes synchronises the various containers and related components.


Launch from any environment

Whether your infrastructure is based on a public cloud, private cloud, physical or virtual server, Kubernetes is easy to implement — even in a hybrid architecture. It gives you more flexibility for your projects, depending on their environment.

What is a load balancer?

The purpose of a load balancer is to distribute the workload between different servers or applications. It can be set up on both physical and virtual infrastructures. Load balancing software often takes the form of an application delivery controller (ADC), which can scale the workload automatically based on traffic forecasts.

In real time, the ADC identifies which server or application is best suited to meet a request, so that the cluster maintains a stable level of performance. In the event of an outage, it will also be responsible for redirecting traffic to a resource capable of handling it. Several configuration types are available.

In the request flow, the load balancer sits between the user and the server. It analyses the request, determines which machine is available to respond, and then forwards the request to that machine. It can also add servers as required.

Distributing workload is just one of several potential uses for a load balancer. It is particularly useful for SSL offloading (terminating SSL/TLS connections on behalf of your servers), or for updating application groups. You can even use it to route traffic for your domain names.

There are two types of load balancers:

  • L4 load balancers, otherwise known as network load balancers

They process layer 4 data, found at the network and transport (TCP/UDP) level. These load balancers do not inspect application information, such as content type, cookies or header location. This means they redirect traffic based on network-layer data alone.

  • L7 load balancers, otherwise known as application load balancers

Unlike L4 load balancers, this type of load balancer redirects traffic using application-layer parameters. These load balancers process a higher volume of data and base their routing decisions on richer information, including the HTTP, HTTPS and SSL protocols.


How does the load balancer work with Kubernetes?

When you start using Kubernetes for your applications, the issue of external traffic is an important factor to consider. This topic is briefly discussed on the official Kubernetes website, but we will provide some details.

There are several ways of routing external traffic to your cluster:

  • Using a proxy with the ClusterIP.
  • Defining a service as a NodePort.
  • Declaring a service as a load balancer, and exposing it to external traffic. This method is the most widely used one.

Using a ClusterIP via a proxy

This method is generally used for development, and is available by default in Kubernetes. By opening a proxy between your external source and your ClusterIP, you can route traffic to the service. You can create this proxy with the kubectl proxy command. Once it is running, you are connected directly to your cluster’s IP for this specific service.
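
As a minimal sketch (the service and label names my-service and my-app are hypothetical), the service itself is an ordinary ClusterIP Service, and the comments show how kubectl proxy could be used to reach it during development:

    # service-clusterip.yaml: a hypothetical ClusterIP Service (the default type)
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: ClusterIP          # default type; only reachable inside the cluster
      selector:
        app: my-app            # routes traffic to pods carrying this label
      ports:
        - port: 80             # port exposed on the cluster IP
          targetPort: 8080     # port the container listens on

    # Start a local proxy to the Kubernetes API server:
    #   kubectl proxy --port=8001
    # Then reach the service through the API server's proxy path:
    #   curl http://localhost:8001/api/v1/namespaces/default/services/my-service:80/proxy/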

Exposing a service as a NodePort

By doing this, you expose your nodes’ addresses individually on the ports concerned (a fixed port per service, in the 30000-32767 range). This way, you can access your service externally via its own port on each node, at the address <NodeIp>:<NodePort>.
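
As an illustrative sketch (the names and the 30080 port are assumptions), a NodePort Service can be declared like this:

    # service-nodeport.yaml: a hypothetical NodePort Service
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: my-app            # routes traffic to pods carrying this label
      ports:
        - port: 80             # port exposed inside the cluster
          targetPort: 8080     # port the container listens on
          nodePort: 30080      # fixed port opened on every node (30000-32767)

    # The service is then reachable externally at <NodeIp>:30080 on any node.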

While this approach is still feasible, it is impractical for services in production. Because it uses “non-standard” ports, you often need to configure an external load balancer that listens on standard ports and redirects traffic to the relevant '<NodeIp>:<NodePort>'.

Opting for a load balancer

This involves declaring a service as a load balancer for your cluster, which is how it is exposed to external traffic. This method relies on a cloud provider’s load balancing solution, such as our Load Balancer. The provider provisions the service for your cluster, automatically assigning its NodePort.
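
As a minimal sketch (names are hypothetical), declaring the Service type as LoadBalancer is all that is needed for the provider to provision an external load balancer:

    # service-loadbalancer.yaml: a hypothetical LoadBalancer Service
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer       # asks the cloud provider to provision a load balancer
      selector:
        app: my-app            # routes traffic to pods carrying this label
      ports:
        - port: 80             # port exposed by the external load balancer
          targetPort: 8080     # port the container listens on

    # Once provisioned, the external address appears in the EXTERNAL-IP column:
    #   kubectl get service my-service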

If you have a production environment, the load balancer is the solution we recommend. However, please note two important things:

  • Each service you define and deploy as a load balancer has its own IP address.
  • The OVHcloud Load Balancer is reserved for users of the Managed Kubernetes Service.

It acts as a filter between incoming external traffic and your Kubernetes cluster. You can deploy up to 16 load balancers per cluster, which you manage directly from your K8s interface. We have published a series of guides to help you configure your load balancer.

Adding an Ingress with your load balancer

An Ingress is a Kubernetes object that manages external access to your cluster’s services (e.g. over HTTP). So how does it differ from the load balancer?

As the only entry point to your cluster, it works like a reverse proxy. The Ingress directs incoming requests to the different services, according to a set of routing rules. It is exposed to external traffic via a ClusterIP, a NodePort or a load balancer. The best-known implementation is the NGINX Ingress Controller.
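
As a sketch of such a rule configuration (the host, service names and ports are hypothetical), an Ingress routing two paths to two different services could look like this:

    # ingress.yaml: a hypothetical Ingress routing two paths to two services
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
    spec:
      ingressClassName: nginx        # assumes the NGINX Ingress Controller is installed
      rules:
        - host: example.com
          http:
            paths:
              - path: /shop          # example.com/shop -> shop-service
                pathType: Prefix
                backend:
                  service:
                    name: shop-service
                    port:
                      number: 80
              - path: /blog          # example.com/blog -> blog-service
                pathType: Prefix
                backend:
                  service:
                    name: blog-service
                    port:
                      number: 80

Both backends here share the same external entry point.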

Why use an Ingress with your load balancer? The biggest benefit is that it reduces your costs. A load balancer is billed according to the number of services it orchestrates, and its capacity is limited. With an Ingress, you can attach more services to the same load balancer, and save money.

What would you advise?

To answer this question, you need to determine concretely what you will be using Kubernetes for.

The load balancer is suitable for the vast majority of uses. You can define one individually for each of your Kubernetes services, with no further configuration required. However, this method will not suit all budgets, as it involves managing a large number of IP addresses.

If you need a simpler method, you can use a load balancer with an Ingress behind it. This way, all your services are exposed under the same IP address, and you only pay for one load balancer.

However, you will need to ensure that the services orchestrated by this model have a logical relationship between one another. Otherwise, your infrastructure could experience malfunctions or outages.
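
In practice, this pattern usually means exposing the Ingress controller itself through a single LoadBalancer Service. A hedged sketch, assuming the NGINX Ingress Controller and its usual labels:

    # A hypothetical Service placing one load balancer in front of an Ingress controller;
    # every service routed by the Ingress then shares this single external IP.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443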

 

An optimal configuration could be a load balancer coupled with an Ingress for each “family of services” or set of microservices. The best organisation method will depend on how complex your architecture is.