Best Practices for Performance on DigitalOcean Load Balancers

DigitalOcean Load Balancers are a fully-managed, highly available service that distributes incoming traffic to pools of Droplets.

Here are some recommendations on how to get the best performance from your Load Balancers based on your use case and application architecture.

In addition to the recommendations below, our May 2018 update to Load Balancers included significant improvements to their supporting infrastructure, which provided immediate performance increases to all users at no cost and with no work necessary.

Use HTTP/2

When Should I Do This?

In most production workloads, HTTP/2 will outperform HTTP/1.x (over both HTTP and HTTPS) due to its multiplexing and connection handling. We recommend using it unless there is a clear reason to stay on HTTP/1.x.

How Does This Improve Performance?

HTTP/2 is a major update to the older HTTP/1.x protocol. It was designed primarily to reduce page load time and resource usage.

Its major features offer significant performance improvements; for example, HTTP/2 is binary (instead of text) and fully multiplexed, uses header compression, and has a prioritization mechanism for delivering files.

The IETF HTTP Working Group’s documentation on HTTP/2 is a good resource to learn more.

How Do I Implement This?

You can enable HTTP/2 by setting your Load Balancer's forwarding rules in the Control Panel. Additionally, Load Balancers can terminate HTTP/2 client connections, allowing them to function as gateways between HTTP/2 clients and HTTP/1.x applications. In other words, you can transition your existing applications without upgrading the backend apps on your Droplets from HTTP/1.x to HTTP/2.
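If you manage Load Balancers through the API rather than the Control Panel, a forwarding rule that terminates HTTP/2 at the edge while backends keep speaking HTTP/1.x looks roughly like the sketch below. The field names follow the DigitalOcean API v2 load balancer schema; the certificate ID is a placeholder, not a real value.

```python
# Sketch of an HTTP/2 forwarding rule as it appears in a DigitalOcean
# API v2 load balancer payload. The certificate ID is a placeholder.
forwarding_rule = {
    "entry_protocol": "http2",   # clients connect over HTTP/2 (TLS)
    "entry_port": 443,
    "target_protocol": "http",   # Droplets keep speaking HTTP/1.x
    "target_port": 80,
    "certificate_id": "your-certificate-id",
    "tls_passthrough": False,    # the Load Balancer terminates TLS
}
```

Because the entry and target protocols are independent, the Load Balancer handles the HTTP/2-to-HTTP/1.x translation for you.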

Monitor the Performance of Your Droplets

When Should I Do This?

Monitoring provides critical performance insights and should be part of any production setup.

How Does This Improve Performance?

Performance issues are often caused by a lack of resources on the backend rather than by the Load Balancer itself or its configuration. Monitoring enables you to identify the bottlenecks affecting your infrastructure's performance, including when your workload is overloading your Droplets, so you can implement the most impactful changes.

How Do I Implement This?

There are a number of ways to monitor performance. One place to start is with DigitalOcean Monitoring, a free, opt-in service that gives you information on your infrastructure’s resource usage.

You can start by looking at the default Droplet Graphs and setting up the DigitalOcean Agent to get more information on CPU, memory, and disk utilization. If you find that you don’t have enough resources for your workload, you can scale your Droplets.
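To make the decision concrete, the sketch below shows the kind of threshold check that sits behind a resource-usage alert: average recent utilization samples and flag sustained high usage. The sample values and the 80% threshold are illustrative assumptions, not figures from DigitalOcean Monitoring.

```python
# Minimal sketch of a CPU-utilization threshold check, the kind of
# logic behind a monitoring alert. Samples and threshold are
# illustrative assumptions.
def needs_scaling(cpu_samples, threshold=80.0):
    """Return True when average CPU utilization exceeds the threshold."""
    return sum(cpu_samples) / len(cpu_samples) > threshold

# Hypothetical five-minute window of per-minute CPU percentages.
print(needs_scaling([92.0, 88.5, 95.0, 90.2, 87.3]))  # True: sustained load
print(needs_scaling([22.0, 18.5, 25.0]))              # False: plenty of headroom
```

Averaging over a window rather than alerting on a single spike avoids scaling in response to momentary bursts.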

Scale Droplets Horizontally or Vertically

When Should I Do This?

If your backend Droplets don’t have enough resources to keep up with your workload, you should consider scaling up or out.

How Does This Improve Performance?

It won’t matter how your Load Balancer distributes work among your Droplets if the total workload is too large for them to handle, so it’s critical to make sure your backend Droplet pool has sufficient resources.

There are two ways to scale: horizontally, which distributes work over more servers, and vertically, which increases the resources available to existing servers. Although Load Balancers facilitate horizontal scaling, both kinds of scaling will improve performance.
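For horizontal scaling, a back-of-the-envelope capacity estimate helps you decide how many Droplets to put behind the Load Balancer. The request rates in the sketch below are hypothetical; you would substitute figures measured from your own monitoring.

```python
import math

# Back-of-the-envelope horizontal-scaling estimate: how many Droplets
# are needed if each one comfortably handles a given request rate.
# The rates below are illustrative, not measured figures.
def droplets_needed(total_rps, rps_per_droplet):
    return math.ceil(total_rps / rps_per_droplet)

print(droplets_needed(4500, 1000))  # 5 Droplets for a 4,500 req/s workload
```

Rounding up rather than down leaves headroom, so a single Droplet failure or traffic spike does not immediately saturate the pool.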

How Do I Implement This?

To scale horizontally, you can add more Droplets to your Load Balancer by navigating to a particular Load Balancer’s page in the Control Panel and clicking the Add Droplets button.

The kind of Droplets you use will impact performance as well, so make sure you choose the right Droplet for your application. For example, Optimized Droplets work best for computationally intensive workloads, like CI/CD and high performance application servers.

To scale vertically, you can resize your existing Droplets to give them more RAM and CPU.

Choose the Right Load Balancing Algorithm

When Should I Do This?

DigitalOcean Load Balancers allow you to distribute load via two different algorithms: round robin and least connections. Round robin, the default, is the most widely used algorithm and works well for most use cases.

However, if clients tend to keep connections to your application open for a long time, you may want to consider switching to the least connections algorithm.

How Does This Improve Performance?

The round robin algorithm iteratively sends requests to each backend server in turn without taking any information about the underlying server into account. The least connections algorithm, as the name suggests, sends requests to the server with the fewest connections.

When client connections tend to be long-lived, using round robin can cause the number of concurrent connections to a server to accumulate. If the number of connections to a server correlates with its load, using the least connections algorithm will distribute work across your backend servers more evenly than round robin.
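The difference between the two strategies can be sketched in a few lines. The backend names and connection counts below are hypothetical; the point is that round robin ignores load entirely, while least connections consults it on every pick.

```python
from itertools import cycle

def round_robin(backends):
    """Yield backends in a fixed rotation, ignoring their load."""
    return cycle(backends)

def least_connections(connections):
    """Pick the backend with the fewest open connections."""
    return min(connections, key=connections.get)

# Round robin rotates through backends regardless of their state.
rr = round_robin(["droplet-1", "droplet-2", "droplet-3"])
print(next(rr), next(rr))  # droplet-1 droplet-2

# Least connections picks the least-loaded backend. Hypothetical
# map of backend name to its current open-connection count:
conns = {"droplet-1": 42, "droplet-2": 7, "droplet-3": 19}
print(least_connections(conns))  # droplet-2
```

With long-lived connections, the round-robin rotation keeps sending new work to "droplet-1" even while its 42 open connections pile up, which is exactly the imbalance least connections avoids.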

How Do I Implement This?

Whether or not a large number of connections causes a high load on a server will depend on how resource-intensive each connection's workload is, so monitor your Droplets first to understand their resource utilization.

The advanced settings section of your Load Balancer configuration includes the option to choose between round robin and least connections. You can choose an algorithm on creation and change the algorithm after creation.
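If you configure the Load Balancer through the API instead, the choice is a single field. The sketch below shows its shape per the DigitalOcean API v2 schema, where "round_robin" is the default value.

```python
# Sketch of the API v2 field that selects the balancing algorithm.
# It can be set at creation or changed later via an update request.
lb_settings = {"algorithm": "least_connections"}  # default is "round_robin"
```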

Next Steps

For more information, see our documentation on DigitalOcean Load Balancers and on load balancing in general. You can also read our product release notes.

Source: DigitalOcean News Best Practices for Performance on DigitalOcean Load Balancers