Load-balancing Splunk Search heads
The first question that might come to mind after seeing the topic above: why, after all, do we need load balancing?
The term load balancing refers to efficiently distributing incoming network traffic across a group of backend servers, also known as a server pool or server farm. So we can say a load balancer has one basic function: distributing client requests, i.e., the network load, efficiently across multiple servers.
Load-balancers: A load balancer serves as the single point of contact for your clients and is generally categorized as a hardware or software load balancer. The load balancer distributes incoming application traffic across multiple targets (backend servers), which increases the availability of your application.
Some techniques that are generally used by load-balancers:
>>Round Robin: A simple method of load balancing that also provides basic fault tolerance. Multiple identical servers are configured to provide exactly the same services, and requests are distributed across them in a fixed, rotating order.
>>Weighted Round Robin: This is a variant of the round-robin algorithm taking additional factors into account and can result in better load balancing. A weight is assigned to each server based on some criteria chosen by the site administrator, the most common being the server’s traffic-handling capacity.
>>Least Connection: Neither Round Robin nor Weighted Round Robin takes the current server load into consideration when distributing requests. The Least Connection method does, ensuring that each request goes to the server servicing the fewest active sessions at that moment.
>>Weighted Least Connection: Builds on the Least Connection method. As in the Weighted Round Robin method, each server is assigned a numerical weight, which the load balancer uses when allocating requests to servers. If two servers have the same number of active connections, the server with the higher weighting is allocated the new request.
>>Weighted Response Time: This method uses the response information from a server health check to determine the server that is responding fastest at a particular time. The next server access request is then sent to that server. This ensures that any servers that are under heavy load, and which will respond more slowly, are not sent new requests.
>>Chained Failover (Fixed Weighted): In this method, a predetermined order of servers is configured in a chain. Normally, all requests are sent to the first server in the chain until it can't accept any more, at which point all requests go to the second server in the chain, then the third, and so on.
>>Source IP Hash: This load-balancing technique uses an algorithm that combines the source and destination IP addresses of the client and server to generate a unique hash key. This key is used to allocate the client to a specific server. Since the key can be regenerated if the session is broken, this method can ensure that a client request is directed to the same server it was using previously. This is useful when it's important that a client reconnect to a session that is still active after a disconnection.
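Several of these techniques map directly to values of HAProxy's balance directive, which we will use later in the configuration. As a rough illustration (the choice of directive goes inside a backend or listen section of haproxy.cfg):

balance roundrobin (Round Robin; adding a weight to each server line makes it Weighted Round Robin)
balance leastconn (Least Connection; server weights make it Weighted Least Connection)
balance source (Source IP Hash)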
Now that you are familiar with load balancers and load balancing, we can continue our quest to set up our own load-balancing rules. Obviously, for load balancing we need a load balancer; we are going to use HAProxy, an application load balancer:
HAProxy: HAProxy (High Availability Proxy) is an open-source load balancer capable of load balancing any TCP service. It is a free, very fast, and reliable solution that offers load balancing, high availability, and proxying for TCP and HTTP-based applications. This is useful in cases where too many concurrent connections over-saturate the capability of a single server. Instead of a client connecting to a single server that processes all of the requests, the client connects to an HAProxy instance, which acts as a reverse proxy and forwards each request to one of the available backend servers based on a load-balancing algorithm.
Installing and setting up HAProxy
STEP 1 – Installation:
Generally, HAProxy is included in the package management systems of most Linux distributions:
#apt-get install haproxy
#yum install haproxy
Alternatively, you can download the source code with the command below. You can check for the latest versions available for download on the HAProxy download page.
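For example, assuming version 1.5.11 (the version referenced later in this guide; check the download page for the current release):

#wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.11.tar.gz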
Once the download is complete, use the command below to extract files.
#tar xvzf .../haproxy.tar.gz
Change your working directory to the extracted source directory.
Now, compile the program for your system (we are testing on CentOS).
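For HAProxy 1.5 on a reasonably recent CentOS kernel, the build target is typically linux2628; check the README in the source tree for the target that matches your system:

#make TARGET=linux2628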
Finally, install HAProxy itself.
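#make install

By default this places the haproxy binary under /usr/local/sbin, which is why the symbolic link in the next step points there.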
STEP 2 – Setting up HAProxy:
You can skip this step if you have installed HAProxy from package management systems or repositories.
Add the following directories and the statistics file for the HAProxy records.
#mkdir /etc/haproxy
#mkdir /var/lib/haproxy
#touch /var/lib/haproxy/stats
Now, create a symbolic link for the binary to allow you to run HAProxy commands as a normal user.
#ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy
NOTE: You may want to add HAProxy as a service to the system. To do that, copy the haproxy.init file from the examples directory to your /etc/init.d directory, change the file permissions to make the script executable, and then reload the systemd daemon.
#cp .../haproxy-1.5.11/examples/haproxy.init /etc/init.d/haproxy
#chmod 755 /etc/init.d/haproxy
#systemctl daemon-reload (doesn't work on older distributions of Linux)
You also need to enable the service so that it starts automatically whenever the system boots up.
#chkconfig haproxy on
CAUTION: The firewall on some distributions can be quite restrictive by default. So, make sure you have configured your firewall properly to allow HAProxy connections.
Configuring HAProxy is a pretty simple process: you just need to tell HAProxy what kind of connections it should listen for and where those connections should be distributed.
This can be done by creating/editing a configuration file /etc/haproxy/haproxy.cfg.
Use the following command:
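For example, with vi (use any editor you prefer):

#vi /etc/haproxy/haproxy.cfg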
Now, add the following sections as shown in the pictures below:
1. defaults: Your haproxy.cfg file might already contain some default settings; we are just showing a generic defaults section that fits our purpose.
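A generic defaults section along these lines would do (the timeout values here are our own example figures; tune them to your environment):

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms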
2. listen stats (optional): The HAProxy stats node is configured to listen on port 32700 for connections, to hide the version of HAProxy, and to ask for credentials at login. Replace user_name and password as per your requirements to keep it secure.
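Such a stats section might look like the following sketch (user_name and password are placeholders for your own credentials):

listen stats
    bind *:32700
    stats enable
    stats uri /
    stats hide-version
    stats auth user_name:password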
3. listen http_proxy: Adding this section configures HAProxy to listen for incoming connections on a specific port and distribute them to the target servers based on the load-balancing technique you choose. We chose port 8000 (the default Splunk Web port).
Replace the server names, server IPs, etc. according to your requirements.
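Based on the details used in this setup (port 8000, round robin, and the two search heads linux-3 and linux-4), the section might look like the following sketch; treat the server names and IPs as placeholders for your own:

listen http_proxy
    bind *:8000
    mode http
    balance roundrobin
    server linux-3 10.142.0.8:8000 check
    server linux-4 10.142.0.9:8000 check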
Once you are done modifying, your configuration file should look like below:
Now, restart HAProxy for your configuration file changes to take effect:
#service haproxy restart
After a successful HAProxy configuration, any incoming request to the HAProxy node on port 8000 will be forwarded to one of the internally connected nodes with IP address 10.142.0.8 or 10.142.0.9 on port 8000, according to the round-robin technique.
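You can quickly verify this from a shell, assuming curl is installed and substituting your load balancer's address for the placeholder:

#curl -I http://<haproxy-node>:8000

Repeated requests are forwarded to each of the two search heads in turn.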
Note: linux-3 and linux-4 are our Splunk Search-Heads.
In the picture below, you can see that we get the Splunk interface when hitting the HAProxy node (here, the load balancer) on port 8000.
To see the statistics and monitor the health of the nodes, navigate in a web browser to the IP address or domain name of the HAProxy node at the assigned port, for example http://18.104.22.168:32700 . This will show you the statistics for the target servers; a sample image is below for your reference.
To find out more about the configuration options visit the HAProxy documentation page.
Thanks for Reading…