
Search head pooling and search head clustering are the two ways to implement the distributed search feature in your Splunk deployment.

Search head pooling

The term pooling in this context refers to the sharing of resources. The search head pooling feature of Splunk allows you to have multiple search heads that share configuration and user data. The advantage of having multiple search heads is horizontal scaling when you have a large number of users searching across the same data. If you create and save a search on one search head, all the other search heads in the pool automatically have access to it. Having a search head pool also reduces the impact if any of your search heads goes down or becomes unavailable.

NOTE: This feature has been deprecated as of Splunk Enterprise version 6.2, i.e. it is no longer recommended, and Splunk may remove it in an upcoming version.

Enabling search head pooling causes all files in $SPLUNK_HOME/etc/{apps,users} to be shared across the pool, such as *.conf files, *.meta files, search scripts, and lookup tables (the commands used to enable pooling are sketched after the list below). The objects that become available as common resources across all search heads in the pool are:

  • configuration data – configuration files containing settings for saved searches and other knowledge objects.
  • search artifacts – records of specific search runs.
  • scheduler state – It ensures that only one search head in the pool runs a particular scheduled search at a time.
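
For historical context only, since the feature is deprecated: pooling was enabled on each search head with the splunk pooling CLI against a shared storage path. A minimal sketch, in which the mount point is a placeholder:

    splunk stop
    splunk pooling enable /mnt/search-head-pool
    splunk start
    splunk pooling display

Every search head in the pool had to point at the same shared storage location and run the same Splunk version.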

Search head clustering

A Splunk search head cluster is a group of Splunk Enterprise search heads that serves as a central resource for searching. You can access and run the same searches, dashboards, and search results from any member of the cluster. The search head members are interchangeable, which is only possible because the search heads in the cluster share the same configurations and apps, search artifacts, and job schedules. The search head cluster handles these shared resources among the members automatically.
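
As a rough sketch of how each cluster member is typically initialized (the host names, port, secret, and label below are placeholders; check the Splunk documentation for your version):

    splunk init shcluster-config -auth admin:changeme -mgmt_uri https://sh1.example.com:8089 -replication_port 9200 -conf_deploy_fetch_url https://deployer.example.com:8089 -secret shcluster_key -shcluster_label shcluster1
    splunk restart

The same command is run on every instance that will join the cluster, with -mgmt_uri adjusted for each host.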


Search head clustering is the recommended alternative to search head pooling, thanks to benefits such as:

High availability and the cluster architecture:
If a search head goes down, you can run the same set of searches and access the same set of search results from any of the remaining cluster members. All the search heads are grouped together over the network, and the cluster uses a dynamic captain to coordinate all cluster-wide activities. If the captain goes down, another member automatically takes its place and starts managing the cluster.
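
Once all members are initialized, one of them is bootstrapped as the first captain; after that, the cluster elects captains dynamically on its own. A hedged sketch, with URIs and credentials as placeholders:

    splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" -auth admin:changeme
    splunk show shcluster-status -auth admin:changeme

The second command is a convenient way to confirm which member currently holds the captain role.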

Horizontal scaling:
As the number of users in your Splunk deployment grows, so does the search load; you can keep adding new search heads to the cluster as and when required. By combining a search head cluster with a third-party load balancer placed between the users and the cluster, the topology becomes transparent to the users.
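
To scale out, a new instance is first initialized with the same init shcluster-config settings and then joined to the running cluster, roughly as below (run on the new member; the URI of an existing member is a placeholder):

    splunk add shcluster-member -current_member_uri https://sh1.example.com:8089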

Job scheduling:
The cluster manages job scheduling centrally, allocating each scheduled search to the optimal member, usually the one with the least load. It also replicates search artifacts and makes them available to all members.
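
The number of copies of each search artifact that the cluster maintains is controlled by the replication factor in the [shclustering] stanza of server.conf; the value 3 below is only illustrative:

    [shclustering]
    replication_factor = 3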

Configurations:
A search head cluster requires that all members share the same set of configurations. Any runtime update (via Splunk Web or the Splunk CLI) to knowledge objects, such as dashboards or reports, is replicated by the cluster to all members automatically. For apps and some other kinds of configuration, you must push the changes to the cluster members by means of the deployer (a Splunk Enterprise instance for managing the apps and configurations across your search head cluster).
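
A rough outline of the deployer workflow, with the app name and target URI as placeholders: place the app under $SPLUNK_HOME/etc/shcluster/apps on the deployer, then push the bundle to any cluster member:

    cp -R my_custom_app $SPLUNK_HOME/etc/shcluster/apps/
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme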


You can access the cluster through any of its members. Just open the web interface of any search head cluster member in your browser; because cluster members share jobs, search artifacts, and configurations, it does not matter which search head you access. You always see the same set of knowledge objects and the same functionality.

NOTE: If you are looking for high availability and load balancing, you can always put a load balancer in front of the search head cluster. The load balancer is then responsible for routing users to any search head in the cluster and balancing the user load across the cluster members.
The load balancer can also redirect a user to a healthy cluster member if any of the members goes down.
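
One simple way for a load-balancer health check to confirm that a member is reachable is to poll its management port, for example with the REST endpoint below (host name and credentials are placeholders; your load balancer may offer its own health-check mechanism):

    curl -k -u admin:changeme https://sh1.example.com:8089/services/server/info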

Search head and indexer clusters:
A point to keep in mind is that a search head cluster is completely different from an indexer cluster. The advantage of an indexer cluster is that it provides highly available data through coordinated groups of indexers. Indexer clusters are always connected to one or more search heads that access the data on the indexers.
These search heads may or may not be part of a search head cluster.
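
As a sketch, a search head is typically attached to an indexer cluster by pointing it at the cluster's manager node (the URI and secret are placeholders, and the option is named -master_uri or -manager_uri depending on your Splunk version):

    splunk edit cluster-config -mode searchhead -master_uri https://cm.example.com:8089 -secret idxcluster_key
    splunk restart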

Why is search head pooling deprecated?

Some of the reasons for the deprecation of this feature are:

>  Any changes you make to authentication and authorization settings through one search head's Splunk Web are not reflected on the other search heads in that pool.

> Every member of the pool maintains its local configurations under $SPLUNK_HOME/etc/system/local. Permission problems on the shared storage server can cause pooling failures; additionally, the user account Splunk runs as must have read/write permissions to the files on the shared storage server.
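
A quick sanity check on the shared storage, assuming Splunk runs as the splunk user and the pool lives at a placeholder mount point:

    ls -ld /mnt/search-head-pool
    sudo chown -R splunk:splunk /mnt/search-head-pool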

> Search head pooling is frequently reported to underperform, due to one or more of the issues mentioned below:

Clock skew between search heads and shared storage:
It is really important to keep the clocks on your search heads and the shared storage server in sync, via NTP (Network Time Protocol) or a similar mechanism. If the clocks drift out of sync by more than a few seconds, searches start to fail or search artifacts expire prematurely.
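
On most Linux hosts you can verify time synchronization with commands like the following (which ones are available depends on whether chrony, ntpd, or systemd-timesyncd is in use):

    timedatectl status
    ntpq -p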


Artifacts and incorrectly displayed items in Splunk Web after upgrade:
This common search head pooling issue may appear after an upgrade: once the upgrade is complete, you must place all the updated apps, including the default ones (such as the Search & Reporting app), on the search head pool's shared storage. If you don't, you might see artifacts or other incorrectly displayed objects in Splunk Web.
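
A hedged sketch of that post-upgrade step, assuming the pool's shared storage is mounted at a placeholder path and mirrors the etc/apps layout:

    cp -R $SPLUNK_HOME/etc/apps/search /mnt/search-head-pool/etc/apps/

The exact target path depends on how the shared storage for your pool was laid out when pooling was enabled.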

Thanks for your continuous love and support; keep visiting us.

Happy Splunking!!!

