EVENT_BREAKER_ENABLE & EVENT_BREAKER

Hi guys !!

You all know that to create any dashboards, reports, alerts, etc. in Splunk we need events; that is the responsibility of Splunk developers. But for on-boarding, parsing, and filtering data in Splunk, you have to be confident in handling the configuration files. For parsing data we use props.conf, and we usually do parsing on the Heavy Forwarder (HF). Today we will show you how to break events using the EVENT_BREAKER_ENABLE and EVENT_BREAKER attributes.

Note that these two attributes are meant to be used only inside the props.conf of the Universal Forwarder (UF). We will discuss why shortly.

First of all, what is the necessity of using props.conf on the UF, since we normally use props.conf on the HF?

The necessity of using props.conf on the UF is to improve load balancing while forwarding data from the UF to the receivers. It helps the UF distribute data more evenly among all the receivers.

Following is the sample data on which we are going to perform parsing:

Hi today we will gonna show
you ]] how , to do line break.
so to do that we need, 4 - lines
and for that 4 - lines we will
write some regular expressions.
There are basically 2 ways of line breaking
so we will show you that 2 - ways.

Follow the steps below:

Step 1:

First, go to the location where you want to save the sample data and create a file there to hold it.

Here, I have created a file called sample.txt in the /tmp location. You can use any other location, or any existing file, for storing your data.
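As a quick sketch, the file can be created from the shell (the /tmp/sample.txt path simply mirrors the example above):

```shell
# Create an empty sample file at the example location
touch /tmp/sample.txt
```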

Step 2:

In the next step we will configure inputs.conf, where I will give the absolute path of sample.txt, the index name, and the metadata (host, source, sourcetype). It is not mandatory to define the metadata.
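A minimal sketch of such an inputs.conf stanza is shown below (the index name here is an assumption; the sourcetype matches the one used later in this post):

```
# inputs.conf on the Universal Forwarder -- monitor the sample file
[monitor:///tmp/sample.txt]
index = main          # assumed index name; use your own
sourcetype = data     # sourcetype referenced by the props.conf stanza
disabled = false
```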

Step 3:

Example 1

Now we will configure props.conf. You can find props.conf at the following path: $SPLUNK_HOME/etc/system/local.

As you can see, I have set sourcetype=data in inputs.conf, so in props.conf the stanza name must be that sourcetype.

See below,
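A sketch of the stanza on the UF (with the sourcetype data set in inputs.conf):

```
# props.conf on the Universal Forwarder
[data]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (,)
```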

In the above I have set EVENT_BREAKER_ENABLE=true. If you do not mention EVENT_BREAKER_ENABLE, it defaults to false.

It improves the distribution of data from the UF to the receivers for a given sourcetype.

Then I have set EVENT_BREAKER=(,). You can use any regular expression in place of the comma, according to the type of your data and your requirement.

Step 4:

After editing the configuration files, you should always restart Splunk on the UF so that all the changes take effect.

Step 5:

After restarting Splunk, go back to sample.txt and write the sample data there.
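For example, the sample data from earlier can be appended from the shell (assuming the /tmp/sample.txt path from Step 1):

```shell
# Append the sample events to the monitored file
cat >> /tmp/sample.txt <<'EOF'
Hi today we will gonna show
you ]] how , to do line break.
so to do that we need, 4 - lines
and for that 4 - lines we will
write some regular expressions.
There are basically 2 ways of line breaking
so we will show you that 2 - ways.
EOF
```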

Step 6:

Let's see in the Search Head how the data has been parsed.

Now you can see that, because I gave the comma as the delimiter, the lines after the first comma have gone to a new event for the given sourcetype. So it matches the delimiter only the first time and creates a new event; if it finds the same delimiter again on later lines, it does not break those lines into further events.

Always remember that the delimiter itself is not removed from the data.

Example 2

Step 1 and Step 2 are the same as before.

Step 3:

Here, we have used a regular expression in EVENT_BREAKER:

EVENT_BREAKER=(\d+\s+\-\s+)
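The full stanza on the UF would look roughly like this (same sourcetype as before):

```
# props.conf on the Universal Forwarder -- second example
[data]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (\d+\s+\-\s+)
```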

Step 4 and Step 5 are the same as before.

Step 6:

Let's see in the Search Head how the data has been parsed.

Now you can see that, as I have used a regular expression here, the lines after the first match of the pattern have gone to another event for the given sourcetype. The regular expression I used, EVENT_BREAKER=(\d+\s+\-\s+), matches the pattern "4 - " in the sample data, and the lines after that pattern have gone to another event.
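To sanity-check where this pattern matches, you can try it against one sample line with grep (POSIX character classes stand in for \d and \s here):

```shell
# Show the text matched by the breaker pattern in one sample line
echo "and for that 4 - lines we will" | grep -E -o "[0-9]+[[:space:]]+-[[:space:]]+"
```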

Hope this has helped you in using the attributes below without fail:

EVENT_BREAKER_ENABLE & EVENT_BREAKER

Happy Splunking  !!
