Hi guys,

We all know that at index time, when the data is getting stored on the indexers, Splunk software parses the data stream into a series of events. We can then perform different actions on those events. For parsing and filtering we use two configuration files, props.conf and transforms.conf, on the Heavy Forwarders.

But what if, by adding just a few attributes in transforms.conf, you could store data in a specific index as per your requirement? Then you only have to configure props.conf and transforms.conf on the Indexer.

So, today we will show you how to do this.

Following is the sample data on which we are going to demonstrate the parsing:-

Hello, my name is Gurav Roy.
What is your name?
Are you fine?
I  am fine
Do you want to tell me your name?

Follow the below steps:-

Step 1:-

First, go to the location where you want to save the sample data and create a file there to hold it.

Here, we have created a file called data.txt in the /tmp location. You can use any other location or any existing file for storing your data.

Step 2:-

In the next step we will configure inputs.conf, where we give the absolute path of data.txt, the index name, and the metadata (host, source, sourcetype) [but it is not mandatory to define the metadata].

Here, we have specified index = name
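Based on the values used in this example, a minimal inputs.conf stanza would look something like the following (assuming the file sits at /tmp/data.txt and uses the sourcetype date that we configure in the next step):

[monitor:///tmp/data.txt]
index = name
sourcetype = date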

Step 3:-

Now we will configure props.conf on the Indexer. You can find props.conf in the following path: $SPLUNK_HOME/etc/system/local.

As you can see, we have set sourcetype=date, so in props.conf we have to use that sourcetype as the stanza name. The line #_MetaData:Index that we have written is nothing but a comment, which we added for our own understanding; whether you include it or not is up to you.

You can see the attribute SHOULD_LINEMERGE=false. It tells Splunk not to merge lines, so each line of the file becomes a separate event.

Now, the second attribute is TRANSFORMS-class=abc (the general format is TRANSFORMS-<class_name> = <unique_stanza_name>). So, here the class name is class (you can give any string) and the unique_stanza_name is abc (you can give any string). You then have to specify that stanza name in transforms.conf.
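Putting the pieces together, the props.conf stanza for our example sourcetype looks like this:

[date]
#_MetaData:Index
SHOULD_LINEMERGE=false
TRANSFORMS-class=abc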

Step 4:-

Here the transforms.conf will be configured.

So, this is what we have done:

[abc]
REGEX=name
DEST_KEY=_MetaData:Index
FORMAT=specific

As we said before, we want to store some portion of the sample data in a particular index, i.e. the matching portion of the data will go to the 'specific' index (the index specific is already created). For that, we have mentioned the string 'name' as the pattern in the REGEX attribute (you can mention any pattern as per your requirement).

DEST_KEY=_MetaData:Index is the attribute which helps send that portion of the data to the index 'specific'. Wherever the regex pattern matches in the sample data, i.e. only the lines which contain the string 'name', those lines are going to be stored in the index 'specific', and the lines which do not contain the string 'name' will stay in the 'name' index, which we already specified in inputs.conf.

FORMAT=specific, i.e. 'specific' is the index where we want to send that portion of the data.

NOTE: Also make sure that both the 'specific' and 'name' indexes have already been created on the indexer.
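If they do not exist yet, you can create them from the Splunk CLI (or through Settings > Indexes in Splunk Web), something along these lines:

$SPLUNK_HOME/bin/splunk add index name
$SPLUNK_HOME/bin/splunk add index specific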

Step 5:-

After configuring the configuration files you should always restart Splunk on the Indexer, so that all the changes take effect.
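A typical restart from the CLI looks like:

$SPLUNK_HOME/bin/splunk restart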

Step 6:-

After restarting Splunk, go to the location of data.txt, open it with the command [vi data.txt],

and write the sample data into it.

Step 7:-

So, you can see that the lines which contain the string 'name' are present in the events, i.e. only those lines which contain the string 'name' have come into the index 'specific'.

The lines which don't contain the string 'name' are there in the index 'name'. So, like this, you can also send data to whichever index you want.
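With our sample data, the split works out like this:

index 'specific' (lines matching the regex 'name'):
Hello, my name is Gurav Roy.
What is your name?
Do you want to tell me your name?

index 'name' (lines with no match):
Are you fine?
I  am fine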

Hope this has helped you achieve the below requirement without fail:

DEST_KEY=_MetaData:Index

Happy Splunking  !!
