Hi guys,

We all know that at index time, when data is being stored on the indexers, Splunk software parses the data stream into a series of events. We can then perform different actions on those events. For parsing and filtering we use two configuration files, props.conf and transforms.conf, on Heavy Forwarders. But we can also configure props.conf and transforms.conf on the Search Head (SH) when we want to perform search-time field extraction.

Today we are going to show you how to use the MV_ADD attribute in transforms.conf on the SH to perform search-time field extraction.

We know that, in any log file or data, if the same field occurs multiple times in the same event with different values, Splunk will keep only the first field value (in most cases, though not always) and the other values will be discarded.

So, how do we capture all the values of a field that occurs multiple times in one event? Let's see how.

Below is the sample data:

ANIMAL=cow BIRD=parrot ANIMAL=goat BIRD=sparrow


First, go to the location where you want to save the sample data and create a file there to hold it.


Here, we have created a file called data.txt in the /tmp location. You can use any other location, or any existing file, to store your data.


In the next step we will configure inputs.conf on the UF, where we will give the absolute path of data.txt, the index name, and the metadata (host, source, sourcetype), though defining the metadata is not mandatory.
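The original inputs.conf is not shown here, but based on the path, index, and sourcetype mentioned in this article, a minimal sketch might look like this (the exact stanza in the original setup may differ):

```ini
# inputs.conf on the Universal Forwarder (sketch, assuming /tmp/data.txt
# and the index/sourcetype names used in this article)
[monitor:///tmp/data.txt]
index = name
sourcetype = date
disabled = false
```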

Here, we have specified index = name.


Now we will configure props.conf. As this is search-time field extraction, we will configure props.conf on the Search Head (SH). You can find props.conf at the following path: $SPLUNK_HOME/etc/system/local.

In props.conf, write:
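The original screenshot of props.conf is not available, but from the attributes described below, a sketch of the stanza would be:

```ini
# props.conf on the Search Head (sketch based on the attributes
# described in this article; [date] is the sourcetype stanza)
[date]
SHOULD_LINEMERGE = false
REPORT-class = abc
```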


As you can see, we have mentioned sourcetype = date in inputs.conf, so in props.conf we have to use that sourcetype as the stanza name.

You can see the attribute SHOULD_LINEMERGE = false. It tells Splunk not to merge lines, so each line is broken into a separate event.

Now, the second attribute is REPORT-class = abc (the general format is REPORT-<class_name> = <unique_stanza_name>). So here the class name is 'class' (you can give any string) and the unique stanza name is abc (you can give any string). You then have to specify that stanza name in transforms.conf. Let's see how in the next step.

**We use REPORT in props.conf for search-time field extraction.


Here, transforms.conf will also be configured on the SH.


So, what have we done here?


[abc] is the transformation name, given as the stanza header.

In REGEX, we have given the pattern that will find the field name ANIMAL and capture the field value according to the pattern.

Due to the FORMAT attribute, each value matched by the REGEX pattern will get stored as a field value of the ANIMAL field.

MV_ADD = true will make the ANIMAL field multivalued.
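Putting the three attributes together, the transforms.conf stanza would look something like this (the exact REGEX from the original screenshot is not available, so the pattern below is an assumption consistent with the described behavior):

```ini
# transforms.conf on the Search Head (sketch; the REGEX pattern is
# assumed, matching every ANIMAL=<value> pair in the event)
[abc]
REGEX = ANIMAL=(\w+)
FORMAT = ANIMAL::$1
MV_ADD = true
```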


After configuring the configuration files you should always restart Splunk on the SH and the UF, so that all the changes take effect.


After restarting Splunk, just go to the location of data.txt, open it with the command [vi data.txt], and write the sample data into it.


So, as you can see, all the events appear in the Search Head, and in the selected-fields panel you can see that two fields have been created, i.e. ANIMAL and BIRD.

As we said before, Splunk takes only the first value of a field that occurs multiple times in one event. But with MV_ADD = true the field has become a multivalued field, and two values are present in the ANIMAL field, i.e. cow and goat.

But as the BIRD field didn't match the pattern specified in REGEX, the BIRD field is still single-valued, i.e. only the first value (parrot) is present in the BIRD field and the second value (sparrow) is discarded.
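To see why ANIMAL ends up with two values while BIRD keeps only one, you can try the same kind of pattern on the sample event outside of Splunk. This small Python sketch (an illustration, not part of the Splunk configuration) applies a regex analogous to the assumed REGEX above:

```python
import re

# The sample event from this article
sample = "ANIMAL=cow BIRD=parrot ANIMAL=goat BIRD=sparrow"

# A pattern analogous to the one assumed in transforms.conf:
# it captures every ANIMAL=<value> occurrence in the event.
animal_values = re.findall(r"ANIMAL=(\w+)", sample)
print(animal_values)  # ['cow', 'goat']
```

Because the pattern matches every occurrence, both cow and goat are captured, which is exactly what MV_ADD = true preserves as a multivalued field; BIRD never matches this pattern, so it is left to Splunk's default single-value extraction.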


Hope this has helped you achieve the requirement of extracting every value of a field that occurs multiple times in a single event, without fail.


Happy Splunking  !!
