HEC (HTTP Event Collector) with Syslog-NG: An Aggregated and Scalable Data Collection Method in Splunk

Are you planning to ingest a large volume of logs into Splunk? Here we present a scalable data collection method. Most people know the basic options: data can be collected from any application server by installing forwarders or through agentless integration methods. Nowadays most application servers, network devices, operating systems and applications still use the syslog standard for system and application logging. Below is a diagram of the different data ingestion methods in Splunk.


In this post we cover a new and interesting Splunk topic: HEC with Syslog-NG.

If you take one thing away from this blog, let it be this: do not send syslog traffic (on any port) directly to Splunk indexers.


Here are some reasons:

  • Sending "514" traffic to just one indexer works only in the smallest of deployments
  • UDP load balancing is typically trickier than TCP load balancing
  • Syslog is a protocol, not a sourcetype
  • A syslog stream typically carries multiple sourcetypes

Best Practice: Pre-filter syslog traffic using syslog-ng or rsyslog

  • It provides a separate sourcetype for each technology in the syslog stream of events
  • Use a UF (good) or HEC (best!) back end for proper sourcetyping and data distribution

Here are the diagrams :

UF with Syslog-NG


HEC with Syslog-NG


In this blog we will discuss HEC with Syslog-NG.

You can also read: What happens when the License Master goes down?

To collect data from a syslog server at scale, follow these few easy steps. This method lets us bypass Heavy Forwarders entirely, because all the filtering can be done directly in the syslog-ng configuration file.
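As a sketch of what that pre-filtering can look like (the names and patterns below are illustrative, not from this setup; the destinations are assumed to be defined as in Step 2), syslog-ng can split a single port-514 stream into per-technology sourcetypes:

```
# Illustrative syslog-ng fragment: one listener, per-technology routing.
source s_net { network(transport("udp") port(514)); };

# Cisco ASA messages carry a %ASA program prefix; match them with a regex.
filter f_asa { program("^%ASA" type(pcre)); };

# Route ASA traffic to one HEC destination, everything else to a fallback.
log { source(s_net); filter(f_asa); destination(d_hec_asa); };
log { source(s_net); destination(d_hec_other); flags(fallback); };
```

Each log path can point at its own http() destination with its own sourcetype in the body, which is how one syslog stream becomes several properly typed Splunk sourcetypes.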

Step 1: Create an HEC token on the indexer.

a) Log in to your indexer. Click Settings, then Data Inputs, then HTTP Event Collector.


b) Click Global Settings to enable HEC on this server.


Configure as follows.


c) Click New Token to create an HEC token.

  1. Give your token a name. We have used "testhec". Click Next.

  2. Select an index from the list. We have selected hec. Click Review to see the settings changes.

  3. After reviewing the settings, click Next. An HEC token will now be generated.
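Before moving on, it is worth noting where this token will be used. The values below are hypothetical placeholders, not from this setup; substitute your indexer's internal IP and the token value Splunk generated for "testhec":

```shell
# Hypothetical values -- replace with your own indexer address and token.
INDEXER_IP="10.0.0.5"
HEC_TOKEN="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# HEC listens on port 8088 by default; events are posted to this endpoint.
HEC_URL="https://${INDEXER_IP}:8088/services/collector/event"
echo "$HEC_URL"

# Quick manual test against a live indexer (-k skips certificate checks):
#   curl -k "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" \
#        -d '{"event": "hello hec", "index": "hec", "sourcetype": "testlog"}'
```

If the commented curl test returns `{"text":"Success","code":0}`, the token and endpoint are working and syslog-ng can be pointed at the same URL.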


Step 2: Syslog-NG configuration on the syslog server.

a) Create a repository file in "/etc/yum.repos.d/".

# cd /etc/yum.repos.d/

Download the repository file from the link below.

# wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng324/repo/epel-7/czanik-syslog-ng325-epel-7.repo

b) Install Syslog-NG.

# yum install syslog-ng

c) Check the syslog-ng version. In our case we used syslog-ng version 3.24. Make sure your syslog-ng version supports the "http" destination module.

# syslog-ng --version

d) Install ‘http’ module for syslog-ng.

# yum install syslog-ng-http

e) Then enable syslog-ng.

# systemctl enable syslog-ng

After that start syslog-ng service.

# systemctl start syslog-ng

Next, go to "/etc/syslog-ng":

# cd /etc/syslog-ng

List the files

# ls

Here we have to edit syslog-ng.conf to make the configuration changes.


f) Open syslog-ng.conf

# vi syslog-ng.conf

Inside syslog-ng.conf, add the following configuration.


In the source section we define a statement named my_auth and give the path "/var/log/secure" as the log file to monitor.

# Source: the file you want to monitor.

source my_auth {
    file("/var/log/secure");
};

In the destination section we define a statement named my_idx using the http module. Inside the http module, specify the URL where you want to send the data: the HEC endpoint of the indexer, built from the indexer's internal IP and port 8088 (the default HEC port). In user() specify the HEC token name you created on the indexer (in our case "testhec"), and in password() the token value generated for it. The body() option carries the event envelope; there we set the index to "hec" and the sourcetype to "testlog".

# Destination: where you want to send the data (the indexer's HEC endpoint).

destination my_idx {
    http(
        url("https://<indexer-internal-ip>:8088/services/collector/event")
        user_agent("syslog-ng User Agent")
#       headers("Connection: close")
        user("testhec")
        password("<hec-token-value>")
        method("POST")
        body("{ \"time\": ${S_UNIXTIME},
                \"host\": \"${HOST}\",
                \"source\": \"${HOST_FROM}\",
                \"sourcetype\": \"testlog\",
                \"index\": \"hec\",
                \"event\": { \"message\": \"${MSG}\",
                             \"msg_header\": \"${PROGRAM}:${PID}\",
                             \"severity\": \"${LEVEL}\",
                             \"date_time\": \"${ISODATE}\" } }")
#       Note: if you have a valid certificate, enable TLS verification like
#       this; with the default self-signed certificate, peer-verify(no)
#       skips verification instead.
#       tls(
#           pkcs12-file("/path/to/server.p12")
#           ca-dir("/path/to/cadir")   # optional
#           peer-verify(yes)
#       )
    );
};

# Log path: send the source to the destination.
log { source(my_auth); destination(my_idx); };
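As a sanity check, the snippet below renders the same JSON envelope that the body() template above produces, using made-up stand-in values for the syslog-ng macros, and pipes it through a JSON parser to confirm it is well formed:

```shell
# Stand-in sample values for the syslog-ng macros used in body().
HOST="syslog01"; HOST_FROM="10.0.0.21"
PROGRAM="sshd"; PID="2143"; LEVEL="info"
MSG="Accepted password for admin"
ISODATE="2021-01-01T00:00:00+00:00"; S_UNIXTIME="1609459200"

# Render the same JSON envelope the body() template produces.
payload=$(printf '{"time": %s, "host": "%s", "source": "%s", "sourcetype": "testlog", "index": "hec", "event": {"message": "%s", "msg_header": "%s:%s", "severity": "%s", "date_time": "%s"}}' \
  "$S_UNIXTIME" "$HOST" "$HOST_FROM" "$MSG" "$PROGRAM" "$PID" "$LEVEL" "$ISODATE")

# Validate that the envelope is well-formed JSON.
echo "$payload" | python3 -m json.tool
```

HEC rejects events whose body is not valid JSON, so checking the envelope once by hand can save a round of log-debugging after the restart.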

g) Restart the syslog-ng service. You can validate the configuration first with "syslog-ng --syntax-only".

# systemctl restart syslog-ng

h) Now check the status of SELinux. It should be in permissive mode so that syslog-ng is allowed to send logs out on port 8088.

You can also read: Fishbucket in Splunk

To see the status:

# getenforce

To switch to permissive mode, use "setenforce 0":

# setenforce 0
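Note that "setenforce 0" only lasts until the next reboot. To make permissive mode persistent, you can also set it in the standard SELinux config file (path as on RHEL/CentOS):

```
# /etc/selinux/config
SELINUX=permissive
```

After editing this file, the mode survives reboots; "getenforce" should then report Permissive every time.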

Check the status again.


Step 3: Check the log on Splunk.

Search the index and sourcetype to view the logs.

In the search box, type "index=hec sourcetype=testlog" and press Enter.


You can see that data is arriving in the specified index.

We hope this blog has helped you understand the Syslog-NG with HEC integration step by step. The same method can also be used to send data from a syslog server to Splunk Cloud.

Happy Splunking!!
