Splunk is a log aggregator that lets users analyze their machine data. The Rapid7 application for Splunk Enterprise integrates Rapid7 products into Splunk.
You also have the option of configuring Splunk via conf files.
You can download the newest version of the Splunk App for InsightIDR here.
Further details on installing and configuring the Splunk app are available on the Details page here.
NOTE: When installing this app, make sure to install and configure the Splunk app on each node of the index cluster.
The data source corresponds to what is being searched or queried in Splunk, and using this option when forwarding logs to InsightIDR will always work.
Sourcetype is also an option, although in some instances this may not work because Splunk may not tag events with the correct sourcetype before forwarding them to InsightIDR.
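To illustrate the distinction: in Splunk, a source identifies the originating input (such as a file path), while a sourcetype is Splunk's format label for the events. The values below are placeholders, not fields from your environment:

```
source="/var/log/cisco/asa.log"    # the data source: where the events came from
sourcetype="cisco:asa"             # the sourcetype: how Splunk classifies their format
```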
While the Rapid7 Splunk app is used for sending data from InsightIDR to Splunk, using conf files is an additional option. Manual edits to the Splunk .conf files may cause issues in the Splunk app user interface, but this will not impact the log forwarding.
Event Sources Need Their Own Port
InsightIDR requires that each event source (e.g., firewall, AD, DNS) log stream be sent to its own unique port. As such, when forwarding data from Splunk to your Collector, each log stream must be configured and forwarded separately. If multiple data streams are sent to the same port, only one log stream will be parsed.
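For example, a firewall stream and a DNS stream would each need their own tcpout stanza pointing at a distinct Collector port. The output names, hostname, and ports below are placeholders:

```
# outputs.conf - one output per event source, each on its own port
[tcpout:firewall_out]
server = collector.example.com:2000

[tcpout:dns_out]
server = collector.example.com:2001
```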
To forward the logs to InsightIDR, you must modify 3 files, all located in $SPLUNK_HOME/etc/system/local:
- Create outputs.conf if it does not exist. This file tells Splunk where to send your data. Send each stream to a separate port.
- Create a stanza called [tcpout:<output name>].
- <output name> can be whatever you want; it is only an identifier. Add a single setting named server, whose value is the hostname or IP address of the machine running the Collector, followed by the destination port, as below.
```
[tcpout:ciscofirewall_out]
server = collector.bos.example.com:1234
```
- Create transforms.conf if it doesn't exist. In it, dictate how Splunk is to filter your data, and what action it should take on the result. This will pass all data through to the output stanza created above.
- For each stream you wish to capture, create a stanza called [<name>]; the name can be anything, as it is only an identifier. This stanza contains three settings:
- REGEX, whose value should be a period (.), which tells Splunk to let all data through.
- DEST_KEY, whose value should be _SYSLOG_ROUTING.
- FORMAT, whose value should match the output name specified in outputs.conf.
```
[ciscofirewall]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = ciscofirewall_out
```
- In props.conf, indicate which streams to hand off to the transforms defined in transforms.conf.
- Create a stanza for each host from which you’re capturing a stream, called [host::<hostname>].
- The hostname should be the name of the machine from which the logs in question are coming.
- Each stanza should have a single value named TRANSFORMS-<class>, whose value is the name of a stanza in transforms.conf.
- <class> is an identifier and can be anything.
```
[host::cisc-fire.boss.example.com]
TRANSFORMS-ciscfire = ciscofirewall
```
- Once all of these values are in place, restart Splunk, and your data should begin forwarding to your collector machine over the specified ports.
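As a complete sketch, adding a second event source (for example, a DNS server) repeats the same three-file pattern with a new identifier, host, and port. All names, hostnames, and ports below are placeholders:

```
# outputs.conf
[tcpout:dnsserver_out]
server = collector.bos.example.com:1235

# transforms.conf
[dnsserver]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = dnsserver_out

# props.conf
[host::dns01.bos.example.com]
TRANSFORMS-dns = dnsserver
```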
To listen for log events forwarded by Splunk, create a new event source of the appropriate type (e.g., Palo Alto Firewall). Choose the Log Aggregator Collection Method, select Splunk from the menu, and choose a unique port (we recommend starting at 2000 and working your way up one at a time).
Your Collector will open the specified port and begin listening for data coming from a Splunk forwarder.