High Performance syslogging for Splunk using syslog-ng – Part 2
As I mentioned in part one of this blog, I managed a sizable deployment of Splunk/Syslog servers (2.5TB/day). I had 8 syslog-ng engines in 3 geographically separate data centers: Hong Kong, London, and St. Louis. Each group of syslog-ng servers was load balanced with F5, and each group was sending traffic to its own regional indexers. Some of the syslog servers processed upward of 40,000 EPS (burst traffic). The recommendation that I am about to describe here is what worked for me; your mileage may vary, of course. I tried optimizing the syslog-ng engines to get as much performance as possible out of them. If you feel, however, that it is overkill or if you don’t have the manpower to …
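A common pattern in deployments like this is to have syslog-ng write incoming events to per-host files that Splunk then monitors. The sketch below is purely illustrative (ports, paths, and source names are assumptions, not the author's actual configuration):

```
# Illustrative syslog-ng.conf fragment: receive on 514/udp+tcp,
# write one directory per sending host for Splunk to monitor.
source s_net {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514) max-connections(1000));
};
destination d_splunk {
    file("/var/log/remote/${HOST}/messages" create-dirs(yes));
};
log { source(s_net); destination(d_splunk); };
```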
Smart AnSwerS #62
Hey there community and welcome to the 62nd installment of Smart AnSwerS.
There’s a lot of hustle and bustle going on at Splunk today as we will be expanding HQ with a brand new building next door! Construction has been ongoing for the past two years, but the big day is finally here with more than half of the Splunkers in our current building moving over. Folks are packing up their desks for the rest of the morning because in a little less than two hours, we’ll be celebrating the opening of the new building in true Splunk style with a Cinco de Mayo party. There is just no other way
Check out this week’s featured Splunk Answers posts:
High Performance syslogging for Splunk using syslog-ng – Part 1
Today I am going to discuss a subject that I consider extremely critical to any successful Splunk deployment: what is the best method of capturing syslog events into Splunk? As you probably already know, there is no lack of articles on the topic of syslog on the Internet, which is fantastic because it enriches the knowledge of our community. This blog is broken into two parts. In part one, I will cover three scenarios for implementing syslog with Splunk. In part two, I will share my own experience running a large Splunk/Syslog environment and what you can do to increase performance and ease management.
When given the choice between using a syslog agent (ex: http://sflanders.net/2013/10/25/syslog-agents-windows/ ) or the UF (Universal …
Tracing your TCP IPv4 connections with eBPF and BCC from the Linux kernel JIT-VM to Splunk
Starting with Linux kernel 4.1, an interesting feature got merged: eBPF. For anyone who has played with networking, BPF should sound familiar: it is a filtering system available to user-space tools such as tcpdump or wireshark to filter and display only the wanted (filtered) packets. The e in eBPF means extended, taking it beyond just network traffic and allowing various things to be traced from the kernel: syscall capture, kprobes, tracepoints, etc.
eBPF runs a piece of C code compiled to bytecode, which goes through the Just-In-Time compiler to the BPF interpreter. In short, eBPF uses a virtual machine inside the Linux kernel to interpret the code. In the current git tree, BPF offers 89 instructions called from the bytecode buffer, making …
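The post's BCC tooling captures these connection events in-kernel. Purely as a userspace contrast (this is not eBPF), the same IPv4 connection tuples can be decoded from /proc/net/tcp, where the kernel exposes addresses as little-endian hex. A minimal Python sketch:

```python
import socket
import struct

def parse_proc_tcp_addr(hexpair):
    """Decode an 'ADDR:PORT' field from /proc/net/tcp, e.g. '0100007F:0016'.

    The kernel stores the IPv4 address as little-endian hex and the port
    as plain hex; the example above decodes to ('127.0.0.1', 22).
    """
    ip_hex, port_hex = hexpair.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)

def established_connections(path="/proc/net/tcp"):
    """Yield (local, remote) address tuples for ESTABLISHED (state 01) sockets."""
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if fields[3] == "01":
                yield parse_proc_tcp_addr(fields[1]), parse_proc_tcp_addr(fields[2])
```

Unlike the eBPF approach, this only snapshots current sockets; it cannot trace short-lived connections as they happen, which is exactly what kernel-side tracing buys you.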
Splunk 6.4 – Using CORS and SSL settings with HTTP Event Collector
In Splunk 6.4.x and beyond, the CORS and SSL settings for HTTP Event Collector are dedicated (no longer shared with the REST API). To use CORS and SSL in 6.4, you must configure the new settings, which are located in the [http] stanza of inputs.conf.
In Splunk 6.3.x, CORS and SSL settings for HTTP Event Collector are shared with Splunk’s REST API, and are set in server.conf in the [httpServer] and [sslConfig] stanzas.
In Splunk 6.4.x we’ve introduced dedicated settings for HEC. This means you can now have more fine-grained control of your HEC endpoint.
It also means that if you were relying on CORS and SSL prior to 6.4, you must configure the new settings in 6.4; they do not migrate over automatically.
The settings are located …
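As a rough illustration (the certificate path and origin value below are placeholders; consult the 6.4 inputs.conf spec for the authoritative setting names), the dedicated HEC stanza looks something like this in inputs.conf:

```
[http]
# Dedicated HEC settings in 6.4+ (previously inherited from server.conf)
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycert.pem
crossOriginSharingPolicy = https://app.example.com
```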
Enriching threat feeds with WHOIS information
It’s been almost 2 years since I spent a summer in Seattle interning with the Splunk Security Practice (SecPrax) team. Damn, time flies! The Splunk security community is growing every day, thanks to the unbelievable amount of flexibility, visibility, and insight Splunk Enterprise offers for all data; and as I have learned, all data is security relevant. Now that I’m back at Splunk working with the Security Research team, this is my first blog post, and I would like to hear what you have to say about it, so please leave feedback or a comment.
What am I missing while doing threat intelligence?
While doing some research looking for threat intelligence data sets to ingest into Splunk, I realized there can be …
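The enrichment idea can be sketched with nothing but the WHOIS protocol itself (RFC 3912: send the query plus CRLF to port 43 and read until the server closes). The server name and the field parser below are illustrative assumptions, not the post's code:

```python
import socket

def whois_query(domain, server="whois.iana.org"):
    """Raw RFC 3912 WHOIS lookup: send the query + CRLF to TCP port 43."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_whois_fields(text):
    """Collect 'Key: Value' lines into a dict for lookup-style enrichment."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("%"):
            key, _, value = line.partition(":")
            if value.strip():
                fields.setdefault(key.strip(), value.strip())
    return fields
```

Fields such as registrar and creation date parsed this way can be written out as a CSV and joined against threat-feed indicators with a Splunk lookup.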
Smart AnSwerS #61
Hey there community and welcome to the 61st installment of Smart AnSwerS.
I just had the pleasure of joining over 60 Splunk users for the April SplunkTrust Virtual .conf session on Best Practices for Splunk SSL by dwaddle and starcher. You can find the recording and slides for this and previous presentations on the Virtual .conf wiki page in case you missed out. For those of you in the San Francisco Bay Area that want to continue getting your Splunk clue on, come out to the SFBA Splunk User Group meeting at Splunk HQ next Wednesday, May 4th @ 6:30PM PDT. Becky Burwell from Yahoo!/Flickr will give a talk on batch search parallelization, and Sasha Velednitsky…
Smart AnSwerS #60
Hey there community and welcome to the 60th installment of Smart AnSwerS.
Hot off the press! The next SplunkTrust Virtual .conf Session has been scheduled for next Thursday, April 28th, 2016 @ 9:00AM PST. Duane Waddle and George Starcher will be giving their popular talk “Avoid the SSLippery Slope of Default SSL”, which has been used and referenced far and wide among the Splunk community in the past couple years. See what the hype is all about by visiting the Meetup page to RSVP and find the WebEx link to join us next week!
Check out this week’s featured Splunk Answers posts:
How to put an expiration date on a set of saved searches or alerts…
When entropy meets Shannon
This is the third post on URL analysis; please have a look at the two other posts for more context about what can be done with Splunk to analyze URLs:
In this article you will find information on how to detect DNS tunnels. While you can find lots of very useful apps on Splunkbase to help you analyze DNS data, it is always good for curious individuals to discover some of the techniques being used underneath.
A lot of captive portals are bypassed every day by anyone able to run a DNS request. If someone can run the following command on their machine:
$ host splunk.com
splunk.com has address 22.214.171.124
...
Without being authenticated …
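The "entropy" the title alludes to is Shannon entropy: DNS-tunnel payloads packed into subdomain labels tend to look far more random than human-chosen hostnames. A generic helper (not the post's actual code):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string, in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())
```

Long, encoded tunnel labels approach the bit ceiling of their alphabet, so thresholding per-label entropy (combined with label length) is one simple detection signal.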