Splunking Kafka At Scale
At Splunk, we love data and we’re not picky about how you get it to us. We’re all about being open and flexible, and about scaling to meet your needs. We realize that not everybody has the need or desire to install the Universal Forwarder to send data to Splunk. That’s why we created the HTTP Event Collector. This has opened the door to getting a cornucopia of new data sources into Splunk, reliably and at scale.
We’re seeing more customers in Major Accounts looking to integrate their Pub/Sub message brokers with Splunk. Kafka is the most popular message broker that we’re seeing out there, but Google Cloud Pub/Sub is starting to make some noise. I’ve been asked multiple times for guidance …
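The HEC accepts events as JSON objects, and multiple events can be concatenated into a single request body for batching. As a minimal sketch of shaping consumed Kafka messages into such a payload — the message shape, sourcetype, and field names here are illustrative assumptions, not taken from the post:

```python
import json

# Hypothetical records pulled from a Kafka consumer (shape is illustrative).
messages = [
    {"topic": "web-logs", "value": "GET /index.html 200"},
    {"topic": "web-logs", "value": "POST /login 302"},
]

def to_hec_batch(messages, sourcetype="kafka:message"):
    """Concatenate one JSON event object per message, as HEC accepts for batched sends."""
    return "".join(
        json.dumps({"event": m["value"], "sourcetype": sourcetype, "source": m["topic"]})
        for m in messages
    )

payload = to_hec_batch(messages)
```

The whole batch can then be sent in one POST, amortizing connection overhead across many Kafka messages.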
Configuring Nginx With Splunk, REST API & SDK Compatibility
Last year I posted an article on how to configure HAProxy with Splunk, REST API & SDK compatibility. Yesterday, I posted an article on how to configure Nginx as a load balancer in front of a tier of HTTP Event Collectors. Today, I want to iterate on the work I did yesterday and show a basic config for Nginx that’s compatible with Splunk, the REST API and SDKs.
You’re going to need to build or install a version of Nginx that has HTTPS support enabled for its HTTP server.
If you install from source and don’t change the prefix then you’ll have everything installed in /usr/local/nginx. The rest of the article will assume this is the …
Configuring Nginx Load Balancer For The HTTP Event Collector
The HTTP Event Collector (HEC) is the perfect way to send data to Splunk, at scale, without a forwarder. If you’re a developer looking to push logs into Splunk over HTTP or you have an IoT use case then the HEC is for you. We cover multiple deployment scenarios in our docs. I want to focus on a single piece of the following distributed deployment for high availability, throughput and scale: the load balancer.
You can use any load balancer in front of the HEC, but this article focuses on using Nginx to distribute the load. I’m also going to focus on using HTTPS, as I’m assuming you care about the security of your data in flight.
You’re going to need to …
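As a rough sketch of where a config like this ends up, a stripped-down Nginx setup that terminates TLS and spreads HEC traffic across a pool might look like the following. The hostnames, certificate paths, and port 8088 (HEC’s default) are placeholder assumptions, not the article’s elided config:

```nginx
# Upstream pool of HEC nodes (hostnames are illustrative).
upstream hec_pool {
    server hec1.example.com:8088;
    server hec2.example.com:8088;
}

server {
    listen 443 ssl;
    # Certificate and key paths are placeholders.
    ssl_certificate     /etc/nginx/ssl/splunk.crt;
    ssl_certificate_key /etc/nginx/ssl/splunk.key;

    location / {
        # HEC listens over HTTPS by default, so proxy upstream over https.
        proxy_pass https://hec_pool;
    }
}
```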
Show Me Your Viz!
Have you just downloaded Splunk 6.4 and asked yourself what’s new and awesome? Have you ever built a dashboard with a custom visualization and wanted to share that with someone or easily replicate it somewhere else? Have Splunk’s core visualizations dulled your senses?
Reader, please meet Splunk 6.4 Custom Visualizations. Are you besties yet? If not, you two will be making sweet love by the end of this article.
I’m going to walk you through a Custom Visualization app I recently wrote and lay it all out there. I’m going to talk about why building these visualizations in Simple XML and HTML is a pain in your ass and how the APIs make your life easier. I’m going to …
Securely Storing & Accessing Passwords For Alert Action Scripts
I recently helped a customer securely store and access credentials for an alert action script in Splunk Cloud and wanted to share the details. Ledion Bitincka wrote a great article about storing encrypted credentials using the storage/passwords REST endpoint and accessing them in scripted inputs. This tactic is just a slight tweak on the same foundation.
This example gives you a base template to use within a shell script. You can easily adapt the methods to the language of your choice. Ledion actually gives some sample code in his article for accessing and using the stored credentials with Python.
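To make the shape of those REST calls concrete, here’s a minimal sketch of building the storage/passwords requests — the app, realm, and credential names are illustrative assumptions, and the snippet only constructs the paths and body rather than contacting a Splunk server:

```python
from urllib.parse import urlencode, quote

# Illustrative values -- the app, realm, username, and password are assumptions.
app, realm, username, password = "my_alert_app", "my_realm", "svc_user", "s3cr3t"

# Credentials are created by POSTing name/password/realm to the
# storage/passwords endpoint within the app's namespace.
create_path = f"/servicesNS/nobody/{app}/storage/passwords"
create_body = urlencode({"name": username, "password": password, "realm": realm})

# Stored entries are keyed as "realm:username:", percent-encoded in the URL,
# and read back with a GET against that entry.
entry_path = f"{create_path}/{quote(f'{realm}:{username}:')}"
```

In a real alert action script you’d send these with curl (or urllib) using the session key Splunk hands the script, which is the part Ledion’s article walks through.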
Create Bare Bones App
Create a barebones app from the UI for this to live in. For this example we’ll call …
Using The SplunkJS Stack – Part 1
I’ve recently helped a customer integrate the SplunkJS stack into their own custom web application. I wanted to spread the knowledge so others could learn as well.
What is the SplunkJS stack, you ask? The SplunkJS stack is a component of the Splunk Web Framework that lets web developers create apps in their own development environment while accessing and manipulating Splunk data. This gives you greater flexibility over the look and feel of your app, including the use of third-party visualization tools like D3 and KeyLines.
This blog post will be a three-part series. I will be covering the following topics in detail.
Configuring HAProxy & Splunk With REST API & SDK Compatibility
As a customer of Splunk I used HAProxy as a software load balancer to distribute users amongst my search heads. I was using the old search head pooling technology at the time, but the same principle holds true for our search head clustering feature; both require a load balancer to distribute users to your search heads. At the time, I couldn’t quite get HAProxy configured to allow use of the REST API. I now believe that was a function of the fact that I was on the 1.4.x branch, which didn’t support SSL proxying.
Late last year I had a customer who used our professional services to help with a project. It revolved around using our SDKs and REST API …
Splunk DB Connect & Cloudera Hive JDBC Connector
First things first. Try Hunk before you go down this path. Hunk allows you to seamlessly query your Hive tables with native SPL queries from the search interface. This gives you all the goodness of Splunk, including agile reporting and analytics, role-based access controls, report acceleration and the fast time to value that you’ve come to know and love from Splunk. If you have tried Hunk and it’s just not the right fit, then read on.
I recently helped a customer use Splunk DB Connect and the Cloudera Hive JDBC Connector to query tables on their Hive Server 2. Hive Server 2 is available in CDH 4.1+. You can read more about it on this Cloudera blog post.
Here are the quick …