Mission Critical Availability with Splunk Enterprise 6.1

One of the newest features in Splunk Enterprise 6.1 is Multi-site Clustering. This feature strengthens our ‘Operational Intelligence for everyone’ message by making mission-critical machine data available to users at all times — the cluster can even withstand an entire datacenter outage. Splunk Enterprise 6.1 raises the bar on enterprise readiness.


Just as a recap, the clustering feature has been available in Splunk Enterprise since version 5.0. The earlier versions provided the much-needed High Availability (HA) capabilities – if one of the indexers goes down, a replicated copy of the same data remains available to users, thus minimizing any interruption in service.


The new Multi-site Clustering feature in 6.1 provides the …
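As a sketch of what multi-site configuration looks like, the stanzas below show the kind of `server.conf` settings involved on a 6.1 cluster master. The site names and factor values here are illustrative, not a recommendation for any particular deployment:

```ini
# server.conf on the cluster master (illustrative values)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
# "origin" copies stay in the site where the data arrived;
# "total" is the cluster-wide copy count across all sites.
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
```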

» Continue reading

Clustering Optimizations in Splunk 6

One of the new features we introduced in Splunk 6 is Simplified Clustering Management. This allows administrators to set up and monitor the health of the cluster through an easy-to-use, intuitive UI. In addition to the cool new UI, many performance optimizations were added so that the cluster handles peer failures, and recovers from them, blazingly fast. In this blog post, I’m going to highlight two such performance optimizations.

1. First Searchable Copy Optimization

This optimization is all about making sure that at least one complete, searchable copy of the data exists in the cluster, so that business users can continue to search while the cluster master is handling peer failures.
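The number of searchable copies the master maintains is driven by the search factor. A minimal sketch of the relevant `server.conf` stanza on the master (values are illustrative):

```ini
# server.conf on the cluster master (illustrative values)
[clustering]
mode = master
replication_factor = 3   # total copies of the raw data
search_factor = 2        # how many of those copies are kept searchable
```

With a search factor of 2, the master can prioritize restoring one fully searchable copy first when a peer fails, which is what this optimization is about.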

Let’s take a look at this with an example.  Assume …

» Continue reading

Disk Space Estimator for Index Replication

One of the first questions customers ask when they start considering index replication is about storage requirements. Index replication keeps additional copies of data for redundancy, so the main questions are: how does it affect storage needs, and which factors should you consider when designing a scalable storage architecture? I’ll cover the important factors in this blog post.

There are two major dimensions to consider: the first is the replication policies, and the second is the data retention period.

Replication Factor (RF) and Search Factor (SF) control the replication policies. RF determines the number of copies of the raw data to keep, while SF determines the number of copies of the time-series index files. For syslog data, the raw data …
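Putting these dimensions together, a back-of-the-envelope estimate can be sketched in a few lines of Python. The compression ratios below (raw data compressing to roughly 15% of ingested volume, index files to roughly 35%) are assumptions for illustration — actual ratios vary considerably by data type:

```python
def estimated_disk_gb(daily_ingest_gb, retention_days,
                      replication_factor, search_factor,
                      raw_ratio=0.15, index_ratio=0.35):
    """Rough cluster-wide storage estimate for index replication.

    raw_ratio:   compressed raw data as a fraction of ingested volume (assumed)
    index_ratio: time-series index files as a fraction of ingested volume (assumed)
    """
    total_ingest = daily_ingest_gb * retention_days
    raw_gb = total_ingest * raw_ratio * replication_factor    # RF copies of raw data
    idx_gb = total_ingest * index_ratio * search_factor       # SF searchable copies
    return raw_gb + idx_gb

# Example: 100 GB/day, 90-day retention, RF=3, SF=2
# -> 9000*0.15*3 + 9000*0.35*2 = 4050 + 6300 = 10350 GB
print(estimated_disk_gb(100, 90, 3, 2))
```

Note how raw data scales with RF while the (larger) index files scale with SF — which is why SF often dominates the storage bill.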

» Continue reading

Replicate your data

Imagine a scenario in which one of your Splunk indexers abruptly goes down due to a hardware failure. The data stored on that indexer isn’t available for searching until the indexer is restored. Your business users are unhappy, because they’re unable to act on very important historical data.

This scenario can be completely avoided, thanks to a new feature in Splunk 5.0 called Index Replication. Index replication allows IT administrators to specify and store redundant copies of the data across a cluster of indexers. When one of the indexers goes down, the system automatically detects the failure and redirects search queries to other available indexers, which have the data. Everything happens so seamlessly that your business users …
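To give a flavor of the setup, the `server.conf` fragments below sketch a minimal cluster: one master and a set of peers pointing at it. The host name, port, and secret are placeholders:

```ini
# server.conf on the cluster master (illustrative values)
[clustering]
mode = master
replication_factor = 3
search_factor = 2

# server.conf on each peer indexer
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = changeme
```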

» Continue reading

The Magic behind Report Acceleration

One of the coolest features we’ve introduced in Splunk 5.0 is Report Acceleration. It speeds up reports by orders of magnitude, and it is very easy to set up. So, what is the secret behind such powerful acceleration? I’ll attempt to explain some of the concepts that power report acceleration in this post.

Before report acceleration, one of the ways for users to speed up reports was summary indexing. Although very powerful, summary indexing was more suited for Splunk admins than for report developers. Summary indexing also didn’t have a way to auto-update its summaries to back-fill data, and it stored the summaries on the search heads instead of on the indexers.
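By contrast, enabling report acceleration is a one-line change on a saved report. A minimal `savedsearches.conf` sketch — the report name and search string here are hypothetical examples, not from the post:

```ini
# savedsearches.conf (hypothetical report)
[Web Status Summary]
search = index=web sourcetype=access_combined | stats count by status
# Enable report acceleration for this saved report
auto_summarize = 1
# How far back the summary should cover
auto_summarize.dispatch.earliest_time = -30d
```

Unlike summary indexing, the resulting summaries are maintained automatically and live alongside the data on the indexers.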

Report acceleration is targeted …

» Continue reading