Pick Up Where You Left Off In Scripted And Modular Inputs
Splunk is very good at tracking what has already been read and what is new when it ingests machine data that lives on disk (it uses the fishbucket for this). However, a lot of machine data never touches disk; examples include data available only through third-party APIs or data held in memory. To reach this data, we often use scripted or modular inputs.
When dealing with scripted or modular inputs, it is important to collect only the events or information that are new since the last time the input ran. You don’t want to keep indexing the same events over and over, because that causes index bloat and performance issues. So, you need a way of …
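The usual pattern is to persist a checkpoint between runs. Here is a minimal sketch of that idea (the checkpoint file layout, the `ts` field, and the `new_events` helper are all hypothetical illustrations, not part of any Splunk SDK):

```python
import json
import os

def load_checkpoint(path):
    """Return the last saved marker, or None on the input's first run."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)["last_seen"]

def save_checkpoint(path, last_seen):
    """Persist the newest marker so the next run starts after it."""
    with open(path, "w") as f:
        json.dump({"last_seen": last_seen}, f)

def new_events(events, last_seen):
    """Keep only events newer than the checkpoint (timestamps assumed comparable)."""
    if last_seen is None:
        return events
    return [e for e in events if e["ts"] > last_seen]
```

A scripted input would call `load_checkpoint` at startup, fetch from the API, emit only `new_events(...)`, and finish with `save_checkpoint` using the newest timestamp it saw.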
Identifying Zombie, Chatty and Orphan VMs using Splunk App for VMware
Virtualization is difficult to manage given the complex moving parts involved, from storage to networking to hardware. In a dynamic VMware environment with Distributed Resource Scheduler (DRS) and High Availability (HA) enabled, virtual machines (VMs) can transition through multiple hosts and clusters and can potentially become unregistered. This can cause a VMware administrator to lose visibility into these VMs. In addition, each VM in a datacenter can cost from a couple hundred dollars into the thousands (http://roitco.vmware.com), depending on your environment and infrastructure costs.
In this blog post I will cover three types of VMs that can exist in your VMware infrastructure and require additional attention. The definitions of these VMs vary, but I’m sure …
I’m somewhat of a Heroku fanboy. I’ve been using it for some time because it is just so simple to deploy applications. However, I’ve never really looked too deeply into the logs produced by my apps via the command line.
In this post we’ll look at how you can start Splunking data from apps deployed in Heroku, along with some recipes for visualising it using SPL…
Splunk Answers has now been migrated!
Splunk Answers has just been migrated to a new platform! Read more about the process and goals.
What to expect
You won’t see much in the way of UI changes, but the site underneath will be more stable and more flexible. You should experience faster loading times, more responsive controls, and, very importantly, an improved search experience. We will also have access to new and improved spam blocking, a much-needed change.
The goal of the initial migration is to maintain feature parity with the existing Splunk Answers site. This will help us make sure we don’t break anything you’ve come to rely on. Over time, we will be able to launch new features and improved functionality.
Big data and the business of higher education
There was a nice article published on GovDataDownload today about the potential for big data to impact the business of higher education. The material does a nice job of explaining big data in simple concepts, then cites an excellent example of how it can help the bottom line of a university directly. Perhaps more importantly, the article closes with a mention of big data being used to help with learning analytics by “helping identify predictors and patterns for student success”, which is near and dear to my heart as a former educator.…
Using Flume to Sink Data to Splunk
If you have ever used Splunk, you can probably come up with a number of reasons to use a Splunk forwarder whenever possible to send data to Splunk. To quickly illustrate some of the benefits: a Splunk forwarder maintains an internal index of where it left off when sending data, so if the Splunk indexer has to be taken offline for some reason, the forwarder can resume its task once the indexer is brought back up. Additionally, a forwarder can automatically load balance traffic across multiple Splunk indexers. There’s already a Splunk blog here devoted to getting data into Splunk that highlights a forwarder’s benefits; I encourage you to review it.
But what if using a Splunk Forwarder is …
Recently I had an internal request for how to access Splunk’s Export endpoint from a node.js application. The Export endpoint is useful for efficiently exporting large amounts of data out of Splunk, as it streams the results directly rather than requiring you to continually poll for more results. It turns out we don’t currently support the Export endpoint in our JS SDK, but it is very easy to access it yourself using Mikeal’s super simple request module.
A picture (or, in this case, a snippet) is worth a thousand words. Below you can see how to export Splunk’s internal index. Once you start it up, it will instantly begin streaming. Make sure you have enough disk space, or …
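To show the shape of the endpoint itself: the post’s snippet uses node.js and the request module, but here is a rough standard-library Python equivalent. The base URL, credentials, and search string are placeholders; `/services/search/jobs/export` is Splunk’s REST export endpoint.

```python
import base64
import ssl
import urllib.parse
import urllib.request

def export_params(query, output_mode="json"):
    """Build the POST body for Splunk's export endpoint."""
    return {"search": "search " + query, "output_mode": output_mode}

def stream_export(base_url, username, password, query):
    """Yield result lines from /services/search/jobs/export as they stream.

    The export endpoint pushes results directly, so no job polling is needed.
    base_url, username and password are placeholders for your own deployment.
    """
    data = urllib.parse.urlencode(export_params(query)).encode()
    req = urllib.request.Request(base_url + "/services/search/jobs/export",
                                 data=data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    # Splunk ships with a self-signed cert by default, hence the unverified
    # context here; verify properly in production.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        for line in resp:
            if line.strip():
                yield line
```

Calling `stream_export("https://localhost:8089", "admin", "changeme", "index=_internal")` and iterating the generator would print internal-index events as they arrive.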
Battling APTs with the Kill Chain Method
Recently I had the privilege of presenting to a monthly meeting of the Raleigh, NC chapter of the ISSA. My presentation focused on educating the audience of security professionals on advanced persistent threats (APTs) and how they can use the kill chain method to battle them. I’d like to share some of the key points and highlights here in an effort to help readers better defend themselves against motivated attackers.
There are three main points about APTs that must be established before we dive into the kill chain method itself, namely:
1) Unlike automated, technology-driven attacks of years past, APTs are driven by a combination of people, processes, and technology. APTs are therefore goal-oriented (financial, political, etc.), human-directed, coordinated, …
Cross-Platform Scripted Inputs
Building an app and making sure that it is environment agnostic can be a bit challenging. One challenge I come across over and over is how to make an app work cross-platform… whether Splunk is installed on Windows, macOS, or a *nix environment.
A good illustration of that challenge is the use of a “Scripted Input” in your app. Scripted inputs are one of the many ways you can have Splunk run scripts to collect data from third-party interfaces such as REST APIs. The way you reference that script in a Windows environment is different from the way you would reference it in a macOS environment.
Let’s take the example of the following scripted input stanza:
disabled = 1
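For context, the excerpt above shows only a single attribute of the stanza. A complete scripted-input stanza in `inputs.conf` typically looks something like the following sketch (the script name, interval, sourcetype, and index are made up for illustration):

```ini
# inputs.conf (illustrative)
[script://./bin/collect.py]
interval = 60
sourcetype = myapp:data
index = main
disabled = 1
```

On *nix and macOS the relative path uses forward slashes as above; on Windows, the same stanza would typically reference the script with backslashes (`[script://.\bin\collect.py]`), which is precisely the cross-platform difference this post deals with.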
APP WALKTHROUGH: Writing a custom search command
One of the best ways to learn is by example. If you want to build your own Splunk app, one of the best things you can do is dissect other apps.
In the YouTube video below, I slowly go through a simple but useful app that adds a single search command: timewrap.
I go line-by-line, file-by-file, explaining everything. You will learn something.
YouTube video: Splunk App Walkthrough: Timewrap
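For readers who want the gist without the video: a legacy custom search command is a script that Splunk invokes with the current search results as CSV on stdin, and whose CSV output on stdout becomes the new result set. A minimal sketch of that shape — not the timewrap implementation itself, and with a made-up transform:

```python
import csv
import sys

def transform(rows):
    """Hypothetical transform: tag each result row.

    A real command (like timewrap) would reshape the results here instead.
    """
    for row in rows:
        row["tagged"] = "yes"
        yield row

def main(infile=sys.stdin, outfile=sys.stdout):
    # Splunk hands the current result set to the command as CSV on stdin
    # and reads the transformed results back as CSV from stdout.
    reader = csv.DictReader(infile)
    rows = list(transform(reader))
    if rows:
        writer = csv.DictWriter(outfile, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```

The app then registers the script with a stanza in its `commands.conf` (roughly, a `[timewrap]` stanza whose `filename` points at the script) so that `| timewrap` becomes usable in SPL.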
A few notes:
- Yes, that’s a Hobbit movie poster behind me
- It’s about 50 minutes long, most of it dealing with the details of the Python search command.
- Tell me if it was helpful, or what I could do to improve it.