Reflections on a Splunk developer’s journey : Part 2

Why should you develop?

In “Reflections on a Splunk developer’s journey : Part 1” I shared some of my experiences of developing and supporting Splunk Community Apps and Add-ons over the years.

But WHY did I choose to develop and WHY should you choose to develop and start your foray into the Splunk developer ecosystem?

Well, the reasons for developing will be different for everyone, depending on your motives. You might be a business, or you might just be an individual community collaborator.

I started developing because I discovered Splunkbase (now Splunk Apps / Answers) and realized that it was a great forum for collaborating and getting involved with the “Big Data” community to use …

» Continue reading

Announcing the Splunk Add-on for Check Point OPSEC LEA 2.1.0

Check Point administrators, rejoice: the Splunk Add-on for OPSEC LEA 2.1.0 has been released! The free update provides useful improvements to almost every aspect of the add-on.

 

User Interface

The old OPSEC interface has been completely overhauled and streamlined. The interface is no longer stuck in the past and should look right at home on your Splunk 6 search heads.

[Screenshot: the Manage Connections page]

 

The Manage Connections page now offers a much more powerful overview of your Check Point connections. As you can see in the screenshot, every connection has a set of metrics available. These differ based on the connection type. An audit connection displays the timestamp of the last event collected. A normal connection displays throughput over the last 24 hours …

» Continue reading

Reflections on a Splunk developer’s journey : Part 1

It seems like only yesterday

…that I was writing my first Splunk App. It was the openness and extensibility of the Splunk platform that attracted me to this pursuit in the first place, and when I discovered the thriving community on Splunkbase (now called Splunk Apps / Answers), I just had to contribute. 12,000+ downloads across 9 different freely available community offerings later, I am feeling somewhat reflective. So in this two-part blog series I want to share with you some of my lessons learned from developing and supporting Splunk community Apps/Add-ons (part 1) and then some musings on why you should consider developing Splunk Apps/Add-ons yourself and contributing to the Splunk developer ecosystem (part 2).

Some lessons learned…

» Continue reading

Building custom search commands in Python part I – A simple Generating command

Custom search commands in our Python SDK allow you to extend Splunk’s search language and teach it new capabilities. In this and other upcoming posts we’re going to look at how to develop several different search commands to illustrate what you can do with them.

In this post, we’re going to focus on building a very basic Generating command. A generating command generates events that can come from any source, for example an internal system or an external API. We’re going to create a GenerateHello command that will generate Hello World events based on a supplied count. The command is not very useful in itself, but it is a quick way to see how you can author custom commands.

Below …
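A minimal sketch of what such a command might look like, built on the SDK’s splunklib.searchcommands module, is shown here; the class name, option, event fields, and surrounding app plumbing (a bin/ directory plus a commands.conf entry) are illustrative assumptions rather than the exact code from the post:

```python
#!/usr/bin/env python
# generatehello.py -- illustrative generating command built with the Splunk
# Python SDK (splunklib). Option and field names are assumptions.
import sys
import time

from splunklib.searchcommands import \
    dispatch, GeneratingCommand, Configuration, Option, validators


@Configuration()
class GenerateHelloCommand(GeneratingCommand):
    """Generates a configurable number of 'Hello World' events."""

    count = Option(require=True, validate=validators.Integer())

    def generate(self):
        # Each yielded dict becomes one event in the search results.
        for i in range(1, self.count + 1):
            yield {'_time': time.time(), 'event_no': i,
                   '_raw': 'Hello World %d' % i}


dispatch(GenerateHelloCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```

Once a command like this is registered in the app’s commands.conf, it could be run from the search bar as `| generatehello count=5`.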

» Continue reading

Splunk as a Recipient on the JMS Grid

A number of years ago, I was fascinated by the idea of SETI@home. The idea was that home computers, while idling, would be sent calculations to perform in the search for extraterrestrial life. If you wanted to participate, you would register your computer with the project and your unused cycles would be utilized for calculations sent back to the main servers. You could call it a poor man’s grid, but I thought of it as a massive extension for overworked servers. I thought the whole idea could be applied to the Java Message Service (JMS) used in J2EE application servers.

Background

Almost a decade ago, I would walk around corporations at “closing” time and see a mass array …

» Continue reading

Using Splunk as a data store for developers

A number of years ago, I wrote a blog entry called Everybody Splunk with the Splunk SDK, which succinctly encouraged developers to put data into Splunk for their applications and then search on the indexed data to avoid doing sequential search on unstructured text. Since it’s been a while and I don’t expect people to memorize the dissertations of ancient history (to paraphrase Bob Dylan), I’ve decided to write about the topic again, but this time in more detail with explanations on how to proceed.

Why Splunk as a Data Store?

Some may proclaim that there are many NoSQL-like data stores out there already, so why use Splunk for an application data store? The answers point to simplicity, …
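To make the write-then-search pattern concrete, a minimal sketch with the Splunk Python SDK might look like the following; the host, credentials, index name, sourcetype, and event fields are all placeholders:

```python
# Sketch only: assumes a reachable Splunk instance, valid credentials, and an
# existing index named "myapp" -- all placeholder values.
import splunklib.client as client
import splunklib.results as results

service = client.connect(host='localhost', port=8089,
                         username='admin', password='changeme')

# Write: submit an application event directly to an index.
index = service.indexes['myapp']
index.submit('user=alice action=checkout total=42.50',
             sourcetype='myapp:events')

# Read: run a one-shot search over the indexed data and print the results.
reader = results.ResultsReader(service.jobs.oneshot(
    'search index=myapp action=checkout | stats count by user'))
for row in reader:
    print(row)
```

That pair of operations, submitting events into an index and then searching them back with SPL, is the essence of treating Splunk as an application data store.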

» Continue reading

Splunk’s New Web Framework, Volkswagen’s Data Lab, and the Internet of Things

There are many incredible features in Splunk 6. Pivot, Data Models and integrated maps really stole the show at .conf2013. But I really have to give credit to our developer team in Seattle for the massive leap forward in user interface possibilities with the addition of the integrated web framework, which is included in Splunk 6 but is also available as an app download for Splunk 5.

In the midst of all that Splunk 6 excitement at .conf, I was introduced (at the Internet of Things pavilion) to the team at Volkswagen Data Lab, and had some great discussions with them about their interest in using Splunk as a platform for the management, analysis, and visualization of data from …

» Continue reading

The new developer tool chain for data, panel participation at DeveloperWeek


A few weeks ago I had the pleasure of taking part in a panel at DeveloperWeek entitled “Next Gen Data Dev: NoSQL, NewSQL, Graph Databases, Hadoop, Machine Learning…”. On the panel I was joined by Emil Eifrem, CEO of Neo Technology and co-founder of Neo4j, as well as Ankur Goyal, Director of Engineering at MemSQL. The high-level theme was the kinds of tools that have emerged for developers to work with data, and whether or not a new breed of developer is emerging. The panel started off with quick introductions to each of the products.

  • Ankur described MemSQL as the fastest database. MemSQL is a highly performant, distributed, transactional SQL database with an in-memory write-back
» Continue reading

RedMonk Chats with Customers and Partners about Development and DevOps with Splunk

RedMonk analyst Donnie Berkholz sat down with a few Splunk customers at various locations to discuss everything from DevOps and continuous deployment to building Splunk Apps with the Web Framework in Splunk 6. First, Donnie sat down with Nick DiSanto of Snap Interactive, who talks about how they use Splunk to monitor continuous deployment and for troubleshooting, remarking “every single developer and product person uses Splunk on a daily basis”. Donnie also sat down with Steve Dodson and Kevin Conklin of Prelert, who discuss why they chose to build on Splunk and the flexibility of the Web Framework in Splunk 6. Finally, Donnie also talks with Ashish Bhutiani of Function1 about the Web Framework.

There are a ton …

» Continue reading

Command Modular Input Use Case Series

Modular Inputs and Scripted Inputs provide a great way to develop custom programs to collect and index virtually any kind of data that you can set your mind to.

But whatever platform you have deployed Splunk on, you will also have a whole bevy of other inputs just waiting for you to tap into to get that data into Splunk: the various programs that come with the platform and those that you have installed on it.

This is actually why I created the Command Modular Input that I introduced in a recent blog post: a means to leverage, as simply as possible, the power of your existing system programs and get this data into …
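For context, the traditional way to tap such a program is a scripted input: wrap it in a small script and register that script in inputs.conf, roughly as in the sketch below (the wrapper name, sourcetype, and index are placeholders). The Command Modular Input is intended to remove the need for that wrapper layer.

```
# inputs.conf -- classic scripted input: run a wrapper script on an interval
# and index whatever it writes to stdout. Paths and names are placeholders.
[script://./bin/vmstat_wrapper.sh]
interval = 60
sourcetype = vmstat
index = os
```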

» Continue reading