Astronomy Part 3: Splunk, Megastructures, and ITSI
It’s been over half a decade since I wrote a blog entry here at Splunk about an astronomy-related topic, and that’s because I was waiting for something interesting to talk about. In the past, I have spoken about star brightness, but recent news has taken this subject to another level.
For those who may not be following, last fall astronomers found an inexplicable property of a star almost 1,500 light years away. Using Kepler spacecraft data, the star was shown to dim by up to 22 percent of its original brightness at various times. To put this in perspective, even the largest planets passing in front of a star dim it by less than one percent. So, it was …
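For the curious, the one-percent figure follows from the standard transit-depth relation: the fractional dimming is the ratio of the planet's projected disk area to the star's. Taking Jupiter at roughly a tenth of the Sun's radius as a back-of-the-envelope case:

```latex
\frac{\Delta F}{F} = \left(\frac{R_p}{R_\star}\right)^2 \approx (0.1)^2 = 1\%
```

This is why a 22 percent dip is so hard to explain with a planet: a single occulting body would need a radius of about $\sqrt{0.22} \approx 0.47\,R_\star$, nearly half the size of the star itself.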
Another Update to Keyword App
It’s been three years since I first released the relatively simple Keyword app on Splunkbase and wrote an initial blog entry describing it, followed by an updated entry. In summary, the Keyword app is a series of form search dashboards, designed for Splunk 6.x and later, that allow a relatively new user to type in keywords (e.g., error, success, fail*) and get quick analytical results such as baselines, predictions, outliers, etc. Splunk administrators can give this app to their users as is, use the app as a template to write their own keyword dashboards, or take the searches in the app to create new views.
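The kind of analysis those dashboards automate can be sketched in plain Python: match events against a wildcard keyword, bucket the matches by hour, and flag hours that spike above a simple baseline. The function names, sample data, and the mean-plus-k-standard-deviations threshold are my own illustration, not how the app itself is implemented.

```python
import statistics
from fnmatch import fnmatch

def matches(message, keyword):
    """True if any token in the message matches the keyword (wildcards like fail* OK)."""
    return any(fnmatch(tok.lower(), keyword.lower()) for tok in message.split())

def outlier_hours(counts, k=1.5):
    """Flag hours whose count exceeds the mean baseline by k standard deviations."""
    mean = statistics.mean(counts.values())
    stdev = statistics.pstdev(counts.values())
    return {h: n for h, n in counts.items() if n > mean + k * stdev}

print(matches("connection failure on port 8089", "fail*"))  # True

# hourly counts of matching events; hour 3 spikes well above the baseline
counts = {0: 2, 1: 3, 2: 2, 3: 12, 4: 3}
print(outlier_hours(counts))  # {3: 12}
```

A dashboard simply wraps this idea in a form: the user supplies the keyword, and the search behind the panel does the counting and flagging.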
For this update, I’ve used fellow Splunker Hutch’s icons to update the …
Steps for implementing Fraud Detection
A couple of years ago, I wrote a blog article about how easy it is to detect fraud, mostly in the financial services industry, using Splunk Enterprise. What I provided were the last steps on using the Splunk Search Processing Language to accomplish the task. However, for most people who are new to Splunk, that doesn’t really help, as it only gives you a prescription after you’ve uncovered the symptoms and, should I say, the possible disease.
Today, I’d like to step back a little and give you the full high-level steps for implementing fraud detection for your needs. This may make the previous article a little clearer.
Understand Your Use Cases
Before you do anything, …
Please Bypass the Database
It has been a while since I posted to these pages, and I am sure there may be one or two of you who miss my erudite musings or, as some may say, the ramblings of a longtime Splunker. Either way, here’s my first post for 2015.
I have noticed that there are quite a few deployments in the world that write time series data to a log-rotated file and have another process translate those events into rows and columns to be ingested into a relational database. After this extract, transform, and load (ETL) process, they then use SQL to query their database records, either for ad-hoc search or for aggregate reporting. This practice has been going on for …
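To make the argument concrete, here is a minimal sketch of why that middle step is often unnecessary: an aggregate report can come straight off the raw log lines, with no rows-and-columns translation and no SQL. The log line format and field names here are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical log line format: "2015-02-01T10:00:03 status=500 path=/checkout"
LINE = re.compile(r"(?P<ts>\S+)\s+status=(?P<status>\d+)\s+path=(?P<path>\S+)")

def count_by_status(lines):
    """Aggregate report straight from the raw log -- no ETL, no database."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("status")] += 1
    return counts

log = [
    "2015-02-01T10:00:01 status=200 path=/home",
    "2015-02-01T10:00:02 status=200 path=/cart",
    "2015-02-01T10:00:03 status=500 path=/checkout",
]
print(count_by_status(log))  # Counter({'200': 2, '500': 1})
```

This is, in miniature, what indexing the log directly buys you: the field extraction happens at search time, so there is no translation pipeline to build or maintain.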
Updated Traffic App
A few years ago, I created a publicly available traffic app for monitoring traffic incidents in major US cities, configurable by the user. Since then, the provider of the feed has cut down on the number of cities they monitor and no longer provides incident counts per intersection. Nevertheless, they still provide a Jam Factor: a subjective number for a roadway that indicates how busy (or jammed) it is.
For my reference implementation, I used this Jam Factor field to let you visually see your city’s (assuming the provider covers it) current Jam Factor for major highways. This updated traffic app that you can download has new dashboards that you can use to …
Updated Keyword App
Last year I created a simple app called Keyword that consists of a series of form search dashboards that perform Splunk searches in the background, without the user having to know the Splunk search language. You can read about the original app here and see how easy it is to use. This year, I added some dashboards for the rare command, but I didn’t think that was newsworthy enough to blog about.
Then, Joe Welsh wrote a blog entry about using the cluster command in Splunk, which allows you to find anomalies using a log reduction approach. Joe’s example using Nagios is easy to follow and gives the novice a useful way to surface rare events. So, using this approach, I …
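The idea behind log reduction is simple enough to sketch in a few lines of Python: collapse each log line to a rough template, count how often each template occurs, and look at the least common ones. This is only a crude stand-in for what the cluster command actually does (its grouping is more sophisticated than masking digits), but it shows the principle.

```python
from collections import Counter

def template(line):
    """Reduce a log line to a rough template by masking the digits.
    (A crude stand-in for the grouping that Splunk's cluster command performs.)"""
    return "".join("#" if c.isdigit() else c for c in line)

def rare_events(lines, limit=1):
    """Return the least common templates -- the unusual events worth a look."""
    counts = Counter(template(l) for l in lines)
    return counts.most_common()[::-1][:limit]

log = [
    "service check ok host=web01",
    "service check ok host=web02",
    "service check ok host=web03",
    "disk critical host=db07 usage=97%",
]
print(rare_events(log))  # [('disk critical host=db## usage=##%', 1)]
```

The three routine check lines collapse into one template, while the one-off disk alert stands out as the rarest cluster, which is exactly the kind of needle a log reduction approach is meant to surface.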
Splunk as a Recipient on the JMS Grid
A number of years ago, I was fascinated by the idea of SETI@home. The idea was that home computers, while idling, would be sent calculations to perform in the search for extraterrestrial life. If you wanted to participate, you would register your computer with the project, and your unused cycles would be used for calculations sent back to the main servers. You could call it a poor man’s grid, but I thought of it as a massive extension for overworked servers. I thought the whole idea could be applied to the Java Message Service (JMS) used in J2EE application servers.
Almost a decade ago, I would walk around corporations at “closing” time and see a mass array …
Another NY Metro Splunk Users Group Meeting
We had our first NY Metro Splunk Users Group meeting of the year this week. It was hosted at BlackRock in NYC, with Reed Kelly, one of the leaders of the users group, playing host. Thanks, Reed.
Our first order of business was to watch a presentation from Splunk Product Manager Jack Coates on the new 3.0 Splunk Common Information Model. Unlike the past CIM, which focused heavily on security, the new CIM is general purpose for all of IT and flexible enough to add more knowledge when needed. As a bonus, the app in the app store has data models to quickly get you started and test your data sources.
Next, we had a discussion (or some …
Using Splunk as a data store for developers
A number of years ago, I wrote a blog entry called Everybody Splunk with the Splunk SDK, which succinctly encouraged developers to put their applications’ data into Splunk and then search the indexed data, rather than doing sequential searches over unstructured text. Since it’s been a while, and I don’t expect people to memorize the dissertations of ancient history (to paraphrase Bob Dylan), I’ve decided to write about the topic again, but this time in more detail, with explanations of how to proceed.
Why Splunk as a Data Store?
Some may proclaim that there are many NoSQL-like data stores out there already, so why use Splunk as an application data store? The answers point to simplicity, …
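The core of the earlier advice, indexed search instead of sequential scanning, is worth a toy illustration. The sketch below is a minimal inverted index: each token maps to the set of events containing it, so a multi-keyword search becomes a set intersection rather than a pass over every event. This is only a teaching model of the idea, not how Splunk actually stores or retrieves data.

```python
from collections import defaultdict

class TinyIndex:
    """Toy inverted index: maps each token to the set of event ids containing it."""

    def __init__(self):
        self.events = []
        self.postings = defaultdict(set)

    def add(self, text):
        """Index an event: record it and post its tokens."""
        eid = len(self.events)
        self.events.append(text)
        for tok in text.lower().split():
            self.postings[tok].add(eid)

    def search(self, *tokens):
        """Events containing all tokens: a set intersection, not a full scan."""
        ids = set.intersection(*(self.postings.get(t.lower(), set()) for t in tokens))
        return [self.events[i] for i in sorted(ids)]

idx = TinyIndex()
idx.add("user alice login success")
idx.add("user bob login failure")
idx.add("payment declined for bob")
print(idx.search("bob", "login"))  # ['user bob login failure']
```

With an index like this, search cost scales with the size of the matching posting lists, not with the total volume of text, which is the whole argument for indexing application data instead of grepping it.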
Over the course of a day, a typical Splunk user probably utilizes less than 50% of the available Splunk commands. It may be that the most popular commands, such as stats, transaction, eval, top, timechart, and chart, are already sufficient for the types of manipulation and reporting the use case requires. Another way to look at it is that the other commands go unused because they appear less frequently, and are hence less popular, in the abundant Splunk blogs, documentation, wikis, and answers.
To raise awareness of many of these commands that see less use in the Splunk community, the field engineers at Splunk …