Small IT, Big Problems: Discovering the Unknown with Log Data

The following excerpt is from a contributed blog post to InfoWorld:
Small IT, Big Problems: Discovering the Unknown with Log Data

For your IT team to successfully leverage log data, you first need to find a way to manage it.

Collect and centralize
Aggregating log data in one place as it is generated, across apps, infrastructure, and distributed environments, is essential to getting an end-to-end view of IT. Searching through individual data silos and manually correlating events is time-consuming, especially when a key service is down. Sending all syslog and Windows events to a single place, for example, frees you from relying on multiple point tools to resolve an issue. Automating log collection and centralizing the data is the starting point for getting more value from it.
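As a minimal sketch of what "collect and centralize" means in practice, the snippet below is a tiny UDP syslog listener that appends every incoming message, from any source, to one central file. The class name, port, and file path are illustrative assumptions, not any product's API.

```python
import socket

# Minimal sketch of centralized collection: a UDP syslog listener that
# appends every incoming datagram to one central file, regardless of
# which system sent it. Port and paths here are illustrative only.
class SyslogCollector:
    def __init__(self, bind_addr=("0.0.0.0", 5514)):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(bind_addr)

    def collect(self, out_path, max_msgs):
        """Receive max_msgs syslog datagrams and append them to out_path."""
        with open(out_path, "ab") as out:
            for _ in range(max_msgs):
                data, _src = self.sock.recvfrom(8192)  # one syslog message
                out.write(data.rstrip(b"\n") + b"\n")
        self.sock.close()
```

With every host forwarding to this one endpoint, a single file (or store) holds the full picture, so an outage investigation starts in one place instead of several.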

Most tools and manual approaches require users to normalize or select specific data from the log files, which takes time and discards valuable context. A better approach is to collect in real time and keep log data in its raw, native format so it can answer unforeseen questions. That can be challenging, however: there is no standard format for log data. Just about every system, application, and security device emits a different format.
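A raw-first approach can be sketched as follows: wrap each incoming line with minimal metadata (receive time, source) but leave the original text untouched, so any field can still be extracted later. The function and field names are illustrative assumptions.

```python
import time

# Sketch of raw-first ingestion: attach only minimal metadata and keep
# the original line verbatim, so no upfront schema is needed and no
# context is lost. Field names here are illustrative, not a standard.
def ingest(line, source):
    return {
        "received": time.time(),  # when we saw it
        "source": source,         # where it came from
        "raw": line.rstrip("\n"), # native format preserved as-is
    }

# Disparate formats land side by side with no normalization step:
events = [
    ingest("Oct  9 14:02:07 web01 sshd[412]: Failed password for root", "syslog"),
    ingest('{"EventID": 4625, "Channel": "Security"}', "wineventlog"),
]
```

Because the raw text survives intact, a question nobody anticipated at collection time (a new field, an odd error string) can still be answered by searching the stored events.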

Spreadsheets and BI tools break down when used to analyze log data from disparate systems. The moment a schema is defined and the data forced into rows and columns, adding log data from another system becomes a mountain of extra work. The worst case is needing the context of a full log file, only to discover that it was lost when the data was normalized or run through ETL.
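The kind of context loss described above can be shown in a few lines. This hypothetical normalizer extracts two fixed columns from a log line, and everything outside those columns is gone; the raw line still has it. The line format and field names are invented for illustration.

```python
# Sketch of context loss under normalization: a fixed schema keeps only
# the columns it was built for. The log line and fields are invented.
raw = "Oct  9 14:02:07 web01 app[99]: ERROR payment failed order=1842 user=kim"

def normalize(line):
    host = line.split()[3]
    level = "ERROR" if "ERROR" in line else "INFO"
    return {"host": host, "level": level}  # order= and user= are dropped

row = normalize(raw)
# The normalized row can no longer answer "which order failed, for whom?";
# the raw line still can.
```

A spreadsheet or BI table built on `row` would need new columns, and a re-run of the pipeline, for every new question, while the raw line answers them as they come up.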

Read the full contributed InfoWorld article:
Small IT, Big Problems: Discovering the Unknown with Log Data


Shay Mowlem
VP, Product Management & Product Marketing

Learn more about Splunk Light