Managing your Ingestion with the search bar

Many of our cloud customers have asked me how to better manage their data, e.g. determine volume by sourcetype or volume by forwarder. This information is typically available via the Distributed Management Console, but in some cases a person’s role prevents them from getting full access to it. In the article below, I will guide you through several searches that let anyone dive a bit deeper into their Splunk Cloud service.

Below are a few searches I find helpful.

Total Ingestion Volume over time

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type="RolloverSummary" | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) as GB

[Screenshot: TotalLicUsage]

Be sure to double-check your time range selector here; I usually search over the past 7 days. If you want to look hour by hour, simply adjust the search time. If you want to see what you’ve ingested over the past 30 days, you’ll need to adjust accordingly, and if you want to get fancy, be sure to set earliest=-30d@d latest=-0d@d to ensure you’re using midnight to midnight as the markers for the time range. Note that this search uses type="RolloverSummary", which indicates when the log rolled each day. You could also use type="Usage" as well.
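Putting those time modifiers together, the midnight-to-midnight 30-day version of the search would look something like this (same search as above, just with the earliest/latest modifiers added inline):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type="RolloverSummary" earliest=-30d@d latest=-0d@d | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) as GB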


ProTip: Get even fancier and use | eval myLicense=XXX (your daily license size in GB) to see how close you are getting to your limit.
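As a sketch of that ProTip, assuming a hypothetical 100 GB/day license (swap your own number in for the XXX placeholder), you could chart daily usage as a percentage of the limit:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type="RolloverSummary" | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) as GB | eval myLicense=100 | eval pctUsed=round(GB/myLicense*100,2)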

You can use the same search to look by various other input components, such as ingestion by sourcetype:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage | eval GB=b/1024/1024/1024 |timechart sum(GB) by st


[Screenshot: VolBySourcetype]

Or if you want to see ingestion by forwarder (or forwarder AND sourcetype) use the following:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage | eval GB=b/1024/1024/1024 | stats sum(GB) as GB by h, st | sort - GB

However you want to slice it, Splunk automatically indexes its own internal logs, including the license log (license_usage.log).

Stay tuned for part 2 where we start to dive even deeper into managing your instance, straight from the search bar.

If you happen to be in Orlando next week, be sure to stop by and say “Hi” during the Cloud Adoption Team’s presentation of “Best Practices in Splunk Cloud.”


Any advice on working around the source/host squashing limit (2,000 tuples on Splunk 6.0+ according to this: http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume)?

2,000 tuples are easily reached, and if that happens we cannot report on the data volume per host any more, which is quite unfortunate.

September 21, 2016