Splunk Data Management – Things I Wish I’d Known

One of the best ways to ensure that you’re getting the most out of any tool is to get a handle on others’ mistakes and lessons learned. Our team of Splunk Professional Services engineers is continuously gaining insight into how to best deploy Splunk, including how to overcome technical challenges as well as internal hurdles. We’re sharing these insights to help your organization gain faster value from your Splunk deployment.

This post is the first in a series of articles on “Things I Wish I’d Known” about Splunk. In today’s post, Roman Lopez, a Splunk Professional Services Consultant with SP6, and Jon Papp, SP6’s Professional Services Manager, explore things they wish they’d known about managing data within Splunk.

Normalize Your Data
(Jon Papp)

One of the first queries many people write in Splunk looks for failed logins with the keywords “failed login”. This is an excellent way to get started with Splunk, but once you are pulling in data from 20+ products, you start running into other keywords that mean the same thing: “login failure”, “authentication failed”, “error”, and so on. Say you have a scheduled alert looking for failed logins across all of your products. If each product uses its own keywords to describe a failure, you either have to write an incredibly complicated search query or maintain a separate failed-login search for each data source.
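
To make the pain concrete, a keyword-based alert that tries to cover just three products might look something like this (the index, sourcetype, and keyword values here are hypothetical):

    index=security
        (sourcetype=vendor_a:auth "failed login")
        OR (sourcetype=vendor_b:auth "authentication failed")
        OR (sourcetype=vendor_c:auth "login failure")
    | stats count by sourcetype

Every new product means another OR clause and another set of keywords to maintain.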

There is a better way: use the Splunk Common Information Model (CIM) to normalize the keywords from different products to match the Authentication data model. Now, instead of looking for the specific keyword that means “failed login” on each product, you only have to look for “action=failure.” Most of the work is already done for you in the Technology Add-ons (TAs) and apps available on Splunkbase.
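
With CIM-normalized data, one search covers every product that feeds the Authentication data model. A minimal sketch (the tag and field names come from the CIM; any additional filters or thresholds would be up to you):

    tag=authentication action=failure
    | stats count by user, src, sourcetype

Whether the raw event says “failed login” or “authentication failed,” the normalized action field lets a single query find it.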

Not only does normalizing your data to the Splunk Common Information Model greatly simplify your queries, it also prepares your environment for tools like Enterprise Security (ES), User Behavior Analytics (UBA), and IT Service Intelligence (ITSI). On top of that, using the CIM accelerates how quickly you can integrate new products into Splunk: once a new data source is made CIM-compliant, it automatically begins populating the searches, dashboards, and alerts built with CIM-based queries – no need to write new dashboards for the new data source if you don’t want to!
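
For example, a search built against the Authentication data model (a minimal sketch using tstats, not a ready-made alert) will automatically include any newly onboarded, CIM-compliant authentication source:

    | tstats count from datamodel=Authentication
        where Authentication.action=failure
        by Authentication.src Authentication.user

No dashboard changes are required; as soon as the new TA tags and normalizes the data, it shows up in the results.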

Never Underestimate the Value of Data Management
(Roman Lopez)

Your entire Splunk endeavor depends on good data. Data is the raw material you will cut and polish into actionable information, and if you feed the platform bad data, the results are useless – classic “garbage in, garbage out”. To this end, it’s critical to keep the 4 V’s of big data at the forefront when planning your environment: Volume, Variety, Velocity, and Veracity.

You need to weigh how changes in the 4 V’s will affect what you’re building. For example, your dashboard may perform fine now, but how will it hold up as volume increases? Veracity is just as crucial: are you sure you’re getting the data in the format, and with the values, you expect?

Learn Data Warehousing Principles
(Roman Lopez)

Trying to structure your Splunk environment using a traditional database methodology will land you in trouble. Database management systems (DBMS) are geared toward supporting transactional production systems, not reporting systems, and understanding that distinction is a huge advantage for anyone in the trade. Reporting systems need to churn through huge amounts of historical data and return an answer quickly, so they’re built for read speed above all else. Data warehousing methodology teaches you how to build platforms with exactly that in mind.

Build Data Ingestion Checks
(Jon Papp)

As more users come to rely on Splunk in your environment, it will become more and more important that you verify your data is complete and arriving in a timely fashion. Say a firewall rule changes in a routine network audit and now a group of Splunk forwarders can’t send data from your remote data center to your Splunk indexers. Any dashboards, reports, or alerts that you’ve built that rely on this data will now produce incorrect metrics or be entirely empty!

There is no one-size-fits-all solution for this problem, but you can design a solution that works in your environment. A simple solution would be to write a scheduled search that queries your Splunk metadata (using the metadata search command) and looks for sourcetypes (data sources) that haven’t been indexed in the last 30 minutes. The Splunk Monitoring Console also includes an out-of-the-box feature to monitor your Splunk forwarders and alert you if any of them stop communicating.
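
As one possible starting point, a scheduled search along these lines (the 30-minute threshold and the index wildcard are placeholders to tune for your environment) flags sourcetypes that have gone quiet:

    | metadata type=sourcetypes index=*
    | eval minutes_since_indexed = round((now() - recentTime) / 60)
    | where minutes_since_indexed > 30
    | table sourcetype, minutes_since_indexed, totalCount

Schedule it every 15–30 minutes and trigger an alert whenever it returns results.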

With these checks in place, a qualified Splunk admin can address issues immediately instead of getting an email from a frustrated C-level executive the next day because their report didn’t arrive, was empty, or – worse – contained incorrect metrics. Setting up these data ingestion checks sooner rather than later will save hours of administrative work as your user base grows.

SP6 – Expertise for a Successful Splunk Deployment

SP6 is a Splunk consulting firm focused on Splunk professional services, including Splunk deployment, ongoing Splunk administration, and Splunk development. SP6 also has a separate division that offers Splunk recruitment and the placement of Splunk professionals into direct-hire (FTE) roles for companies that need help acquiring their own full-time staff in today’s challenging hiring market.