You're reading from Splunk Developer's Guide - Second Edition

Product type: Book
Published in: Jan 2016
ISBN-13: 9781785882371
Edition: 2nd Edition

Authors (2):

Marco Scala

Marco Scala has been delivering solutions to large enterprise customers for more than 15 years, first in the APM and J2EE field and, since 2009, in the field of Operational Intelligence and Splunk. He has provided consultancy for large Splunk installations at major customers, focusing on the best and most effective solution for each customer's needs. Since 2012, he has also been a Certified Splunk Trainer. In recent years, his major focus has been helping Splunk customers gain the maximum value from their IT data and giving the business a better view and insight. Big Data is another major field of interest, and his next challenge is using Splunk to give customers useful insights and a practical implementation and exploitation of Big Data.

Kyle Smith

Kyle Smith is a self-proclaimed geek who has been working with Splunk extensively since 2010. He enjoys integrating Splunk with new sources of data and new types of visualization. He has spoken numerous times at the Splunk User Conference (most recently in 2014, on Lesser Known Search Commands) and is an active contributor to the Splunk Answers community and to the #splunk IRC channel. He was awarded membership in the SplunkTrust as a founding member. He has published several Splunk Apps and add-ons to Splunkbase, the Splunk community's premier Apps and add-ons platform. He has worked in both higher education and private industry and is currently an integration developer for Splunk's longest-running professional services partner. He lives in central Pennsylvania with his family.


Chapter 6. Advanced Integrations and Development

In this chapter, we will discuss additional methods of integration and development. We will start by building a modular D3 visualization, an expansion of the D3 visualization from the last chapter. Using modular visualizations makes your dashboards more flexible and lets you tweak a particular visualization once for every dashboard in which it is used. We will discuss modular inputs and how to create and test them. Modular inputs allow you to consume the same type of data in a modular fashion, similar to the native directory or file monitoring inputs found in Splunk. We will also cover the KV Store, how to use it, and why to use it. The KV Store allows you to store information in a key-value manner, which has the potential to speed up lookup tables, as the data is stored in memory. Finally, we will cover how to use Bower, npm, Gulp, and Git as tools for customizing and tracking our Apps.

Modular D3 visualization


In this section, we will convert our previously used D3 box plot graph (added in Chapter 5, The Splunk Web Framework) into an extended SimpleSplunkView. The benefits here are substantial. Primarily, you gain the ability to quickly reuse the view in other dashboards: simply assign a SearchManager to the view and off you go. Retrieving the events from the SearchManager is also easier, as it is handled natively within the extension. Another great benefit is that the default loading screens are used while the view is loading, so it's not just a blank panel until it is loaded; it is actually a first-class-citizen view. The first thing you need when creating an extended SimpleSplunkView is the base template.

Let's take a look at the basic structure and then fill in the pieces we are missing:

define(function(require, exports, module) {
    var _ = require("underscore");
    var mvc = require("splunkjs/mvc");
    var SimpleSplunkView = require("splunkjs/mvc/simplesplunkview");
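
The pieces we fill in are the properties and methods passed to SimpleSplunkView.extend(). As a rough illustration of the overall shape (the class name, option defaults, and method bodies below are placeholders, not the finished box plot code), an extended view generally looks something like this:

define(function(require, exports, module) {
    var _ = require("underscore");
    var mvc = require("splunkjs/mvc");
    var SimpleSplunkView = require("splunkjs/mvc/simplesplunkview");

    // Placeholder class; the real D3 box plot logic replaces these stubs.
    var BoxPlotView = SimpleSplunkView.extend({
        className: "boxplotview",

        options: {
            managerid: null,   // the SearchManager that feeds this view
            data: "preview"    // consume preview results as they stream in
        },

        // Called once to set up the container; whatever is returned here is
        // passed to updateView as its first argument.
        createView: function() {
            return this;
        },

        // Reshape the raw search results into the structure the box plot expects.
        formatData: function(data) {
            return data;
        },

        // Called whenever new data arrives; all of the D3 drawing happens here.
        updateView: function(viz, data) {
            this.$el.html("");
            // d3.select(this.el) ... render the box plot into this element
        }
    });

    return BoxPlotView;
});

A dashboard script can then require this module and instantiate it with a managerid pointing at an existing SearchManager; the default loading behavior and result handling come for free.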

Modular inputs


Modular inputs are a feature of Splunk that allows you to extend the platform in ways specifically geared toward consuming data. Modular inputs promote your scripted inputs to first-class citizens of the Splunk platform: you define how to collect the data, and your users define the settings with which to collect it. "Why would you want that?" you might ask. Lots of reasons! For example, let's say that you want to gather data about the weather. You could write a scripted input to collect a single city's weather data from the API of wunderground.com. However, what happens when you want more than one city's weather data? You would have to copy and paste the scripted input, change the API parameters, and update the API key, and if the API specification changed, you would have to update every configured scripted input. If you use a modular input, you can give the user an option to specify the API key and the API parameters in...
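
To make this concrete, here is a minimal sketch of such an input written with the Splunk SDK for JavaScript (the splunk-sdk npm package), which is one of several ways to implement a modular input. The input name (weather), its arguments (api_key, city), and the fetchWeather helper are illustrative assumptions, not a finished implementation:

var splunkjs = require("splunk-sdk");
var ModularInputs = splunkjs.ModularInputs;
var Scheme = ModularInputs.Scheme;
var Argument = ModularInputs.Argument;
var Event = ModularInputs.Event;

// Hypothetical helper; a real input would call the weather service's API here.
function fetchWeather(apiKey, city, callback) {
    callback(null, { city: city, temp_f: 72, conditions: "clear" });
}

// Describe the input and the settings the user configures in Splunk Web.
exports.getScheme = function() {
    var scheme = new Scheme("weather");
    scheme.description = "Polls a weather API for each configured city.";
    scheme.useSingleInstance = false;

    scheme.args = [
        new Argument({ name: "api_key", dataType: Argument.dataTypeString, requiredOnCreate: true }),
        new Argument({ name: "city", dataType: Argument.dataTypeString, requiredOnCreate: true })
    ];
    return scheme;
};

// Called by Splunk for each configured input stanza; writes events back to Splunk.
exports.streamEvents = function(name, singleInput, eventWriter, done) {
    fetchWeather(singleInput.api_key, singleInput.city, function(err, observation) {
        if (err) {
            return done(err);
        }
        eventWriter.writeEvent(new Event({
            stanza: name,
            sourcetype: "weather:observation",
            data: JSON.stringify(observation)
        }));
        done();
    });
};

ModularInputs.execute(exports, module);

Each city the user configures becomes its own input stanza, so adding a second city is a configuration change in Splunk Web rather than another copy of the script.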

The App Key Value Store


The App Key Value Store (KV Store) is new in Splunk 6.2. Think of it as a lookup that is stored in memory. The actual storage is a MongoDB database run by the Splunk process. The KV Store is very useful for storing state data and fills a gap that existed in earlier versions of Splunk. State data is data that defines the current condition of something; for example, the most recent memory and CPU usage of a system. You could write this data to a typical lookup file, but by using the KV Store, you gain the ability to interface with the store from within your App. The KV Store has a complete REST interface for performing CRUD (short for create, read, update, and delete) operations, making it invaluable and extremely flexible. You can also perform these CRUD operations directly from the Splunk search language, much like with a typical lookup.
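
As an illustration of that REST interface, the following sketch creates and reads records through the storage/collections/data endpoint. The App name (my_app), collection name (system_state), credentials, and field names are assumptions for the example, the collection itself would already be declared in collections.conf, and the snippet assumes a runtime with a global fetch (Node 18+ or a browser-like environment):

// KV Store collections are exposed under storage/collections/data in the REST API.
// Note: Splunk's management port (8089) uses a self-signed certificate by default,
// so TLS verification may need to be relaxed in a lab environment.
var baseUrl = "https://localhost:8089/servicesNS/nobody/my_app/storage/collections/data/system_state";
var auth = "Basic " + Buffer.from("admin:changeme").toString("base64");

// Create: POST a JSON document into the collection; Splunk returns the record's _key.
fetch(baseUrl, {
    method: "POST",
    headers: { "Authorization": auth, "Content-Type": "application/json" },
    body: JSON.stringify({ host: "web01", cpu_pct: 42, mem_pct: 73 })
})
    .then(function(res) { return res.json(); })
    .then(function(doc) { console.log("created record", doc._key); });

// Read: a GET on the same endpoint returns every record;
// a ?query={"host":"web01"} parameter filters on the server side.
fetch(baseUrl, { headers: { "Authorization": auth } })
    .then(function(res) { return res.json(); })
    .then(function(records) { console.log(records); });

From the search language, the same collection can typically be read with inputlookup and written with outputlookup once a lookup definition in transforms.conf points at it.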

When would you use the KV Store?

Well, there are quite a few instances where...

Data models


Data models are becoming an essential part of the App developer's toolkit. They help developers design and maintain the semantic knowledge of their data. Semantic knowledge is the underlying understanding of what the data being consumed means and how it should be interpreted. This knowledge is typically held only by subject matter experts, but it can be transferred to the end user in the form of data models. These data models can then be summarized and accelerated as needed with Splunk Enterprise. Data models are also the driving force behind the Pivot feature of Splunk Enterprise. They define how data is related and how it is broken down, and they are created using searches that are tiered into different sections. For example, your root event may be tag=web_logs (which says that you want all web logs, whether from IIS or Apache), and the second tier may be Errors, which constrains the child search to only web log errors (for example, status=500). This gives the end user the...
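
Once a data model exists, its datasets can be queried directly from searches. As a small sketch tying this back to the Web Framework, the following runs the hypothetical Errors object of a web_logs data model through a SearchManager (the model and object names mirror the example above and are assumptions, not a shipped model):

require(["splunkjs/mvc/searchmanager", "splunkjs/mvc/simplexml/ready!"], function(SearchManager) {
    // The datamodel command drills into a specific object of a data model.
    // If the model is accelerated, Splunk can answer from the summaries
    // instead of scanning the raw events.
    new SearchManager({
        id: "weblog-errors-search",
        search: "| datamodel web_logs Errors search | stats count",
        preview: true
    });
});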

Version control and package managers


Version control is fairly important in the realm of a Splunk App. When publishing your Apps, you must include version numbers, and the easiest way to keep track of changes is with version control. We will focus on Git, as it has become a de facto standard for version control. You can just as easily use CVS or SVN, but Git is much more flexible and easier to work with. Since almost everything in a Splunk App is ASCII-based (binary files are few and far between), Apps integrate easily with version control systems. Package managers are a newer concept, at least in the realm of web development.

The two we will cover (npm and Bower) are specifically designed for web applications, and a lot of hard work in finding, updating, and converting JavaScript libraries has already been done for you. Gulp is another tool we will investigate. It is a streaming build system that can automatically watch files for changes, update static library contents, and provide minification...
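
As a taste of what the Gulp side looks like, here is a minimal gulpfile.js sketch using the classic Gulp 3 task syntax; the source paths and the gulp-uglify plugin are illustrative choices rather than the exact setup used later:

var gulp = require("gulp");
var uglify = require("gulp-uglify");

// Minify the App's dashboard JavaScript into a build directory.
gulp.task("scripts", function() {
    return gulp.src("appserver/static/js/**/*.js")
        .pipe(uglify())
        .pipe(gulp.dest("appserver/static/build"));
});

// Re-run the scripts task whenever a source file changes.
gulp.task("watch", function() {
    gulp.watch("appserver/static/js/**/*.js", ["scripts"]);
});

gulp.task("default", ["scripts", "watch"]);

npm (and, for front-end libraries, Bower) handles fetching gulp, gulp-uglify, and any JavaScript libraries the App depends on, recording them in package.json so the versions are tracked alongside the App in Git.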

Summary


In this chapter, we converted an inline visualization into a customized SimpleSplunkView. As we saw, the benefits of this conversion are incredible. Modularity allows the same code to be used over and over, without the risk of mistakes creeping in during copy and paste. Once you've included the JavaScript in the base RequireJS stack, you can take advantage of the objects and instantiate them wherever you like.

After JavaScript views, we dove into modular inputs. Modular inputs give us the ability to reuse a script while providing the end user with a simple interface for configuring it as required. They can also be configured to take advantage of encrypted credentials within the script, securing your credentials from the casual observer. We discussed portions of the script and how they relate to the overall implementation.

We discussed the KV Store and the benefits of using it versus file-based lookups. We explored how to create them and how to...

