
Splunk Operational Intelligence Cookbook - Third Edition

By Josh Diakun, Paul R. Johnson, Derek Mock
About this book
Splunk makes it easy for you to take control of your data, and with the Splunk Operational Intelligence Cookbook, you can be confident that you are taking advantage of the Big Data revolution and driving your business with the cutting edge of operational intelligence and business analytics. With more than 80 recipes that demonstrate all of Splunk’s features, not only will you find quick solutions to common problems, but you’ll also learn a wide range of strategies and uncover new ideas that will make you rethink what operational intelligence means to you and your organization. You’ll discover recipes on data processing, searching and reporting, dashboards, and visualizations to make data shareable, communicable, and most importantly, meaningful. You’ll also find step-by-step demonstrations that walk you through building an operational intelligence application containing vital features essential to understanding data and to help you successfully integrate a data-driven way of thinking into your organization. Throughout the book, you’ll dive deeper into Splunk, explore data models and pivots to extend your intelligence capabilities, and perform advanced searching with machine learning to explore your data in even more sophisticated ways. Splunk is changing the business landscape, so make sure you’re taking advantage of it.
Publication date:
May 2018
Publisher
Packt
Pages
541
ISBN
9781788835237

 

Play Time – Getting Data In

In this chapter, we will cover the basic ways to get data into Splunk, in addition to some other recipes that will help prepare you for later chapters. You will learn about the following recipes:

  • Indexing files and directories
  • Getting data through network ports
  • Using scripted inputs
  • Using modular inputs
  • Using the Universal Forwarder to gather data
  • Receiving data using the HTTP Event Collector
  • Getting data from databases using DB Connect
  • Loading the sample data for this book
  • Data onboarding: Defining field extractions
  • Data onboarding: Defining event types and tags
  • Installing the Machine Learning Toolkit
 

Introduction

The machine data that facilitates operational intelligence comes in many different forms and from many different sources. Splunk can collect and index data from several sources, including log files written by web servers or business applications, syslog data streaming in from network devices, or the output of custom developed scripts. Even data that looks complex at first can be easily collected, indexed, transformed, and presented back to you in real time.

This chapter will walk you through the basic recipes that will act as the building blocks to get the data you want into Splunk. The chapter will further serve as an introduction to the sample data sets that we will use to build our own operational intelligence Splunk app. The datasets will be coming from a hypothetical three-tier e-commerce web application and will contain web server logs, application logs, and database logs.

Splunk Enterprise can index any type of data; however, it works best with time-series data (data with timestamps). When Splunk Enterprise indexes data, it breaks it into events, based on timestamps and/or event size, and puts them into indexes. Indexes are data stores that Splunk has engineered to be very fast, searchable, and scalable across a distributed server environment.

All data indexed into Splunk is assigned a source type. The source type helps identify the data format type of the event and where it has come from. Splunk has several preconfigured source types, but you can also specify your own. Example source types include access_combined, cisco_syslog, and linux_secure. The source type is added to the data when the indexer indexes it into Splunk. It is a key field used when performing field extractions and in many searches to filter the data being searched.

The Splunk community plays a big part in making it easy to get data into Splunk. The ability to extend Splunk has provided the opportunity for the development of inputs, commands, and applications that can be easily shared. If there is a particular system or application you are looking to index data from, there is most likely someone who has developed and published relevant configurations and tools that can be easily leveraged by your own Splunk Enterprise deployment.

Splunk Enterprise is designed to make the collection of data very easy, and it will not take long before you are asked (or decide yourself) to get as much data into Splunk as possible, at least as much as your license will allow for!

 

Indexing files and directories

File- and directory-based inputs are the most commonly used ways of getting data into Splunk. The primary need for these types of input will be to index logfiles. Almost every application or system produces a logfile, and it is generally full of data that you want to be able to search and report on.

Splunk can continuously monitor for new data being written to existing files or new files being added to a directory, and it is able to index this data in real time. Depending on the type of application that creates the logfiles, you would set up Splunk to either monitor an individual file based on its location, or scan an entire directory and monitor all the files that exist within it. The latter configuration is more commonly used when the logfiles being produced have unique filenames, such as filenames containing a timestamp.

This recipe will show you how to configure Splunk to continuously monitor and index the contents of a rolling logfile located on the Splunk server. The recipe specifically shows how to monitor and index a Red Hat Linux system's messages logfile (/var/log/messages). However, the same principle can be applied to a logfile on a Windows system, and a sample file is provided. Do not attempt to index the Windows event logs this way, as Splunk has specific Windows event inputs for this.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server and access to read the /var/log/messages file on Linux. No other prerequisites are required. If you are not using Linux and/or do not have access to the /var/log/messages location on your Splunk server, use the cp01_messages.log file that is provided and upload it to an accessible directory on your Splunk server.

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you.

How to do it...

Follow these steps to monitor and index the contents of a file:

  1. Log in to your Splunk server.
  2. From the menu in the top right-hand corner, click on the Settings menu and then click on the Add Data link:
  3. If you are prompted to take a quick tour, click on Skip.
  4. In the How do you want to add data section, click on Monitor:
  5. Click on the Files & Directories section:
  6. In the File or Directory section, enter the path to the logfile (/var/log/messages or the location of the cp01_messages.log file), ensure Continuously Monitor is selected, and click on Next:
If you are just looking to do a one-time upload of a file, you can select Index Once instead. This can be useful to index a set of data that you would like to put into Splunk, either to backfill some missing or incomplete data or just to take advantage of its searching and reporting tools.
  7. If you are using the provided file or the native /var/log/messages file, the data preview will show the correct line breaking of events and timestamp recognition. Click on the Next button.
  8. A Save Source Type box will pop up. Enter linux_messages as the Name and then click on Save:
  9. On the Input Settings page, leave all the default settings and click Review.
  10. Review the settings and if everything is correct, click Submit.
  11. If everything was successful, you should see a File input has been created successfully message:
  12. Click on the Start searching button. The Search & Reporting app will open with the search already populated based on the settings supplied earlier in the recipe.
In this recipe, we could have simply used the common syslog source type or let Splunk choose a source type name for us; however, starting a new source type is often a better choice. The syslog format can look completely different depending on the data source. As knowledge objects, such as field extractions, are built on top of source types, using a single syslog source type for everything can make it challenging to search for the data you need.

How it works...

When you add a new file or directory data input, you are basically adding a new configuration stanza into an inputs.conf file behind the scenes. The Splunk server can contain one or more inputs.conf files, and these files are either located in $SPLUNK_HOME/etc/system/local or in the local directory of a Splunk app.

Splunk uses the monitor input type and is set to point to either a file or a directory. If you set the monitor to a directory, all the files within that directory will be monitored. When Splunk monitors files, it initially starts by indexing all the data that it can read from the beginning. Once complete, Splunk maintains a record of where it last read the data from, and if any new data comes into the file, it reads this data and advances the record. The process is nearly identical to using the tail command in Unix-based operating systems. If you are monitoring a directory, Splunk also provides many additional configuration options, such as blacklisting files you don't want Splunk to index.
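As a rough sketch of what such a directory monitor might look like behind the scenes, the following hypothetical inputs.conf stanza monitors an entire directory while using the blacklist option to skip compressed archives (the path and regex here are examples only):

[monitor:///var/log]
sourcetype = linux_messages
blacklist = \.(gz|bz2|zip)$

Any file under /var/log that does not match the blacklist regex would be monitored and indexed with the linux_messages source type.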

For more information on Splunk's configuration files, visit https://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles.

There's more...

While adding inputs to monitor files and directories can be done through the web interface of Splunk, as outlined in this recipe, there are other approaches to add multiple inputs quickly. These allow for customization of the many configuration options that Splunk provides.

Adding a file or directory data input using the CLI

Instead of using the GUI, you can add a file or directory input through the Splunk command-line interface (CLI). Navigate to your $SPLUNK_HOME/bin directory and execute the following command (replacing the file or directory to be monitored with your own):

For Unix, we will be using the following code to add a file or directory input:

./splunk add monitor /var/log/messages -sourcetype linux_messages
  

For Windows, we will be using the following code to add a file or directory input:

splunk add monitor c:/filelocation/cp01_messages.log -sourcetype linux_messages

There are a number of different parameters that can be passed along with the file location to monitor.
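For example, the destination index can also be set from the CLI; the following sketch assumes an index named os has already been created:

./splunk add monitor /var/log/messages -sourcetype linux_messages -index os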

See the Splunk documentation for more on data inputs using the CLI (https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorfilesanddirectoriesusingtheCLI).

Adding a file or directory input using inputs.conf

Another common method of adding the file and directory inputs is to manually add them to the inputs.conf configuration file directly. This approach is often used for large environments or when configuring Splunk forwarders to monitor for files or directories on endpoints.

Edit $SPLUNK_HOME/etc/system/local/inputs.conf and add your input. After your inputs are added, Splunk will need to be restarted to recognize these changes.

For Unix, we will use the following code:

[monitor:///var/log/messages]
sourcetype = linux_messages

For Windows, we will use the following code:

[monitor://c:/filelocation/cp01_messages.log]
sourcetype = linux_messages
Editing inputs.conf directly is often a much faster way of adding new files or directories to monitor when several inputs are needed. When editing inputs.conf, ensure that the correct syntax is used and remember that Splunk will need a restart for modifications to take effect. Additionally, specifying the source type in the inputs.conf file is one of the best methods for assigning source types.

One-time indexing of data files using the Splunk CLI

Although you can select Upload and Index a file from the Splunk GUI to upload and index a file, there are a couple of CLI functions that can be used to perform one-time bulk loads of data.

Use the oneshot command to tell Splunk where the file is located and which parameters to use, such as the source type:

./splunk add oneshot XXXXXXX 

Another way is to place the file you wish to index into the Splunk spool directory, $SPLUNK_HOME/var/spool/splunk, and then add the file using the spool command, as shown in the following code:

./splunk spool XXXXXXX
If using Windows, omit the dot and slash (./) that is in front of the Splunk commands mentioned earlier.
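As a concrete illustration, assuming the sample file has been copied to /tmp on the Splunk server, a one-time load with a source type might look like the following sketch:

./splunk add oneshot /tmp/cp01_messages.log -sourcetype linux_messages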

Indexing the Windows event logs

Splunk comes with special inputs.conf configurations for some source types, including monitoring Windows event logs. Typically, the Splunk Universal Forwarder (UF) would be installed on a Windows server and configured to forward the Windows events to the Splunk indexer(s). The configurations for inputs.conf to monitor the Windows Application, Security, and System event logs in real time are as follows:

[WinEventLog://Application] 
disabled = 0  
[WinEventLog://Security] 
disabled = 0  
[WinEventLog://System] 
disabled = 0  

By default, the event data will go into the main index, unless another index is specified.
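If you prefer to keep this data out of main, an index can be specified per stanza; the following sketch assumes an index named wineventlog has already been created:

[WinEventLog://Security]
disabled = 0
index = wineventlog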

See also

  • The Getting data through network ports recipe
  • The Using scripted inputs recipe
  • The Using modular inputs recipe
 

Getting data through network ports

Not every machine has the luxury of being able to write logfiles. Sending data over network ports and protocols is still very common. For instance, sending logs through syslog is still the primary method of capturing data from network devices such as firewalls, routers, and switches.

Sending data to Splunk over network ports doesn't need to be limited to network devices. Applications and scripts can use socket communication to the network ports that Splunk is listening on. This can be a very useful tool in your back pocket, as there can be scenarios where you need to get data into Splunk but don't necessarily have the ability to write to a file.

This recipe will show you how to configure Splunk to receive syslog data on a UDP network port, but it is also applicable to the TCP port configuration.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server. No other prerequisites are required.

How to do it...

Follow these steps to configure Splunk to receive network UDP data:

  1. Log in to your Splunk server.
  2. From the menu in the top right-hand corner, click on the Settings menu and then click on the Add Data link.
  3. If you are prompted to take a quick tour, click on Skip.
  4. In the How do you want to add data section, click on Monitor.
  5. Click on the TCP / UDP section:
  6. Ensure the UDP option is selected and in the Port section, enter 514. On Unix/Linux, Splunk must be running as root to access privileged ports such as 514. An alternative would be to specify a higher port, such as port 1514, or route data from 514 to another port using routing rules in iptables. Then, click on Next:
  7. In the Source type section, select Select and then select syslog from the Select Source Type drop-down list and click Review:
  8. Review the settings and if everything is correct, click Submit.
  9. If everything was successful, you should see a UDP input has been created successfully message:
  10. Click on the Start Searching button. The Search & Reporting app will open with the search already populated based on the settings supplied earlier in the recipe. Splunk is now configured to listen on UDP port 514. Any data sent to this port now will be assigned the syslog source type. To search for the syslog source type, you can run the following search:
source="udp:514" sourcetype="syslog" 

Understandably, you will not see any data unless you happen to be sending data to your Splunk server IP on UDP port 514.
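If you want to quickly verify the input, you can send a test message from another host; the following is just a sketch using netcat, with the Splunk server hostname as a placeholder:

echo "<134>$(date '+%b %d %H:%M:%S') testhost testapp: test syslog event" | nc -u -w1 mysplunkserver 514

A search for sourcetype=syslog over the last few minutes should then return the test event.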

How it works...

When you add a new network port input, you basically add a new configuration stanza into an inputs.conf file behind the scenes. The Splunk server can contain one or more inputs.conf files, and these files are either located in the $SPLUNK_HOME/etc/system/local or the local directory of a Splunk app.

To collect data on a network port, Splunk will set up a socket to listen on the specified TCP or UDP port and will index any data it receives on that port. For example, in this recipe, you configured Splunk to listen on port 514 for UDP data. If data was received on that port, then Splunk would index it and assign a syslog source type to it.

Splunk also provides many configuration options that can be used with network inputs, such as how to resolve the host value to be used on the collected data.
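For example, how the host field is resolved can be controlled with the connection_host setting; a sketch of the resulting inputs.conf stanza might look like this (ip, dns, and none are the documented options):

[udp://514]
sourcetype = syslog
connection_host = dns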

For more information on Splunk's configuration files, visit https://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles.

There's more...

While adding inputs to receive data from network ports can be done through the web interface of Splunk, as outlined in this recipe, there are other approaches to add multiple inputs quickly; these approaches allow for customization of the many configuration options that Splunk provides.

Adding a network input using the CLI

You can also add a network input via the Splunk CLI. Navigate to your $SPLUNK_HOME/bin directory and execute the following command (just replace the protocol, port, and source type you wish to use):

  • We will use the following code for Unix:
    ./splunk add udp 514 -sourcetype syslog
  • We will use the following code for Windows:
    splunk add udp 514 -sourcetype syslog

There are a number of different parameters that can be passed along with the port. See the Splunk documentation for more on data inputs using the CLI (https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorfilesanddirectoriesusingtheCLI).

Adding a network input using inputs.conf

Network inputs can be manually added to the inputs.conf configuration files. Edit $SPLUNK_HOME/etc/system/local/inputs.conf and add your input. You will need to restart Splunk after modifying the file. For example, to enable UDP port 514 use the following code:

[udp://514]
sourcetype = syslog
It is best practice to not send syslog data directly to an indexer. Instead, always place a forwarder between the network device and the indexer. The Splunk forwarder will be set up to receive the incoming syslog data (inputs.conf) and will load balance the data across your Splunk indexers (outputs.conf). The forwarder can also be configured to cache the syslog data in the event communication to the indexers is lost.

See also

  • The Indexing files and directories recipe
  • The Using scripted inputs recipe
  • The Using modular inputs recipe
 

Using scripted inputs

Not all data that is useful for operational intelligence comes from logfiles or network ports. Splunk will happily take the output of a command or script and index it along with all your other data.

Scripted inputs are a very helpful way to get that hard-to-reach data. For example, if you have third-party-supplied command-line programs that can output data you would like to collect, Splunk can run the command periodically and index the results. Scripted inputs are typically used to pull data from a source, whereas network inputs await a push of data from a source.

This recipe will show you how to configure Splunk to execute a command on an interval and direct the output into Splunk.

Getting ready

To step through this recipe, you will need a running Splunk server and the provided scripted input script suited to the environment you are using. For example, if you are using Windows, use the cp01_scripted_input.bat file. This script should be placed in the $SPLUNK_HOME/bin/scripts directory. No other prerequisites are required.

How to do it...

Follow these steps to configure a scripted input:

  1. Log in to your Splunk server.
  2. From the menu in the top right-hand corner, click on the Settings menu and then click on the Add Data link.
  3. If you are prompted to take a quick tour, click on Skip.
  4. In the How do you want to add data section, click on Monitor.
  5. Click on the Scripts section:
  6. A form will be displayed with a number of input fields. In the Script Path drop-down, select the location of the script. All scripts must be located in a Splunk bin directory, either in $SPLUNK_HOME/bin/scripts or an appropriate bin directory within a Splunk app, such as $SPLUNK_HOME/etc/apps/search/bin.
  7. In the Script Name dropdown, select the name of the script. In the Commands field, add any command-line arguments to the auto-populated script name.
  8. In the Interval field, enter the interval (in seconds) at which the script is to be run (the default value is 60.0 seconds) and then click Next:
  9. In the Source Type section, you have the option to either select a predefined source type or to select New and enter your desired value. For the purpose of this recipe, select New as the source type and enter cp01_scripted_input as the value for the source type. Then click Review:
  10. By default, data will be indexed into the Splunk index of main. To change this destination index, select your desired index from the drop-down list in the Index section of the form.

  11. Review the settings. If everything is correct, click Submit.

  12. If everything was successful, you should see a Script input has been created successfully message:

  13. Click on the Start searching button. The Search & Reporting app will open with the search already populated based on the settings supplied earlier in the recipe. Splunk is now configured to execute the scripted input you provided every 60 seconds, in accordance with the specified interval. You can search for the data returned by the scripted input using the following search:
sourcetype=cp01_scripted_input 

How it works...

When adding a new scripted input, you are directing Splunk to add a new configuration stanza into an inputs.conf file behind the scenes. The Splunk server can contain one or more inputs.conf files, located either in $SPLUNK_HOME/etc/system/local or the local directory of a Splunk app.

After creating a scripted input, Splunk sets up an internal timer and executes the command that you have specified, in accordance with the defined interval. It is important to note that Splunk will only run one instance of the script at a time, so if the script blocks for any reason, it will not be executed again until it becomes unblocked.

Since Splunk 4.2, any output of a scripted input that is directed to stderr (causing an error) is captured in the splunkd.log file, which can be useful when attempting to debug the execution of a script. As Splunk indexes its own data by default, you can search for that data and put an alert on it if necessary.

For security reasons, Splunk does not execute scripts located outside of the bin directories mentioned earlier. To overcome this limitation, you can use a wrapper script (such as a shell script in Linux or batch file in Windows) to call any other script located on your machine.
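For example, a minimal wrapper placed in $SPLUNK_HOME/bin/scripts could call a collection script that lives elsewhere on the machine; the paths and names below are hypothetical:

#!/bin/sh
# cp01_wrapper.sh - placed in $SPLUNK_HOME/bin/scripts
# simply hands off to a script located outside of the Splunk bin directories
exec /opt/myapp/bin/collect_stats.sh "$@"

The corresponding inputs.conf stanza (again, a sketch) would then reference the wrapper rather than the external script:

[script://$SPLUNK_HOME/bin/scripts/cp01_wrapper.sh]
interval = 60
sourcetype = cp01_scripted_input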

See also

  • The Indexing files and directories recipe
  • The Getting data through network ports recipe
  • The Using modular inputs recipe
 

Using modular inputs

Since Splunk 5.0, it has been possible to extend data input functionality so that custom input types can be created and shared, while still allowing users to customize them to meet their needs.

Modular inputs build further upon the scripted input model. Originally, any additional functionality required by the user had to be contained within a script. However, this presented a challenge, as no customization of this script could occur from within Splunk itself. For example, pulling data from a source for two different usernames needed two copies of a script or meant playing around with command-line arguments within your scripted input configuration.

By leveraging the modular input capabilities, developers are now able to encapsulate their code into a reusable app that exposes parameters in Splunk and allows for configuration through processes familiar to Splunk administrators.

This recipe will walk you through how to install the Command Modular Input, which allows for periodic execution of commands and subsequent indexing of the command output. You will configure the input to collect the data outputted by the vmstat command in Linux and the systeminfo command in Windows.

Getting ready

To step through this recipe, you will need a running Splunk server with a connection to the internet. No other prerequisites are required.

You will also need to download the Command Modular Input Add-on app from Splunkbase. This app can be found at https://splunkbase.splunk.com/app/1553/.

How to do it...

Follow the steps in this recipe to configure a modular input:

  1. Log in to your Splunk server.
  2. From the Apps menu in the upper left-hand corner of the home screen, click on the gear icon:
  3. The Apps settings page will load. Then, click on the Install App from file button.
  4. Click the Choose File button and select the app file that was previously downloaded from Splunkbase, then click the Upload button:
  5. After the app has been installed, from the menu in the top right-hand corner, click on the Settings menu and then click on the Data inputs link.
  6. Click on the Command section:
  7. In the Mod Input Name field, enter SystemInfo as the name for the input. If you are using Linux, enter /usr/bin/vmstat in the Command Name field. If you are using Windows, enter C:\Windows\System32\systeminfo.exe in the Command Name field:
Use the full path if the command to be executed cannot be found on the system PATH.
  8. In the Command Arguments field, enter any argument that needs to be passed to the command listed in the Command Name field. In the Command Execution Interval field, enter a value in seconds for how often the command should be executed (in this case, we will use 60 seconds). If the output is streamed, then leave this field empty and check the Streaming Output field:
  9. In the Source type section, you have the option to either select a predefined source type or select Manual and enter a value. For this recipe, select Manual as the source type and enter cp01_modular_input as the value for the source type.
  10. Click Next.
  11. If everything was successful, you should see a Modular input has been created successfully message:
  12. Click on the Start searching button. The Search & Reporting app will open with the search already populated based on the settings supplied earlier in the recipe. Splunk is now configured to execute the modular input you provided, every 60 seconds, in accordance with the specified interval. You can search for the data returned by the modular input using the following search over an All time time range:
sourcetype=cp01_modular_input 

How it works...

Modular inputs are bundled as Splunk apps and, once installed, contain all the necessary configuration and code to display them in the Data inputs section of Splunk. In this recipe, you installed a modular input application that allows for periodic execution of commands. You configured the command to execute every minute and to index the results of the command each time, giving the results a source type of cp01_modular_input.

Modular inputs can be written in several languages and need to follow only a set of interfaces that expose the configuration options and runtime behaviors. Depending on the design of the input, they will either run persistently or run at an interval and will send data to Splunk as they receive it.

You can find several other modular inputs, including REST API, SNMP, and PowerShell, on the Splunk Apps site (https://splunkbase.splunk.com).

There's more...

See also

  • The Indexing files and directories recipe
  • The Getting data through network ports recipe
  • The Using scripted inputs recipe
 

Using the Universal Forwarder to gather data

Most IT environments today range from multiple servers in the closet of your office to hundreds of endpoint servers located in multiple geographically distributed data centers.

When the data we want to collect is not located directly on the server where Splunk is installed, the Splunk Universal Forwarder (UF) can be installed on your remote endpoint servers and used to forward data back to Splunk to be indexed.

The Universal Forwarder is like the Splunk server in that it has many of the same features, but it does not contain Splunk web and doesn't come bundled with the Python executable and libraries. Additionally, the Universal Forwarder cannot process data in advance, such as performing line breaking and timestamp extraction.

This recipe will guide you through configuring the Splunk Universal Forwarder to forward data to a Splunk indexer and will show you how to set up the indexer to receive the data.

Getting ready

To step through this recipe, you will need a server with the Splunk Universal Forwarder installed but not configured. You will also need a running Splunk Enterprise server. No other prerequisites are required.

To obtain the Universal Forwarder software, you need to go to https://www.splunk.com/download and register for an account if you do not already have one. Then, either download the software directly to your server or download it to your laptop or workstation and upload it to your server using a file transfer process such as SFTP.
For more information on how to install and manage the Universal Forwarder, visit https://docs.splunk.com/Documentation/Forwarder/latest/Forwarder/Abouttheuniversalforwarder.

How to do it...

Follow these steps to configure the Splunk Forwarder to forward data and the Splunk indexer to receive data:

  1. On the server with the Universal Forwarder installed, open a command prompt if you are a Windows user or a terminal window if you are a Unix user.
  2. Change to the $SPLUNK_HOME/bin directory, where $SPLUNK_HOME is the directory in which the Splunk forwarder was installed.
  3. For Unix, the default installation directory will be /opt/splunkforwarder/bin. For Windows, it will be C:\Program Files\SplunkUniversalForwarder\bin.
If using Windows, omit ./ in front of the Splunk command in the upcoming steps.
  4. Start the Splunk forwarder, if not already started, using the following command:
./splunk start   
  5. Accept the license agreement.
  6. Enable the Universal Forwarder to autostart, using the following command:
./splunk enable boot-start   
  7. Set the indexer that this Universal Forwarder will send its data to. Replace the host value with the value of the indexer as well as the username and password for the Universal Forwarder, using the following command:
    ./splunk add forward-server <host>:9997 -auth <username>:<password>  
Here, <username>:<password> is the username and password used to log in to the forwarder (the default is admin:changeme).
Additional receiving indexers can be added in the same way by repeating the command in the previous step with a different indexer host or IP. Splunk will automatically load balance the forwarded data if more than one receiving indexer is specified in this manner. Port 9997 is the default Splunk TCP port and should only be changed if it cannot be used for some reason.
  8. Log in to your receiving Splunk indexer server. From the home launcher, in the top right-hand corner, click on the Settings menu item and then select the Forwarding and receiving link:
  9. Click on the Configure receiving link:
  10. Click on New.
  11. Enter 9997 in the Listen on this port field:
  12. Click on Save and restart Splunk. The Universal Forwarder is installed and configured to send data to your Splunk server, and the Splunk server is configured to receive data on the default Splunk TCP port 9997.

How it works...

When you tell the forwarder which server to send data to, you basically add a new configuration stanza into an outputs.conf file behind the scenes. On the Splunk server, an inputs.conf file will contain a [splunktcp] stanza to enable receiving. The outputs.conf file on the Splunk forwarder will be located in $SPLUNK_HOME/etc/system/local, and the inputs.conf file on the Splunk server will be located in the local directory of the app you were in (the launcher app in this case) when configuring receiving.
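For reference, the receiving configuration created on the indexer in this recipe corresponds roughly to the following inputs.conf stanza:

[splunktcp://9997]
disabled = 0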

Using forwarders to collect and forward data has many advantages. The forwarders communicate with the indexers on TCP port 9997 by default, which makes for a very simple set of firewall rules that need to be opened. Forwarders can also be configured to load balance their data across multiple indexers, increasing search speeds and availability. Additionally, forwarders can be configured to queue the data they collect if communication with the indexers is lost. This can be extremely important when collecting data that is not read from logfiles, such as performance counters or syslog streams, as the data cannot be re-read.

There's more...

While configuring the settings of the Universal Forwarder can be performed using the command-line interface of Splunk, as outlined in this recipe, there are several other methods to update the settings quickly and to allow for customization of the many configuration options that Splunk provides.

Adding the receiving indexer via outputs.conf

The receiving indexers can be directly added to the outputs.conf configuration file on the Universal Forwarder. Edit $SPLUNK_HOME/etc/system/local/outputs.conf, add your output configuration, and then restart the UF. The following example configuration is provided, where two receiving indexers are specified. The [tcpout-server] stanza can be leveraged to add output configurations specific to an individual receiving indexer:

[tcpout] 
defaultGroup = default-autolb-group 
 
[tcpout:default-autolb-group] 
disabled = false 
server = mysplunkindexer1:9997,mysplunkindexer2:9997 
 
[tcpout-server://mysplunkindexer1:9997] 
[tcpout-server://mysplunkindexer2:9997] 
If nothing has been configured in inputs.conf on the UF, but outputs.conf is configured with at least one valid receiving indexer, the Splunk forwarder will only send internal forwarder health-related data to the indexer. It is therefore possible for a forwarder to be configured correctly and detected by the Splunk indexers, yet not actually send any real data.
 

Receiving data using the HTTP Event Collector

The HTTP Event Collector (HEC) is another highly scalable way of getting data into Splunk. The HEC listens for HTTP requests containing JSON objects and sends the data that has been collected to be indexed.

In this recipe, you will learn how to configure the Splunk HTTP Event Collector to receive data coming from an example Inventory Scanner. This example inventory scan HEC configuration will be used in Chapter 10, Above and Beyond – Customization, Web Framework, REST API, HTTP Event Collector, and SDKs.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server. You should be familiar with navigating the Splunk user interface and using the Splunk search language. This recipe will use the open source command-line tool, curl. There are other command-line tools available, such as wget. The curl tool is usually installed by default on most Mac and Linux systems but can be downloaded for Windows systems as well.

For more information on curl, visit http://curl.haxx.se/.

How to do it...

Perform these steps to configure the HTTP Event Collector to receive the inventory scanner data:

  1. Log in to your Splunk server.
  2. Select the Search & Reporting application.
  3. Click on Settings and then on Data Inputs:
  4. Click on HTTP Event Collector:
  5. Click the Global Settings button:
  6. Set All Tokens to Enabled, and set the Default Index to main. Then, click the Save button:
  7. Click the New Token button:
  8. Set the Name to Inventory Scanner and the Source name override to inventory:scanner, and click the Next button:
  9. Select New for the Source Type and enter inventory:scanner as the value:
  10. Under the Index section, click on main so that it gets moved to the Selected Item(s) list and click the Review button:
  11. Review and confirm your selections, then click Submit.
  12. After the form submits, you will be presented with the token. This token will be needed for the recipe in Chapter 10, Above and Beyond – Customization, Web Framework, REST API, HTTP Event Collector, and SDKs:

How it works...

To get the HEC to work, you first configured a few global settings. These included the default index, default source type, and the HTTP port that Splunk will listen on. These default values, such as index and source type, will be used by the HEC, unless the data itself contains the specific values to use. The port commonly used for the HEC is port 8088. This single port can receive multiple different types of data, since the data is differentiated by the token that is passed with it and by interpreting the payload of the request.

After configuring the defaults, you then generated a new token, specifically for the inventory scanner data. You provided a specific source type for this data source and selected the index that the data should go to. These values will override the defaults and help to ensure that data is routed to the correct index.

The HEC is now up and running and listening on port 8088 for the inventory scan HTTP data to be sent to it.
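As a quick test of the new token, an event can be posted to the collector with curl; the hostname and token below are placeholders for your own values:

curl -k https://mysplunkserver:8088/services/collector/event -H "Authorization: Splunk <your-token-here>" -d '{"sourcetype": "inventory:scanner", "event": {"device": "scanner01", "status": "online"}}'

If the request is accepted, Splunk responds with a success message and the event becomes searchable in the main index.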

 

Getting data from databases using DB Connect

Splunk DB Connect is a popular application developed by Splunk that allows you to easily get data into Splunk from many common databases. In this recipe, you will install DB Connect and configure it to connect to an external database's product inventory table. This product inventory table will be used in Chapter 7, Enriching Data – Lookups and Workflows.

DB Connect has a dedicated Splunk manual that can be found at https://docs.splunk.com/Documentation/DBX/latest/DeployDBX.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server. You should be familiar with navigating the Splunk user interface.

Additionally, it is recommended that you have one of the following supported databases installed:

  • DB2
  • Informix
  • MemSQL
  • MS SQL
  • MySQL
  • Oracle
  • PostgreSQL
  • SAP SQL
  • Sybase
  • Teradata

DB Connect might work with other JDBC-compatible databases and data stores, but this is not guaranteed. DB Connect 3 has several prerequisites detailed in the installation manual. Before attempting this recipe, please ensure that you have installed the Java Platform, Standard Edition Development Kit (JDK) 8 from Oracle. Additionally, you will also need to download the database drivers for your specific database.

How to do it...

Assuming JDK 8 is installed and your required database drivers are downloaded, follow the steps in this recipe to generate a local Splunk lookup using data from an external database and DB Connect:

  1. In your database application, create a new database called productdb, and within the database, create a new table called productInventory. Insert the contents of the provided productInventory.csv file into the new database table (a sample SQL sketch for this step is provided after the recipe steps). The new table will resemble the following screenshot:
  2. Once the DB table is built, you need to install the DB Connect application to connect to it. From the drop-down application menu, select Find More Apps:
  3. Search for the Splunk DB Connect application and then select it to install it. You will have to enter your splunk.com account credentials after hitting the Install button. When prompted, select to Restart Splunk:
If your environment has no internet access, you can download the DB Connect application from the Splunk app store at https://splunkbase.splunk.com/app/2686/. Once it is downloaded, you can upload and install the application to your Splunk environment by selecting Manage Apps from Step 2.
  4. After logging back in, select Splunk DB Connect from the drop-down application menu. You will see a welcome notice initially. Click on the green Setup button to continue.
  5. The next screen will display an error warning if the DB Connect task server is not running. If it is not running, then you will need to enter the correct JRE Installation Path. The rest of the settings we will leave as they are for now. Click Save and ensure the task server is running, then click the Drivers tab:
  6. On the next screen, you will see a list of supported databases and whether any drivers are correctly installed. At this point, you must copy the database driver for your database over to DB Connect. Follow the instructions in the DB Connect installation manual to do this. Then, click the Reload button to ensure the driver is now installed. Once you see a green check mark next to the database you are looking to use, the driver has been detected properly:
  7. In the navigation bar, click on Configuration, then Settings, then select the Identities tab. Then, click New Identity to add a new database identity:
  8. Add a new database identity by entering the Identity Name, Username, and Password for the user that will be connecting to the database. Then, click Save to create the identity:
  9. In the navigation bar, click on Configuration, then Settings, then select the Connections tab. Then, click New Connection to add a new database connection. Enter the required database connection details. You will need to enter the Host, the Connection Type, the Default Database, and then select the newly created identity from the Identity drop-down box. The Default Database will match the name of your database (in this case, productdb). When done, select Save. The connection will be validated when saved and will report back any errors:
  10. Now, test that you are able to view the product inventory table by clicking on Data Lab and then SQL Explorer. Select your product database and then run the following SQL query:
select * from productInventory;

You should now be able to see the inventory table and your database connection is ready to go. We will use this data and connection in Chapter 7, Enriching Data – Lookups and Workflows:
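If you need a starting point for step 1, the following MySQL-flavored sketch creates the database and table and loads the CSV; the column names here are purely hypothetical, so adjust them to match the columns in the provided productInventory.csv file:

CREATE DATABASE productdb;
USE productdb;
CREATE TABLE productInventory (
  itemId INT PRIMARY KEY,
  itemName VARCHAR(100),
  itemInventory INT
);
LOAD DATA LOCAL INFILE 'productInventory.csv'
INTO TABLE productInventory
FIELDS TERMINATED BY ','
IGNORE 1 LINES;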

How it works...

DB Connect enables real-time integration between Splunk and traditional relational databases. In this recipe, you installed the DB Connect application and configured it to talk to a database. When installed, DB Connect sets up something called a Java Bridge Server that is essentially a Java Virtual Machine (JVM) constantly running in the background. The Java Bridge Server helps speed up connectivity to external databases by allocating memory and caching a lot of the metadata associated with the database tables.

 

Loading the sample data for this book

While most of the data you will index with Splunk will be collected in real time, there might be instances where you have a set of data that you would like to put into Splunk, either to backfill some missing or incomplete data, or just to take advantage of its searching and reporting tools.

This recipe will show you how to perform one-time bulk loads of data from files located on the Splunk server. We will also use this recipe to load the data samples that will be used throughout the subsequent chapters as we build our operational intelligence app in Splunk.

There are three files that make up our sample data. The first is access_log, which represents the data from our web layer and is modeled on an Apache web server. The second file is app_log, which represents the data from our application layer and is modeled on log4j log data from our custom middleware application. The third file contains metrics_csv data that represents sensor readings from HVAC units.

Getting ready

To step through this recipe, you will need a running Splunk server and you should have a copy of the sample data generation app (OpsDataGen.spl) for this book.

How to do it...

Follow these steps to load the sample data generator on your system:

  1. Log in to your Splunk server using your credentials.
  2. From the Apps menu in the upper left-hand corner of the home screen, click on the gear icon.
  3. The Apps settings page will load. Then, click on the Install app from file button:
  4. Select the location of the OpsDataGen.spl file on your computer and then click on the Upload button to install the application:
  5. After installation, a message should appear in a blue bar at the top of the screen, letting you know that the app has installed successfully. You should also now see the OpsDataGen app in the list of apps:
  6. By default, the app installs with the data-generation scripts disabled. In order to generate data, you will need to enable either a Windows or Linux script, depending on your Splunk operating system. To enable the script, select the Settings menu from the top right-hand side of the screen and then select Data inputs:
  7. From the Data inputs screen that follows, select Scripts.
  8. On the Scripts screen, locate the OpsDataGen script for your operating system and click on Enable:
    • For Linux, it will be $SPLUNK_HOME/etc/apps/OpsDataGen/bin/AppGen.path
    • For Windows, it will be $SPLUNK_HOME\etc\apps\OpsDataGen\bin\AppGen-win.path

The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app. It also displays where to click to enable the correct one based on the operating system Splunk is installed on:

  9. Select the Settings menu from the top right-hand side of the screen, select Data inputs, and then select Files & directories.
  10. On the Files & directories screen, locate the three OpsDataGen inputs for your operating system and for each click on Enable:
    • For Linux, they will be $SPLUNK_HOME/etc/apps/OpsDataGen/data/access_log, $SPLUNK_HOME/etc/apps/OpsDataGen/data/app_log, and $SPLUNK_HOME/etc/apps/OpsDataGen/data/hvac_log
    • For Windows, they will be $SPLUNK_HOME\etc\apps\OpsDataGen\data\access_log, $SPLUNK_HOME\etc\apps\OpsDataGen\data\app_log, and $SPLUNK_HOME\etc\apps\OpsDataGen\data\hvac_log

The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app. It also displays where to click to enable the correct one based on the operating system Splunk is installed on:

  11. The data will now be generated in real time. You can test this by navigating to the Splunk search screen and running the following search over an All time (real-time) time range:
index=main sourcetype=log4j OR sourcetype=access_combined 
  12. After a short while, you should see data from both the source types flowing into Splunk. The data generation is now working, as displayed in the following screenshot:
  13. You can also test that the metric data is being generated by navigating to the Splunk search screen and running the following search over an All time time range:
| mcatalog values(_dims) WHERE index=hvac 

How it works...

In this case, you installed a Splunk application that leverages scripted inputs. The script we wrote generates data for three source types. The access_combined source type contains sample web access logs, the metrics_csv source type contains sensor metrics, and the log4j source type contains application logs. These data sources will be used throughout the recipes in the book. Applications will also be discussed in more detail later on.

See also

  • The Indexing files and directories recipe
  • The Getting data through network ports recipe
  • The Using scripted inputs recipe
 

Data onboarding – defining field extractions

Splunk has many built-in features, including knowledge of several common source types, which lets it automatically know which fields exist within your data. Splunk, by default, also extracts any key-value pairs present within the log data and all the fields within JSON-formatted logs. However, often the fields within raw log data cannot be interpreted out of the box, and this knowledge must be provided to Splunk to make these fields easily searchable.

The sample data that we will be using in subsequent chapters contains data we wish to present as fields to Splunk. Much of the raw log data contains key-value fields that Splunk will extract automatically, but there is one field we need to tell Splunk how to extract, representing the page response time. To do this, we will be adding a custom field extraction, which will tell Splunk how to extract the field for us.

Getting ready

To step through this recipe, you will need a running Splunk server with the operational intelligence sample data loaded. No other prerequisites are required.

How to do it...

Follow these steps to add a custom field extraction for a response:

  1. Log in to your Splunk server.
  2. In the top right-hand corner, click on the Settings menu and then click on the Fields link.
  3. Click on the Field extractions link:
  4. Click on New.
  5. In the Destination app field, select the search app, and in the Name field, enter response. Set the Apply to dropdown to sourcetype and the named field to access_combined. Set the Type dropdown to Inline, and for the Extraction/Transform field, carefully enter the (?i)^(?:[^"]*"){8}\s+(?P<response>.+) regex:
  6. Click on Save.
  7. On the Field extractions listing page, find the recently added extraction, and in the Sharing column, click on the Permissions link:
  8. Update the Object should appear in setting to All apps. In the Permissions section, for the Read column, check Everyone, and in the Write column, check admin. Then, click on Save:
  9. Navigate to the Splunk search screen and enter the following search over the Last 60 minutes time range:
index=main sourcetype=access_combined 
  1. You should now see a field called response extracted on the left-hand side of the search screen under the Interesting Fields section.
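
To confirm that the extraction produces usable values and not just a field name, a quick statistical search can be run over it. This is a minimal sketch and assumes the response values in the sample data are numeric response times:

index=main sourcetype=access_combined | stats count avg(response) max(response)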

How it works...

All field extractions are maintained in the props.conf and transforms.conf configuration files. The stanzas in props.conf include an extraction class that leverages regular expressions to extract field names and/or values to be used at search time. The transforms.conf file goes further and can be leveraged for more advanced extractions, such as reusing or sharing extractions over multiple sources, source types, or hosts.
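
For reference, the inline extraction created through the web interface in this recipe is stored as an EXTRACT class in props.conf. The following is a minimal sketch of the resulting stanza; the exact file location will depend on the destination app and the sharing permissions you selected:

[access_combined] 
EXTRACT-response = (?i)^(?:[^"]*"){8}\s+(?P<response>.+)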

See also

  • The Loading the sample data for this book recipe
  • The Data onboarding – defining event types and tags recipe
 

Data onboarding - defining event types and tags

Event types in Splunk are a way of categorizing common types of events in your data to make them easier to search and report on. One advantage of using event types is that they can assist in applying a common classification to similar events. Event types essentially turn chunks of search criteria into field/value pairs. Tags help you search groups of event data more efficiently and can be assigned to any field/value combination, including event types.

For example, Windows log-on events could be given an event type of windows_logon, Unix log-on events could be given an event type of unix_logon, and VPN log-on events could be given an event type of vpn_logon. We could then tag these three event types with a tag of logon_event. A simple search for tag="logon_event" would then search across the Windows, Unix, and VPN source types and return all the log-on events. Alternatively, if we want to search only for Windows log-on events, we will search for eventtype=windows_logon.

This recipe will show how to define event types and tags for use with the sample data. Specifically, you will define an event type for successful web server events.

Getting ready

To step through this recipe, you will need a running Splunk server with the operational intelligence sample data loaded. No other prerequisites are required.

How to do it...

Follow these steps to define an event type and associated tag:

  1. Log in to your Splunk server.
  2. From the home launcher in the top right-hand corner, click on the Settings menu item and then click on the Event types link:
  3. Click on the New button.
  4. In the Destination App dropdown, select search. Enter HttpRequest-Success in the Name field. In the Search string text area, enter sourcetype=access_combined status=2*. In the Tag(s) field, enter webserver and then click on Save:
  5. The event type is now created. To verify that this worked, you should now be able to search by both the event type and the tag that you created. Navigate to the Splunk search screen in the Search and Reporting app and enter the following search over the Last 60 minutes time range to verify that the eventtype is working:
eventtype="HttpRequest-Success"  
  6. Enter the following search over the Last 60 minutes time range to verify that the tag is working:
tag="webserver" 

How it works...

Event types are applied to events at search time and introduce an eventtype field with user-defined values that can be used to quickly sift through large amounts of data. An event type is essentially a Splunk search string that is applied against each event to see if there is a match. If the event type search matches the event, the eventtype field is added, with the value of the field being the user-defined name for that event type.

The common tag value allows for a grouping of event types. If multiple event types had the same tag, then your Splunk search could just search for that particular tag value, instead of needing to list out each individual event type value.
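
For example, using the hypothetical log-on event types described earlier in this recipe, a single tag-based search such as the following sketch would return log-on events from all three source types and break them down by their event type:

tag="logon_event" | stats count by eventtype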

Event types can be added, modified, and deleted at any time without the need to change or reindex your data, as they are applied at search time.

Event types are stored in eventtypes.conf, which can be found in $SPLUNK_HOME/etc/system/local/, a custom app directory in $SPLUNK_HOME/etc/apps/, or a user's private directory, $SPLUNK_HOME/etc/users/.

There's more...

While adding event types and tags can be done through the web interface of Splunk, as outlined in this recipe, there are other approaches to add them in bulk quickly and to allow for customization of the many configuration options that Splunk provides.

Adding event types and tags using eventtypes.conf and tags.conf

Event types in Splunk can be manually added to the eventtypes.conf configuration files. Edit or create $SPLUNK_HOME/etc/system/local/eventtypes.conf and add your event type. You will need to restart Splunk after this:

[HttpRequest-Success] 
search = status=2* 

Tags in Splunk can be manually added to the tags.conf configuration file. Edit or create $SPLUNK_HOME/etc/system/local/tags.conf and add your tag. You will need to restart Splunk after this:

[eventtype=HttpRequest-Success] 
webserver = enabled 

In this recipe, you tagged an event type. However, tags do not always need to be associated with event types. You can tag any field/value combination found in an event. To create new tags independently, click on the Settings menu and select Tags.
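
For example, tagging an arbitrary field/value pair in tags.conf uses the same stanza form as the event type tag shown previously; the host value here is purely hypothetical:

[host=webserver01] 
production = enabled 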

See also

  • The Loading the sample data for this book recipe
  • The Data onboarding – defining field extractions recipe
 

Installing the Machine Learning Toolkit

The Splunk Machine Learning Toolkit extends Splunk with additional search commands, visualizations, assistants, and examples to assist in developing and working with machine learning concepts. Machine learning tools and processes can be applied to your Splunk data to assist in predictive analytics, trending, anomaly detection, and outlier detection.

This recipe will show you how to install the Machine Learning Toolkit and the necessary prerequisites, which will be used in Chapter 6, Diving Deeper – Advanced Searching, Machine Learning, and Predictive Analytics.

For more information on the Machine Learning Toolkit, check out https://docs.splunk.com/Documentation/MLApp/latest/User/About.

Getting ready

To step through this recipe, you will need a running Splunk server with the operational intelligence sample data loaded. No other prerequisites are required.

How to do it...

Follow these steps to install the Machine Learning Toolkit and its prerequisites:

  1. Log in to your Splunk server.
  2. From the Apps menu in the upper left-hand corner of the home screen, click on the gear icon.
  3. The Apps settings page will load. Then, click on the Browse More Apps button.
  4. In the search field, enter Scientific Computing and press enter.
  5. The search results will return multiple Python for Scientific Computing apps — one for each different supported operating system (Windows and Linux 32-bit or 64-bit). In the search results, click on the Install button for the app that matches the correct operating system you have Splunk installed on:
  6. Enter your splunk.com credentials, check the checkbox to accept the terms and conditions, and click on Login and Install. Splunk should return with a message saying that the app was installed successfully.
  7. If prompted to restart Splunk, click the Restart later button.
  8. In the search field, enter Machine Learning and press enter.
  9. In the search results, click on the Install button for Splunk Machine Learning Toolkit:
  10. Enter your splunk.com credentials, check the checkbox to accept the terms and conditions, and click on Login and Install. Splunk should return with a message saying that the app was installed successfully.
  11. After the app has installed, click the Restart Splunk button. After Splunk restarts, log back in to Splunk. You should then see the Machine Learning Toolkit in the Apps launcher, as shown in the following screenshot:

How it works...

The Machine Learning Toolkit (MLTK) app is the main Splunk app that contains all the necessary knowledge objects and user interfaces that make working with machine learning possible. On its own, that would be enough to provide some basic functionality. However, to take advantage of more advanced machine learning concepts, Splunk needs to take advantage of additional Python libraries.

The Python for Scientific Computing add-on contains a Python interpreter bundled with the numpy, scipy, pandas, scikit-learn, and statsmodels libraries. These libraries are platform-specific, which is why the correct version must be installed.

The Machine Learning Toolkit also provides the ability to customize and extend the application with your own custom models and algorithms, which makes it a very powerful platform.
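
As a quick check that both apps are working together after the restart, the MLTK's fit command can be pointed at the sample web data. The following is a minimal sketch only; the choice of algorithm and fields is illustrative (response is the field extracted earlier in this chapter, bytes is a default access_combined field, and the model name is made up), and these commands are covered properly in Chapter 6:

index=main sourcetype=access_combined | fit LinearRegression response from bytes into example_response_model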

With the MLTK installed, you are now ready for Chapter 6, Diving Deeper – Advanced Searching, Machine Learning, and Predictive Analytics.

About the Authors
  • Josh Diakun

    Josh Diakun is an IT operations and security specialist with a focus on creating data-driven operational processes. He has over 10 years of experience managing and architecting enterprise-grade IT environments. For the past 7 years, he has been architecting, deploying and developing on Splunk as the core platform for organizations to gain security and operational intelligence. Josh is a founding partner at Discovered Intelligence, a company specializing in data intelligence services and solutions. He is also a co-founder of the Splunk Toronto User Group.

  • Paul R. Johnson

    Paul R. Johnson has over 10 years of data intelligence experience in the areas of information security, operations, and compliance. He is a partner at Discovered Intelligence, a company specializing in data intelligence services and solutions. Paul previously worked for a Fortune 10 company, leading IT risk intelligence initiatives and managing a global Splunk deployment. Paul co-founded the Splunk Toronto User Group and lives and works in Toronto, Canada.

  • Derek Mock

    Derek Mock is a software developer and big data architect who specializes in IT operations, information security, and cloud technologies. He has 15 years' experience developing and operating large enterprise-grade deployments and SaaS applications. He is a founding partner at Discovered Intelligence, a company specializing in data intelligence services and solutions. For the past 6 years, he has been leveraging Splunk as the core tool to deliver key operational intelligence. Derek is based in Toronto, Canada, and is a co-founder of the Splunk Toronto User Group.
