SQL Server Integration Services (SSIS)

by Debarchan Sarkar | September 2013 | Enterprise Articles

In this article by Debarchan Sarkar, the author of Microsoft SQL Server 2012 with Hadoop, we will learn how to create an SQL Server Integration Services package to move data from Hadoop to SQL Server using the Hive ODBC driver.

Microsoft SQL Server is a complete suite of tools that includes an RDBMS system, multidimensional OLAP and tabular database engines, and other services such as a broker service and a scheduling service (SQL Agent). Integrating data between different sources has become extremely important these days. SQL Server also offers a powerful business intelligence stack, which provides rich features for data mining and interactive reporting. One of these BI components is an extract, transform, and load (ETL) tool called SQL Server Integration Services (SSIS). SSIS offers the ability to merge structured and unstructured data by importing Hive data into SQL Server, and to apply powerful analytics on the integrated data. Throughout the rest of this article, we will get a basic understanding of how SSIS works and create a simple SSIS package to import data from Hive to SQL Server.


SSIS as an ETL – extract, transform, and load tool

The primary objective of an ETL tool is to be able to import and export data to and from heterogeneous data sources. This includes the ability to connect to external systems, as well as to transform or clean the data while moving it between the external systems and the databases. SSIS can be used to import data to and from SQL Server. It can even be used to move data between external non-SQL systems without requiring SQL Server to be the source or the destination. For instance, SSIS can be used to move data from an FTP server to a local flat file.
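
To make the idea concrete, the same FTP-to-flat-file move can be expressed in a few lines of Python. This is only an illustration of what such an ETL task does; the host, credentials, and file names below are hypothetical placeholders:

    from ftplib import FTP

    # Hypothetical FTP server and file -- substitute your own.
    FTP_HOST = "ftp.example.com"
    REMOTE_FILE = "export/sales.csv"
    LOCAL_FILE = "sales.csv"

    with FTP(FTP_HOST) as ftp:
        ftp.login(user="demo", passwd="demo")  # placeholder credentials
        with open(LOCAL_FILE, "wb") as f:
            # RETR streams the remote file; each chunk is written locally.
            ftp.retrbinary(f"RETR {REMOTE_FILE}", f.write)

An SSIS FTP task encapsulates this kind of transfer, together with the logging and configuration that a production ETL job needs.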

SSIS also provides a workflow engine for automating the different tasks (for example, data flows, task executions, and so on) that are executed in an ETL job. An SSIS package execution can itself be one step in an SQL Agent job, and SQL Agent can run multiple jobs independently of each other.
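
Automation can also go the other way: a package can be launched from a script. The dtexec utility that ships with SQL Server runs a package from the command line (its /F switch loads a package from the file system), so a scheduler or script can invoke it. A minimal sketch, with a hypothetical package path:

    import subprocess

    # Hypothetical path -- point this at your own .dtsx file.
    package_path = r"C:\SSIS\HiveToSqlServer\Package.dtsx"

    # dtexec ships with SQL Server; /F runs a package from disk.
    result = subprocess.run(["dtexec", "/F", package_path],
                            capture_output=True, text=True)
    print(result.stdout)
    print("Exit code:", result.returncode)  # 0 indicates success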

An SSIS solution consists of one or more packages, each containing a control flow to perform a sequence of tasks. Tasks in a control flow can include calls to web services, FTP operations, file system tasks, automation of command-line commands, and others. In particular, a control flow usually includes one or more data flow tasks, each of which encapsulates an in-memory, buffer-based pipeline of data from a source to a destination, with transformations applied to the data as it flows through the pipeline. An SSIS package has one control flow and as many data flows as necessary; data flow execution is dictated by the content of the control flow.
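
Conceptually, a data flow is a streaming pipeline: rows are pulled from a source, transformed in memory, and pushed to a destination. The following Python generators are only an analogy for that buffer-based model, not SSIS code:

    def source(rows):
        # Source component: emit rows one at a time, like a source adapter.
        for row in rows:
            yield row

    def transform(rows):
        # Transformation: clean each row as it streams through the pipeline.
        for name, amount in rows:
            yield name.strip().title(), round(amount, 2)

    def destination(rows):
        # Destination component: "load" each transformed row.
        for row in rows:
            print("loaded:", row)

    raw = [("  alice ", 10.456), ("BOB", 3.1)]
    destination(transform(source(raw)))

Like an SSIS data flow, nothing moves until the destination starts pulling rows through the chain.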

A detailed discussion of SSIS and its components is outside the scope of this article, which assumes that you are familiar with basic SSIS package development using Business Intelligence Development Studio (SQL Server 2005/2008/2008 R2) or SQL Server Data Tools (SQL Server 2012). If you are new to SSIS, it is highly recommended that you first read one of the many good SSIS books available. In the rest of this article, we will focus on how to consume Hive data from SSIS using the Hive ODBC driver.

The prerequisites to develop the package shown in this article are SQL Server Data Tools (which comes as a part of SQL Server 2012 Client Tools and Components) and the 32-bit Hive ODBC driver. You will also need your Hadoop cluster up, with Hive running on it.
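
Before opening SSDT, it can be worth confirming that the ODBC driver is actually visible. Here is a minimal sketch using the pyodbc package; note that ODBC drivers are only visible to processes of the same bitness, so run this from a 32-bit Python if you installed the 32-bit driver:

    import pyodbc  # pip install pyodbc

    # List every ODBC driver registered for this process's bitness;
    # the Hive ODBC driver should appear here if it is installed.
    for driver in pyodbc.drivers():
        print(driver)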

Developing the package

SQL Server Data Tools (SSDT) is the integrated development environment available from Microsoft to design, develop, and deploy SSIS packages. SSDT is installed when you choose to install SQL Server Client Tools and Workstation Components from your SQL Server installation media. SSDT supports the creation of Integration Services, Analysis Services, and Reporting Services projects. Here, we will focus on the Integration Services project type.

Creating the project

  1. Launch SQL Server Data Tools from the SQL Server 2012 program folder, as shown in the following screenshot:

  2. Create a new Project and choose Integration Services Project in the New Project dialog as shown in the following screenshot:

  3. This should create the SSIS project with a blank Package.dtsx inside it, visible in the Solution Explorer window of the project, as shown in the following screenshot (a short note after these steps looks inside this file):
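
Incidentally, Package.dtsx is plain XML, which is useful to know when packages are checked into source control or diffed. A hedged sketch that prints the top-level structure of a package file without assuming anything about the DTS schema; the path is a placeholder:

    import xml.etree.ElementTree as ET

    # Hypothetical path to the package created above.
    tree = ET.parse(r"C:\SSIS\HiveToSqlServer\Package.dtsx")
    root = tree.getroot()

    # Print the root element and its direct children to get a feel
    # for how SSDT lays the package out on disk.
    print(root.tag)
    for child in root:
        print("  ", child.tag)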

Creating the Data Flow

A Data Flow is an SSIS package component that consists of the sources and destinations that extract and load data, the transformations that modify and extend data, and the paths that link sources, transformations, and destinations. Before you can add a data flow to a package, the package's control flow must include a Data Flow task. The Data Flow task is the executable within the SSIS package that creates, orders, and runs the data flow; a separate instance of the data flow engine is opened for each Data Flow task in a package. To create a Data Flow task, perform the following steps:

  1. Double-click (or drag-and-drop) a Data Flow Task from the toolbox on the left. This should place a Data Flow Task on the Control Flow canvas of the package, as in the following screenshot:

  2. Double-click on the Data Flow Task or click on the Data Flow tab in SSDT to edit the task and design the source and destination components as in the following screenshot:

Creating the source Hive connection

The first thing we need to do is create a connection manager that will connect to our Hive data tables hosted in the Hadoop cluster. We will use an ADO.NET connection, which will use the DSN HadoopOnLinux we created earlier to connect to Hive. To create the connection, perform the following steps:

  1. Right-click on the Connection Managers section in the project and click on New ADO.Net Connection... as shown in the following screenshot:

  2. From the list of providers, navigate to .Net Providers | ODBC Data Provider and click on OK in the Connection Manager window as shown in the following screenshot:

  3. Select the HadoopOnLinux DSN from the Data Sources list, provide the Hadoop cluster credentials, and verify that the connection test succeeds, as shown in the following screenshot (a quick way to exercise the same DSN outside SSIS is sketched after these steps):
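
Once the connection manager tests successfully, you can optionally exercise the same DSN from outside SSIS as a sanity check. The following is a minimal sketch using the pyodbc package; the table name, user name, and password are hypothetical placeholders, not values from this article:

    import pyodbc  # pip install pyodbc

    # Connect through the same DSN the connection manager uses;
    # UID and PWD below are placeholders for your cluster credentials.
    conn = pyodbc.connect("DSN=HadoopOnLinux;UID=hadoopuser;PWD=secret",
                          autocommit=True)  # Hive does not support transactions
    cursor = conn.cursor()

    # Hypothetical Hive table -- substitute one of your own.
    cursor.execute("SELECT * FROM hivesampletable LIMIT 10")
    for row in cursor.fetchall():
        print(row)

    conn.close()

If this query returns rows, the ODBC source you configure in the data flow should see the same data.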

Summary

In this article, we learned how to create an SQL Server Integration Services package to move data from Hadoop to SQL Server using the Hive ODBC driver.

About the Author

Debarchan Sarkar

Debarchan Sarkar (@debarchans) works with Microsoft Escalation Services and has written books on SQL Server BI and big data. He has been at Microsoft for six years and was with the SQL Server BI team before diving deep into big data and the Hadoop world. He is a subject matter expert in SQL Server Integration Services and is passionate about present-day Microsoft self-service BI tools and data analysis, especially social media brand sentiment analysis. Debarchan is from Calcutta, India, and is presently located in Bangalore, India, working in Microsoft's Global Technical Support Center. He owns and maintains his big data learning group on Facebook and has been a speaker at several Microsoft internal and external community events. Apart from his passion for technology, he is interested in visiting new places, listening to music, meeting new people, and learning new things, because he is a firm believer that "Known is a drop; the unknown is an ocean".
