
Microsoft SQL Server 2012 with Hadoop

By Debarchan Sarkar
About this book
With the explosion of data, the open source Apache Hadoop framework is gaining traction, thanks to the huge ecosystem that has arisen around the core functionality of its distributed file system (HDFS) and MapReduce. Today, having SQL Server talk to Hadoop is increasingly important because the two are truly complementary: while petabytes of unstructured data in Hadoop can take hours to query, terabytes of structured data in SQL Server 2012 can be queried in seconds. This creates the need to transfer and integrate data between Hadoop and SQL Server.

Microsoft SQL Server 2012 with Hadoop is aimed at SQL Server developers. It will quickly show you how to set up Hadoop to work alongside SQL Server 2012, and then focus on managing big data with Hadoop and using Hive to query it. It also covers topics such as SQL Server's in-memory capabilities and BI tools for big data.

The book focuses on data integration techniques between the relational (SQL Server 2012) and non-relational (Hadoop) worlds. It walks you through tools for the bi-directional movement of data with practical examples. You will learn to use the open source connector Sqoop to import and export data between SQL Server 2012 and Hadoop, and to work with leading in-memory BI tools to create ETL solutions using the Hive ODBC driver. Finally, this book gives you a glimpse of present-day self-service BI tools such as Excel and Power View that consume Hadoop data and provide powerful insights into it.
Publication date: August 2013
Publisher: Packt
Pages: 96
ISBN: 9781782177982

 

Chapter 1. Introduction to Big Data and Hadoop

Suddenly, Big Data is the talk of the town. Every company, from enterprise-level firms to small-scale startups, is investing in Big Data. Storage and hardware costs have dropped dramatically over the past few years, enabling businesses to store and analyze data that was earlier discarded because of storage and processing challenges. There has never been a more exciting time in the world of data. We are seeing the convergence of significant trends that are fundamentally transforming the industry, and a new era of tech innovation in areas such as social, mobile, advanced analytics, and machine learning. We are seeing an explosion of data, with an entirely new scale and scope to the kinds of data we are trying to gain insights from. In this chapter, we will look at what Big Data is and how the Apache Hadoop framework comes into the picture when implementing Big Data solutions. After reading through the chapter, you will be able to understand:

  • What Big Data is and why it matters now

  • Business needs for Big Data

  • The Apache Hadoop framework

 

Big Data – what's the big deal?


There's a lot of talk about Big Data; estimates are that the total amount of digital information in the world is increasing tenfold every five years, with 85 percent of this data coming from new data types, for example, sensors, RFIDs, web logs, and so on. This presents a huge opportunity for businesses that tap into this new data to identify new opportunities and areas for innovation.

However, having a platform that supports the data trend is only part of today's challenge; you also need to make data easier for people to access so that they can gain insight and make better decisions. Think about the user experience: with everything we are able to do on the Web, our experiences through social media sites, and how we're discovering, sharing, and collaborating in new ways, user expectations of their business and productivity applications are changing as well.

One of the first questions we should set out to answer is a simple definitional one: how is Big Data different from traditional large data warehouses? The most broadly accepted characterization, from International Data Corporation (IDC), classifies Big Data along three Vs:

  • Volume: Data volume is exploding. In the last few decades, computing and storage capacity have grown exponentially, driving hardware and storage costs down to near zero and making them a commodity. Current data processing needs are evolving, demanding analysis of petabytes and zettabytes of data on industry-standard hardware within minutes, if not seconds.

  • Variety: The variety of data is increasing. It is all getting stored, and nearly 85 percent of new data is unstructured. The data can be in the form of tweets or JSON documents with variable attributes and elements, of which users may want to process only a selected few.

  • Velocity: The velocity of data is speeding up the pace of business. Data capture has become nearly instantaneous, thanks to new customer interaction points and technologies, making real-time analytics more important than ever. The rate at which data arrives continues to far outpace the rate at which it can be consumed, so coping with the speed of data remains a challenge. Think of software that lets you message or type as fast as the speed of your thought.

Today, every organization finds it difficult to manage and track the right dataset within itself; the challenge is even greater when it needs to look for data external to the system. A typical analyst spends too much time searching for the right data among thousands of sources, which adversely impacts productivity. We will move from a world of search to one of discovery, where information is brought to users based on who they are and what they are working on. There has never been such an abundance of externally available and useful information as there is today. The challenge is: how do you discover what is available, and how do you connect to it?

To answer today's types of questions, you need new ways to discover and explore data. By this we mean data that may reside in a number of different domains, such as:

  • Personal data: This is data created by me or by my peers that is relevant to the task at hand.

  • Organizational data: This is data that is maintained and managed across the organization.

  • Community data: This is external data such as curated third party datasets that are shared into the public domain. Examples include Data.gov, Twitter, Facebook, and so on.

  • World data: This is all the other data available on the global stage, for example, data from sensors or logfiles, for which Big Data technologies such as Hadoop have emerged.

You can derive much deeper business insights and trends by combining the data you need across personal, organizational, community, and world data. You can connect and combine data from hundreds of trusted data providers, including demographic, environment, financial, retail, sports, and social data such as Twitter and Facebook, as well as data cleansing services. You can combine this data with your personal data through self-service tools such as PowerPivot, use reference data to cleanse your corporate data with SQL Server 2012, or use it in your custom applications.

Existing RDBMS solutions such as SQL Server are good at managing challenging volumes of data, but they fall short when the data is unstructured or semi-structured with variable attributes, as discussed previously. The current world seems almost obsessed with social media sentiments, tweets, devices, and so on; without the right tools, your company is adrift in a sea of data. You need the ability to unleash the wave of new value made possible by Big Data: every bit of data should be easy to monitor and manage regardless of its type or structure. That is why organizations are moving to build an end-to-end data platform for nearly all data, with easy-to-use tools to analyze it. Regardless of data type, location (on-premises or in the cloud), or size, you have the power of familiar tools coupled with high-performance technologies to serve your business needs, from data storage and processing all the way to visualization. The benefits of Big Data are not limited to Business Intelligence (BI) experts or data scientists; with the right tools, nearly everyone in your organization can analyze data and make more informed decisions.

In a traditional business environment, the data that powers your reporting usually comes from tables in a database. However, it is increasingly necessary to supplement this with data obtained from outside your organization. This may be commercially available datasets, such as those available from Windows Azure DataMarket and elsewhere, or data from less structured sources such as feeds, e-mails, logfiles, and more. In most cases, you will need to cleanse, validate, and transform this data before loading it into an existing database. Extract, Transform, and Load (ETL) operations can use Big Data solutions to perform pattern matching, data categorization, deduplication, and summary operations on unstructured or semi-structured data, generating data in the familiar rows-and-columns format that can be imported into a database table. The following figure gives you a conceptual view of Big Data:

Big Data requires some level of machine learning or complex statistical processing to produce insights. If you have to use non-standard techniques to process and host it, it's probably Big Data.

The data store in a Big Data implementation is usually referred to as a NoSQL store, although this is not technically accurate because some implementations do support a SQL-like query language. NoSQL storage is typically much cheaper than relational storage and usually supports write-once semantics that allow data only to be appended; to update data in these stores, you must drop and recreate the relevant file. This limitation simplifies the storage engine and maximizes throughput, which, rather than capacity, is usually how Big Data storage implementations are measured, because throughput is the most significant factor for both storage and query efficiency. The append-only approach also preserves the history of changes to the data.

Note

It is important to note that, in addition to supporting all types of data, moving data between a non-relational store such as Hadoop and a relational data warehouse such as SQL Server is one of the key Big Data customer usage patterns. Throughout this book, we will explore how to integrate Hadoop and SQL Server and derive powerful visualizations on any data using the SQL Server BI suite.

 

The Apache Hadoop framework


Hadoop is an open source software framework, available through the Apache Open Source community, that supports data-intensive distributed applications. It consists of a distributed file system, the Hadoop Distributed File System (HDFS), and an approach to distributed processing called MapReduce. It is written in Java and runs on the Linux/Unix platform.

Hadoop is now used extensively to process streams of data that go well beyond even the largest enterprise datasets in size. Whether it's sensor, clickstream, social media, location-based, or other data that is generated and collected in large volumes, Hadoop is often on the scene to process and analyze it. The real magic of Hadoop is its ability to move the processing or computing logic to where the data resides, as opposed to traditional systems, which focus on a scaled-up single server, move the data to that central processing unit, and process it there. That model does not work for the volume, velocity, and variety of data that present-day industry wants to mine for business intelligence. Hence Hadoop, with its powerful, fault-tolerant, and reliable file system and its highly optimized distributed computing model, is one of the leaders in the Big Data world.

The core of Hadoop is its storage system and its distributed computing model:

HDFS

The Hadoop Distributed File System is a program-level abstraction on top of the host OS file system. It is responsible for storing data on the cluster. Data is split into blocks and distributed across multiple nodes in the cluster.
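
To make this concrete, here is a minimal sketch of writing and reading a file through the HDFS Java API; the NameNode URI and the /user/demo path are hypothetical placeholders to be replaced with values from your own cluster:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; replace with your cluster's URI.
            conf.set("fs.default.name", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // Write a small file; HDFS transparently splits larger files
            // into blocks and replicates them across DataNodes.
            Path file = new Path("/user/demo/sample.txt");
            FSDataOutputStream out = fs.create(file);
            out.writeBytes("Hello HDFS\n");
            out.close();

            // Read the file back.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file)));
            System.out.println(in.readLine());
            in.close();
        }
    }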

MapReduce

MapReduce is a programming model for processing large datasets using distributed computing on clusters of computers. MapReduce consists of two phases: dividing the data across a large number of separate processing units (called Map), and then combining the results produced by these individual processes into a unified result set (called Reduce). Between Map and Reduce, an intermediate shuffle and sort step occurs.
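
To make the model concrete, the following is a minimal word-count sketch using the Hadoop Java API; the driver class that configures and submits the job is omitted for brevity:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map phase: each input line is split into words, and each word
    // is emitted with a count of 1.
    public class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) {
                    continue;
                }
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: the framework has already grouped and sorted all
    // counts for the same word (the shuffle and sort step); the reducer
    // simply sums them.
    class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : values) {
                sum += count.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

Once successfully configured on a system, a Hadoop cluster has the following basic components: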

NameNode

This is also called the Head Node or Master Node of the cluster. It primarily holds the metadata for HDFS; as data processing is distributed across the nodes, it keeps track of each HDFS data block in the cluster.

Note

The NameNode is the single point of failure in a Hadoop cluster.

Secondary NameNode

This is an optional node that you can have in your cluster to back up the NameNode. If a Secondary NameNode is configured, it keeps periodic snapshots of the NameNode configuration to serve as a backup when needed. However, there is no automated way to fail over to the Secondary NameNode; if the primary NameNode goes down, manual intervention is needed, which essentially means obvious downtime for your cluster.

DataNode

These are the systems across the cluster that store the actual HDFS data blocks. The data blocks are replicated on multiple nodes to provide fault tolerance and high availability.

JobTracker

This is a service running on the NameNode, which manages MapReduce jobs and distributes individual tasks.

TaskTracker

This is a service running on the DataNodes, which instantiates and monitors individual Map and Reduce tasks that are submitted.

The following figure shows you the core components of the Apache Hadoop framework:

Additionally, there are a number of supporting projects for Hadoop, each with a unique purpose, for example, feeding input data into the Hadoop system, or providing a data warehousing layer for ad hoc queries on top of Hadoop, and many more. The following are a few worth mentioning:

Hive

Hive is a supporting project for the main Apache Hadoop project and is an abstraction on top of MapReduce that allows users to query data without developing MapReduce applications. It provides a SQL-like query language called Hive Query Language (HQL) to fetch data from the Hive store, making it easy for people with SQL skills to adapt to the Hadoop environment quickly.
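
As an illustration, here is a minimal sketch of running an HQL query from Java through the Hive JDBC driver, assuming a HiveServer2 instance is reachable; the host, port, and the web_logs table are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // Load the HiveServer2 JDBC driver.
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Hypothetical host and port; replace with your HiveServer2 address.
            Connection con = DriverManager.getConnection(
                    "jdbc:hive2://hiveserver:10000/default", "user", "");
            Statement stmt = con.createStatement();

            // HQL looks and feels like SQL; web_logs is a hypothetical table.
            ResultSet rs = stmt.executeQuery(
                    "SELECT page, COUNT(*) AS hits FROM web_logs "
                    + "GROUP BY page ORDER BY hits DESC LIMIT 10");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            con.close();
        }
    }

Behind the scenes, Hive compiles such a query into one or more MapReduce jobs and runs them on the cluster.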

Pig

Pig is an alternative abstraction on top of MapReduce that uses a dataflow scripting language called PigLatin. It is favored by programmers who already have scripting skills. You can run PigLatin statements interactively in a command-line Pig shell named Grunt, or combine a sequence of PigLatin statements into a script that is executed as a unit. The Pig interpreter uses these statements to generate MapReduce jobs, which are executed on the HDFS data.
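
As a sketch of how this looks in practice, PigLatin statements can also be embedded in a Java program through Pig's PigServer API; the HDFS input path below is a hypothetical file containing one word per line:

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigExample {
        public static void main(String[] args) throws Exception {
            // Run against the cluster; use ExecType.LOCAL for local testing.
            PigServer pig = new PigServer(ExecType.MAPREDUCE);

            // Hypothetical HDFS input: one word per line.
            pig.registerQuery("words = LOAD '/user/demo/words.txt' AS (word:chararray);");
            pig.registerQuery("grouped = GROUP words BY word;");
            pig.registerQuery("counts = FOREACH grouped GENERATE group, COUNT(words);");

            // The Pig interpreter turns these statements into MapReduce jobs.
            pig.store("counts", "/user/demo/word_counts");
        }
    }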

Flume

Flume is another open source project on top of Hadoop, which provides a mechanism for ingesting data into HDFS as the data is generated.

Sqoop

Sqoop provides a way to import and export data between relational database tables (for example, in SQL Server) and HDFS.
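
The following is a minimal sketch of invoking a Sqoop import from Java via Sqoop.runTool, the programmatic equivalent of the sqoop import command line, assuming the Sqoop and Microsoft SQL Server JDBC driver JARs are on the classpath; the connection string and the Customers table are hypothetical:

    import org.apache.sqoop.Sqoop;

    public class SqoopImportExample {
        public static void main(String[] args) {
            // Hypothetical SQL Server connection string and table.
            String[] sqoopArgs = new String[] {
                "import",
                "--connect", "jdbc:sqlserver://sqlhost:1433;databaseName=Sales",
                "--username", "sqoopuser",
                "--password", "secret",
                "--table", "Customers",
                "--target-dir", "/user/demo/customers",
                "-m", "1"   // one mapper, so no split-by column is needed
            };
            int exitCode = Sqoop.runTool(sqoopArgs);
            System.out.println("Sqoop import exited with code " + exitCode);
        }
    }

We will look at Sqoop in depth in the next chapter.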

Oozie

Oozie allows the creation of workflows of MapReduce jobs. This will feel familiar to developers who have worked on solutions based on Windows Workflow Foundation and Windows Communication Foundation.

HBase

HBase is the Hadoop database, a NoSQL store. It is another abstraction on top of Hadoop that provides a near real-time query mechanism over HDFS data.
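
As a brief illustration, the following minimal sketch uses the HBase Java client API to write and read a single cell; the metrics table and its cf column family are hypothetical and must already exist in your cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "metrics");

            // Write one cell: row key, column family, qualifier, value.
            Put put = new Put(Bytes.toBytes("row1"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("hits"), Bytes.toBytes("42"));
            table.put(put);

            // Read it back by row key, in near real time.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("hits"));
            System.out.println(Bytes.toString(value));

            table.close();
        }
    }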

Mahout

Mahout is a machine-learning library that contains algorithms for clustering and classification. One major focus of machine-learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data.

The following figure gives you a 1,000-foot view of Apache Hadoop and the various supporting projects that form this amazing ecosystem:

We will be exploring some of these components in the subsequent chapters of this book; for a complete reference, please visit the Apache website at http://hadoop.apache.org/.

Setting up this ecosystem along with the required supporting projects can be decidedly non-trivial; in fact, the effort needed to set up and administer a Hadoop cluster is arguably the implementation's only drawback. This is the main reason that many vendors are coming up with their own Hadoop distributions, bundled and delivered as data processing platforms. Using these distributions, enterprises can set up Hadoop clusters in minutes through simplified, user-friendly cluster deployment wizards, and can use the various dashboards for monitoring and instrumentation. Some of the present-day distributions quickly gaining popularity are CDH4 from Cloudera, the Hortonworks Data Platform, and Microsoft HDInsight. These distributions are outside the scope of this book and won't be covered; please visit the respective websites for detailed information about them.

 

Summary


In this chapter, we went through what Big Data is and why it is one of the compelling needs of the industry. The diversity of data that needs to be processed has taken Information Technology to heights that were never imagined before. Organizations that can take advantage of Big Data to parse any and every kind of data will be able to differentiate themselves more effectively and derive new value for the business, whether in the form of revenue growth, cost savings, or entirely new business models. For example, financial firms using machine learning to build better fraud-detection algorithms can go beyond simple business rules involving charge frequency and location to include an individual's customized buying patterns, ultimately leading to a better customer experience.

When it comes to Big Data implementations, these new requirements challenge traditional data management technologies and call for a new approach that enables organizations to effectively manage, enrich, and gain insights from any data. Apache Hadoop is one of the undoubted leaders in the Big Data industry. The entire ecosystem, along with its supporting projects, provides users with a highly reliable, fault-tolerant framework for massively parallel distributed processing of unstructured and semi-structured data.

In the next chapter, you will see how to use the Sqoop connector to move Hadoop data to SQL Server 2012 and vice versa. Sqoop is another open source project, designed for the bi-directional import and export of data between Hadoop and any relational database management system; we will use it as the first step of data integration between Hadoop and SQL Server 2012.

About the Author
  • Debarchan Sarkar

    Debarchan Sarkar is a Microsoft Data Platform engineer. He specializes in the Microsoft SQL Server Business Intelligence stack. Debarchan is a subject matter expert in SQL Server Integration Services and delves deep into the open source world, specifically the Apache Hadoop framework. He is currently working on a technology called HDInsight, which is Microsoft's distribution of Hadoop on Windows. He has authored various books on SQL Server and Big Data, including Microsoft SQL Server 2012 with Hadoop, Packt Publishing, and Pro Microsoft HDInsight: Hadoop on Windows, Apress. His Twitter handle is @debarchans.

    Browse publications by this author