Scaling Big Data with Hadoop and Solr


Overview
  • Understand the different approaches of making Solr work on Big Data as well as the benefits and drawbacks
  • Learn from interesting, real-life use cases for Big Data search along with sample code
  • Work with distributed enterprise search without prior knowledge of Hadoop and Solr

Book Details

Language : English
Paperback : 144 pages [ 235mm x 191mm ]
Release Date : August 2013
ISBN : 1783281375
ISBN 13 : 9781783281374
Author(s) : Hrishikesh Vijay Karambelkar
Topics and Technologies : All Books, Big Data and Business Intelligence, Open Source

Table of Contents

Preface
Chapter 1: Processing Big Data Using Hadoop MapReduce
Chapter 2: Understanding Solr
Chapter 3: Making Big Data Work for Hadoop and Solr
Chapter 4: Using Big Data to Build Your Large Indexing
Chapter 5: Improving Performance of Search while Scaling with Big Data
Appendix A: Use Cases for Big Data Search
Appendix B: Creating Enterprise Search Using Apache Solr
Appendix C: Sample MapReduce Programs to Build the Solr Indexes
Index
  • Chapter 1: Processing Big Data Using Hadoop MapReduce
    • Understanding Apache Hadoop and its ecosystem
      • The ecosystem of Apache Hadoop
        • Apache HBase
        • Apache Pig
        • Apache Hive
        • Apache ZooKeeper
        • Apache Mahout
        • Apache HCatalog
        • Apache Ambari
        • Apache Avro
        • Apache Sqoop
        • Apache Flume
    • Storing large data in HDFS
      • HDFS architecture
        • NameNode
        • DataNode
        • Secondary NameNode
      • Organizing data
      • Accessing HDFS
    • Creating MapReduce to analyze Hadoop data
      • MapReduce architecture
        • JobTracker
        • TaskTracker
    • Installing and running Hadoop
      • Prerequisites
      • Setting up SSH without passphrases
      • Installing Hadoop on machines
      • Hadoop configuration
      • Running a program on Hadoop
    • Managing a Hadoop cluster
    • Summary
  • Chapter 2: Understanding Solr
    • Installing Solr
    • Apache Solr architecture
      • Storage
      • Solr engine
        • The query parser
        • Interaction
        • Client APIs and SolrJ client
        • Other interfaces
    • Configuring Apache Solr search
      • Defining a Schema for your instance
      • Configuring a Solr instance
        • Configuration files
      • Request handlers and search components
        • Facet
        • MoreLikeThis
        • Highlight
        • SpellCheck
        • Metadata management
    • Loading your data for search
      • ExtractingRequestHandler/Solr Cell
      • SolrJ
    • Summary
  • Chapter 3: Making Big Data Work for Hadoop and Solr
    • The problem
    • Understanding data-processing workflows
      • The standalone machine
      • Distributed setup
      • The replicated mode
      • The sharded mode
    • Using Solr 1045 patch – map-side indexing
      • Benefits and drawbacks
        • Benefits
        • Drawbacks
    • Using Solr 1301 patch – reduce-side indexing
      • Benefits and drawbacks
        • Benefits
        • Drawbacks
    • Using SolrCloud for distributed search
      • SolrCloud architecture
      • Configuring SolrCloud
      • Using multicore Solr search on SolrCloud
      • Benefits and drawbacks
        • Benefits
        • Drawbacks
    • Using Katta for Big Data search (Solr-1395 patch)
      • Katta architecture
      • Configuring Katta cluster
      • Creating Katta indexes
      • Benefits and drawbacks
        • Benefits
        • Drawbacks
    • Summary
  • Chapter 4: Using Big Data to Build Your Large Indexing
    • Understanding the concept of NOSQL
    • The CAP theorem
      • What is a NOSQL database?
        • The key-value store or column store
        • The document-oriented store
        • The graph database
      • Why NOSQL databases for Big Data?
      • How Solr can be used for Big Data storage?
    • Understanding the concepts of distributed search
      • Distributed search architecture
      • Distributed search scenarios
    • Lily – running Solr and Hadoop together
      • The architecture
        • Write-ahead Logging
        • The message queue
        • Querying using Lily
        • Updating records using Lily
      • Installing and running Lily
    • Deep dive – shards and indexing data of Apache Solr
      • The sharding algorithm
      • Adding a document to the distributed shard
    • Configuring SolrCloud to work with large indexes
      • Setting up the ZooKeeper ensemble
      • Setting up the Apache Solr instance
      • Creating shards, collections, and replicas in SolrCloud
    • Summary
  • Chapter 5: Improving Performance of Search while Scaling with Big Data
    • Understanding the limits
    • Optimizing the search schema
      • Specifying the default search field
      • Configuring search schema fields
      • Stop words
      • Stemming
    • Index optimization
      • Limiting the indexing buffer size
      • When to commit changes?
      • Optimizing the index merge
      • Optimize an option for index merging
      • Optimizing the container
      • Optimizing concurrent clients
      • Optimizing the Java virtual memory
    • Optimizing the search runtime
      • Optimizing through search queries
        • Filter queries
      • Optimizing the Solr cache
        • The filter cache
        • The query result cache
        • The document cache
        • The field value cache
        • Lazy field loading
      • Optimizing search on Hadoop
    • Monitoring the Solr instance
      • Using SolrMeter
    • Summary

Hrishikesh Vijay Karambelkar

Hrishikesh Karambelkar is a software architect with a blend of entrepreneurial and professional experience. His core expertise involves working with technologies such as Apache Hadoop and Solr and architecting new solutions for the next generation of his organization's product line. He has published research papers on graph search in databases at international conferences, and has worked on many challenging industry problems involving Apache Hadoop and Solr.



Errata

- 3 submitted: last submission 06 Jul 2014

Errata type: Technical | Page number: 9
Should be: "across a cluster of commodity servers"
Instead of: "across a commodity of clustered servers"

Errata type: Typo | Page number: 38
Should be: "CSV"
Instead of: "CSVS"

Errata type: Technical | Page number: 52
In the bullet point "With reduced size index generation, it is possible to preserve the weights of documents, which can contribute while performing a prioritization during a search query", "reduced size" should be "reduce side".

Errata type: Technical | Page number: 20 and 21
Should be: Apache ZooKeeper establishes coordination among the cluster, and the Hadoop NameNode, DataNode, JobTracker, and TaskTracker consume its services.

Errata type: Typo | Page number: 120
Should be: "The solrHome is the path where solr.zip is stored."
Instead of: "The solrHome is the patch where solr.zip is stored."



What you will learn from this book

• Understand Apache Hadoop, its ecosystem, and Apache Solr
• Learn different industry-based architectures for designing Big Data enterprise search, and understand their applicability and benefits
• Write map/reduce tasks for indexing your data
• Fine-tune the performance of your Big Data search while scaling your data
• Increase your awareness of new market technologies that bring Hadoop and Solr together
• Use Solr as a NOSQL database
• Configure your Big Data instance to perform in the real world
• Address key features of a distributed Big Data system, such as ensuring the high availability and reliability of your instances
• Integrate Hadoop and Solr in your industry by means of use cases
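The map/reduce indexing bullet above can be illustrated without a Hadoop cluster. Below is a minimal, hypothetical Java stand-in for reduce-side index building (the idea behind the SOLR-1301 patch the book covers): the map phase emits (term, docId) pairs and the reduce phase groups each term's postings into an inverted index. Class and method names here are made up for illustration; a real job would use the Hadoop Mapper/Reducer APIs and hand grouped documents to a Solr index writer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Toy, Hadoop-free sketch of reduce-side indexing. */
public class ReduceSideIndexSketch {

    // "Map" phase: tokenize one document into (term, docId) pairs.
    public static List<String[]> map(String docId, String text) {
        List<String[]> pairs = new ArrayList<>();
        for (String term : text.toLowerCase().split("\\s+")) {
            pairs.add(new String[] { term, docId });
        }
        return pairs;
    }

    // "Reduce" phase: group postings per term into an inverted index.
    // In the reduce-side approach, this grouping step is where the
    // actual Solr/Lucene index segments would be written.
    public static Map<String, List<String>> reduce(List<String[]> pairs) {
        Map<String, List<String>> index = new TreeMap<>();
        for (String[] p : pairs) {
            index.computeIfAbsent(p[0], k -> new ArrayList<>()).add(p[1]);
        }
        return index;
    }
}
```

Because the index is assembled after the shuffle groups all postings for a term together, per-document metadata (such as weights) survives into the final index, which is the benefit the page-52 errata entry alludes to.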

In Detail

As data grows exponentially day by day, extracting information becomes a tedious activity in itself. Technologies like Hadoop address some of these concerns, while Solr provides high-speed faceted search. Bringing the two together helps organizations extract information from Big Data by providing excellent distributed faceted search capabilities.

Scaling Big Data with Hadoop and Solr is a step-by-step guide to building high-performance enterprise search engines while scaling data. Starting with the basics of Apache Hadoop and Solr, the book dives into advanced topics of optimizing search, with interesting real-world use cases and sample Java code.

It begins by teaching you the fundamentals of Big Data technologies, including Hadoop and its ecosystem, and Apache Solr. It then explains the different approaches to scaling Big Data with Hadoop and Solr, discussing the applicability, benefits, and drawbacks of each. It walks you through performing sharding and indexing on Big Data, followed by performance optimization of Big Data search. Finally, it covers some real-world use cases for scaling Big Data.

With this book, you will learn everything you need to build a distributed enterprise search platform, and how to optimize that search for maximum utilization of available resources.
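The sharding mentioned above boils down to routing each document to one of N shards by hashing its unique key, so any node can later compute which shard holds a given document. A minimal, hypothetical sketch follows; SolrCloud's real compositeId router hashes with MurmurHash over hash ranges, and plain String.hashCode() is used here purely for illustration.

```java
/** Toy sketch of hash-based document-to-shard routing. */
public class ShardRouterSketch {

    // Route a document to a shard by its unique key.
    public static int shardFor(String docId, int numShards) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (docId.hashCode() & 0x7fffffff) % numShards;
    }
}
```

Because the function is deterministic, indexing and querying agree on shard placement without any central lookup table; the trade-off is that changing numShards reassigns most documents, which is why production routers hash over fixed ranges instead.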

Approach

This book is a step-by-step tutorial that enables you to leverage the flexible search functionality of Apache Solr together with the Big Data power of Apache Hadoop.

Who this book is for

Scaling Big Data with Hadoop and Solr provides guidance to developers who wish to build high-speed enterprise search platforms using Hadoop and Solr. It is primarily aimed at Java programmers who wish to extend the Hadoop platform into an enterprise search engine; no prior knowledge of Apache Hadoop or Solr is required.
