HBase Administration Cookbook


Changing the kernel settings


HBase is a database that runs on Hadoop, and just like other databases, it keeps many files open at the same time. Linux limits the number of file descriptors that any one process may open; the default limit is 1024 open files per process. To run HBase smoothly, you need to raise the maximum number of open file descriptors for the user who starts HBase. In our case, the user is called hadoop.
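
If HBase is already running, a quick way to see how close it is to that default is to count the open file descriptors of one of its processes under /proc. This is only an illustrative check, not part of the recipe; the HRegionServer process name and the pgrep call are assumptions you may need to adjust:

    hadoop$ # Count the open file descriptors of a running region server
    hadoop$ # (assumes the Java process name contains "HRegionServer").
    hadoop$ ls /proc/$(pgrep -f HRegionServer | head -1)/fd | wc -l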

You should also increase the hadoop user's nproc setting. The nproc setting specifies the maximum number of processes that can exist simultaneously for a user. If nproc is set too low, an OutOfMemoryError may occur.
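
On Linux, the nproc limit counts the hadoop user's threads as well as its processes, and a Java daemon such as HBase creates many threads; when the limit is exhausted, the JVM typically fails with an OutOfMemoryError about being unable to create new native threads. As a rough, purely illustrative check (not part of the recipe), you can total the hadoop user's current threads like this:

    hadoop$ # Sum the thread counts (nlwp) of every process owned by hadoop.
    hadoop$ ps -o nlwp= -u hadoop | awk '{ total += $1 } END { print total }'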

In this recipe, we will describe how to view and change these kernel settings.

Getting ready

Make sure you have root privileges on all of your servers.

How to do it...

You will need to make the following kernel setting changes on all servers of the cluster (a scripted sketch for applying the same changes on every node follows these steps):

  1. To confirm the current open file limit, log in as the hadoop user and execute the following command:

    hadoop$ ulimit -n
    1024
    
  2. To show the setting for the maximum number of processes, use the -u option of the ulimit command:

    hadoop$ ulimit -u
    unlimited
    
  3. Log in as the root user to increase the open file and nproc limits. Add the following settings to the limits.conf file:

    root# vi /etc/security/limits.conf
    hadoop soft nofile 65535
    hadoop hard nofile 65535
    hadoop soft nproc 32000
    hadoop hard nproc 32000
    
  4. To apply the changes, add the following line to the /etc/pam.d/common-session file:

    root# echo "session required pam_limits.so" >> /etc/pam.d/common-session
    
  5. Log out and log back in as the hadoop user, then confirm the setting values again; you should see that the previous changes have been applied:

    hadoop$ ulimit -n
    65535
    hadoop$ ulimit -u
    32000
    
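Because every node needs exactly the same change, steps 3 and 4 can also be scripted rather than edited by hand on each server. The following is a rough sketch, not taken from the book; it assumes passwordless SSH access as root and a hypothetical hosts.txt file listing one cluster hostname per line:

    #!/bin/sh
    # Illustrative sketch only: push the limits.conf and PAM changes to every node.
    # Assumes passwordless SSH as root and a hosts.txt file with one hostname per line.
    while read -r host; do
      # Append the four limit lines on the remote host.
      printf '%s\n' \
        'hadoop soft nofile 65535' \
        'hadoop hard nofile 65535' \
        'hadoop soft nproc 32000' \
        'hadoop hard nproc 32000' |
        ssh "root@$host" 'cat >> /etc/security/limits.conf'
      # Add the pam_limits line once (-n keeps ssh from consuming the host list on stdin).
      ssh -n "root@$host" \
        'grep -q pam_limits.so /etc/pam.d/common-session ||
         echo "session required pam_limits.so" >> /etc/pam.d/common-session'
    done < hosts.txt

Note that running the script twice will append the limit lines a second time, so treat it as a one-off bootstrap or add a guard similar to the grep used for the PAM line.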

How it works...

The preceding changes raise the hadoop user's open file limit to 65535 and its maximum number of processes to 32000. The pam_limits.so module added to /etc/pam.d/common-session is what applies the values in /etc/security/limits.conf when a new session is started, which is why you have to log out and log back in before the new values show up. With these kernel settings in place, HBase can keep enough files open at the same time and run smoothly.
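
Once HBase has been restarted from a session with the new limits, you can optionally confirm that the running daemons inherited them by inspecting /proc. This check is not part of the original recipe, and the HRegionServer process name is an assumption; adjust it to whichever HBase daemon runs on the node:

    hadoop$ # Show the nofile and nproc limits the running region server started with;
    hadoop$ # the soft and hard columns should both read 65535 and 32000.
    hadoop$ grep -E "Max open files|Max processes" /proc/$(pgrep -f HRegionServer | head -1)/limits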

