Mastering Parallel Programming with R

Master the robust features of R parallel programming to accelerate your data science computations

Simon R. Chapple et al.

Book Details

ISBN 13: 9781784394004
Paperback: 244 pages

Book Description

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources.

Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It teaches a variety of parallelization techniques, from simple use of R’s built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It also covers low-level scalable parallel programming with RMPI and pbdMPI for message passing, applicable to clusters and supercomputers, and shows how to exploit the thousands of simple processing cores in a GPU through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; the pitfalls to avoid, including deadlock and numerical instability; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.
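
To give a flavour of the simplest technique covered, here is a minimal sketch (illustrative only, not code from the book) that uses R’s built-in parallel package to spread a CPU-bound lapply() across local cores; slow_square() is a hypothetical stand-in for expensive per-element work:

  library(parallel)

  # Hypothetical stand-in for an expensive, CPU-bound computation
  slow_square <- function(x) {
    Sys.sleep(0.1)   # simulate 100 ms of work per element
    x^2
  }

  # Start one worker process per detected core
  cl <- makeCluster(detectCores())

  # parLapply() is the parallel counterpart of lapply():
  # the input is split across the workers and the results gathered back
  results <- parLapply(cl, 1:20, slow_square)

  # Always release the worker processes when finished
  stopCluster(cl)

  unlist(results)

On four workers, the 2 seconds of simulated work would drop to roughly 0.5 seconds, minus the cost of starting and communicating with the worker processes; the book’s later chapters examine exactly where that overhead comes from and how to keep it under control.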

Table of Contents

Chapter 1: Simple Parallelism with R
  Aristotle's Number Puzzle
  The R parallel package
  The segue package
  Summary
Chapter 2: Introduction to Message Passing
  Setting up your system environment for MPI
  The MPI standard
  The MPI API
  Summary
Chapter 3: Advanced Message Passing
  Grid parallelism
  Inspecting and managing communications
  Variants on lapply()
  Summary
Chapter 4: Developing SPRINT, an MPI-Based R Package for Supercomputers
  About ARCHER
  Calling MPI code from R
  Building an MPI R package – SPRINT
  Adding a new function to the SPRINT package
  Genomics analysis case study
  Genomics with a supercomputer
  Summary
Chapter 5: The Supercomputer in Your Laptop
  OpenCL
  The ROpenCL package
  Summary
Chapter 6: The Art of Parallel Programming
  Understanding parallel efficiency
  Numerical approximation
  Random numbers
  Deadlock
  Reducing the parallel overhead
  Adaptive load balancing
  Three steps to successful parallelization
  What does the future hold?
  Hybrid parallelism
  Summary

What You Will Learn

  • Create and structure efficient load-balanced parallel computation in R, using R’s built-in parallel package
  • Deploy and utilize cloud-based parallel infrastructure from R, including launching a distributed computation on Hadoop running on Amazon Web Services (AWS)
  • Become familiar with parallel efficiency, and apply simple techniques to benchmark, measure speed, and target improvements in your own code
  • Develop complex parallel processing algorithms with the standard Message Passing Interface (MPI) using the RMPI, pbdMPI, and SPRINT packages (a minimal message-passing sketch follows this list)
  • Build and extend a parallel R package (SPRINT) with your own MPI-based routines
  • Implement accelerated numerical functions in R utilizing the vector processing capability of your Graphics Processing Unit (GPU) with OpenCL
  • Understand parallel programming pitfalls, such as deadlock and numerical instability, and the approaches to handle and avoid them
  • Build task farm master-worker, spatial grid, and hybrid parallel R programs
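
For the message-passing bullet above, the following is a minimal sketch (illustrative only, not code from the book) of the SPMD style used by pbdMPI; it assumes a working MPI installation and that the script is saved as hello_mpi.R:

  # Launch with, for example:  mpiexec -np 4 Rscript hello_mpi.R
  library(pbdMPI)
  init()                      # initialise the MPI environment (SPMD style)

  my.rank <- comm.rank()      # this process's rank, numbered from 0
  n.ranks <- comm.size()      # total number of MPI processes

  comm.cat("Hello from rank", my.rank, "of", n.ranks, "\n", all.rank = TRUE)

  # A simple collective operation: sum a value contributed by every rank
  total <- allreduce(my.rank + 1, op = "sum")
  comm.print(total)           # printed by rank 0 only, by default

  finalize()                  # shut down MPI cleanly

Every rank runs the same script; the rank number is what differentiates the work each process does, which is the single-program, multiple-data pattern the book develops into grid and task-farm layouts.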
