CUDA Cookbook

Learn
  • Understand NVIDIA GPU architecture and how to start GPU programming
  • Learn how to analyze GPU application performance and discover optimization strategies
  • Explore GPU programming, profiling, and debugging tools
  • Get insights into parallel programming algorithms and their implementations
  • Discover how to scale GPU-accelerated applications across multiple GPUs and multiple nodes
  • Delve into versatile GPU programming platforms with accelerated libraries, Python, and OpenACC
  • Find out how GPUs can accelerate deep learning algorithms such as CNNs and RNNs
About

Compute Unified Device Architecture (CUDA) is NVIDIA’s GPU computing platform and application programming interface (API). It is designed to work with programming languages such as C, C++, and Python. CUDA can leverage a GPU’s parallel computing power for a variety of high-performance computing applications in fields such as science, healthcare, and deep learning.

The CUDA Cookbook is designed to help you learn GPU parallel programming and guide you through its modern-day applications. With its help, you’ll discover a variety of CUDA programming recipes for modern GPU architectures. The book not only guides you through GPU features, tools, and APIs, but also helps you understand how to analyze performance using sample parallel programming algorithms. You’ll gain plenty of optimization experience along with insights into CUDA programming platforms spanning various libraries, open accelerators (OpenACC), and other languages. As you progress, you’ll discover how to unlock additional computing power with multiple GPUs in a single node or across multiple nodes. In the concluding chapters, you’ll explore recipes showing how CUDA accelerates deep learning algorithms, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

By the end of this book, you will be equipped with the skills you need to use the power of GPU computing in your applications.

Features
  • Learn parallel programming principles, practices, and performance analysis in GPU programming
  • Explore distributed GPU programming and GPU programming languages apart from CUDA
  • Understand GPU acceleration for deep learning and learn to analyze its performance
Page Count: 513
Course Length: 15 hours 23 minutes
ISBN: 9781788996242
Date of Publication: 17 Jul 2019

Authors

Bharatkumar Sharma

Bharatkumar Sharma obtained a master’s degree in Information Technology from the Indian Institute of Information Technology, Bangalore. He has around 10 years of development and research experience in the domains of software architecture and distributed and parallel computing. He is currently working with NVIDIA as a Senior Solution Architect, South Asia.

Jack Han

Jack Han is a Solutions Architect at NVIDIA. He has extensive experience in parallel architectures, CUDA, and embedded device development. He has developed a medical imaging device and a media transcoder using CUDA. He completed his MSE at Seoul National University and his BSE at Hanyang University. He previously worked at companies such as Samsung Medison Co., Ltd. and Hyundai Heavy Industries Co., Ltd.