You're reading from Hands-On GPU Computing with Python
Published in May 2019 by Packt (1st Edition), ISBN-13: 9781789341072, Reading Level: Intermediate

Author: Avimanyu Bandyopadhyay

Avimanyu Bandyopadhyay is currently pursuing a PhD in Bioinformatics based on applied GPU computing in Computational Biology at Heritage Institute of Technology, Kolkata, India. He has had a keen interest in GPU computing since 2014 and used CUDA for his master's thesis. He also has experience as a systems administrator, particularly on the Linux platform. Avimanyu is a scientific writer, technology communicator, and passionate gamer. He has published technical writing on open source computing and has actively participated in NVIDIA's GPU computing conferences since 2016. A big-time Linux fan, he strongly believes in the significance of Linux and an open source approach in scientific research. Deep learning with GPUs is his new passion!

Fundamentals of GPU Programming

In this chapter, we will move on from the hardware perspective of GPUs to a computing perspective, which is the primary objective of this book. We will begin with an introduction to GPU programming and the fundamentals of setting up three different platforms, namely CUDA, ROCm, and Anaconda. NVIDIA and AMD GPUs will be revisited here to explore their practical usage with these three platforms.

The concept of invoking GPU code from Python programs will be discussed. Anaconda users and Python programming enthusiasts will be encouraged to invoke GPUs within their program code with CuPy and Numba (formerly Accelerate) via Anaconda. Additionally, we will cover the basics of hands-on GPU programming, GPU-programmable platforms, CUDA, CUDA libraries, OpenCL, and ROCm.

Moving on, we will explore the Python programming aspect...

GPU-programmable platforms

Since our primary subject is GPU computing, it is essential that we learn the fundamentals of GPU programming from a computing perspective. GPU programming is the foundation of GPU computing. The question to ponder here is: why is it significant in GPU computing? To answer that, let's first understand the basic differences between programming and computing. Here is a short comparison:

Programming                                                 | Computing
------------------------------------------------------------|------------------------------------------------------------
Involves developing programs.                               | Involves performing computations through programs.
Intends to solve multiple problems in a generalized manner. | Focuses on solving a single problem, which can be interdisciplinary in nature.
Is applied to broader scenarios.                            | Is applied to a single specific scenario, chosen from those broader scenarios.

Here is a simple example...
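To make the distinction concrete, here is a minimal sketch of our own (not the book's elided example): the function is the *programming* side, solving a whole class of problems, while calling it on one dataset is the *computing* side.

```python
# Programming: a general solution to a class of problems
def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    return sum(values) / len(values)

# Computing: applying that program to one specific problem instance
temperatures = [21.5, 23.0, 22.4]
print(mean(temperatures))
```

The same program serves any future computation of this kind; the computation itself is a single, concrete use of it.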

Basic CUDA concepts

Since we have discussed various points of Compute Unified Device Architecture (CUDA) in our earlier chapters, let's now focus on its technical aspects and GPU programmability.

Apart from the installation procedure that's unique to CUDA, the remaining concepts that will be discussed here in brief will be useful for getting started with GPU programming on all platforms.

Installing and testing

Before we discuss some fundamental GPU programming concepts, it is essential that we revisit the CUDA installation and testing procedure that we covered earlier in Chapter 2, Designing a GPU Computing Strategy, while concluding the DIY section. Step-by-step screenshots of a fresh Ubuntu 18.04 Linux installation...
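Once the toolkit is installed, a quick sanity check from the terminal (assuming the CUDA binaries are on your PATH and the NVIDIA driver is loaded) looks like this:

```shell
# Print the version of the installed CUDA compiler driver
nvcc --version
# Show the driver version and any visible NVIDIA GPUs
nvidia-smi
```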

Basic ROCm concepts

In Chapter 3, Setting Up a GPU Computing Platform with NVIDIA and AMD, we discussed AMD ROCm and compared it with NVIDIA CUDA. As we did in the previous section of this chapter, let's look into the Radeon Open Compute platform in a similar manner.

AMD ROCm includes a set of fundamental ways to set up a GPU programming platform for Open Compute. The basic ROCm concepts you need to know to start programming on AMD or NVIDIA GPUs are as follows.

Installation procedure and testing

According to the official documentation, a ROCm installation on Ubuntu 18.04 requires performing the following terminal-based tasks:

  1. Update your system and install libnuma-dev using the following...
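The first of these steps can be sketched as follows (based on the documented Ubuntu 18.04 procedure; the repository setup commands that follow it in the documentation are omitted here):

```shell
# Update the package index and upgrade the system
sudo apt update && sudo apt dist-upgrade -y
# Install libnuma-dev, a ROCm prerequisite on Ubuntu 18.04
sudo apt install -y libnuma-dev
# Reboot before continuing with the ROCm repository setup
sudo reboot
```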

The Anaconda Python distribution for package management and deployment

Anaconda is a distribution for package management and deployment. It facilitates the development of scientific computing and machine learning through Python and R (a programming language focused on statistical computing). Anaconda simplifies the process of managing various packages and also their deployment. The Anaconda repository maintains more than 1,000 professionally built packages for data science.

It has a package management system called conda for installing various scientific packages. conda also provides a build feature for building your own Python packages and uploading them to the Anaconda servers. It can be used for installing, running, and updating packages along with their dependencies, and it can manage software for any language, even though it was originally made for Python packages.
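A typical conda workflow looks like the following sketch (the environment name gpu-env is our own, not from the book):

```shell
# Create an isolated environment with a specific Python version
conda create -n gpu-env python=3.7
# Activate it, then install a scientific package with its dependencies
conda activate gpu-env
conda install numpy
# Later, update the package (and its dependencies) in place
conda update numpy
```

Keeping each project in its own environment is what lets conda manage conflicting dependency sets side by side.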

The Anaconda distribution...

GPU-enabled Python programming

The fundamental concept behind Python programming on GPU devices is based on what we have learned so far about CUDA, ROCm, and Anaconda. It is all about using their Python integrations: PyCUDA, PyOpenCL, CuPy, and Numba, respectively.

With PyCUDA, you can use Python with NVIDIA GPUs, while with PyOpenCL, you can use NVIDIA and AMD GPUs, as well as other massively parallel compute devices. CuPy allows you to use NumPy-like features on an NVIDIA GPU. After installing Numba (formerly Accelerate) with conda, you can import the numba package very easily within your code to leverage your GPU device. We will explore this in detail in Chapter 8, Working with Anaconda, CuPy, and Numba for GPUs.
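Because CuPy mirrors the NumPy API, array code often needs only its import swapped to run on the GPU. The sketch below uses NumPy so it runs on any machine; on a CUDA-equipped system, replacing the import with `import cupy as np` would, in the common case, execute the same operations on the GPU:

```python
import numpy as np  # on a CUDA machine: import cupy as np (CuPy mirrors this API)

# Element-wise arithmetic and reductions look identical in NumPy and CuPy
x = np.arange(6, dtype=np.float64).reshape(2, 3)
y = (x * 2.0 + 1.0).sum(axis=1)  # one sum per row
print(y)  # [ 9. 27.]
```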

The dual advantage

...

Summary

In this chapter, we learned about the basic differences between programming and computing. We covered some of the fundamental concepts regarding how CUDA, ROCm, and Numba leverage GPUs, and learned about the many libraries facilitated by each of them. The features of PyCUDA, PyOpenCL, and Numba were also highlighted.

Now that we're at the end of this chapter, you should be able to install CUDA, ROCm, and Anaconda on an Ubuntu-based system. You should also be able to set up the hipify tool and start porting existing CUDA code to its HIP version, especially if you are a research-code enthusiast. You are now familiar with the configuration differences between using OpenCL with CUDA and using it with ROCm. You have also learned the various reasons why Python is a great choice for GPU programming.

Before we start our hands-on experience with programming...

Further reading

More information on PyCUDA, PyOpenCL, CuPy, and Numba with their features can be obtained from the following comprehensive resources:
