Python Parallel Programming Solutions [Video]

Cookbook
Giancarlo Zaccone

Master efficient parallel programming to build powerful applications using Python

Video Details

ISBN 13: 9781787280496
Course Length: 3 hours 59 minutes

Video Description

This course will teach you parallel programming techniques using examples in Python, and will help you explore the many ways in which you can write code that allows more than one task to run at once.

Starting with an introduction to the world of parallel computing, we move on to cover the fundamentals in Python. This is followed by an exploration of the thread-based parallelism model using the Python threading module: you will synchronize threads using locks, mutexes, semaphores, and queues, examine the GIL, and use a thread pool. Next, you will learn about process-based parallelism, where you will synchronize processes using message passing and evaluate the performance of MPI modules for Python.
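The thread synchronization the course covers can be sketched with the standard-library threading module: a lock guards a shared counter so that concurrent read-modify-write updates are never lost (a minimal illustration, not taken from the course's own examples).

```python
import threading

# Shared counter protected by a lock; without the lock, the
# read-modify-write in worker() could interleave and lose updates.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:  # acquire/release around the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every increment is accounted for
```

Note that because of the GIL, CPython threads help with I/O-bound work rather than CPU-bound work, which is why the course then turns to process-based parallelism.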

Moving on, you’ll get to grips with the asynchronous parallel programming model using the Python asyncio module, and will see how to handle exceptions. You will discover distributed computing with Python, and learn how to install a broker, use the Celery Python module, and create a worker. You will also explore PyCSP, the SCOOP framework, and the Pyro4 and RPyC modules in Python. Further on, you will get hands-on with GPU programming in Python using the PyCUDA module, and will evaluate its performance limitations.
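The asyncio model described above can be sketched as follows: two coroutines run concurrently on one event loop, and an exception raised in one of them is collected rather than propagated (the names `fetch` and `main` are illustrative, not from the course).

```python
import asyncio

async def fetch(label, delay):
    # Simulate an I/O-bound operation; one input deliberately fails.
    await asyncio.sleep(delay)
    if label == "bad":
        raise ValueError(f"{label} failed")
    return f"{label} done"

async def main():
    # return_exceptions=True collects exceptions as results
    # instead of cancelling the other coroutines.
    return await asyncio.gather(
        fetch("a", 0.01),
        fetch("bad", 0.01),
        return_exceptions=True,
    )

results = asyncio.run(main())
print(results[0])                      # a done
print(isinstance(results[1], ValueError))  # True
```

Both coroutines sleep concurrently, so the whole run takes roughly one delay rather than two.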

Style and Approach

A step-by-step guide to parallel programming using Python, with videos that feature one or more programming examples. It is a practically oriented course that also covers the necessary underlying parallel computing concepts.

Table of Contents

Getting Started with Parallel Computing and Python
The Parallel Computing Memory Architecture
Memory Organization
Memory Organization (Continued)
Parallel Programming Models
Designing a Parallel Program
Evaluating the Performance of a Parallel Program
Introducing Python
Working with Processes in Python
Working with Threads in Python
Thread-Based Parallelism
Defining a Thread
Determining the Current Thread
Using a Thread in a Subclass
Thread Synchronization with Lock
Thread Synchronization with RLock
Thread Synchronization with Semaphores
Thread Synchronization with a Condition
Thread Synchronization with an Event
Using the "with" Statement
Thread Communication Using a Queue
Evaluating the Performance of Multithread Applications
Process-Based Parallelism
Spawning a Process
Naming a Process
Running a Process in the Background
Killing a Process
Using a Process in a Subclass
Exchanging Objects between Processes
Synchronizing Processes
Managing a State between Processes
Using a Process Pool
Using the mpi4py Python Module
Point-to-Point Communication
Avoiding Deadlock Problems
Using Broadcast for Collective Communication
Using Scatter for Collective Communication
Using Gather for Collective Communication
Using Alltoall for Collective Communication
The Reduction Operation
Optimizing the Communication
Asynchronous Programming
Using the concurrent.futures Python Modules
Event Loop Management with Asyncio
Handling Coroutines with Asyncio
Manipulating a Task with Asyncio
Dealing with Asyncio and Futures
Distributed Python
Using Celery to Distribute Tasks
Creating a Task with Celery
Scientific Computing with SCOOP
Handling Map Functions with SCOOP
Remote Method Invocation with Pyro4
Chaining Objects with Pyro4
Developing a Client-Server Application with Pyro4
Communicating Sequential Processes with PyCSP
A Remote Procedure Call with RPyC
GPU Programming with Python
Using the PyCUDA Module
Building a PyCUDA Application
Understanding the PyCUDA Memory Model with Matrix Manipulation
Kernel Invocations with GPU Array
Evaluating Element-Wise Expressions with PyCUDA
The MapReduce Operation with PyCUDA
GPU Programming with NumbaPro
Using GPU-Accelerated Libraries with NumbaPro
Using the PyOpenCL Module
Building a PyOpenCL Application
Evaluating Element-Wise Expressions with PyOpenCL
Testing Your GPU Application with PyOpenCL

What You Will Learn

  • Synchronize multiple threads and processes to manage parallel tasks
  • Implement message passing communication between processes to build parallel applications
  • Program your own GPU cards to address complex problems
  • Manage computing entities to execute distributed computational tasks
  • Write efficient programs by adopting the event-driven programming model
  • Explore cloud technology with Django and Google App Engine
  • Apply parallel programming techniques that can lead to performance improvements
