
You're reading from Building a BeagleBone Black Super Cluster

Product type: Book
Published in: Nov 2014
ISBN-13: 9781783989447
Edition: 1st Edition
Author: Andreas J Reichel
Andreas Josef Reichel was born in 1982 in Munich, Bavaria, to Josef and Ursula. He attended elementary school from 1989 to 1993, continued with lower secondary education for 4 years, and started middle school in 1996. In 1999, he finished school as the best graduate of the year. From 2000 to 2001, he went to Fachoberschule and obtained his subject-linked university entrance qualification, with which he began to study Physical Technology at the University of Applied Sciences in Munich. After two semesters, he received his preliminary diploma and began general studies of Physics at the Ludwig Maximilian University of Munich in 2003. In 2011, he completed his Dipl.-Phys. (Univ.) in experimental physics with the THz characterization of thin semiconductor films in photonics and optoelectronics. He is now working on his doctoral dissertation (Dr. rer. nat.) on plasma etching processes for semiconductors at the Walter Schottky Institute of the Technische Universität München in Garching. He has been programming since he was 13 years old and, in his spare time, has learned languages such as BASIC, Pascal, C/C++, x86 and x64 assembly, as well as HTML, PHP, JavaScript, and the MySQL database system. Since 1995, he has been an active hobby musician in different accordion ensembles and orchestras. He also loves learning about languages and drawing, and he began practicing Chinese martial arts in 2012. He invests most of his free time in hobby electronics projects and family genealogical research. He was a co-author of Charge carrier relaxation and effective masses in silicon probed by terahertz spectroscopy, S. G. Engelbrecht, A. J. Reichel, and R. Kersting, Journal of Applied Physics.

Chapter 4. Parallel Computing with OpenMPI and ScaLAPACK

The advantage of a cluster compared to a normal personal computer is its capability of performing tasks in parallel and thereby reducing the overall calculation time. On modern computer platforms, there are two main architectures that perform parallel computations:

  • Shared memory systems

  • Distributed memory systems

On the shared memory system architecture, different CPUs or CPU cores share the same main memory. This means that a program loaded by one core or CPU can, in principle, be used by the other cores or CPUs as well. To use such a system optimally, the memory or process management has to ensure that two processes do not access the same locations of the shared main memory at the same time. Otherwise, one process might have to wait for another until it has completed its operation or freed some locked resources. Software programmers also have to take care of the fact that only one process can alter such memory areas at a...

MPI – Message Passing Interface


MPI stands for Message Passing Interface. As a software standard, it realizes the most important part of cluster intercommunication: it is the fundamental software layer upon which all program control and data communication is based. Although MPI defines a system with which the nodes of a cluster can communicate, it does not prescribe the protocol with which this communication takes place. Nevertheless, every MPI implementation provides the same standard application programming interface (API). In our case, MPI will use the TCP/IP protocol, sending data packets across the cluster's network backbone. The development of MPI began in 1992, and the following important key features, among others, were required:

  • Peer-to-peer communication

  • Global communication

  • Implementation in C and Fortran77

These are included in the MPI 1.0 standard. Beginning with Version 2 of MPI, the following important features were added:

  • Parallel data input and output

  • Dynamic process management...

Installing and configuring OpenMPI


The following subsection will guide you through downloading and configuring the OpenMPI package on Ubuntu. It will also provide basic programming examples to help you create simple cluster applications and understand the basic usage and functioning of the MPI standard interface.

Downloading and installing OpenMPI packages

Thankfully, OpenMPI is available as a standard package under Ubuntu Linux. You can install a precompiled version with the following command:

sudo apt-get install openmpi1.5-bin openmpi1.5-doc libopenmpi1.5-dev

Keep in mind that this has to be done on every single node. You can also execute the command on each node remotely from your master node via ssh.

For the following tutorial, let's assume that you have the following network setup:

  • 192.168.0.16: This is your master node with the name gatekeeper

  • 192.168.0.17: This is your first slave node with the name beowulf1

  • 192.168.0.18: This is your second slave node with the...
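With a setup like this, OpenMPI is typically told about the nodes through a hostfile. A sketch matching the addresses above might look as follows; the name beowulf2 for the second slave and the slots values (how many processes to start on each node) are assumptions for illustration:

```
# hostfile: one line per node; slots = processes to start there
gatekeeper slots=1
beowulf1   slots=1
beowulf2   slots=1
```

A program can then be launched across the cluster with, for example, mpirun --hostfile hostfile -np 3 ./program.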

Installing and configuring ScaLAPACK


To install ScaLAPACK, you can download it from netlib at http://www.netlib.org/scalapack/#_scalapack_version_2_0_2.

Download the source tarball into a folder in your /var/mpishare directory, and extract it into a convenient directory, for example, /var/mpishare/sca. As ScaLAPACK depends on many different libraries, it might not be easy to compile it successfully. To simplify things, there is a handy setup script written in Python. You can download it from http://www.netlib.org/scalapack/scalapack_installer.tgz.

Extract the installer script into the source directory of ScaLAPACK. Then, you can run the script with the following command:

./setup.py --prefix=/var/mpishare/sca \
    --mpiincdir=/usr/include/mpi/ --downall

The --prefix option tells the setup script where to install the ScaLAPACK libraries, which will end up in a subdirectory called lib below this folder. The --mpiincdir option specifies the location of the OpenMPI include files, and the --downall option tells...
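Once the build finishes, your own programs are linked against the installed libraries. A hedged example of such a link step is shown below; the exact library names and order depend on which BLAS/LAPACK variants the installer built, so treat this as a template rather than a definitive command:

```
mpicc my_program.c -o my_program \
    -L/var/mpishare/sca/lib -lscalapack \
    -llapack -lblas -lgfortran -lm
```

The -L path matches the lib subdirectory created under the --prefix location chosen above.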

Summary


This chapter showed you the first steps into the world of parallel computations with ScaLAPACK on a BeagleBone Black cluster by distributing a problem between four cluster nodes.

To introduce you to this technique, the predominant computer architectures were explained at the beginning of the chapter, and you were introduced to OpenMPI, an implementation of the Message Passing Interface. OpenMPI is the basis for our BeagleBone Black cluster and provides the communication layer for data transfer between the cluster nodes. The chapter then provided a guide to installing and configuring OpenMPI on your BeagleBone Black boards. The MPI part of this chapter ended with some examples of how to use the API to transfer simple integer values between the cluster nodes.

To introduce you to the mathematical computations, the chapter continued with the installation procedure for ScaLAPACK, where you learned how to download and compile free library source code. Although it is not possible to go into...

