Concurrency – A High-Level Overview

For many who don’t work with concurrent programs (and for some who do), concurrency means the same thing as parallelism. In colloquial speech, people don’t usually distinguish between the two. But there are clear reasons why computer scientists and software engineers make a big deal out of differentiating concurrency from parallelism. This chapter is about what concurrency is (and what it is not) and introduces some of its foundational concepts.

Specifically, we’ll cover the following main topics in this chapter:

  • Concurrency and parallelism
  • Shared memory versus message passing
  • Atomicity, race, deadlocks, and starvation

By the end of this chapter, you will have a high-level understanding of concurrency and parallelism, basic concurrent programming models, and some of the fundamental concepts of concurrency.

Technical requirements

This chapter requires some familiarity with the Go language. Some of the examples use goroutines, channels, and mutexes.

Concurrency and parallelism

There was probably a time when concurrency and parallelism meant the same thing in computer science. That time is long gone. Many people will tell you what concurrency is not (“concurrency is not parallelism”), but when it comes to saying what concurrency is, a simple definition is usually elusive. Different definitions of concurrency capture different aspects of the concept, because concurrency is not how the real world works: the real world works in parallel. I will try to summarize some of the core ideas behind concurrency, hoping to convey its abstract nature well enough that you can apply it to solve practical problems.

Many things around us act independently at the same time. There are probably people around you minding their own business, and sometimes, they interact with you and with each other. All these things happen in parallel, so parallelism is the natural way of thinking about multiple independent things...
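To make the distinction concrete in Go terms, here is a minimal sketch (my own, not the book’s) that pins the scheduler to a single OS thread with runtime.GOMAXPROCS(1). The two goroutines are still concurrent, because their steps may interleave in any order, but they can never execute in parallel:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // With a single OS thread, goroutines take turns; raising
    // GOMAXPROCS allows true parallelism on a multicore machine.
    runtime.GOMAXPROCS(1)
    var wg sync.WaitGroup
    for _, name := range []string{"A", "B"} {
        wg.Add(1)
        go func(name string) {
            defer wg.Done()
            for i := 0; i < 3; i++ {
                fmt.Println(name, i)
            }
        }(name)
    }
    wg.Wait()
}

The output order is not specified by the language: the program is correct only if every possible interleaving produces an acceptable result, and that is the essence of thinking concurrently rather than in parallel.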

Shared memory versus message passing

If you have been developing with Go for some time, you have probably heard the phrase “Do not communicate by sharing memory. Instead, share memory by communicating.” Sharing memory among the concurrent blocks of a program creates vast opportunities for subtle bugs that are hard to diagnose. These problems manifest themselves randomly, usually under load that cannot be simulated in a controlled test environment, and they are hard or impossible to reproduce. What cannot be reproduced cannot be tested, so finding such problems is usually a matter of luck. Once found, they are usually easy to fix with very minor changes, which adds insult to injury. Go supports both the shared memory and message-passing models, so we will spend some time looking at each paradigm.

In a shared memory system, there can be multiple processors or cores with multiple execution threads that use the same memory. In a Uniform...
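To see the two paradigms side by side, here is a minimal sketch (mine, not the book’s) of the same counter written both ways: in the shared memory version, every goroutine touches a common variable under a mutex; in the message-passing version, a single owner goroutine holds the counter and the workers send it increments over a channel:

package main

import (
    "fmt"
    "sync"
)

// Shared memory: goroutines update a common counter, and a
// mutex guards every access to it.
func sharedMemoryCounter() int {
    var (
        mu      sync.Mutex
        counter int
        wg      sync.WaitGroup
    )
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }
    wg.Wait()
    return counter
}

// Message passing: only the owner goroutine ever touches the
// counter; the workers communicate increments over a channel.
func messagePassingCounter() int {
    increments := make(chan int)
    done := make(chan int)
    go func() {
        counter := 0
        for n := range increments {
            counter += n
        }
        done <- counter
    }()
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            increments <- 1
        }()
    }
    wg.Wait()
    close(increments)
    return <-done
}

func main() {
    fmt.Println(sharedMemoryCounter(), messagePassingCounter()) // 100 100
}

Both versions print 100; the difference is where the coordination lives – in a lock around the data, or in the structure of the communication.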

Atomicity, race, deadlocks, and starvation

To write and analyze concurrent programs successfully, you have to be aware of some key concepts: atomicity, race, deadlocks, and starvation. Atomicity is a property you have to exploit carefully for safe and correct operation. Race is a natural condition related to the timing of events in a concurrent system, and it can create subtle, irreproducible bugs. You have to avoid deadlocks at all costs. Starvation is usually related to scheduling algorithms, but can also be caused by bugs in the program.
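Of these, deadlock is the easiest to demonstrate directly. In this minimal sketch (mine, not the book’s), two goroutines acquire the same two mutexes in opposite order, and the sleeps force the fatal interleaving:

package main

import (
    "sync"
    "time"
)

func main() {
    var a, b sync.Mutex

    go func() {
        a.Lock()
        time.Sleep(100 * time.Millisecond) // let main acquire b
        b.Lock()                           // blocks forever: main holds b
        b.Unlock()
        a.Unlock()
    }()

    b.Lock()
    time.Sleep(100 * time.Millisecond) // let the goroutine acquire a
    a.Lock()                           // blocks forever: the goroutine holds a
    a.Unlock()
    b.Unlock()
}

Since every goroutine ends up blocked, the Go runtime aborts the program with fatal error: all goroutines are asleep - deadlock!. The standard cure is to make all goroutines acquire locks in the same order.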

A race condition occurs when the outcome of a program depends on the sequence or timing of concurrent executions. It becomes a bug when at least one of the possible outcomes is undesirable. Consider the following data type representing a bank account:

type Account struct {
    Balance int
}

// Not safe for concurrent use: the check and the update are separate steps.
func (acct *Account) Withdraw(amt int) error {
    if acct.Balance < amt {
        return errors.New("insufficient funds")
    }
    acct.Balance -= amt
    return nil
}
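The gap between checking the balance and updating it is the race: two concurrent withdrawals can both pass the check before either subtracts, leaving the account overdrawn. Here is a minimal sketch (mine, not the book’s) of one conventional fix, guarding the account with a sync.Mutex so that the check and the update happen as one atomic step:

package main

import (
    "errors"
    "fmt"
    "sync"
)

// SafeAccount guards its balance with a mutex so that the
// balance check and the update happen as one atomic step.
type SafeAccount struct {
    mu      sync.Mutex
    Balance int
}

func (acct *SafeAccount) Withdraw(amt int) error {
    acct.mu.Lock()
    defer acct.mu.Unlock()
    if acct.Balance < amt {
        return errors.New("insufficient funds")
    }
    acct.Balance -= amt
    return nil
}

func main() {
    acct := &SafeAccount{Balance: 100}
    var wg sync.WaitGroup
    // Two concurrent withdrawals of 80: with the mutex, exactly
    // one succeeds and the balance stays non-negative. Without
    // it, both could pass the check and drive the balance to -60.
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := acct.Withdraw(80); err != nil {
                fmt.Println("withdraw failed:", err)
            }
        }()
    }
    wg.Wait()
    fmt.Println("final balance:", acct.Balance)
}

Running the unguarded version under Go’s race detector (go run -race) flags the conflicting accesses even when the output happens to look correct.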

Summary

The main theme in this chapter was that concurrency is not parallelism. Parallelism is an intuitive concept people are used to because the real world works in parallel. Concurrency is a mode of computation where blocks of code may or may not run in parallel. The key here is to make sure we get the correct result no matter how the program is run.

We also talked about the two main concurrent programming paradigms: message passing and shared memory. Go permits both, which makes it easy to program, but equally easy to make mistakes. The last part of this chapter was about fundamental concepts of concurrent programming – that is, race conditions, atomicity, deadlocks, and starvation. The important point to note here is that these are not theoretical concepts – they are real situations that affect how programs run and how they fail.

We tried to avoid Go specifics in this chapter as much as possible. The next chapter will cover Go concurrency primitives...

Question

We looked at the dining philosophers problem with a single philosopher who walks while thinking. What problems can you foresee if there are two philosophers?

Further reading

The literature on concurrency is very rich. These are only some of the seminal works in the field of concurrency and distributed computing that are related to the topics we discussed in this chapter. Every serious software practitioner should at least have a basic understanding of these.

The following paper is easy to read and short. It defines mutual exclusion and critical sections: E. W. Dijkstra. 1965. Solution of a problem in concurrent programming control. Commun. ACM 8, 9 (Sept. 1965), 569. https://doi.org/10.1145/365559.365617.

This is the CSP book, which defines CSP as a formal language: C. A. R. Hoare. 2004 [originally published in 1985 by Prentice Hall International]. Communicating Sequential Processes. Available at usingcsp.com.

The following paper discusses the ordering of events in a distributed system: Leslie Lamport. 1978. Time, Clocks, and the Ordering of Events in a Distributed System. Commun. ACM 21, 7 (July 1978), 558–565.
