Notes on Go concurrency
Concurrent computing is a form of computing in which several computations are executed during overlapping time periods (concurrently) instead of sequentially, with one completing before the next starts. This is a property of a system (which may be an individual program, a computer, or a network), and there is a separate execution point or "thread of control" for each computation ("process"). A concurrent system is one where a computation can advance without waiting for all other computations to complete.
Two common models of concurrency are:
- Shared memory
- Message passing
Shared memory
Here the concurrent components communicate by altering the contents of shared memory locations. This form of concurrency requires some form of locking to coordinate between threads, such as:
- mutexes
- semaphores
- monitors
Programs that implement these correctly are known as thread-safe.
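As a minimal sketch of this style in Go (the counter, the loop bounds, and the number of goroutines are arbitrary, chosen only for illustration), a sync.Mutex can protect a shared counter that several goroutines increment; the sync.WaitGroup, covered later in these notes, is only used to wait for the goroutines to finish:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    var wg sync.WaitGroup
    counter := 0

    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                mu.Lock() // only one goroutine may touch counter at a time
                counter++
                mu.Unlock()
            }
        }()
    }

    wg.Wait()
    fmt.Println("counter:", counter) // always 4000; without the mutex this is a data race
}
Removing the Lock/Unlock pair turns the program into a data race, and the final count becomes unpredictable.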
Message passing
Here the concurrent components communicate by exchanging messages.
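A minimal sketch of this style in Go (the message text is arbitrary): two goroutines coordinate by sending a value over a channel instead of writing to memory they both read:
package main

import "fmt"

func main() {
    messages := make(chan string)

    // The sender communicates by handing a value to the channel...
    go func() {
        messages <- "ping"
    }()

    // ...and the receiver blocks until that value arrives, so no extra
    // locking is needed to coordinate the two goroutines.
    fmt.Println(<-messages)
}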
Concurrency can happen at the programming level or at the operating system level. At the programming level we encounter:
- Channels
- Coroutines
- Futures and promises
At the operating system level:
- Computer multitasking
- Process
- Thread
On threads
A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically part of the operating system. The scheduler is responsible for assigning resources to a given piece of work (a set of instructions to be executed), in this case the thread.
Most operating systems support multiple threads of execution. Problems can arise when dealing with:
- Shared Data
- Signaling between threads
Concurrency primitives
A set of concurrency primitives, usually exposed in some way by the OS to programming languages, includes:
- Locks (a.k.a. Mutex -> MUTually EXclusive): Only one thread executes in selected regions of code
- Monitors: like locks, but bundled with the data they protect; the lock is acquired and released implicitly when entering and leaving the monitor
- Semaphores: support a wide range of coordination scenarios
- Wait-notify: like semaphores, but a notification sent when no thread is waiting is lost, so missed notifications must be handled explicitly by the programmer
- Condition variables: let a thread sleep until a given condition holds (sketched below)
- Channels and buffers: listen/collect messages
- Non-blocking data structures: e.g. non-blocking queues and atomic counters; they allow access from many threads with no locks, or with a minimal amount of locking
Locks and semaphores can handle every concurrency use case you can imagine.
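As an example of the wait-notify and condition-variable entries above, here is a hedged sketch using Go's sync.Cond (the ready flag and the goroutine are only for illustration); note how the for !ready loop guards against the missed-notification problem mentioned in the list:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    ready := false

    go func() {
        mu.Lock()
        ready = true
        mu.Unlock()
        cond.Signal() // wake up a waiter, if there is one
    }()

    mu.Lock()
    for !ready { // re-check the condition: handles a signal sent before we started waiting
        cond.Wait() // atomically unlocks mu, sleeps, and re-locks when woken
    }
    mu.Unlock()

    fmt.Println("condition met")
}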
Goroutines
Go has its own way of dealing with concurrency. The goroutine is a fundamental concept that represents the execution of a concurrent routine. A goroutine is a lightweight thread managed by the Go runtime; it runs in the same address space as the "parent" process, so access to shared memory must be synchronized.
To spawn goroutines, Go uses the keyword go. For instance,
fmt.Printf("hello ")
go LaunchThisConcurrently()
fmt.Printf("world")
This would print "hello world" with no delay between the words, since the LaunchThisConcurrently function call is executed concurrently while main keeps going. But how do we synchronize the jobs performed by goroutines? We can use WaitGroups.
Exploring sync.WaitGroup
Consider the following goroutine call:
package main

import (
    "fmt"
)

func Concurrent() {
    for i := 0; i < 5; i++ {
        fmt.Println("Concurrent count", i)
    }
}

func main() {
    go Concurrent()
    fmt.Println("Hello, playground")
}
As you can see, the playground exits without showing any of the goroutine's output: main returns before Concurrent gets a chance to run, and the program ends as soon as main does.
If we want to actually wait for the execution, we need to instantiate a wait group, sync.WaitGroup. Consider the following modification to our code,
package main

import (
    "fmt"
    "sync"
)

func Concurrent(wg *sync.WaitGroup) {
    for i := 0; i < 5; i++ {
        fmt.Println("Concurrent count", i)
    }
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go Concurrent(&wg)
    fmt.Println("Hello, playground")
    wg.Wait()
}
Now we have a sync.WaitGroup. By calling wg.Add(1) we tell it that it should wait for one goroutine to finish its execution (one, since we did wg.Add(1)). Execution will block at wg.Wait() and continue once all goroutines have ended, that is, once each of them has called wg.Done().
Thus, we can do something more advanced, like spawning several workers:
package main

import (
    "fmt"
    "sync"
)

func Concurrent(wg *sync.WaitGroup, id int) {
    for i := 0; i < 5; i++ {
        fmt.Printf("%d worker, counting %d\n", id, i)
    }
    wg.Done()
}

func main() {
    var workers = 3
    var wg sync.WaitGroup
    wg.Add(workers)
    for i := 0; i < workers; i++ {
        go Concurrent(&wg, i)
    }
    fmt.Println("Hello, playground")
    wg.Wait()
}
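Two remarks on this last example. First, the order of the output lines is nondeterministic: the scheduler is free to interleave the three workers however it likes, so only each worker's own counts are guaranteed to appear in order. Second, a common Go idiom (a refinement, not something the example above requires) is to call wg.Done() with defer at the top of the worker, so the WaitGroup is decremented even if the function returns early or panics:
func Concurrent(wg *sync.WaitGroup, id int) {
    defer wg.Done() // runs no matter how the function exits
    for i := 0; i < 5; i++ {
        fmt.Printf("%d worker, counting %d\n", id, i)
    }
}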