A Programmer's Guide To Go Part 3 - Goroutines & Concurrency
Written by Mike James   
Thursday, 13 February 2020

Go is renowned for its easy-to-use approach to concurrency - it is part of the language. In this final part of our look at the key points of Go we look in depth at how goroutines and channels work together to orchestrate concurrency and parallelism.


Part 3 Concurrency, goroutines and channels

Go takes a particularly clean approach to object-oriented programming, but one of the main reasons people are attracted to it is that it offers a very simple and efficient implementation of concurrency. Where other languages offer threads, or lighter-weight variations on threads, Go introduces hardly anything new - you just use goroutines, with channels for communication. However, you do need to be careful not to misunderstand what you are getting.

The goroutine

Go provides a very simple and direct approach - the goroutine. 

A goroutine is just a function invoked using the go statement. Writing 

go function(parameters)

the function can be a method and the parameters are evaluated as usual.

The big difference is that the calling program doesn't wait for the function to finish - i.e. this is a non-blocking call. The function starts executing on its own, but in the same address space as the invoking code.

Any return values the goroutine might produce are ignored when it ends, and if the invoking code terminates, the goroutine terminates with it. 

So if you want to start a function doing something while the main program gets on with something else, then you need a goroutine.

One thread?

This all sounds very simple, but you need to be very clear about how it all works, so let's look at the simplest possible example. 

Let's start off with a function that just prints 1000 values:

func count() {
    for i := 0; i < 1000; i++ {
        fmt.Println(i)
    }
}

Notice that i is a local variable. 

Now if we have a simple main function to call the function everything works as you would expect - there is nothing new here:

func main() {
    count()
}

If you run this then you see the 1000 values printed, followed by the usual

"Success process exited with code 0"

message that indicates that the main function terminated without error.  

Now convert the call to a goroutine:

func main() {
    go count()
}

If you try this out you will discover that you don't see anything printed at all. The reason is that the main program comes to an end before the goroutine gets a chance to run - so no go. 

To give the goroutine time to execute we need to keep the main function running. The most obvious thing to do is to put the main function into an infinite loop:

func main() {
    runtime.GOMAXPROCS(1)
    go count()
    for {    }
}
Don't worry about the call to GOMAXPROCS - all will be explained. Even though the main goroutine keeps running, the count goroutine never gets to run.
 

What is going on?

Go makes use of a fairly simple scheduling algorithm that has evolved over time.

By default the number of threads used to run goroutines is set to the number of cores the machine has, but you can set it to whatever you want using the GOMAXPROCS function. In this case it has been set to one, meaning that there is a single thread to run all of the goroutines. With this extra information you should be able to see what is happening. The main goroutine never gives up the single thread available for execution and so the count goroutine never gets to run. 

For the goroutine to actually run, the main function has to block and release the thread. That is, the main function has to be suspended waiting for something to happen, freeing up the single thread so that one of the other goroutines can start executing.

There is an important change to the Go scheduler that was introduced in Go 1.2. Before 1.2, scheduling was cooperative in the sense that another goroutine only got a chance to run when the currently executing goroutine released the thread. From Go 1.2 the scheduler is also invoked occasionally when a function is called. This means that any loop that contains a function call can be interrupted and the thread passed to another goroutine.

By default the Go runtime allocates as many threads as the processor has cores. This makes things a little more difficult to understand when you first start using goroutines. For example, if in the previous example you remove the call to GOMAXPROCS and run the program on a single-core processor, you will get the same behaviour. However, if you run the program on a machine that has more than one core, then you will see the count goroutine run, print 0 to 999 and then free its thread, while the main goroutine never ends, keeping its thread occupied until you stop the debugger.

It is an important design principle that the behaviour of a program shouldn't depend on the number of threads or cores it has available to run it. This means our example isn't a good one, but it does demonstrate how things work.

For the moment it is simpler to restrict our attention to a single thread, so runtime.GOMAXPROCS(1) appears at the start of each of the following examples - i.e. there is only one thread involved in running the goroutines, and what happens doesn't depend on what machine you are using.




Last Updated ( Friday, 14 February 2020 )