|A Programmer's Guide To Go Part 3 - Goroutines & Concurrency|
|Written by Mike James|
|Thursday, 13 February 2020|
So far the facilities for concurrent programming look a little ad hoc, if easy to use. The whole thing suddenly starts to make sense when you learn about channels. A channel is a simple data structure - something like a typed array or buffer - that can be shared safely between goroutines.
To declare a channel you use something like:
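In outline, assuming a channel of int (the names ch and buf are just for illustration):

```go
package main

import "fmt"

func main() {
	// No size argument gives a single-element, unbuffered channel.
	ch := make(chan int)
	// A size argument gives a buffered channel, here holding up to 10 ints.
	buf := make(chan int, 10)
	fmt.Println(cap(ch), cap(buf)) // capacities of the two channels
}
```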
where type is the type of data stored in the channel and size is the number of elements. If you don't specify a size you get a single-element, or unbuffered, channel.
To send a value to a channel you use the <- operator with the channel on the left, and to receive a value you also use the <- operator, this time with the channel on the right. For example:
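For instance, assuming a buffered channel so that the send doesn't block (the names are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1) // buffered, so the send below doesn't block
	ch <- 42                // send: channel on the left of <-
	v := <-ch               // receive: channel on the right of <-
	fmt.Println(v)          // prints 42
}
```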
This is where things get interesting. As well as allowing values to be passed, channels also act as a blocking mechanism that frees a thread to run another goroutine. For a single-element channel the rules are:
A routine that performs a channel read blocks until a value is available.
A routine that performs a channel write blocks until the value has been read by another routine.
These two rules result in the more or less automatic passing of the thread of execution between goroutines. You don't need to worry about calling Sleep or some other blocking operation - simply reading from or writing to a channel frees the thread to run another goroutine.
To see this in action consider the following main program:
This creates an unbuffered int channel, calls a goroutine, passing it the channel, and then retrieves a value from the channel. When the main program reaches the channel read it blocks because there is no value in the channel yet. The thread is freed up and runs the goroutine, which could be something like:
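Putting the two routines together, a minimal sketch, assuming the goroutine is a function called sender (a name chosen just for this example):

```go
package main

import "fmt"

// sender stores -1 in the channel; on an unbuffered channel the
// send completes only when another routine reads the value.
func sender(ch chan int) {
	ch <- -1
}

func main() {
	ch := make(chan int) // unbuffered int channel
	go sender(ch)        // start the goroutine, passing it the channel
	v := <-ch            // blocks until the goroutine sends a value
	fmt.Println(v)       // prints -1
}
```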
This stores -1 in the channel and blocks, so freeing the thread of execution, which returns to the main program. The main program retrieves the value from the channel and ends, along with the goroutine.
You can see that the channel in this case provides both communication between the two routines, i.e. the passing of the value -1, and the automatic sharing of the thread of execution. That is, when the main program needed a value from the goroutine it blocked, which allowed the goroutine to execute until it had produced the value the main program needed; then the goroutine blocked and the main program resumed. All very neat, and notice that you get exactly the same behavior no matter how many threads are allocated or how long the processes take to complete.
For a slightly more advanced example consider:
This calls the goroutine count and then blocks waiting for the values it returns until it gets a -1, which it treats as a "terminate" condition. The goroutine is:
Notice that the for loop in the goroutine blocks and is effectively suspended until the main program has finished processing the value and asks for another. This is more subtle than you might think at first: while the goroutine unblocks when the main routine reads the value from the channel, it doesn't get the thread back until the main routine asks for another value and blocks. How would this be different if you allocated two threads to run the program? The answer is that it wouldn't be different at all, and this is a good thing.
The complete program is:
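A minimal version of the complete program, assuming count sends the integers 0 through 4 before the -1 terminator:

```go
package main

import "fmt"

// count sends 0, 1, 2, 3, 4 and then -1 as a terminate signal.
func count(ch chan int) {
	for i := 0; i < 5; i++ {
		ch <- i // blocks until main has read the value
	}
	ch <- -1 // the "terminate" condition
}

func main() {
	ch := make(chan int)
	go count(ch)
	for {
		v := <-ch // blocks until count sends the next value
		if v == -1 {
			break // treat -1 as "terminate"
		}
		fmt.Println(v)
	}
}
```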
Go's channel-based concurrency is subtle but mostly safe and easy to use. The trick is that for a goroutine to run it has to be unblocked and there has to be a thread free to run it.
|Last Updated ( Friday, 14 February 2020 )|