Go Simplified: Channels
Share information, not resources

Welcome back to the Go Simplified series. We covered scheduling and context switching the last time around. In case you have not checked it out, you can do so here.
Today we shall discuss channels, and these are the questions I shall answer for you:
- The mechanics behind channels
- How channels work
- How send and receive work underneath
Okay, so first things first: what is a channel?
Go uses the concept of goroutines, which can be thought of as “threadlets”. They behave like threads, have similar states and follow a fork-join model. Sometimes communication is required between goroutines, and this is where channels come into play. Channels provide a structure for goroutines to communicate with each other efficiently.
Sounds sweet, what else can I do with channels?
Well, you can use them to synchronize goroutines. They are typed, so you can rest assured that you won’t get a surprise when reading values from them. In addition, they are thread safe, so a single channel can be used concurrently by multiple goroutines.
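Here is a minimal sketch of both points in action: the channel is typed (chan string), and the receive also synchronizes the two goroutines, since main blocks until the worker has sent its message.

package main

import "fmt"

func main() {
    done := make(chan string) // typed: only string values can travel through it

    go func() {
        // ... do some work ...
        done <- "work finished" // communicate the result back
    }()

    msg := <-done // blocks until the goroutine sends, synchronizing the two
    fmt.Println(msg)
}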
The hchan struct!
All channels use the hchan struct behind the scenes. Let us take a look at what it contains:
type hchan struct {
    qcount   uint           // total data in the queue
    dataqsiz uint           // size of the circular queue
    buf      unsafe.Pointer // points to an array of dataqsiz elements
    elemsize uint16
    closed   uint32
    elemtype *_type // element type
    sendx    uint   // send index
    recvx    uint   // receive index
    recvq    waitq  // list of recv waiters
    sendq    waitq  // list of send waiters

    // lock protects all fields in hchan, as well as several
    // fields in sudogs blocked on this channel.
    //
    // Do not change another G's status while holding this lock
    // (in particular, do not ready a G), as this can deadlock
    // with stack shrinking.
    lock mutex
}
- All goroutines operating on the channel must first acquire a lock on the channel using the mutex.
- buf is a circular ring buffer where the data is stored when using buffered channels [more on buffered channels later in the post].
- recvq and sendq are waiting queues for blocked goroutines that tried to operate on the channel. They use the waitq struct.
- dataqsiz is the size of the buffer and qcount is the number of elements currently in the queue; from user code these map to cap() and len() on the channel, as shown below.
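You cannot reach into hchan from user code, but len() and cap() on a channel report qcount and dataqsiz respectively. A minimal sketch of that mapping:

package main

import "fmt"

func main() {
    ch := make(chan int, 4) // dataqsiz is 4

    ch <- 1
    ch <- 2 // qcount is now 2

    fmt.Println(len(ch), cap(ch)) // prints: 2 4
}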
The waitq struct
The waitq struct looks like the following and represents a linked list of waiting goroutines.
type waitq struct {
    first *sudog
    last  *sudog
}
Each node in this linked list is a sudog struct. Let’s check out what that looks like:
type sudog struct {
    // The following fields are protected by the hchan.lock of the
    // channel this sudog is blocking on. shrinkstack depends on
    // this for sudogs involved in channel ops.

    g *g

    next *sudog
    prev *sudog
    elem unsafe.Pointer // data element (may point to stack)

    // The following fields are never accessed concurrently.
    // For channels, waitlink is only accessed by g.
    // For semaphores, all fields (including the ones above)
    // are only accessed when holding a semaRoot lock.

    acquiretime int64
    releasetime int64
    ticket      uint32

    // isSelect indicates g is participating in a select, so
    // g.selectDone must be CAS'd to win the wake-up race.
    isSelect bool

    // success indicates whether communication over channel c
    // succeeded. It is true if the goroutine was awoken because a
    // value was delivered over channel c, and false if awoken
    // because c was closed.
    success bool

    parent   *sudog // semaRoot binary tree
    waitlink *sudog // g.waiting list or semaRoot
    waittail *sudog // semaRoot
    c        *hchan // channel
}
- g is a reference to the goroutine.
- The elem field points to the memory containing the value to be sent, or to the location where a received value shall be written.
Operation: Initialization
- The hchan struct is allocated on the heap when we use the make() function.
- make() then returns a pointer to the allocated memory.
- Since the output of make() is a pointer, it can be passed between functions, which can then use it to send and receive data.
Note: create a channel using the following code and debug to see the initial values.
ch := make(chan int, 4)
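Because the channel value returned by make() wraps a pointer to that single hchan, you can pass it around freely and every function will see the same channel. A minimal sketch (the produce and consume names are made up for illustration):

package main

import "fmt"

func produce(ch chan<- int) { // send-only view of the same channel
    for i := 0; i < 4; i++ {
        ch <- i
    }
    close(ch)
}

func consume(ch <-chan int) { // receive-only view of the same channel
    for v := range ch {
        fmt.Println(v)
    }
}

func main() {
    ch := make(chan int, 4)
    go produce(ch)
    consume(ch)
}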
Operation: Send/Receive Data on a Buffered Channel
So we know what happens when we create a channel. Let’s understand what happens when we have a couple of goroutines sending and receiving from a channel.
Setting:
We have our channel from the last operation. Let’s assume goroutine G1 has a list of values that need to be sent to the channel, and another goroutine G2 needs to read these values from it.
Scenario 1: G1 executes first and puts values into the channel while G2 follows and reads the values
- G1 will acquire the lock on the channel first.
- Then it enqueues the element in the circular ring buffer. Note that this is a memory copy. The element is copied into the buffer.
- Then it increments sendx to 1, since one value was put inside the channel, and releases the lock.
- G2 now comes along and tries to receive a value from the channel.
- G2 acquires a lock on the hchan struct.
- It dequeues the element from the buffer and copies the value to a variable.
- G2 then increments the recvx to 1 and releases the lock.
Note: There is no memory sharing. Values are copied to and from the hchan struct and hchan is protected by the mutex lock.
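A minimal sketch of this scenario from the user’s side: the sender copies values into the buffer, the receiver copies them back out, and neither goroutine ever shares the underlying variable.

package main

import "fmt"

func main() {
    ch := make(chan int, 4)
    values := []int{10, 20, 30}

    // G1: sends the values; each send copies the value into the ring buffer
    go func() {
        for _, v := range values {
            ch <- v
        }
        close(ch)
    }()

    // G2 (here, main): each receive copies the value back out of the buffer
    for v := range ch {
        fmt.Println("received a copy:", v)
    }
}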
Scenario 2: Buffer is full as G2 hasn’t been around and G1 has clogged the pipe
- G1 enqueues 4 values inside our channel which makes the buffer full.
- Now it needs to wait for a dequeue operation before more values can be queued inside the buffer.
- The next send on the channel is therefore blocked.
- G1 will now create a sudog struct whose g field holds a reference to the goroutine G1 and whose elem field points to the value waiting to be sent.
- This sudog is enqueued into the sendq list, and G1 asks the scheduler to park it, removing it from the OS thread so that other goroutines can run on that thread.
- Now G2 comes along, acquires the lock, dequeues the first element from the buffer and copies the value into a variable.
- After this, it pops the waiting G1 from the sendq and enqueues G1’s pending value (taken from the sudog’s elem field) into the buffer. Note that it is G2 which enqueues the value, which lets G1 rest easy as it does not need to come back and take the lock again.
- G2 now sets the state of G1 to runnable by calling the goready function for G1.
- G1 is now moved to the runnable state, gets added to the local run queue and will get its turn when a chance is available.
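Here is a rough sketch of the same situation from user code: the buffer holds four values, so the fifth send blocks until the receiver drains a slot (the sleep is only there to make the ordering visible).

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int, 4)

    // G1: the fifth send finds the buffer full, so G1 parks itself in sendq
    go func() {
        for i := 1; i <= 5; i++ {
            ch <- i
            fmt.Println("sent", i)
        }
        close(ch)
    }()

    time.Sleep(100 * time.Millisecond) // let G1 fill the buffer and block

    // G2 (here, main): each receive frees a slot; the first one also wakes G1
    for v := range ch {
        fmt.Println("received", v)
    }
}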
Scenario 3: Buffer is empty since G1 has been away and G2 is trying to read from an empty channel
- G2 has won the race: it is the first to reach the channel and acquires the lock.
- Since the channel is empty, G2 now creates a sudog struct for itself and enqueues it into the recvq of the channel. The elem field will now hold a reference to the stack variable where G2 wants to receive the value from the channel.
- G2 now calls the scheduler to park itself.
- Context switching now occurs and G1 is given a chance to populate the channel.
- G1 first checks if any goroutines are waiting in the recvq of the channel and finds G2.
- G1 now copies the value directly into the stack of G2: it writes straight into G2’s variable, skipping the buffer. This takes some of the burden off G2.
- G1 now pops G2 and puts it into the runnable state using the scheduler.
- G2 is now ready and will be scheduled whenever a chance is available.
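And the mirror image as a sketch: the receiver arrives first, parks itself in recvq, and the sender later writes the value straight into the receiver’s variable (again, the sleeps only make the ordering visible).

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int, 4)

    // G2: the channel is empty, so this receive parks the goroutine in recvq
    go func() {
        v := <-ch
        fmt.Println("received", v)
    }()

    time.Sleep(100 * time.Millisecond) // give G2 time to park first

    ch <- 42 // G1 (here, main) finds G2 waiting and hands the value straight over

    time.Sleep(100 * time.Millisecond) // let G2 print before main exits
}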
So that was smooth, but what about unbuffered channels?
Well, we have sort of already seen what will happen in our previous scenarios. An unbuffered channel is simply one whose buffer capacity is 0; buf is not used for storing elements, so every send has to pair up directly with a receive.
Scenario 1: Send on an unbuffered channel
- G1 wants to send data to the unbuffered channel.
- G1 checks if there are any waiting goroutines in recvq.
- If found, G1 will write the value directly into the stack variable of G2.
- G1 then puts G2 into a runnable state.
- If there is no receiver goroutine, then the sender gets parked in sendq and a reference to the data is put in the elem field.
- G2 now comes in, copies the data to its stack variable and puts G1 back into the runnable state after popping it from sendq.
Scenario 2: Receive on an unbuffered channel
- G2 wants to receive from an unbuffered channel.
- It checks if there are any blocked goroutines on the sendq.
- If found, it copies the value from the elem field of the sudog struct into its stack variable.
- It then puts the sender goroutine back to a runnable state.
- If no sender goroutine was found, G2 puts itself inside recvq and a reference to its stack variable is stored inside the elem field.
- G1, i.e. the sender, comes along and sees G2 stuck in recvq.
- It puts the value directly into G2’s stack variable and pops it, putting it back into the runnable state.
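Both unbuffered scenarios boil down to a rendezvous: whichever side arrives first parks itself, and the other side completes the hand-off. A small sketch (the sleeps are only there to make the ordering visible):

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan string) // unbuffered: every send must meet a receive

    go func() {
        fmt.Println("sender: about to send")
        ch <- "hello" // no receiver yet, so the sender parks itself in sendq
        fmt.Println("sender: send completed")
    }()

    time.Sleep(100 * time.Millisecond) // let the sender park first
    fmt.Println("receiver got:", <-ch) // pops the waiting sender and copies the value
    time.Sleep(100 * time.Millisecond) // let the sender print its last line
}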
Summary
In conclusion, channels are a beautiful construct that let your goroutines communicate with each other easily and efficiently. In this article I have assumed that you have read my article on Context Switching and Scheduling. If you haven’t, I would urge you to take a look.
Hope you had a nice time reading this article. If you did, show some love in the form of claps and comments. Feedback is more than welcome and encouraged.
For now, may you always have bug-free code in production. See ya until next time.