### Key Takeaways

- Concurrency is a way to structure the solution to a problem. It is therefore the result of a design activity and not of the availability of multi-core parallel processors.
- Applying concurrency can lead to elegant solutions which are simple to reason about.
- When used with multi-core hardware, concurrency enables true parallelism. This can bring performance benefits, but they need to be weighed against the costs of concurrency orchestration.
- Go’s concurrency primitives make it easy to build concurrent solutions.

Concurrency and parallelism are often conflated and used as synonyms, in opposition to the idea of sequential processing. In reality, they are different concepts: the former relates to how a solution is structured, while the latter describes how computational processes are distributed across the available hardware.

In this article, we want to show how achieving concurrency is the result of an appropriate design. A concurrent solution may turn out to be more elegant and easier to reason about than an equivalent sequential algorithm. To illustrate these concepts we use, as an example, Hilbert's Hotel, a mathematical problem.

Starting from a concurrent design, we move to an implementation in Go, which offers clean and powerful concurrency primitives.

While concurrency offers the possibility to boost performance when paired with multi-core processors, it comes with its own costs, which have to be considered if we want to optimize computational efficiency.

The full codebase of the Hilbert Hotel solution, from which the snippets shown in the article have been extracted, is provided in this public repo.

## Concurrency is not parallelism

As Rob Pike clearly stated, “concurrency is not parallelism”:

“Concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations. Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.”

“Concurrency provides a way to structure a solution to solve a problem that may (but not necessarily) be parallelizable.”

So concurrency is about modeling a solution to a problem, which means it is first and foremost the result of a design activity. Concurrency may leverage parallelism to gain efficiency, but parallel-capable multi-core hardware does not bring concurrency by itself.

A famous example of concurrency achieved by design in Go is the prime sieve, an algorithm that generates prime numbers in a very elegant way. In this article, we want to illustrate another example derived from mathematics, the Hilbert's Hotel problem, and show how this problem can be solved elegantly with a concurrent solution.

## Hilbert’s Hotel

Hilbert’s Hotel is a problem about infinity.

Imagine Hilbert is the owner of a hotel which has an infinite number of rooms.

One day a bus arrives at Hilbert's Hotel. This is a bus with an infinite number of passengers, who all want to stay at Hilbert's Hotel. Since the hotel has an infinite number of rooms, there is no problem accommodating an infinite number of guests.

However, one day something new happens. That day an infinite number of buses, each with an infinite number of passengers, arrive at Hilbert’s Hotel and all passengers of all buses want to stay overnight.

Hilbert thinks a bit and then says "No problem, we can accommodate all of you". He starts drawing a triangle of numbers and hands over the room keys to all passengers following the diagonals of the triangle.

In this way Hilbert is able to give each passenger of each bus a unique room key, even if the buses are infinite.
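The diagonal walk admits a closed form: passenger `p` of bus `b` sits on diagonal `d = b + p - 1`, which starts right after the `(d-1)`-th triangular number. A minimal sketch (the `roomNumber` helper is our own, not part of the original problem statement):

```go
package main

import "fmt"

// roomNumber returns the room assigned to passenger p of bus b
// (both 1-indexed) by Hilbert's diagonal scheme. Diagonal d = b+p-1
// starts right after the (d-1)-th triangular number d*(d-1)/2.
func roomNumber(b, p int) int {
	d := b + p - 1
	return d*(d-1)/2 + p
}

func main() {
	// Bus 1 gets rooms 1, 3, 6, ... (the triangular numbers);
	// bus 2 gets 2, 5, 9, ...; and so on along the diagonals.
	for b := 1; b <= 3; b++ {
		for p := 1; p <= 3; p++ {
			fmt.Printf("bus %d, passenger %d -> room %d\n", b, p, roomNumber(b, p))
		}
	}
}
```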

## A non-concurrent algorithm for Hilbert’s Hotel

We can create an algorithm that solves the Hilbert's Hotel problem by simply replicating what Hilbert does.

For instance, we could create an array of arrays of integers representing the triangle of numbers and then loop through this array of arrays to build the diagonals and assign the passengers of each bus their room keys, progressing over the diagonals.

An implementation with this approach is shown at the end of the article.

Other non-concurrent approaches are also possible.

But let’s try to look at the problem from a different angle.

## A recursive concurrent algorithm for Hilbert’s Hotel

Let's assume Hilbert has some clerks working at the hotel, actually an infinite number of clerks. Each clerk will take care of handing the keys to all the passengers of one bus. The clerk that takes care of the first bus (Bus 1 Clerk) has next to him the clerk that takes care of the second bus (Bus 2 Clerk), who has next to him the clerk that takes care of the third bus (Bus 3 Clerk), and so on.

Then there is one more clerk (the Room Key Clerk), a clerk who has the task to sequentially hand over all the keys of all the rooms to the first clerk (Bus 1 Clerk). First the key of room 1, then the key of room 2, then the key of room 3 and so on.

Bus 1 Clerk (who receives the keys from the Room Key Clerk) knows that the first key he receives is for the first passenger of his bus. So Bus 1 Clerk will prepare the welcome kit containing the key of Room 1 for the first passenger of the first bus. Bus 1 Clerk also knows that the second key he receives is not for his bus but for the next clerk. The third key is for the second passenger of Bus 1, so it is managed by Bus 1 Clerk, but the fourth and fifth keys are not for Bus 1, so Bus 1 Clerk passes them to the next clerk. When all the welcome kits for all the passengers of Bus 1 are ready (we have to imagine that we set a limit on the number of guests, to avoid a never-ending program), Bus 1 Clerk returns all of them to Hilbert. As we will see later, returning the kits to Hilbert is a useful step to show how a final "reduce-like" operation can be implemented concurrently.

The next clerk, Bus 2 Clerk, behaves in the same way as the first clerk. He knows that the first key he receives is for the first passenger of his bus (the second bus, Bus 2), while the second key is for the next clerk, Bus 3 Clerk, and so on and so forth.

At a certain point, since our program cannot go on forever as in the original Hilbert problem, we will have to stop handing out keys and signal to all clerks that they have to stop working. At that moment, the welcome kits they have prepared will have to be sent to Hilbert. Eventually, when the last clerk is told to stop working, he will communicate to Hilbert that he too can stop working, since he has received all the welcome kits prepared and can get a well-deserved rest.

## Implementation with Go

A concurrent implementation of this algorithm in Go is pretty straightforward, given the powerful concurrency primitives the language offers.

Each actor, Hilbert and the clerks, is represented by a goroutine.

- **Hilbert** is the main goroutine which launches the entire process and collects the welcome kits created by the bus clerks.
- The **Room Key Clerk** is a goroutine, launched by Hilbert, which sequentially generates the series of room keys and passes each key to the Bus 1 Clerk until the upper limit on the number of keys is reached.
- The **Bus 1 Clerk** is another goroutine launched by Hilbert. It receives the keys from the Room Key Clerk via a channel and implements the following logic: "prepare the welcome kit for the passengers of your bus and pass the keys that are not for your bus to the next clerk".
- The **Bus 2 Clerk** is another goroutine, launched by the Bus 1 Clerk. It behaves the same as the Bus 1 Clerk.
- All the **Bus Clerks** behave the same and therefore, while each one is executed as a separate goroutine, they all share the same code.

Let’s start with the implementation of Hilbert.

```
func Hilbert(upTo int) {
	keysCh := make(chan int)
	go RoomKeysClerk(upTo, keysCh)
	welcomeKitCh := make(chan []hilberthotel.WelcomeKit)
	go BusClerk(1, keysCh, welcomeKitCh)
	for envelope := range welcomeKitCh {
		fmt.Println(envelope)
	}
}
```

It is a pretty simple piece of code.

- First it creates a channel for the keys (implemented as values of type `int`), `keysCh`, and launches the Room Key Clerk goroutine, passing it the channel.
- Then it creates another channel over which it will receive the welcome kits, `welcomeKitCh`, and launches the first Bus Clerk, passing it both `keysCh` and `welcomeKitCh` (respectively, the channel from which the clerk will receive the keys and the channel over which to send all the welcome kits when the Bus Clerk stops working).
- Finally, it starts a receive loop on the `welcomeKitCh` channel to receive the kits the clerks will prepare.

The implementation of the Room Key Clerk is even simpler.

```
func RoomKeysClerk(upTo int, keysCh chan<- int) {
	for i := 0; i < upTo; i++ {
		keysCh <- i + 1
	}
	close(keysCh)
}
```
```

It receives `keysCh`, the channel over which it has to send the keys, and starts a send loop until the limit is reached, after which `keysCh` is closed.

The implementation of the Bus Clerk is not much more complex.

```
func BusClerk(busNumber int, roomKeysCh <-chan int, welcomeKitsCh chan<- []hilberthotel.WelcomeKit, buffer int, delay time.Duration) {
	var count = 0
	var keyPosition = 0
	var nextClerkCh chan int
	welcomeKits := []hilberthotel.WelcomeKit{}
	for roomKey := range roomKeysCh {
		if nextClerkCh == nil {
			nextClerkCh = make(chan int, buffer)
			go BusClerk(busNumber+1, nextClerkCh, welcomeKitsCh, buffer, delay)
		}
		if count == keyPosition {
			kit := hilberthotel.NewWelcomeKit(busNumber, keyPosition, roomKey, delay)
			welcomeKits = append(welcomeKits, kit)
			keyPosition++
			count = 0
			continue
		}
		nextClerkCh <- roomKey
		count++
	}
	if nextClerkCh != nil {
		welcomeKitsCh <- welcomeKits
		close(nextClerkCh)
	} else {
		close(welcomeKitsCh)
	}
}
```

- Each Bus Clerk, that is, each goroutine implementing a Bus Clerk, receives the number of the bus it is dedicated to, the `welcomeKitsCh` channel over which to send the welcome kits at the end of the processing, and the `roomKeysCh` channel from which to read key numbers.
- After initializing an internal counter `count`, a variable `keyPosition` to hold the position (in the triangle row) where the key of the next passenger will be found, and a variable `nextClerkCh` for the channel that will connect to the next Bus Clerk, it starts a loop reading from the `roomKeysCh` channel.
- With the first key received, which is signaled by the fact that `nextClerkCh` is nil, it initializes `nextClerkCh` and launches the next Bus Clerk (here we see the recursive nature of the algorithm).
- Comparing `count` and `keyPosition`, it decides whether the key is for a passenger of its own bus or for another bus. In the first case, it creates a welcome kit and stores it in its own collection. In the second case, it just passes the key to the next Bus Clerk.
- When there are no more keys to receive (signaled by the fact that the `roomKeysCh` channel is closed), it closes the `nextClerkCh` channel that it created to communicate with the next Bus Clerk.
- The last Bus Clerk, which never had the chance to receive any room key, closes the `welcomeKitsCh` channel, signaling to Hilbert that he can terminate his job.
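To see the pieces working together, here is a self-contained, runnable sketch of the whole pipeline. For brevity, `WelcomeKit` is simplified to a plain struct and the `buffer`/`delay` parameters are dropped, so the types differ from those in the repo:

```go
package main

import (
	"fmt"
	"sort"
)

// WelcomeKit is a simplified stand-in for the repo's hilberthotel.WelcomeKit.
type WelcomeKit struct {
	Bus, Passenger, Room int
}

// RoomKeysClerk sequentially generates room keys 1..upTo and closes keysCh.
func RoomKeysClerk(upTo int, keysCh chan<- int) {
	for i := 0; i < upTo; i++ {
		keysCh <- i + 1
	}
	close(keysCh)
}

// BusClerk keeps the keys for its own bus and forwards the rest to the
// next clerk, which it spawns recursively on the first key received.
func BusClerk(busNumber int, roomKeysCh <-chan int, welcomeKitsCh chan<- []WelcomeKit) {
	count, keyPosition := 0, 0
	var nextClerkCh chan int
	welcomeKits := []WelcomeKit{}
	for roomKey := range roomKeysCh {
		if nextClerkCh == nil {
			nextClerkCh = make(chan int)
			go BusClerk(busNumber+1, nextClerkCh, welcomeKitsCh)
		}
		if count == keyPosition {
			welcomeKits = append(welcomeKits, WelcomeKit{busNumber, keyPosition + 1, roomKey})
			keyPosition++
			count = 0
			continue
		}
		nextClerkCh <- roomKey
		count++
	}
	if nextClerkCh != nil {
		welcomeKitsCh <- welcomeKits
		close(nextClerkCh)
	} else {
		// The last clerk never received a key: end the whole pipeline.
		close(welcomeKitsCh)
	}
}

// Hilbert launches the process and collects all kits, sorted by room.
func Hilbert(upTo int) []WelcomeKit {
	keysCh := make(chan int)
	go RoomKeysClerk(upTo, keysCh)
	welcomeKitCh := make(chan []WelcomeKit)
	go BusClerk(1, keysCh, welcomeKitCh)
	var all []WelcomeKit
	for kits := range welcomeKitCh {
		all = append(all, kits...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Room < all[j].Room })
	return all
}

func main() {
	for _, kit := range Hilbert(10) {
		fmt.Println(kit)
	}
}
```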

## It is all about composition of fairly independent processes

We can appreciate that the solution described above is based on the combination of “fairly independent” logical processes:

- The Room Key Clerk performs its task and is "fairly independent" from Bus Clerk 1. We say "fairly independent" and not "fully independent" because the Room Key Clerk still has to wait for Bus Clerk 1 to receive the keys from the `keysCh` channel (even if buffering is used for `keysCh`, the buffer can still fill up, forcing the Room Key Clerk to wait until Bus Clerk 1 reads from the channel).
- Similarly, Bus Clerk 1 depends on Bus Clerk 2 to read the keys from the channel they share, and so on.
- Eventually, all Bus Clerks depend on Hilbert to receive the welcome kits from the `welcomeKitCh` channel.

Overall, though, we can say that all clerks and Hilbert perform their tasks in a "fairly independent" way and just need to be coordinated, via channel send/receive communication, to achieve the desired result.

## Concurrency has a cost but enables parallelism, which brings benefits

While the solution provided by our concurrent design is certainly elegant, it brings its own costs:

- Spawning as many goroutines as there are buses, plus one each for Hilbert and the Room Key Clerk
- Scheduling the active goroutines on and off the available cores
- Initializing as many channels as the number of goroutines
- Sending and receiving operations over the channels.

On the other hand, the benefit of the design is that, if the concurrent algorithm runs on multi-core hardware, the parallelism which is intrinsic to the concurrent design will bring benefits in terms of performance.

## The heavier the work of each concurrent goroutine, the higher the boost from parallelism

Each Bus Clerk has some core work to do and some concurrency orchestration work (i.e. work related to the orchestration of concurrency, such as setting up the goroutine as well as sending/receiving the keys over the channels).

The core work is what benefits from parallelism. In Hilbert's example, the core work is the preparation of the welcome kit for each guest (i.e. the execution of the `NewWelcomeKit` function). The more core work we can do in parallel, the more guests we are able to serve in the same amount of time.

But in order to enable parallelism, we also have to perform concurrency orchestration work.

The more the core work predominates over the concurrency orchestration work, the more we gain from parallelism.

In Hilbert's case, the core work performed by each Bus Clerk goroutine is the preparation of the welcome kit. Therefore, the heavier and more time-consuming the preparation of a welcome kit, the more we gain from a concurrent design running on multi-core hardware. On the other hand, the more guests we want to process, the heavier the concurrency orchestration work (since we have to spin off more goroutines and send/receive more keys) and therefore the higher the cost exacted by concurrency becomes.

Using the example code provided, we can run benchmarks to measure the difference in performance while varying the number of guests and the effort to build each welcome kit.

## Conclusions

Concurrency is the composition of “fairly independent” processes which need to be orchestrated to achieve the desired goal.

A concurrent solution is the result of proper design and does not come by magic out of multi-core hardware.

At the same time, when run on multi-core hardware, a concurrent solution can exploit (transparently, if we use a language like Go) the benefits of parallelism and get a performance boost.

## Appendix

### Recursion without concurrency

The same basic design we have used to build the concurrent solution presented here can be used to derive a non-concurrent recursive solution:

- The Room Key Clerk is turned into a for loop that generates keys and passes them to the first Bus Clerk.
- Each Bus Clerk is implemented as a closure, i.e. a function that captures some state from its context (the same counters `count`, `keyPosition`, and `nextClerk` we have seen in the concurrent code).
- Hilbert is the function that triggers the entire execution and collects the welcome kits built by the various Bus Clerks (closures).

The code implementing this algorithm can be seen here.
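A minimal sketch of this closure-based variant follows (the `newBusClerk` name and the `collect` callback are our own; the repo code may be organized differently):

```go
package main

import "fmt"

// newBusClerk returns a closure playing the role of one Bus Clerk:
// it captures count, keyPosition and the next clerk, and either keeps
// a key for its own bus or forwards it recursively to the next clerk.
func newBusClerk(busNumber int, collect func(bus, passenger, room int)) func(roomKey int) {
	count, keyPosition := 0, 0
	var nextClerk func(roomKey int)
	return func(roomKey int) {
		if count == keyPosition {
			// This key is for a passenger of our own bus.
			collect(busNumber, keyPosition+1, roomKey)
			keyPosition++
			count = 0
			return
		}
		// Not for us: create the next clerk lazily and forward the key.
		if nextClerk == nil {
			nextClerk = newBusClerk(busNumber+1, collect)
		}
		nextClerk(roomKey)
		count++
	}
}

func main() {
	// Hilbert: generate the keys sequentially and collect the assignments.
	clerk := newBusClerk(1, func(bus, passenger, room int) {
		fmt.Printf("bus %d, passenger %d -> room %d\n", bus, passenger, room)
	})
	for key := 1; key <= 10; key++ {
		clerk(key)
	}
}
```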

### A non-recursive, non-concurrent solution

An example of a non-recursive, non-concurrent algorithm to solve Hilbert’s Hotel problem can be found here.
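As a sketch of what such an algorithm might look like (the `assignRooms` helper is our own and walks the diagonals directly with two nested loops, instead of materializing the full triangle of numbers):

```go
package main

import "fmt"

// assignRooms walks the diagonals of the number triangle and assigns
// room keys to (bus, passenger) pairs, up to `upTo` keys in total.
// It returns a map from bus number to the ordered list of room keys
// handed to that bus's passengers.
func assignRooms(upTo int) map[int][]int {
	rooms := map[int][]int{}
	key := 1
	for d := 1; key <= upTo; d++ {
		// Diagonal d holds d keys: bus d gets its 1st passenger's key,
		// bus d-1 its 2nd, ..., down to bus 1 getting its d-th.
		for p := 1; p <= d && key <= upTo; p++ {
			bus := d - p + 1
			rooms[bus] = append(rooms[bus], key)
			key++
		}
	}
	return rooms
}

func main() {
	// Bus 1 ends up with the triangular numbers 1, 3, 6, 10, ...
	fmt.Println(assignRooms(10))
}
```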
