Tuesday, 31 October 2017

On Using Multiple Source Control Repositories for Microservices

The nature of microservices, with their small services and limited bounded contexts, means that most functionality within the platform entails communication between services. In terms of network overhead this is largely mitigated by efficient, low-latency communication protocols such as gRPC. However, it also means that most functional changes made by developers touch multiple services. It follows that the smaller we make our services, the more likely a change is to cross service boundaries.


[Figure: a graph showing the relationship between service size and the typical number of services affected by a change]

This overhead, along with many others such as the frequent re-implementation of common functionality, is an accepted cost of using microservices, as services are naturally more granular, composable and reusable.

There are two accepted approaches to version controlling microservices: the 'monorepo' and the repository-per-service 'manyrepo'. With the monorepo, all services are kept in the same source control repository. When studying microservices from a theoretical perspective it seems logical to add yet another layer of separation between services by adopting manyrepo, but there are some real issues and overheads associated with doing so, which your author has recently been experiencing first hand! Below I have attempted to set out the relative pros and cons of the 'manyrepo' approach.


Pros

  • The separation of repos with explicit dependencies upon one another makes the scope of any commit easy to reason about, that is, which services a commit affects and therefore requires a deployment of. With a single repo the 'seams' between services aren't as well defined or bounded. This makes it harder to release services independently with confidence, hampering the continuous deployment of independent releases that is considered crucial to microservices-based architectures. In other words, the monorepo, to an extent, encourages lock-step releases.
  • It scales better in large organisations with many developers, ensuring that developers' version control workflows do not become congested.
  • It keeps individual repos smaller, which reduces the inbound network traffic of pulls and gives developers a faster workflow.

Cons

  • With multiple repos, changelogs become fragmented across multiple pull requests, making it harder to review, deploy and roll back a single feature. Indeed, deploying a feature can require deployments across multiple services, making it more difficult to track the status of features through environments. There is a lot of version control overhead here.
  • Making common changes across services is laborious: applying the same change to N services requires N git pushes.
  • It is harder to detect API breakage, as the tests in the consumer of the API will not run until a change to that consuming service is pushed. Note that this can be mitigated with 'contract testing', that is, testing the API of a service within the service itself, which you do not get for free in the consumer.
  • It is more difficult to build a continuous integration system when a service's dependencies are not all immediately available in the same repository.

This is a very interesting topic that definitely warrants further discussion and debate; indeed, the choice of monorepo vs manyrepo has interesting implications for release strategy, and for whether supporting varied permutations of co-existing versions is worth the very real and visible overhead it incurs.

Lots of the workflow pros of the manyrepo don't really come into effect until you have a very large engineering organisation, which most companies don't have and are unlikely to ever have. Also, Google has solved some of these problems for monorepos by doing fancy things with virtual filesystems.

Personally I considered the manyrepo approach to be superior until I experienced the increased developer overhead first hand. However, it remains to be seen whether the way it encourages us to think about services as separate, distinct entities is worth it in the long run.

Thursday, 19 October 2017

Go Concurrency Patterns #1 - Signalling Multiple Readers - Closing Channels in Go

The traditional use of a channel close is to signal from the writer to the reader that there is no more data to be consumed. When a channel is closed and empty, a reader will exit its 'for range' loop over that channel. This works because of some handy shorthand in 'for range'; the two forms below are functionally equivalent:
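A minimal sketch of the two forms, using an illustrative buffered channel that is populated and closed up front so that both loops terminate:

package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	ch <- 3
	close(ch)

	// Form 1: 'for range' exits automatically once ch is closed and
	// drained; the check for the close is implicit.
	for v := range ch {
		fmt.Println(v)
	}

	ch2 := make(chan int, 3)
	ch2 <- 1
	ch2 <- 2
	ch2 <- 3
	close(ch2)

	// Form 2: the equivalent explicit loop, spelling out the two-value
	// receive and the check for the close.
	for {
		v, ok := <-ch2
		if !ok {
			break // ch2 is closed and empty
		}
		fmt.Println(v)
	}
}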


In the first snippet the check for the channel close is implicit. The two-value receive form is how we normally check for a channel close; this is what the spec has to say about it:

 A receive expression used in an assignment or initialization of the special form

 x, ok = <-ch
 x, ok := <-ch
 var x, ok = <-ch
 var x, ok T = <-ch

 yields an additional untyped boolean result reporting whether the communication succeeded. The value of ok is true if the value received was delivered by a successful send operation to the channel, or false if it is a zero value generated because the channel is closed and empty.

Here is a working example of a writer closing a data channel to signal the reader to stop reading; the language handles this nicely with the implicit close check in 'for range'.
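A sketch along those lines (the single writer goroutine and the bound of five sends are illustrative):

package main

import "fmt"

func main() {
	ch := make(chan int)

	// Writer: send a handful of values, then close the channel to signal
	// the reader that there is no more data.
	go func() {
		for i := 0; i < 5; i++ {
			ch <- i
		}
		close(ch)
	}()

	// Reader: 'for range' exits once ch has been closed and drained.
	for v := range ch {
		fmt.Println("received", v)
	}
	fmt.Println("channel closed, reader done")
}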


The close of a data channel should be performed by the writer, as a send on a closed channel causes a panic; a close by the reader therefore has the potential to induce a panic in the writer. Similarly, a close with multiple writers could also induce a panic. The good news is that the excellent Go race detector can detect such races.
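For reference, the panic is easy to reproduce; a deliberately broken sketch:

package main

func main() {
	ch := make(chan int, 1)
	close(ch)
	ch <- 1 // panics at runtime: "send on closed channel"
}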

However, the reader can signal the writer to stop writing by utilising a secondary signalling channel. This could be done with a send on the secondary channel, but as a signalling mechanism a close has the benefit of working for any number of readers of the signalling channel, whereas a send reaches only one of them. It is important that no data is ever written to this signalling channel, as the close is only observed once the channel is both closed and empty.

Using a channel of the empty struct type (chan struct{}) has the benefit that it cannot be used to pass data, so its role as a signalling channel is more apparent. Dave Cheney has an interesting post on the curious empty struct type.

Here is an example that demonstrates the use of a secondary channel, closed by the reader, to shut down multiple writers:
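A minimal sketch of the pattern, with illustrative names (data for the work channel, done for the signalling channel, and three writers):

package main

import (
	"fmt"
	"sync"
)

func main() {
	data := make(chan int)
	done := make(chan struct{}) // signalling channel: only ever closed, never written to

	var wg sync.WaitGroup

	// Start several writers; each one stops when it observes the close of done.
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; ; i++ {
				select {
				case <-done:
					fmt.Println("writer", id, "stopping")
					return
				case data <- i:
				}
			}
		}(w)
	}

	// Reader: consume a few values, then close done to signal every writer at once.
	for i := 0; i < 10; i++ {
		fmt.Println("received", <-data)
	}
	close(done)

	wg.Wait()
}

Because done is only ever closed, a single operation by the reader reaches every writer simultaneously, which a plain send could not do.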

Hopefully you find this useful for shutting down errant goroutines. Indeed, this is how the context package implements its own cancellation functionality. Context is the idiomatic way to perform cancellation, and it should also be honoured by HTTP transports, libraries and the like.
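As a rough illustration of that idea, ctx.Done() is itself a channel that is closed on cancellation, so it can be selected on just like the done channel above (the worker loop and timings here are arbitrary):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	// A worker that honours cancellation by watching ctx.Done(), a channel
	// that is closed when cancel is called.
	go func() {
		for i := 0; ; i++ {
			select {
			case <-ctx.Done():
				fmt.Println("worker stopping:", ctx.Err())
				return
			case <-time.After(50 * time.Millisecond):
				fmt.Println("working", i)
			}
		}
	}()

	time.Sleep(200 * time.Millisecond)
	cancel() // closes ctx.Done() for every goroutine watching it
	time.Sleep(50 * time.Millisecond)
}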