Monday 4 December 2017

Better error logging with error wrapping

This December, why not give the gift of better error logging?

For a while I have thought that the functionality provided by the stdlib errors package was insufficient. One issue is that errors lack context, namely the stack information that, say, an exception would provide in another language. With the stdlib package you can see the line number where an error is logged, but not where the error was generated, which is usually more important. Good error handling practice is to log the error at its highest level of propagation, but that is precisely where it is furthest removed from the site of the error's generation. You do get the string that is set when the error is generated, but these strings are often dynamically generated or reused in multiple places, so they are no replacement for a stack trace.

This problem can be resolved by use of the third party pkg/errors package. This package was designed as a drop-in replacement for the stdlib errors package, providing a superset of the stdlib package's functionality.

pkg/errors provides functions that annotate errors with stack information and with messages at particular levels of the call stack. New, Wrap and WithStack all annotate an error with stack information, whilst Wrap additionally lets you provide a message relevant to that level of the stack. WithStack and Wrap are smart enough, using the runtime package, to examine an error and work out where it was generated even after the generating function has returned, which is useful if you have no control over the code that generates the error. Where you do have control, it is best to annotate the error at generation to be sure that you don't forget to do it.

Here is an example which demonstrates annotating an error with stack information at generation and providing a contextual message at another point of the call stack. What's so great about pkg/errors is that it is instantly swappable with the stdlib package so you can enrich your errors with a one line import change!

Pretty cool, huh? Happy wrapping!

Tuesday 31 October 2017

On Using Multiple Source Control Repositories for Microservices

The nature of microservices, with their small services and limited bounded contexts, means that most functionality within the platform entails communication between services. In terms of network overhead this is largely mitigated by efficient, low latency communication protocols such as gRPC. However, it means that most functional changes made by developers require changes to multiple services. Indeed, it follows that the smaller we make our services, the more likely a change is to cross service boundaries.

A graph showing the relationship between service size and the typical number of services affected by a change

This overhead, along with many others such as frequent re-implementation of common functionality, is an accepted cost of using microservices as services are naturally more granular, composable and reusable.

There are two accepted processes for version controlling microservices, the 'monorepo' and the repository per service 'manyrepo' approach. With the monorepo, all services are kept in the same source control repository. When studying microservices from a theoretical perspective it seems logical to add yet another layer of separation between services by adopting manyrepo but there are some real issues and overheads associated with doing so, which your author has recently been experiencing first hand! Below I have attempted to communicate the relative pros and cons for the 'manyrepo' approach.


Pros of manyrepo:

  • The separation of repos with explicit dependencies upon one another makes the scope of any commit easy to reason about, that is, which services a commit affects and requires deployment of. With a single repo the 'seams' between services aren't as well defined or bounded, which makes it harder to release services independently with confidence, hampering the continuous deployment of independent releases that is considered crucial to microservices based architectures. That is to say, the monorepo to an extent encourages lock-step releases.
  • It scales better with large organisations of many developers, ensuring that developers' version control workflows do not become congested.
  • It keeps repos smaller, and so keeps inbound network traffic from pulls smaller for developers, providing a faster workflow.


Cons of manyrepo:

  • With multiple repos, changelogs become fragmented across multiple pull requests, making it harder to review, deploy and roll back a single feature. Indeed, deploying a feature can require deployments across multiple services, making it more difficult to track the status of features through environments. There is a lot of version control overhead here.
  • Making common changes across services is laborious: a given change to N services requires N git pushes.
  • It is harder to detect API breakage, as the tests in the consumer of the API will not run until a change to that service is pushed. Note that this can be mitigated with 'contract testing', that is, testing the API of a service within the service itself, which you do not get for free in the consumer.
  • It is more difficult to build a continuous integration system where all dependencies are not immediately available via the same repository.

This is a very interesting topic that definitely warrants further discussion and debate; indeed, the choice of monorepo vs manyrepo has interesting implications for release strategy, and for whether supporting varied permutations of co-existing versions is worth the very real and visible overhead that it incurs.

Lots of the workflow pros of manyrepo don't really come into effect until you have a very large engineering staff, which most companies are unlikely to ever have. Also, Google have solved some of these problems for monorepos by doing fancy stuff with virtual filesystems.

Personally, I considered manyrepo to be superior until I experienced the increased developer overhead first hand. It remains to be seen whether its encouragement to think about services as separate, distinct entities is worth it in the long run.

Thursday 19 October 2017

Go Concurrency Patterns #1 - Signalling Multiple Readers - Closing Channels in Go

The traditional use of a channel close is to signal from the writer to the reader that there is no more data to be consumed. When a channel is closed and empty, a reader will exit its 'for range' loop over that channel. This is because of some handy shorthand in 'for range'; note that the two following forms are functionally equivalent:

In the first snippet the check for the channel close is implicit. The two value receive format is how we normally check for a channel close, this is what the spec has to say on it:

 A receive expression used in an assignment or initialization of the special form

 x, ok = <-ch
 x, ok := <-ch
 var x, ok = <-ch
 var x, ok T = <-ch

 yields an additional untyped boolean result reporting whether the communication succeeded. The value of ok is true if the value received was delivered by a successful send operation to the channel, or false if it is a zero value generated because the channel is closed and empty.

Here is a functional example of a writer closing a data channel to signal a reader to stop reading, this is handled nicely in the language with the implicit close check in for range.

The close of a data channel should be performed by the writer, as a write on a closed channel causes a panic; thus a close by the reader has the potential to induce a panic. Similarly, a close when there are multiple writers could also induce a panic. The good news is that the excellent go race detector can detect such races.

However the reader can signal the writer to stop writing by utilising a secondary signalling channel. This could be done via a send to the secondary channel. However, as a signalling mechanism, closing a channel instead of a channel send has the benefit of working for multiple readers of the signalling channel. It is important this signalling channel has no data written to it as we only receive a close signal when the channel is both closed and empty.

The use of a channel of type empty struct has the benefit that it cannot be used to pass data and thus its usage as a signalling channel is more apparent. Dave Cheney has an interesting post on the curious empty struct type.

Here is an example that demonstrates the reader using a secondary channel to shut down multiple writers:

Hopefully you find this useful for shutting down errant goroutines. Indeed, this is how the context package implements its own cancellation functionality. Context is the idiomatic way to perform cancellation that should also be honoured by http transports and libraries and the like.

Wednesday 13 September 2017

Initialising String Literals at Compile Time in Go

Recently I was working on a service and realised that we had no established way of querying it for its version information. Previously, on an embedded device, I have written a file at build time to be read by the service at run time, with the location set by an env var. It would also be possible to set the version in an env var itself, but these are overridable.

However, a colleague suggested injecting the version string into the binary itself at compile time, so I decided to investigate this novel approach.

The go tool link docs specify that the -X argument allows us to define arbitrary string values:

go tool link
  -X definition
        add string value definition of the form importpath.name=value

go build help explains that there is an -ldflags option which allows the user to specify a flag list passed through to go tool link:

go build help
    -ldflags 'flag list'
        arguments to pass on each go tool link invocation.

So we can pass a string definition through the go command (build, run etc)!

In the above program we define a variable foo in package main. Thus the fully qualified name of this variable is main.foo, and this is the name we pass:

$ go run main.go
injected var is []

$ go run -ldflags '-X main.foo=bar' main.go
injected var is [bar]

This can be used in novel ways to inject the output of external commands at compile time, such as the build date:

$ go run -ldflags "-X 'main.foo=$(date -u '+%Y-%m-%d %H:%M:%S')'" main.go
injected var is [2017-09-13 13:44:59]

This is a nice feature that I'm sure has applications beyond my reckoning at this time. In my use case it makes for a compact, neat solution; however, it does cause some indirection when reading the code, making it harder to follow, so I would argue it should be used sparingly. It has the nice quality of being immutable, unlike env vars, which could potentially be rewritten. Anyhow, it is a pretty cool linker feature!

Thursday 7 September 2017

Struct Tags in Go

The dictionary defines 'tag' as:

a label attached to someone or something for the purpose of identification or to give other information

The go docs only have a short paragraph covering struct tags which follows:

A field declaration may be followed by an optional string literal tag, which becomes an attribute for all the fields in the corresponding field declaration. The tags are made visible through a reflection interface and take part in type identity for structs but are otherwise ignored.

Basically struct tags allow you to attach metadata to struct fields in the form of arbitrary string values.

The format of the metadata is: `key:val` or `key:val1,val2` in the case of multiple values. By convention the key corresponds to the name of the package consuming it.

This metadata can be intended for another package or you can make use of it yourself. It can be consumed via reflection using the types reflect.Type, reflect.StructField and reflect.StructTag as can be seen in this example.

A common use case is serialisation/deserialisation; the 'json' tag used by the encoding/json package is an example of this.

It also allows you to incorporate your initialisation code in the struct definition. An example of this is the package which uses struct tags to define config structures with associated environment variables and perform boilerplate-free parsing. The package also allows you to set defaults via struct tags, which again reduces initialisation code; however, this is not compile-time safe.

If you are interested there is a good talk available here: video pdf

Friday 1 September 2017


Internal vs External comms in a
microservices architecture

A microservices architecture advocates the proliferation of many small inter-communicating services, which can result in an increased overhead in the number of network bound, 'internal' inter-service communications.

This is in addition to the existing 'external' network comms of serving the clients accessing our service. I will refer to these two forms of communication as internal and external respectively. They can be observed to possess differing characteristics.

Internal comms are high frequency in nature and completely under our control; that is, we can coordinate upgrades of the client and server as necessary.

External comms are of lower frequency, can come from varied sources and backwards compatibility / versioning is more of a concern as we do not have control over the clients.

REST and RPC are two technologies that can be used for network bound communications. In my experience REST is better suited to use in an external API and RPC to use in internal comms, primarily for these reasons:

REST for external comms
- easy to version using headers
- well understood and ubiquitous, ease of client implementation
- json much easier to parse and debug than a binary format

RPC for internal comms
- low cost, around 20 times less than REST
- less boilerplate required for calls (which will be frequent), it is autogenerated
- provides easy streaming

In RPC, changes to models require synchronised client and server releases; thus models often become expand-only (ie. fields are only ever added). This can lead to a lot of deprecated fields, and to lockstep client and server releases when models are altered, making RPC a little brittle for external APIs.

The boon of low latency comms that RPC provides cannot be overstated; it gives us the freedom to disregard much of the cost of network bound requests and fully subscribe to a microservices architecture.

Friday 25 August 2017

Go 1.9 Released

Go 1.9 was released yesterday. Release notes are available here.

This release has a couple of useful changes in addition to the introduction of type aliases.

  1. ./... no longer matches vendor directories. This means that running go test ./... or say go lint ./... from the top level of your repo in order to run on all your packages no longer annoyingly matches your vendored dependencies. Previously one would be forced to run 'go test $(go list ./... | grep -v /vendor/)', and now one can simply run 'go test ./...'. Wooo!

  2. go vet and go tool vet now support all of each other's build flags. This is a welcome change that provides some uniformity in their usage, as although go vet was designed for linting by package and go tool vet for linting by files, as mentioned in a previous post certain flags such as -shadow were not available in 'go vet'.

Congrats to the go team on another successful release!

Thursday 24 August 2017

Mocking Large Interfaces Using Embedding


Sean Kelly gave a great talk on embedding at this year's Golang UK Conference; it was quite dense, but lightened with constant pictures of his pet corgi!

If you are not familiar with embedding in go, it is somewhat like inheritance, only based upon a 'has a' rather than the 'is a' relationship of traditional inheritance in object-oriented languages. It can be seen as a sort of 'sideways inheritance', or composition of one type into another.

See example:

In the above example we can see struct E embedded in struct S, which allows S to inherit the members of E into its own namespace.
This is useful as it enables us to reuse functionality provided by an embedded type across different structs, as though each struct provided that functionality itself, in the same way that we would reuse functions.

Mocking Large Interfaces using Embedding

When we embed an interface in a struct, the compiler is satisfied that the struct implements the interface. This allows us to create mocks with much less boilerplate than stubbing every member of the interface. The caveat is that if an unimplemented member of the interface is called on the mock, it will generate a nil pointer runtime error: a panic. See example:

An alternative is to use a mock generation tool such as counterfeiter which generates the mock code itself given the interface definition.

Bear in mind, when mocking a large interface, we should ask ourselves whether a smaller one would suffice; as the go proverb says, 'The bigger the interface, the weaker the abstraction'. Larger interfaces are less expressive and less composable. I have mainly had to deal with large interfaces generated from protobuf service definitions, but if you must mock a large interface, embedding or generation may save you time.

Saturday 19 August 2017

Back From the Golang UK Conference 2017

So the blog has been quite quiet recently, this has mainly been due to having other things occupying my time of late as I have been going through some big changes. I'm in London! I moved down recently having spent the last eight years in Leeds.

Never fear, this was partly motivated by my desire to work on new exciting projects in go which I will be sure to discuss in time. In the shorter term expect some posts related to talks and conversations I had at the Golang UK Conference. I was lucky enough to spend two days at the conference and see some great talks, meet fellow go developers and learn about how go is used in other companies across the world. I had a really great time and I would like to thank all the speakers, delegates and especially the organisers for making it happen!

Watch this space

Friday 2 June 2017

Golang gotcha #5: launched timers cannot be garbage collected till stopped or fired

You may run into this gotcha if you run timers with long timeouts in a tight for select loop. It is idiomatic to use a timer to provide a timeout on a channel receive, and it is common to use time.After for this, as it conveniently provides a '<-chan Time' equivalent to 'NewTimer(d).C'. However, there is no way to stop the timer it launches, and as the godoc says:

The underlying Timer is not recovered by the garbage collector until the timer fires. If efficiency is a concern, use NewTimer instead and call Timer.Stop() if the timer is no longer needed.

This example demonstrates such a leak when using time.After:

This example demonstrates that the timers are not garbage collected even after the function they are launched in returns, as some may expect:

This example demonstrates that stopping the timers resolves the leak:

This may seem like an unnecessary optimisation but in the right circumstances, a tight for loop with a long lived timer, these can really add up.

Thursday 1 June 2017

Golang gotcha #4: Implicitly ignored return values

It is possible to implicitly ignore all values returned by a go function without any compilation or even vet errors. This is demonstrated in this playground example:

Now, I imagine that you are asking "Why is this ever allowed? Madness, madness, insanity and lies!" or something less dramatic. Well, consider the fmt.Printf usage in the example and let's take a look at the fmt.Printf signature:

 func Printf(format string, a ...interface{}) (n int, err error)

'if ignorance is bliss then knock the smile off my face' - Rage Against the Machine


Yes, we're only bloody implicitly ignoring that too, aren't we? You see, a fair amount of the standard library relies upon this behaviour, so disallowing it would break backwards compatibility :(. Though I would argue that this implicit ignoring is bad practice and should not be utilised or encouraged when it comes to user defined functions; it is an easy avenue for bugs to creep in.

In-depth discussion here.

Golang gotcha #3: Accidentally shadowed variables

A common mistake that I have often seen cause exasperation to many programmers is accidental variable shadowing; it is one of the first things that I look for when asked to help debug misbehaving code. It occurs via misuse of the short variable declaration clause :=.

Let's recap declaration, assignment and what the shorthand does:

So the := clause is shorthand for a declaration and an assignment, where the type of the declaration is implicitly inferred from the assignment value. It is very useful and the lack of verbosity feels almost like you are using a dynamic language.

Now when does this get us into trouble? The problem occurs when we accidentally declare and assign to a new variable in a different scope rather than assign to an existing variable in an outer scope.

'Short cuts make long delays' - Frodo Baggins

This example demonstrates an accidental variable shadowing bug in code used to determine the highest integer in a slice. The variable 'highest' is redeclared and assigned to in the scope of the if statement, shadowing the 'highest' variable declared in the scope of main, whereas the desired behaviour is assignment to that outer variable. Here we say that the variable highest is 'shadowed' as a result of this redeclaration. Try modifying the shadowing line to 'highest = v' and note the change in behaviour.

Now, it is a good question as to why this is allowed, I believe that it is primarily to allow flexibility and to protect existing code from things like imports as explained here.

This is catchable by using go vet with the experimental option -shadow enabled.

Note that it is necessary to invoke vet via 'go tool vet' rather than 'go vet' in order to enable flags, see this issue.

For those interested, more in depth discussion can be seen here.

Tuesday 23 May 2017

Golang gotcha #2: Methods without pointer receivers cannot modify the actual receiver object

One gotcha that most newcomers to the language run into involves pointer receivers. A method on a struct is declared with a receiver, such as those above. Meow is a method on struct type Cat with a receiver of c; c here is similar to self in Python, though by convention we name it with an acronym of the struct name rather than self. In the first example, when Meow() is called, c, the instance the method is called on, is passed by value as though it were an argument of the method.

If the method were instead defined with a pointer receiver as in the second example, then c would evaluate to a reference to the Cat instance.

Thus the impact of using non-pointer receivers is:
1) Additional memory overhead from copying.
2) Immutability of the actual receiver. This is the major gotcha, this means that methods declared with a non-pointer receiver cannot persist changes to the instance outside the scope of the method.

See this example for a demonstration:

I think that non-pointer receiver methods are more trouble than they are worth. It may seem wise to use them on methods that do not mutate the instance such as Get() and the like but there is a real danger it will come back to bite you in the form of a hard to diagnose bug.

Notable mention in golang docs:

Golang gotcha #1: Taking references to loop variables

The indomitable Smiler at Alton Towers
Holder of the world record for the most inversions, at 14

A colleague of mine ran into a bit of a golang gotcha recently, relating to taking references/pointers to loop variables. I was already aware of the danger of this in the usage of closures: closures implicitly access all variables in scope by reference, in contrast to a function argument, which is passed by value. This is demonstrated in the following example:

The closure in the for loop evaluates a reference to i rather than a copy of it; thus, as i changes, the value that each goroutine has a reference to changes. In contrast, passing i as an argument to a function takes a copy. In general I dislike closures: they have a dirty scope, and it is unlikely they use all variables in that scope, thus they have low cohesion and are harder to reason about. In Golang it is idiomatic to favour the explicit over the implicit; if you need access to a variable, you are better off explicitly passing it as an argument.

Now, onto the example which inspired this post:

In this case, similarly to the previous example, we take a reference to a loop variable. However, the loop variable is reused in further iterations of the loop, so our reference ends up pointing at a different value. It pays to be careful with loop variables and not to take references to them! Alternative solutions, as demonstrated in the example, are to index into the slice or to take a copy.

Hopefully this post has illustrated some of the dangers of taking references to loop variables.

PS. I went on the Smiler last year, it really is quite good.

Thursday 16 February 2017

On Golang and Maintainability

I have talked a bit before, mainly in this post, about how Golang as a language tends to expose complexity and excludes some features that while useful can serve to hide complexity. In this post I'm going to explore this topic in more depth and explain why I think this contributes to Golang being a language better suited to writing maintainable code than Python.

Any sufficiently advanced technology is indistinguishable from magic - Arthur C Clarke

Where Python favours the implicit, Golang favours the explicit. And, where Python hides complexity in 'magic' language features, Golang forces you to go the long way round. Some language features in Python that I consider suitably magic are: decorators, properties and list comprehensions. Decorators and properties are mechanisms of indirection, and all these listed features provide handy shortcuts for developers. List comprehensions themselves are fine but nesting or using them for their side effects can quickly result in difficult to read code.

 Short cuts make long delays - Frodo Baggins

The interactive capabilities of the Python interpreter can encourage a user to build multiple lines of Python code into a single complex expression. Case in point: nested list comprehensions, which are usually the result of condensing a couple of loops into a one line wonder. And programmers tend to love one line wonders; they exude elegance, and removing all those lines makes you feel warm and fuzzy inside, because readability and conciseness are easily confused.

Given the fact that it took some thought and tinkering to determine how to compress some readable for loops into such a concise representation, it is likely that the next person to come along, in the absence of the context of the expression's formation, will struggle to decode the compressed representation. In fact they may even try and rewrite it long-form in order to unravel its secrets. List comprehensions that are used for their side effects are full of even more implicit nastiness.

Maintainability comprises a number of factors, but a key one is the ability of another programmer (or even you!) to come along and understand the intention of your program. Readability is not inversely proportional to the number of lines of code; mistaken in this belief, programmers can be inclined to do things in complex rather than intelligible ways. The problem is that it can be difficult to distinguish the two. Perhaps a misunderstanding of the code indicates a flaw in the reader, or perhaps a simpler representation would suffice. In the former case the writer could be forced to write to a lowest common denominator; in the latter case it pays to consider a language feature's potential costs as well as its benefits.

Language features are like power tools, we come up with excuses just to use them

Golang forgoes many shortcut features, resulting in more explicit and maintainable code. I have found that, whilst by no means necessary, static typing also helps manage complexity and thus improve maintainability in a large application. And optimising for maintenance can be a good idea, as this is often where we spend most of our time as developers.

Monday 6 February 2017

Improvements in go 1.8

This post represents notes collected on the new go release and from the state of go talk of Feb 2017, on changes in go 1.8.

Video of the talk can be found here.
Slides of the talk can be found here.

 General Improvements

  • ignore struct tags in type conversions (easier type conversions)
  • 32-bit mips support
  • osx 10.8+ supported
  • go 1.8 is last version to support ARMv5E and ARMv6 processors
  • go 1.9 will require ARMv6K 
  • go vet (sort of compiler warnings) now detects closing http.Response.Body before checking error
  • default gopath $HOME/go on unix
  • go bug command opens a bug report with version/machine information
  • Compiler backend improvements (SSA) see cpu usage reductions of 20-30% on arm and up to 10% on x86 (SSA was already part-implemented on x86).

 Performance Improvements

  • build times faster than go 1.7 but slower than go 1.4
  • improved -race detection
  • mutex contention profiling `go test -bench=. -mutexprofile=mutex.out`, can provide data on whether you should lock in a less or more granular manner; sequential could even be faster.
  • sub-millisecond (~100 microsecond) GC pause times, costing an extra 1/2% cpu.
  • defer is a 1/10th to a 1/3rd faster, but still not that fast
  • cgo is 50% faster, mostly due to removing high frequency defer calls

Additions to the Standard Library

  • sort.Slice() introduced, provides easier slice sorting
  • plugins introduced (Linux only at the moment); load shared libraries at runtime, enabling hot code swapping
  • added Shutdown method to http.Server, which was previously very hard to stop gracefully
  • HTTP/2 support introduced

Full go 1.8 release notes are here.

go 1.8 is set to be released on February 16th 2017.

Golang UK conference is on August 16th to 18th 2017.

Friday 3 February 2017

Thoughts on Two Years in Golang

In my last post, I talked/ranted a little about not being swept up in new trends or languages without proper analysis of their pros/cons and suitability for use in certain scenarios. Hence, after having learnt Golang from scratch two years ago and having been programming in it day in, day out, it's about time that I collected my thoughts on it.

Now, a lot can be said about the cost of learning a new language: the time spent learning the basics, making the rite of passage mistakes and getting up to speed with the tooling. However, I think that Golang recognises these costs and does what it can to mitigate them for a new developer, which is not to say that there isn't still a cost. But I know that for many companies, mine included, the ease with which a programmer can be converted to Golang is a significant consideration in the choice of the language.

C and Python had a love child and they called it Golang

Strict, opinionated and boring

I think of Golang as a strict, opinionated and boring language. Now, I know that the word 'boring' has many negative connotations, but when I invoke it here I mean that it lacks many of the features that titillate academics and occupy the minds of advanced programmers. I discussed the exclusion of exceptions in a previous article. Other non-existent features include some I miss: generics, operator overloading, primitive sets, assertions. And some I don't: nested functions, inheritance.

I have often heard people say Golang ignores the last X years of language development. Of course there are some useful features missing but in order to keep the language small and simple you have to be strict, and evaluate the costs and benefits of adding a new feature. Terseness can be considered as a feature in and of itself. In other languages the plethora of features can be bewildering and take an age to master, with the extra folds hiding more pitfalls and stumbling blocks.

Inheritance is a big-ticket item, but I have found that interfaces get you most of the benefits of duck typing without dragging in the massive amount of complexity and metadata fiddling that inheritance brings.
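For illustration, here is a minimal sketch of interface-based polymorphism; the Notifier interface and the two concrete types below are hypothetical names of my own, not from any real library:

```go
package main

import "fmt"

// Notifier is a hypothetical interface; any type with a matching
// Notify method satisfies it implicitly, with no 'implements' keyword.
type Notifier interface {
	Notify(msg string) string
}

type EmailNotifier struct{ Addr string }

func (e EmailNotifier) Notify(msg string) string {
	return "email to " + e.Addr + ": " + msg
}

type SMSNotifier struct{ Number string }

func (s SMSNotifier) Notify(msg string) string {
	return "sms to " + s.Number + ": " + msg
}

// broadcast works with anything satisfying Notifier, which is most of
// what you want from duck typing without an inheritance hierarchy.
func broadcast(ns []Notifier, msg string) []string {
	out := make([]string, 0, len(ns))
	for _, n := range ns {
		out = append(out, n.Notify(msg))
	}
	return out
}

func main() {
	ns := []Notifier{EmailNotifier{"a@b.com"}, SMSNotifier{"12345"}}
	for _, line := range broadcast(ns, "hello") {
		fmt.Println(line)
	}
}
```

Note that neither concrete type declares any relationship to Notifier; satisfaction is checked structurally by the compiler.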

Golang has some really nice features. Goroutines are great: lightweight concurrency primitives, basically multiplexed onto OS threads. There are also channels for communicating between goroutines. It is really great that Go can do concurrency so well out of the box, and I find it much clearer than Python's generators.
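As a rough sketch of what that looks like in practice (sumSquares is my own illustrative name), fanning work out over goroutines and collecting the results on a channel is only a few lines:

```go
package main

import "fmt"

// sumSquares squares each number in its own goroutine and collects
// the results over an unbuffered channel. Order of arrival doesn't
// matter here because we only sum the results.
func sumSquares(nums []int) int {
	out := make(chan int)
	for _, n := range nums {
		go func(n int) { out <- n * n }(n) // each send runs concurrently
	}
	sum := 0
	for range nums {
		sum += <-out // receive blocks until a result is ready
	}
	return sum
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3})) // 1 + 4 + 9 = 14
}
```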

Importantly, Golang is very quick to compile and run, easily outperforming CPython and many other Python implementations. This is an oft-cited reason for switching from Python to Golang. Most of my work with Golang has been on embedded devices, and this was the reason Python was never in the running. There were concerns about its GC (Garbage Collection) latency, but great work has been done to bring this down to sub-millisecond levels in Go 1.8.

It has nice, concise syntax, something akin to a cross between Python and C, which suits me as I am fond of Python syntax; Java syntax makes me queasy.


Probably the best feature is the tooling available and the strength of the ecosystem in general: it is fairly comprehensive and has a strong standard library, which is something I really miss in Python. It tries hard to get things right the first time and mostly succeeds.

go vet and golint are great static analysis tools, and gofmt and goimports can format your source code on save in compliance with the style guide, saving time and bikeshedding. Golang really benefits from the strictness here, introduced at such an early stage that everyone is forced to get on board. I am so used to automatic code formatting that I also set up autopep8 formatting in Python and haven't looked back.

The source tree layout and the build process are also standardised, and there are great tools for running, building, testing and generating coverage stats in a standardised way with very little effort. You get deployable static binaries with little hassle, something I always found a struggle with Python. This layout and process is strictly dictated, which I know will rub some people up the wrong way, but in my opinion it saves a lot of turmoil for a small sacrifice in freedom.

It is very easy to pull dependencies: `go get` and you're there. However, the lack of versioning and no way of telling how popular a library is are problematic. There are some third-party solutions to the former problem; personally I use godep, and there has been some attempt to fix versioning with vendoring, but I don't feel this is a complete solution and it poses its own questions. However, I am always a bit horrified by the multitude of tools when I have to pull dependencies in Python (pip, easy_install, setuptools), so I don't think Go does too badly in comparison.


Now for some gripes.

Non-pointer receiver methods: this is a common pitfall for new Go programmers. When calling a method with a non-pointer receiver, the receiver itself is copied by value, meaning that any changes the method makes to the receiver are not persisted after the call returns. See this code example.
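A minimal sketch of the pitfall (Counter and its methods are made-up names for illustration):

```go
package main

import "fmt"

type Counter struct{ n int }

// IncValue has a value receiver: the method operates on a copy of the
// Counter, so the increment is silently lost. This is the pitfall.
func (c Counter) IncValue() { c.n++ }

// IncPointer has a pointer receiver: the method mutates the original.
func (c *Counter) IncPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncValue()
	fmt.Println(c.n) // still 0, the increment happened on a copy
	c.IncPointer()
	fmt.Println(c.n) // 1
}
```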

Lack of a generic max function: this is quite embarrassing for the language, as it is something that newcomers will run into fairly early. Due to the lack of generics there is no max function covering all numeric types, and seemingly as a result of this there is no max function for integers at all (math.Max only handles float64). Err, yea, I know.

Sensible slicing syntax: the syntax we have is quite nice for some use cases and is appreciated, but I still have to resort to slice tricks for common operations.
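For example, deleting an element has no dedicated syntax and takes the well-known append splice trick; remove here is an illustrative helper of my own, not a stdlib function:

```go
package main

import "fmt"

// remove deletes the element at index i by splicing the two halves
// back together with append. Note this reuses (and mutates) the
// original slice's backing array.
func remove(s []int, i int) []int {
	return append(s[:i], s[i+1:]...)
}

func main() {
	s := []int{10, 20, 30, 40}
	fmt.Println(remove(s, 1)) // [10 30 40]
}
```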

Being strict and opinionated has downsides: on some issues, the exclusion of certain features and the lack of support for certain use cases make it seem as though some problems are being wilfully ignored, namely generics and dependency versioning.


I find Golang a great place on the ladder of abstraction: garbage collected and statically typed. I can develop faster in Python, but I am more confident of my Golang code's correctness, as Python hides complexity, tries to be smart and lacks the safety of the compiler. However, Golang does lack some of the libraries and stacks for widespread adoption on the server, though this is improving every day. And its memory requirements may be too demanding for some extremely resource-constrained embedded environments, although it has performed admirably for our embedded use case thus far. After two years I like Golang as a language; there's much, much more that I have to say about it. But it suffices to say that it's a language that I am now very comfortable with and productive in, and one in which I feel more confident writing maintainable and efficient code than I do in Python.

Saturday 21 January 2017

On Programming and Pragmatism

You know when someone wants to invoke feelings of humility they show you that graph. You know the one: it shows that dinosaurs lived for ages in comparison to us and how we are merely an insignificant blip on our planet's mammoth (geddit!) timeline. Well, we can see the software industry in a similar position to man in this example, being about sixty years old and fledgling in comparison to traditional engineering. Take the Institution of Civil Engineers in the UK: two centuries old, with established practices, a commitment to professional review and conduct, and a collective commitment to studying and analysing past works. Morality is a separate topic, but just imagine if we as a community of engineers had reached the maturity whereby we saw each failure as a learning opportunity and seriously analysed case studies.

I have always found that there is comfort in tradition; I think that this partly explains a few bizarre ongoing phenomena and anachronisms such as constitutional monarchy. There is comfort in tracing an unbroken line back, and knowing that your ancestors encountered similar difficulties yet persevered. However, this is a comfort that the software industry is visibly bereft of. Perhaps this goes some way to explaining our identity crises, the continual rocking of the boat every few years when 'THE NEXT BIG THING'™ comes along, and all those goddamn wood-working craftsman metaphors that everyone is so fond of. I think that it is a sign of industrial immaturity that the dogmatic view that the next big thing will solve all our problems is so alive and well. New technologies have pros and cons and are designed for certain use cases over others; we should be able to evaluate their merits level-headedly.

There is that constant desire to seek the silver bullet: OOP, functional programming, test-driven development, agile methodologies; they all promise to cure all ills yet come with their own set of potential abuses and weaknesses. I read a Steve Yegge post where he compared a programmer's progression to that of a child: at first the bewildering exploration of the early years, then the overconfidence of adolescence, followed by the humility of adulthood, admitting that complexity and flaws exist and always will. I see the software industry as in those heady teenage years, still chasing absolute truths.

'I know that I know nothing' - Socrates

I think that some of the best programmers are the ones who realise their limitations and check their overconfidence. They program defensively, realise the human brain will never be up to the task of perfectly modelling and building these complex systems, these castles in the sky, and don't try to solve that problem by weaving in more layers of abstraction, UML and object hierarchies. They behave conservatively, understand the importance of testing and don't overreach.

Have you ever found some code and thought: this is crap, who wrote this? ... git blame ... oh, me? This is evidence that we are constantly improving, and as we do we realise that our former selves were misguided in some way. This is an endless path; we do not one day become enlightened and get bestowed a halo and aura by Richard Stallman. It stands to reason that there are always flaws in our understanding, and this realisation is one of the humbling and empowering truths of programmer adulthood. If we had limitless understanding, tests would be redundant and refactoring rare.

As developers we like to imagine ourselves as omniscient and infallible and don't like putting our mistakes on show, so we lean on git rebase. This fallacy is propagated by the many solutions presented in blog posts or code samples that exclude the context of their genesis and teetering development. For code review, fine, but in general there is no point in fixing up your version control history so it looks like you are some zen programming god. Improvements come in increments; everything won't be solved in the 'BIG REWRITE'™. I'm not sure if it's a cultural thing, but there is the Japanese concept of 'kaizen', continuous, iterative improvement. I think this is a healthier philosophy than 'I am going to fix everything in one Highway to the Danger Zone themed montage'.

We have to be pragmatic lest we become lost in the complexity of our work. Software is hard, and stable, optimal solutions take time; good engineering and clean coding can help, but we have to be careful not to overreach or become swept up in the heady currents of new trends.

Sunday 8 January 2017

On Golang and Exceptions

I have been programming professionally in Golang for a couple of years now and I have to say that I really like the language. My first experience of Golang was a bit of a drop in the deep end, coming into a new job where I would be using Golang as my main language with no real experience. Yet, despite this, it did not take very long before I became productive. I believe that this is partly due to the simplicity of Golang and its density/economy of language features.

Golang was designed to be a small, strict and opinionated language. The small size reduces the required learning time, the strictness ensures users do not form harmful habits such as ignoring warnings or leaving unused variables lying around, and the opinionatedness puts an end to bikeshedding about things like brace placement. This is in contrast to a language such as C++: massive, sprawling and certainly intimidating to a newcomer. The size and complexity of C++ provide many places for the concealment of pitfalls, and an effective understanding of the quirks and gotchas of the language is deservedly highly valued in the corporate world. The problem with giving you this much rope is that it is long enough to hang yourself many times over. Sure, it is powerful, but it shows little respect for your sanity if you are not well directed in your work. Golang also tries to avoid introducing magic where possible. By magic I mean a feature that hides a sufficient amount of complexity so as to appear 'magic' to the uninformed.

'Any sufficiently advanced technology is indistinguishable from magic' - Arthur C Clarke

One of the magical language features that got the chop in Golang is exceptions. Recently, when doing some work in Python, I noticed that I didn't really miss exceptions; they complicated the control flow a lot and caused me much fear and consternation. This is because exceptions are magic: they can cause unexpected jumps in your code based on non-local conditions, and they inject complexity. They are another thing that you constantly have to think about when writing code. I find that multiple return values, available in both Python and Golang, are a much more intuitive and useful feature that largely obviates the need for exceptions.
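A sketch of that multiple-return-value style; parsePort is a hypothetical helper of my own, but the shape is the standard Go idiom:

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns the error as an ordinary second value instead of
// throwing: no non-local jump, the caller decides what to do, locally.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %v", s, err)
	}
	return n, nil
}

func main() {
	if port, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", port)
	}
	if _, err := parsePort("eighty"); err != nil {
		fmt.Println("error:", err) // handled right here, in plain sight
	}
}
```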

'But it doesn't even have exceptions' - reaction of an old workmate when I told him I was now working in Golang.

I see how exceptions can be useful in standardising error reporting, which is great. We've all had to deal with a function with obscure error reporting, that, say, returns an int value, and we end up asking: does 0 denote an error? What do negative values mean? However, Golang also standardises this by providing the built-in error interface, a standard with room for extensibility.
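That extensibility comes from error being just an interface with one method, so any type can carry extra context. A minimal sketch (NotFoundError and lookup are hypothetical names of my own):

```go
package main

import "fmt"

// NotFoundError satisfies the built-in error interface
// (a single method: Error() string) while carrying extra context.
type NotFoundError struct{ Key string }

func (e *NotFoundError) Error() string {
	return "key not found: " + e.Key
}

// lookup returns the plain error type, so callers get the standard
// contract but can recover the richer concrete type if they want it.
func lookup(db map[string]int, key string) (int, error) {
	v, ok := db[key]
	if !ok {
		return 0, &NotFoundError{Key: key}
	}
	return v, nil
}

func main() {
	db := map[string]int{"a": 1}
	if _, err := lookup(db, "b"); err != nil {
		fmt.Println(err) // key not found: b
		if nf, ok := err.(*NotFoundError); ok {
			fmt.Println("missing key was", nf.Key)
		}
	}
}
```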

I understand that not allowing exceptions complicates the success-case code, as `if err != nil { ... }` is liberally applied. However, one really needs to consider whether these minor gripes are worth adding extra complexity to the language and burdening the programmer with an extra concern.