QCon London 2022
I recently attended my first QCon conference. I don't really have much planned for this post in terms of structured content, but I did want to brain dump some thoughts I had throughout the conference, loosely grouped into a few themes.
What follows isn't really intended to teach or enlighten others about what I learned at the conference, just a place for me to review my notes and perhaps expand on them a little more primarily for my own benefit.
Developer enablement
The 'developer enablement' track at QCon was my first stop, because it's something I'm very interested in. I like the idea of being a force multiplier of productivity and effectiveness for engineering teams, and I think as a senior engineer on a team, your role should involve a lot of that - even more so as you progress.
Platforms, libraries and tools
The first talk I attended focused a lot on 'developer enablement' from a technical perspective: how to provide tools, standards and platforms for your developers so that they can get their jobs done more easily and efficiently.
I'm absolutely an advocate of investing in common platforms, libraries and tooling in ways that retain autonomy, but provide a 'golden path' or 'guard rails' so that people can work:
- Safely and securely: good security and access control built in
- Productively: well documented, easy to use and understand tools and libraries that are 'ready to go'
- Efficiently: reusing and benefiting from the work of others across the organisation
The scale at which 'developer enablement' of this flavour is practised obviously varies significantly business-to-business. What's a justifiable cost for some isn't for others. But I think the majority of companies with any significant software engineering teams should take 'developer enablement' seriously.
It can start as small as encouraging an 'internal open source' model, getting your teams to buy into a culture where they consider how the work they're doing may be reusable and how others may be able to contribute to it.
Or it could go as far as full-on dedicated platform teams, whose purpose is to develop platforms, tools and libraries that enable other engineering teams to do their job better.
During my time at the BBC, one team I always appreciated was the 'Cosmos' team. This team developed and maintained a set of services which provided a standard golden path for deploying EC2 and Lambda based services to AWS. It interacted with a team's own AWS accounts, so it maintained autonomy and independence, but would provide:
- A safer and standard permissions model
- Standard EC2 AMIs
- Automatic provisioning of service PKI / internal TLS certs into the EC2 instance or Lambda package
- Vulnerability alerts and patching
- Deploy time & dynamic run time config systems
- Dependency repo management
- Standard ways of doing common tasks such as preconfigured httpd with mutual TLS auth with the previously mentioned certificate provisioning
- A community of engineers that could support each other
- And more (this is a simplification...)
Did it have limitations? Yeah, and sometimes people had to go around Cosmos, but overall I think the benefit to engineering teams and the organisation as a whole was huge.
This, along with many other tools, prebuilt templates for services (e.g. Jenkins), etc. all made the 'you build it, you run it' model a lot more productive and effective for us. But as I said, this investment made sense at the BBC, where there are a lot of engineering teams - it doesn't necessarily make sense everywhere.
The key thing is to take the concept of 'developer enablement' seriously, and then you can begin to see what's appropriate within the scope of your business. It may start small and grow into something more significant, with dedicated engineering resources once the value has been demonstrated.
However, I don't want to continue to ramble on about the technical side of developer enablement. It's very important and something I'll continue to advocate for - but I want to explore a different perspective on developer enablement.
'Soft' developer enablement
Perhaps I'm diluting or overloading the term by extending it beyond the traditional 'tools and platforms' that developer enablement is often associated with. It's probably not even anything new, but I do think there's something to be said about 'developer enablement' being as much about culture as it is about roles and teams that focus on it (sound familiar?).
We recognise the role that good tooling, documentation and platforms play in enabling developers to do their job better. We understand that standardising, codifying and providing that 'golden path' for technical work often provides tremendous benefit.
So why not codify and standardise the 'soft' golden path too? What are the non-technical practices - ones we probably already do some or all of - that we can commit to providing as levers for developers to pull?
Those levers? Perhaps things such as (but not limited to):
- Pair / mob programming
- Requesting knowledge share sessions from people you wish to learn from
- Requesting insight into future product and/or engineering direction from senior product or technical leadership
- Team exchange / placement (e.g. swapping engineers between teams temporarily, or temporarily placing with another team) to progress a piece of work, or just to knowledge share
- Providing a forum to speak to the wider engineering and/or product community
None of these are anything new to many organisations, my own included, but what I think may be useful is to consider them as part of a toolbox that developers can independently pick up and use. They have a commitment from their team and other teams that these tools are available to them, and they have guidelines and recommendations on how to use and 'invoke' them.
I guess my point here is that I think by considering these sometimes common practices as part of a culture that is focused on 'enabling' developers, we can reframe and get even more value out of these practices. Admittedly it's probably a pretty subtle difference, or maybe there's no real difference and instead this is just a personal realisation of mine.
API design and patterns
A subject I've been thinking a bit about recently (in large part due to projects at work) is the design of APIs for multiple internal customers, and potentially external ones too.
During scoping, requirements gathering and implementation of APIs, we're sometimes told "well, we were thinking of doing x this way because customer y may want it like that", or "we need to do it like this because another customer wants that". It's a very consumer-driven view of API design that also frustrates those consumers, because each of them would like to be the only consumer with an input on the design of the API.
Sometimes API teams will decide that instead of being strictly consumer driven, they'll take consumer requirements and then come up with an interface and response which they believe will serve everyone well. But will it? In many cases, probably not.
One solution that sometimes works well is to have separate endpoints for your different consumers, so that you can tailor your APIs to different use cases. But this risks ballooning complexity, and means more contracts to satisfy whenever changes are made to the common systems powering those different API endpoints.
But perhaps we know for sure that we'll only be supporting a small, fixed number of endpoints? Perhaps one or two internal and one external? Maybe that's manageable? Even with the best intentions to stick with this, complexity is still likely to grow once the precedent of 'we can have a new endpoint just for us on this API' has been set.
How about we shift some of the responsibility for those different, consumer-specific endpoints back to the consumers? To step back for a moment, here are some examples of what I'm thinking about:
An API is responsible for aggregating, transforming and applying some standard business logic to a set of data before returning it via an API endpoint.
However, this data is broad and has multiple use cases across the business.
There are multiple different types of applications or services that may want to use this data, for example:
- A web front end
- Another data service that may in turn be consumed by other services
- A third party which may use it in their services, their web and/or mobile front ends
Let's start with the first one. The 'backend for frontend' pattern aims to simplify the front end and keep it as 'dumb' as possible. The front end receives data from an API, and that data can either be displayed or used as-is, or comes with extra metadata to inform the front end how to display or use it.
So perhaps the endpoint serving this web front end will format the data in a way that makes it easier to write front end components for? Perhaps it will also be concerned about the order in which certain data is displayed?
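As a very rough sketch of that idea (the shapes, field names and locale here are all hypothetical - just illustrating the kind of reshaping a BFF endpoint might do):

```typescript
// Hypothetical raw shape from the common aggregation layer, and the view
// model the web front end actually renders.
interface RawArticle {
  id: string;
  title: string;
  publishedAt: string; // ISO 8601 timestamp
  priority: number;
}

interface ArticleCard {
  id: string;
  heading: string;
  publishedLabel: string; // pre-formatted for display
}

// The web-facing endpoint decides ordering and presentation here, so the
// front end can stay 'dumb' and simply render what it receives.
export function toWebResponse(articles: RawArticle[]): ArticleCard[] {
  return [...articles]
    .sort((a, b) => b.priority - a.priority) // display order decided server-side
    .map((article) => ({
      id: article.id,
      heading: article.title,
      publishedLabel: new Date(article.publishedAt).toLocaleDateString('en-GB'),
    }));
}
```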
The second one? Well, another data service wouldn't be concerned with much of this presentational logic. It's likely just interested in the raw data and, depending on the data, perhaps some ordering and whatever business logic was already applied upstream.
Finally, the vague 'third party' - which could be many different consumers with different needs. Some could have similar requirements to the internal web front end consumers, some could be more akin to the 'generic sister data service' described in the second example.
Now that we've got these examples, how could we perhaps shift some responsibilities for these endpoints back to their consumers?
First of all, I don't believe this really applies to third-party or external consumers, except in particular circumstances where there's a close, ongoing relationship with that external consumer.
Therefore, focusing on the first two as internal consumers, we can find the commonalities between these two APIs - likely the data aggregation, sanitisation and some necessary filtering or transformation - and work upwards from that layer.
Can we provide a common platform on which consumer teams can simply create their own endpoints, running their own code to apply whatever logic they want?
This is a pattern I've seen implemented before, but in that case the different consumers were broadly serving the same domain. I'm mostly just writing my thoughts out here so I can think through whether or not this should be applied where potentially quite significantly different domains need to be served.
In the diagram above, the 'consumer endpoint code' represents code provided by the consumer to do additional transformations, apply business logic, filter, and format the response however they want. In theory they could pull in data from other sources too, if appropriate (but of course there would be performance implications to that - especially at this 'layer').
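To make that a bit more concrete, here's a rough sketch of what the contract between the common layer and a consumer's code could look like. All of the names and types here are invented for illustration - the real interface would depend entirely on the platform:

```typescript
// Hypothetical contract: the platform hands each consumer endpoint the
// already-aggregated, sanitised data plus some request context, and the
// consumer returns whatever response shape suits their use case.
interface CommonData {
  items: Record<string, unknown>[];
}

interface RequestContext {
  queryParams: Record<string, string>;
}

interface EndpointResponse {
  status: number;
  body: unknown;
}

// Each consumer team implements one of these and deploys it via the
// platform's self-service process; the common API layer owns everything
// beneath it (aggregation, sanitisation, baseline business logic).
export type ConsumerEndpoint = (
  data: CommonData,
  ctx: RequestContext,
) => Promise<EndpointResponse>;

// Example: a 'data service' style consumer that just filters the raw items.
export const rawDataEndpoint: ConsumerEndpoint = async (data, ctx) => ({
  status: 200,
  body: data.items.filter(
    (item) => !ctx.queryParams['type'] || item['type'] === ctx.queryParams['type'],
  ),
});
```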
Could this work? In some circumstances, yeah. To work effectively, however, I believe it would require the following:
- A simple, self-service and easily managed deployment process
- Well documented and tested standard interface between the common API layer and the consumer's code.
- A team willing and able to support a 'self service' platform (out of hours support, technical support, requirements, developer enablement, monitoring, etc).
That could be a sizeable piece of work that begins to change the orientation of the API team. They become more of a platform team than a product team. That might be fine, that may be the best way forward - but it may not be.
GraphQL
As part of the 'API' tracks at QCon, I took it upon myself to learn a bit more about GraphQL - because I thought - couldn't GraphQL help with some of these challenges? Wanting a tailored response and data set from an API for your particular consumer?
I'm really not experienced with GraphQL, so I went along to a few sessions and learned enough to pique my interest in giving it a go. I've got some very unstructured notes on some of this stuff that I can't really easily translate into something articulate, but the talks from Twitter and the BBC (broadly about APIs with some GraphQL stuff) were particularly interesting on the subject.
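To illustrate why it caught my interest (the endpoint, schema and field names below are all made up), the appeal is that different consumers can ask the same API for just the fields they care about, rather than the API team maintaining a bespoke endpoint per consumer:

```typescript
// Hypothetical GraphQL endpoint - the point is that both consumers hit the
// same API, but each asks only for the fields it needs.
const GRAPHQL_URL = 'https://api.example.internal/graphql';

// The web front end wants presentation-oriented fields.
export const webQuery = `
  query {
    articles(limit: 10) {
      title
      imageUrl
      publishedAt
    }
  }
`;

// A downstream data service only wants identifiers and raw values.
export const dataServiceQuery = `
  query {
    articles(limit: 1000) {
      id
      rawScore
    }
  }
`;

export async function runQuery(query: string): Promise<unknown> {
  const response = await fetch(GRAPHQL_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const payload = await response.json();
  return payload.data;
}
```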
Optimising for speed and flow
Having started a new job in the past year, where we formed brand new teams from scratch to build brand new platforms, this is an area of particular interest. You don't often come across opportunities to contribute at the ground level like this, with a mostly clean slate.
Shaping our ways of working and finding the best ways to keep momentum and flow going on a team faced with an ambitious roadmap has therefore been a big focus of mine over the past year, which is why I took in a few sessions on this track.
Again, I have a few rough notes on these that are hard to articulate, but here are a few key standouts, which mostly affirm many of my existing beliefs but perhaps frame them better than I could:
- The benefits of teams owning as much of their own product as possible. Architecture, implementation, testing, deployment, support and crucially - minimising the need for handovers. The 'handover' is a massive risk to the speed and flow of a team, so minimise them!
- Autonomy is good, but alignment across teams becomes quite difficult - and can become a real problem.
- Cross-functional teams
- How fast a team can 'uncover organisational secrets' - what mechanisms it has to figure out the problems it's actually facing.
The last one in particular is one I wish to highlight. It was well framed by a talk from two software engineers at the Norwegian Labour and Welfare Administration (NAV), about their transformation from a largely consultant-run organisation that did roughly four 'releases' a year, with around 100,000 dev hours in each release, to a modern, 'agile' organisation with hundreds of developers and 100 cross-functional, autonomous teams.
In the new job I mentioned earlier in this post, we've been tasked with building out a brand new platform which aims to unify several 'brands' that originate from different organisations - the result of acquisitions. Much of our challenge over the past year hasn't been in the technical work or the ways of working, but in 'uncovering organisational secrets'.
To understand the quirks of the business and to find ways forward to accommodate and improve on them. I think much of our success has come from having a multi-disciplined team with people who understood the value of digging in and understanding this detail early on - being proactive and fully understanding the domain they're working within.
It's not an easy task, and we've found plenty of surprises along the way, but I feel I've learned a lot about the process by which we discover these 'secrets', and it has been hugely valuable to me as an engineer.
'Getting the most' out of microservices
A few highlights from this track, which ties into the whole 'developer enablement' piece above: many of the learnings shared centred around investing in common infrastructure and tooling as early as possible.
This is what helps make microservice architectures successful and scalable. It's not about boxing teams in, it's about enabling them with a few well tested and supported 'golden paths' in a few key areas.
There were some really interesting learnings from a great talk given by Airbnb, from the high-level architectural choices and how they got there from their monolith, to the technical detail of how they aggregate and fetch data, including an insight into their GraphQL engine and custom asynchronous field resolvers.
Allowing anyone and everyone to call a variety of different services in whatever way they please can quickly become unmanageable, especially at a company with a very large engineering team such as Airbnb, so providing a singular, safe and opinionated query service in front of your services helps avoid the anarchy that so often follows.
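In very generic terms (this is my own sketch, not how Airbnb actually implements it - all URLs and response shapes are invented), the idea of a single aggregation/query layer is that it fans out to the backing services concurrently and composes the result, so consumers never call those services directly:

```typescript
// A generic single query/aggregation layer in front of several backing
// services. Service URLs and response shapes are invented for illustration.
interface ListingSummary {
  listing: unknown;
  reviews: unknown;
  pricing: unknown;
}

async function fetchJson(url: string): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request to ${url} failed: ${res.status}`);
  return res.json();
}

// Each 'field' is resolved concurrently from the service that owns it,
// rather than every consumer calling those services however they please.
export async function getListingSummary(id: string): Promise<ListingSummary> {
  const [listing, reviews, pricing] = await Promise.all([
    fetchJson(`https://listings.internal/listings/${id}`),
    fetchJson(`https://reviews.internal/listings/${id}/reviews`),
    fetchJson(`https://pricing.internal/listings/${id}/price`),
  ]);
  return { listing, reviews, pricing };
}
```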
Final thoughts
This has been a random brain dump of a post, so as I said above, I don't expect anyone to find much value in it. It has been about me thinking through, in a very unstructured way, a few of the things that caught my interest at the conference - so apologies if you wasted your time on this!
There was loads more I didn't mention, but I will say as a final note that QCon as a whole was very enjoyable. I'm often a bit wary of conferences, as many of those I've been to have been uncomfortable experiences. But that wasn't the case with QCon.
It was well organised, with good breaks between talks, clear signposting, great catering and a great ethos of avoiding sales pitches in talks from the many vendors present. It was also in a great venue, the Queen Elizabeth II conference centre just across the road from Westminster Abbey.
It's a definite recommendation from me.