Highlights from the April 2021 ThoughtWorks Technology Radar

Twice a year, the consulting agency ThoughtWorks publishes a document they call the Technology Radar. In it, they describe trends in tools, techniques, languages/frameworks, and platforms that they are seeing in their work with clients. They break their observations down into four categories: Adopt, Trial, Assess and Hold.

I generally try to read through each edition. Because ThoughtWorks works with lots of clients (they're a huge consultancy: ~8000 employees in 48 offices around the world!), they have a perspective that those of us working at a single company don't have. Many bright engineers contribute to the Radar as well: people like Martin Fowler, Neal Ford, Sam Newman, Pat Kua and more. I like to think of these documents as a fireside chat with veterans of the industry: they've been around the block a few times and have interesting things to share. Maybe what they share is useful to me, maybe not, but it's still great to hear these stories and allow them to shape my own perspective.

Below you will find the "blips" - individual trends in their assessment - that I found particularly interesting or that resonated with me.

Adoption Recommendation Highlights

The ThoughtWorks definition of their "adopt" classification is:

We feel strongly that the industry should be adopting these items. We use them when appropriate on our projects.

Of all the suggestions for adoption, four stood out to me.

Technique Adoption - Design Systems

The Technology Radar has this to say about design systems:

As application development becomes increasingly dynamic and complex, it's a challenge to deliver accessible and usable products with consistent style. This is particularly true in larger organizations with multiple teams working on different products. Design systems define a collection of design patterns, component libraries and good design and engineering practices that ensure consistent digital products. Built on the corporate style guides of the past, design systems offer shared libraries and documents that are easy to find and use. Generally, guidance is written down as code and kept under version control so that the guide is less ambiguous and easier to maintain than simple documents. Design systems have become a standard approach when working across teams and disciplines in product development because they allow teams to focus. They can address strategic challenges around the product itself without reinventing the wheel every time a new visual component is needed.

Design systems are fantastic. They especially help backend-heavy engineering teams ship applications that look better and have more ergonomic UX than they would without one. They are also a great way to scale a smaller UI/UX team to a larger, organization-wide impact.

Building the design system is only one piece of the puzzle, though. To be successful, it needs adoption. Without an organizational mandate, the design team needs to socialize it, make it easily accessible, and advocate for its use and growth. As teams begin to use it and see its value, organic growth and feedback will kick in to foster its continued adoption.

Our company is just beginning to build out its own design system with React and Storybook. I'm really looking forward to seeing if it gains traction and adoption outside the immediate team that is building it.
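
For a concrete feel, here is a minimal sketch of what a design system story might look like in Storybook's Component Story Format. The Button component and its props are invented for illustration:

// Button.stories.tsx: a hypothetical design system story file
import React from "react";
import { Button } from "./Button"; // assumed design system component

export default {
  title: "DesignSystem/Button",
  component: Button,
};

// Each named export becomes a browsable, documented variant.
export const Primary = () => <Button variant="primary">Save</Button>;
export const Secondary = () => <Button variant="secondary">Cancel</Button>;

Storybook renders each export as an isolated, interactive example, which doubles as living documentation for the teams consuming the system.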

Technique Adoption - Applying The Expand-Contract Pattern to APIs

API versioning is hard. I was encouraged to see this one in the radar's "adopt" section this year:

The API expand-contract pattern, sometimes called parallel change, will be familiar to many, especially when used with databases or code; however, we only see low levels of adoption with APIs. Specifically, we're seeing complex versioning schemes and breaking changes used in scenarios where a simple expand and then contract would suffice. For example, first adding to an API while deprecating an existing element, and then only later removing the deprecated elements once consumers are switched to the newer schema. This approach does require some coordination and visibility of the API consumers, perhaps through a technique such as consumer-driven contract testing.

I'm not a fan of "versioning schemes" in REST APIs, but I am a proponent of upholding a contract with my consumers. Embedding a version in an API's path (like www.myservice.dev/api/v1/...) is ugly, and in practice the version frequently stays at /v1/ for the application's entire life. Other versioning schemes, such as requiring a specific Accept header, I have rarely seen in the wild.

The API expand-contract pattern, on the other hand, offers a pragmatic approach to introducing changes: a migration strategy that embraces the concept of evolutionary architecture. Instead of cut-overs that require lock-step collaboration, we can think of our changes as a slowly evolving system. Using the expand-contract pattern, we can change our system organically, allowing our consumers to migrate at their own pace. This is the path that most aligns with my experience and preference.

How the Expand-Contract Pattern Works

One of the best practices for consuming a JSON API is to ignore any keys you don't use. In other words, if you are consuming an API that has a signature like this:

{
  "id": 123,
  "author": {
    "name": "Ada Lovelace",
    "email": "ada@chadxz.dev"
  },
  "commit": "dd252e7"
}

... you should avoid doing anything that would assert the exact structure of the payload. Doing so would couple your implementation too closely to the response; instead, code only to the keys you need and ignore the remainder. That way, if the payload changes to add a new key:

{
  "id": 123,
  "author": {
    "name": "Ada Lovelace",
    "email": "ada@chadxz.dev"
  },
  "commit": "dd252e7",
+  "hash": "dd252e7478279c6391b50421cec801a652040986"
}

... your code continues to work without any changes. This allows the API producer to evolve the API without breaking you. They can then deprecate the commit field and schedule it for removal, allowing consumers to migrate to the new hash field if they need it.
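
As a sketch of what this looks like in code, here is a hypothetical TypeScript consumer that codes only to the keys it needs (the field names follow the example payload above; the function and its URL parameter are made up):

// Map the response onto only the fields we actually use; any keys
// we don't reference, like a newly added "hash", are ignored.
interface CommitInfo {
  id: number;
  authorName: string;
  commit: string;
}

async function fetchCommitInfo(url: string): Promise<CommitInfo> {
  const response = await fetch(url);
  const body = await response.json();
  return {
    id: body.id,
    authorName: body.author.name,
    commit: body.commit,
  };
}

When commit is eventually deprecated in favor of hash, the only change needed here is swapping body.commit for body.hash, on whatever timeline suits this consumer.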

This pattern can also be followed at a larger scale by standing up a new API while keeping the deprecated one in place for a while. For example, in the gRPC ecosystem, the protobuf guidelines recommend never removing fields, renaming fields or messages, changing field types, and so on. Eventually, when you do need to cull the old versions, you release a new version of your service with a new protobuf definition, deprecate the old API, and allow the two to live alongside one another until the old API's End of Life (EOL) date hits.
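
Pulling the field-level example together, here is a minimal sketch of what the producer side might look like during the expand phase, using plain Node (the data is made up; this is one way to do it, not the only one):

import { createServer } from "http";

// Expand phase: serve both the deprecated "commit" field and its
// replacement, "hash". The contract phase later removes "commit"
// once consumers have migrated.
const server = createServer((req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(
    JSON.stringify({
      id: 123,
      author: { name: "Ada Lovelace", email: "ada@chadxz.dev" },
      commit: "dd252e7", // deprecated, scheduled for removal
      hash: "dd252e7478279c6391b50421cec801a652040986",
    })
  );
});

server.listen(8080);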

You can read more about the expand-contract pattern in the corresponding Martin Fowler wiki article.

Other Noteworthy Adoption Recommendations

Trial Recommendation Highlights

The definition of their "trial" classification is:

Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.

There were two main techniques they qualified for trial that I strongly agree with, and others that piqued my interest and that I want to learn more about.

Technique Trial - Lightweight Approach to RFCs

This recommendation was confusing to me:

As organizations drive toward evolutionary architecture, it's important to capture decisions around design, architecture, techniques and teams' ways of working. The process of collecting and aggregating feedback that will lead to these decisions begins with Request for Comments (RfCs). RfCs are a technique for collecting context, design and architectural ideas and collaborating with teams to ultimately come to decisions along with their context and consequences. We recommend that organizations take a lightweight approach to RFCs by using a simple standardized template across many teams as well as version control to capture RfCs.

It's important to capture these in an audit of these decisions to benefit future team members and to capture the technical and business evolution of an organization. Mature organizations have used RfCs in autonomous teams to drive better communication and collaboration especially in cross-team relevant decisions.

It's unclear what they are comparing lightweight RFCs to... "Heavyweight" RFCs? There's no description or elaboration on what the problematic RFCs are and what to avoid. Despite this, RFCs are an area of active interest for me, so I wanted to highlight this.

At work, we have begun to try Architecture Decision Records to capture some of our tribal knowledge. ADRs are documents that capture the context behind major decisions made about an application's architecture: things like "why was framework X chosen?", "why is this service using basic authentication vs. OAuth2?", or "why are we using test framework X?" Along with the why, these documents also cover the alternatives that were considered and the pressures placed on the team when the decision was made. The documents are stored in the source repository with the application and aim to share knowledge that would otherwise be unavailable unless the engineer who made the decision is still on the team. I like ADRs so far, but it is still a new process for our team, so we're evaluating whether it's a good fit.
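
As a rough sketch, an ADR is usually short enough to fit on one screen. The specifics below are entirely invented, but the shape follows the commonly used template of status, context, decision, and consequences:

# 7. Use OAuth2 for service-to-service authentication

Status: Accepted

## Context
Our services use basic authentication with shared credentials,
which makes rotation painful and auditing difficult.

## Decision
New services will authenticate to one another with OAuth2 client
credentials. We considered mutual TLS (operationally heavy for us
today) and keeping basic auth (the status quo).

## Consequences
Credential rotation becomes centralized, but we must now run or
buy an authorization server; existing services migrate gradually.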

RFCs and ADRs are mostly designed to serve the same purpose, but with RFCs the emphasis is on seeking feedback and/or approval of your proposal. This emphasis on collaboration benefits all engineers in the company by enhancing accountability and cross-team knowledge sharing. I would relish the opportunity to adopt an RFCs model for cross-team collaboration on best practices, technology adoption, and major architecture changes!

For the time being, I'm going to continue to encourage ADR use in our greenfield applications, advocate for designs to be documented on our internal wiki, and ensure those designs are shared out to those within our team and on other teams that may have some expertise on a given subject. I'm hoping these actions will further develop our culture of documentation and collaboration. Long-term I hope to introduce a process for RFCs.

To learn more about RFCs, I recommend this article by Gergely Orosz.

Technique Trial - Hypothesis Driven Legacy Renovation

Hypothesis-driven legacy renovation is described this way:

We're often asked to refresh, update or remediate legacy systems that we didn't originally build. Sometimes, technical issues need our attention such as improving performance or reliability. One common approach to address these issues is to create "technical stories" using the same format as a user story but with a technical outcome rather than a business one. But these technical tasks are often difficult to estimate, take longer than anticipated or don't end up having the desired outcome. An alternative, more successful method is to apply hypothesis-driven legacy renovation. Rather than working toward a standard backlog, the team takes ownership of a measurable technical outcome and collectively establishes a set of hypotheses about the problem. They then conduct iterative, time-boxed experiments to verify or disprove each hypothesis in order of priority. The resulting workflow is optimized for reducing uncertainty rather than following a plan toward a predictable outcome.

At work, our team is responsible for dozens of legacy applications, and we employ this technique! Here is an example.

We have a legacy API that has served the company well and largely been stable for years. It had recently been modified to accept events from Salesforce's Outbound Message Queue and save them to a database. Subsequently, we found that our Sales team would occasionally run mass updates in Salesforce, resulting in a torrent of calls to the API that would bog it down for hours. Each time this happened, we would sit down after remediation to discuss what went wrong and how we might improve the system. Most of our initial changes did not result in significant improvement, so we decided to turn the problem into a project for someone to dig into. The project owner did some deeper performance analysis, wrote up their hypotheses about the cause of the problem, and we regrouped as a team to discuss the proposals and decide what actions to take.

This approach to problem-solving and "legacy renovation" is a great team-building technique! It also gave the individual leading the project a chance to own something important to the team, drive the discussion and perform the experiments. Overall, I recommend this approach!

Other Interesting Trial Blips

Assessment Recommendation Highlights

Worth exploring with the goal of understanding how it will affect your enterprise.

The assessment recommendations that stood out to me were either things I was already aware of and excited about, or new things I'm actively interested in exploring.

Hold Recommendation Highlights

Proceed with caution.

There were two main hold recommendations that stood out to me that I want to highlight, plus a few additional ones that surprised me or that I felt were noteworthy.

Hold This Technique - Peer Review == Pull Request

This anti-pattern hits particularly close to home for me:

Some organizations seem to think peer review equals pull request; they've taken the view that the only way to achieve a peer review of code is via a pull request. We've seen this approach create significant team bottlenecks as well as significantly degrade the quality of feedback as overloaded reviewers begin to simply reject requests. Although the argument could be made that this is one way to demonstrate code review "regulatory compliance" one of our clients was told this was invalid since there was no evidence the code was actually read by anyone prior to acceptance. Pull requests are only one way to manage the code review workflow; we urge people to consider other approaches, especially where there is a need to coach and pass on feedback carefully.

I have been a long-time advocate of adapting code review to the work being done. Many times a pull request is fine, but software engineers should familiarize themselves with (and practice!) other forms of peer review when appropriate. Pair programming, mob programming, and RFCs are all options to consider, depending on the scope of review desired.

It is also critical for junior engineers to have active mentorship. It is not enough for them to rely solely on pull requests for feedback and learning. Everyone learns differently, and teams need to consider that the way they normally give each other feedback may not work for an engineer who is just getting started in their career.

So branch out and try a new form of peer review! You might be surprised to find that it is a much more rewarding experience than what you are used to.

Hold this Technique - GitOps

I was surprised to see GitOps on the hold list, but their explanation makes sense:

We suggest approaching GitOps with a degree of care, especially with regard to branching strategies. GitOps can be seen as a way of implementing infrastructure as code that involves continuously synchronizing and applying infrastructure code from Git into various environments. When used with a "branch per environment" infrastructure, changes are promoted from one environment to the next by merging code. While treating code as the single source of truth is clearly a sound approach, we're seeing branch per environment lead to environmental drift and eventually environment-specific configs as code merges become problematic or even stop entirely. This is very similar to what we've seen in the past with long-lived branches with GitFlow.

At work, we use Git to store our infrastructure as Ansible playbooks, and the different environments are represented as separate inventories that a single deployment playbook can be applied to. Our development workflow involves short-lived feature branches, but all environments are deployed from the main branch. This helps us avoid the drift problem described above and makes us think carefully about when and where our environments should differ from one another. Under this model, allowing environments to diverge is also the "painful path"... a good thing!
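
For illustration, the repository layout is roughly the following; the directory and file names here are simplified stand-ins, not our actual repo:

infrastructure/
  deploy.yml                # the single deployment playbook
  inventories/
    dev/hosts.yml           # one inventory per environment
    staging/hosts.yml
    production/hosts.yml
  roles/
    ...

Every environment is deployed from main with the same command, just pointed at a different inventory:

ansible-playbook -i inventories/production deploy.yml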

Other Noteworthy Hold Recommendations

Conclusion

The ThoughtWorks Technology Radar is a great resource for learning about the experiences of others in our industry. It is one of the many ways I keep up, so that I can take those learnings back to my company and incorporate them where it makes sense. If you find these highlights interesting too, you can read the entire Radar on the ThoughtWorks website and sign up there to be notified when new versions are available.

In the future I plan to write about other ways that I keep up with news, industry topics, and educational resources. If you'd be interested in that, stay tuned.