Major versions in the Node Cloud SDKs

Megan Potter
4 min read · Jul 26, 2023


A question came up recently regarding major version releases of the Node Pub/Sub library. To boil it down: what is our thought process on when and how to release a major, breaking change for a library? In this particular case, the question was centred around the end of Node 12 support.

Disclaimer up front: This blog post isn’t the official word on anything; please, always check cloud.google.com for official answers and policies.

Note that the link above is about the APIs themselves, not client libraries, but we try to follow the same policies for the client libraries, too. Also note that what I’m talking about here is “end of support” rather than “deprecation”. The former is about how many person-hours we spend on maintaining older code, and the latter is about discontinuing it entirely.

In any case, I wanted to try to demystify this topic a little bit, in friendly, non-corporate language.

First, I’d like to link to this article that my colleague wrote, and which I poked at a little bit, too:

There have been some deviations from this because of our much-longer-than-expected cycle to drop Node 12, but what that article says is essentially still our intent as developer relations and the Cloud SDK team. (It is also very similar to the policies at AWS, for example.) TL;DR: we want to support Node versions for 6 months past the date Node itself declares them unsupported.

I think that all of us know that, for customers and users, dropping support for things willy-nilly is extremely frustrating. Before I worked at Google, I worked at other companies just trying to make the best of still other companies’ APIs and SDKs. It’s part of our job as DevRel to feel your pain there and try to drive change if we can. It’s something I was excited about back when I interviewed.

Unfortunately, for Node specifically, we don’t get a huge amount of choice in the matter. V8, JavaScript, Node, and even TypeScript are fairly stable and backwards-compatible now, but the vast npm package ecosystem is not. Many prominent packages bump their required engine versions on the very day a Node version leaves maintenance. That often means there are no more security patches or fixes on the branch of the package that still supports the older Node version we want to keep supporting. Security vendors quickly come along behind and remind us of that fact. :)

And you can imagine that there is a transitive network effect — the packages that depend on those packages may be forced to upgrade the required engine, and so on. That’s just the uncontrollable fact of a huge open source ecosystem.
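
To make that concrete, here’s a rough sketch using the widely used "semver" npm package (the same range syntax npm itself understands). The package names, ranges, and versions below are made up purely for illustration; the point is just how one package’s engines bump ripples down to everything that depends on it.

```js
// A rough sketch (not an official tool) of how "engines" constraints stack up.
// The package names and ranges below are hypothetical.
const semver = require('semver'); // npm install semver

// Imagine your library still supports Node 12, but a transitive dependency
// bumped its supported engines to ">=14" the day Node 12 left maintenance.
const engines = {
  'your-library': '>=12',
  'direct-dependency': '>=12',
  'transitive-dependency': '>=14', // the bump that forces everyone's hand
};

for (const nodeVersion of ['12.22.12', '14.21.3']) {
  console.log(`On Node ${nodeVersion}:`);
  for (const [name, range] of Object.entries(engines)) {
    const ok = semver.satisfies(nodeVersion, range);
    console.log(`  ${name} (engines.node: "${range}") -> ${ok ? 'supported' : 'not supported'}`);
  }
}
```

Once a range in the tree stops matching, npm warns on install (or refuses outright if engine-strict is enabled), so staying on the old Node version generally means pinning an old, no-longer-patched release of that dependency.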

So here is what we’ve got, currently.

The three pillars (ugh) of this are:

  1. We intend to support Node maintenance versions 6 months past their “best by” date. As the linked article says, “This may vary depending on critical security patches”. This means that, even if we make a new major/breaking version that requires the next Node version, we intend to do our best to keep supporting the previous one for 6 months; but ideally, we also prefer not to make that new major until the time window has passed. But see above, re: npm — there may still be circumstances that prevent us from doing exactly what we need to do.
  2. We aim to minimize mandatory API changes. No one likes to rewrite their code for a new library version. There’s no one-size-fits-all answer to this problem, because sometimes changes simply do have to be made. But, for example, in the area I work in most (Pub/Sub), we try super hard to make all of the raw RPCs backwards compatible, and though we have some new SDK APIs planned, these are (a) meant to be pretty close to the existing ones, and (b) planned to include a compatibility layer that will let you work with minimal changes for a while. We want to reduce your friction and pain in using our services. The Pub/Sub SDK team is pretty on board with this idea, because it lets us evolve without breaking all of your apps, but we’ve been advocating it to other teams as well, and it’s an idea that has been well received.
  3. We want to minimize “major churn fatigue”. Semver lets us make new major versions whenever something might break, but there is an internal culture of trying to avoid too many of these in a small time window. (There’s a small sketch of what majors look like from the consumer’s side right after this list.)
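
To illustrate the semver side of that last point: the caret ranges npm writes into a consumer’s package.json by default never float across a major boundary, so a breaking release only reaches someone when they choose to take it. The version numbers here are, again, made up for illustration.

```js
// A small illustration of why majors are where the churn (and the fatigue) is:
// default caret ranges never pick up a new major automatically.
const semver = require('semver'); // npm install semver

const consumerRange = '^3.2.0'; // what a consumer's package.json might record

for (const release of ['3.2.1', '3.5.0', '4.0.0']) {
  console.log(`${release} satisfies "${consumerRange}"? ${semver.satisfies(release, consumerRange)}`);
}
// 3.2.1 and 3.5.0 arrive via a routine `npm update`; 4.0.0 (a breaking
// release, e.g. one that drops an old Node version) only arrives when the
// consumer edits their range and opts in.
```

Every one of those major boundaries is a manual step for every consumer, which is exactly why we try to batch breaking changes into as few majors as possible.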

So, to summarize: this is a hard problem, but we are all interested in trying to find good compromises going forward. Hopefully this post helps explain the mindset behind some of these releases and actions!

Written by Megan Potter

Software Engineer at Google, for Google Cloud Platform, in Ontario, Canada
