DevOps

Recently, I published an article describing a few tricks for migrating your project from a multi-repository setup to a mono-repository. Some of you really appreciated the topic.

However, some of you wanted more details about the build process itself, so I have written about it in depth.

Read Full Article

We had a lot of repositories for different services: 20K+ commits across 15+ repositories, each with its own Dockerfile, tests, lint rules, and so on.

It turns out that this is hard to maintain, especially when you have dependencies across repositories. For example, the api repository uses a package from another repository, say commons. Whenever you publish an update to commons, you have to go through all the dependent repositories and update commons in each of them.
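To make the coupling concrete, here is an illustrative sketch (the package name and version are made up): the api repository pins a specific version of commons in its package.json, so every commons release means bumping that pin by hand in every dependent repository.

{
  "name": "api",
  "dependencies": {
    "commons": "1.2.2"
  }
}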

Now, just imagine how long it takes to clone each repository, apply the update there, and push the changes back to the remote. In our case, these kinds of updates easily consumed half a day of work just to propagate a change to the other repositories. Therefore, we allocated resources to change that.
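In practice, the routine looked roughly like the following sketch. The org URL, repository names, and version number here are made up; the point is the amount of repetitive work per release of commons.

// Hypothetical helper: bump commons in every dependent repository.
const { execSync } = require('child_process');

const dependents = ['api', 'billing', 'worker']; // repos that depend on commons
const version = '1.2.3'; // the freshly published commons version

for (const repo of dependents) {
  execSync(`git clone git@github.com:acme/${repo}.git`, { stdio: 'inherit' });
  execSync(`npm install commons@${version} --save`, { cwd: repo, stdio: 'inherit' });
  execSync(`git commit -am "Bump commons to ${version}"`, { cwd: repo, stdio: 'inherit' });
  execSync('git push', { cwd: repo, stdio: 'inherit' });
}

And that is the happy path: it ignores failing tests, code review, and merge conflicts in each repository.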

But before I started the migration to a mono-repository, I spent some time investigating the pros and cons of the alternatives.

Read Full Article

It turns out that Node.js cannot receive signals and handle them appropriately if it runs as PID 1. By signals, I mean kernel signals like SIGTERM, SIGINT, and so on.

The following code won't work at all if you run Node.js as PID 1:

process.on('SIGTERM', function onSigterm() {
  // clean-up logic would go here, but it is never reached when running as PID 1
  process.exit(0);
});

As a result, you end up with a zombie process that eventually gets terminated forcefully via the SIGKILL signal, meaning that your clean-up code is never called at all.
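If you want to see the difference for yourself, here is a small experiment (a sketch, assuming a Docker setup). Run the script below directly on your machine and kill -TERM <pid> triggers the handler; run it as a container entrypoint, where it becomes PID 1, and per the behavior described above it lingers until Docker follows up with SIGKILL.

process.on('SIGTERM', () => {
  console.log('SIGTERM handled, exiting cleanly');
  process.exit(0);
});

console.log(`started with PID ${process.pid}`);
setInterval(() => console.log('still alive...'), 5000); // keep the process running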

"So what?" you might say. Let me describe a real case.

Read Full Article