Monolith migration: rewrite or refactor?

When talking about big changes in software, there are two approaches: gradual refactoring or a big-bang rewrite. Which one is suitable when migrating to microservices? As with any sensible answer: it depends.

If your application already has poor performance or design issues (such as being very tightly coupled to a database), or if it is written on a very old platform and language, gradual rewrites may not even be possible. Imagine a system written in Fortran: you'd have to find developers fluent in Fortran just to understand and refactor the code base.

On the other hand, if you are on a somewhat modern technology stack and have a decent architecture, a gradual migration to a microservices design is quite possible.

Kazanavičius & Mažeika (2019) state that the main challenge when gradually migrating to microservices is “the extraction of microservices from existing legacy monolithic code bases”.

References

Kazanavičius, J., & Mažeika, D. (2019). Migrating legacy software to microservices architecture. 2019 Open Conference of Electrical, Electronic and Information Sciences (eStream).

Monolith migration to microservices: tools

You have finally decided that it is time to move to a microservices design and reap all the benefits. Let's go through the code and start refactoring… but wait. Before you do that, let's look at the tools you need in place beforehand.

The requirements can be summarized as: infrastructure, environments, and monitoring and logging. Let's review each quickly.

Infrastructure needs

As stated before, running a microservices-based application is not exactly like running a monolith, so you will work quite differently when developing one. The first thing to put in place, if your organization does not have one already, is a Continuous Integration/Delivery pipeline. As this pipeline will be critical to your organization when releasing updates, make sure you have a group responsible for maintaining it.

Environments

Even when running monoliths, many organizations choose to run their applications on virtual machines. That is a good start if you are currently on bare metal. As a further step, you can use Docker as a virtualization technology and thereby pave the way to running your application in the cloud. Once you decompose your monolith, you can decide which parts should run on virtual machines, which services should run in a Docker container, and which services can run directly in the cloud.

Monitoring and Logging

With microservices you have to monitor tens to hundreds of services, machines, virtual machines, Docker containers, databases, etc., and this is not an easy task. The monitoring tool you choose should verify not only that services are up and running, but that they are actually doing work. Logging is also different, as each piece of the system will be running on various servers (not to mention scaling, where more than one instance runs for the same task). A better approach is structured logging backed by a structured log server that allows searching. Your logs should also carry correlation and context, so that you can trace a workflow end to end regardless of where it runs.
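As a minimal sketch of the structured-logging approach described above, here is a Python example using the standard `logging` module; the service names, fields and correlation-ID scheme are illustrative, not a prescription:

```python
import json
import logging
import uuid

# Each log record is emitted as one JSON object carrying a correlation ID,
# so a structured log server can index it and a workflow can be traced
# across services that share the same ID.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "service": getattr(record, "service", "unknown"),
            "correlation_id": getattr(record, "correlation_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("workflow")
log.addHandler(handler)
log.setLevel(logging.INFO)

# The same correlation ID travels with every call in the workflow, so
# entries written by different services can be stitched back together.
cid = str(uuid.uuid4())
log.info("order received", extra={"service": "orders", "correlation_id": cid})
log.info("payment charged", extra={"service": "payments", "correlation_id": cid})
```

With every entry being a searchable JSON object, "show me everything that happened to request X" becomes a single query on the correlation ID.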

Migration of Monolith to Microservices

As a result of microservices becoming so popular, the monolithic architecture got a bad rap, but there are lots of software systems that are perfectly fine to run as a monolith. Think about it: you only have one application to release, deploy, maintain and monitor. Because the components of the system are tightly coupled, testing is a lot easier and problems are easier to find. The question is: which architectural style is suitable for what purpose?

According to Kazanavičius & Mažeika (2019), a monolithic architecture is fine when the application is simple and lightweight (so it doesn't require scaling), while microservices are suitable when the application is complex and evolving (p. 2).

After some years of developing said monolith, things become too hard to manage, partly due to the tight coupling and its side effects (e.g. when you fix a bug, ten more appear). But when should you migrate such monoliths to a more microservices-based design?

Kazanavičius & Mažeika also summarize that you should migrate to microservices when:

  • The monolith is too complex to maintain
  • You benefit from decentralization and modularization of the monolith
  • You see importance in the long run (as there will be a lot of growing pains in the short term)

References

Kazanavičius, J., & Mažeika, D. (2019). Migrating legacy software to microservices architecture. 2019 Open Conference of Electrical, Electronic and Information Sciences (eStream).

Backwards compatibility in Microservices

As stated in the previous posts, a side effect of designing a system using microservices is that you end up with tens to hundreds of services that you need to keep operational. When you deploy a new version of a service that contains a bug fix, or when you improve another service by adding a feature, the system as a whole still needs to remain functional. But how do you ensure that?

The problem is that if there is a fault in one of the services, other services communicating with it may fail, and this can have a domino effect that brings down the whole system. Furthermore, what if, due to that bug fix or enhancement, you need to change the interface of that microservice? This would be a breaking change, so any other service relying on it would no longer be able to communicate with it and would have to update.

Is it possible to spend the time and resources to run the new release (of the whole system) in a pre-production environment? To simulate things running in production, you would now need two sets of production environments, with the running and maintenance costs that entails.

Kargar & Hanifizade (2018) advise that "the microservices must have backward compatibility, so that each version of microservices must also support the previous version inputs", and this can be enforced through regression and integration testing.
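As a sketch of what "supporting previous version inputs" can look like in practice, here is a hypothetical quote endpoint whose v2 request splits an old single-string field into structured fields, while the handler still normalizes v1 payloads; all names and the pricing rule are made up for illustration:

```python
# Hypothetical "shipping quote" service. Version 2 of the request replaces
# the single "address" string with structured fields, but the handler keeps
# accepting the v1 shape so existing callers do not break.

def normalize_request(payload: dict) -> dict:
    if "address" in payload:                      # old v1 shape
        street, city = payload["address"].split(";")
        return {"street": street.strip(), "city": city.strip(),
                "weight_kg": payload["weight_kg"]}
    return payload                                # already v2

def quote(payload: dict) -> float:
    req = normalize_request(payload)
    # Toy pricing rule, for illustration only.
    return 5.0 + 0.5 * req["weight_kg"]

# Regression check in the spirit of Kargar & Hanifizade: a v1 caller and a
# v2 caller must keep getting the same answer after every release.
v1 = {"address": "1 Main St; Springfield", "weight_kg": 4}
v2 = {"street": "1 Main St", "city": "Springfield", "weight_kg": 4}
assert quote(v1) == quote(v2) == 7.0
```

Keeping that last assertion in the service's regression suite is exactly the kind of automated check that catches an accidental breaking change before it reaches production.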

References
Kargar, M. J., & Hanifizade, A. (2018). Automation of regression test in microservice architecture. 2018 4th International Conference on Web Research (ICWR).

Microservices and deployment best practices

As stated in the previous post, a side effect of designing a system using microservices is that you end up with tens to hundreds of services that you need to keep running. One immediate challenge, especially for critical systems with a zero-downtime policy, is: how do you ensure these microservices stay operational?

Fortunately, the software development community has already established patterns such as Continuous Deployment, Continuous Delivery, canary releases and blue/green deployments (Kargar & Hanifizade, 2018). Yet, there are two concerns:

  • Does your organization already have these practices in place? There is a learning curve, and the organizational acceptance these patterns require takes time to build. If they are not already well established, it is suggested that you start there first.
  • Even when you already have these practices in place at the organization level, how do you ensure the release has the required quality before you deploy it?
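To make the canary-release pattern mentioned above concrete, here is a toy Python sketch that routes a small fraction of traffic to a new release while the rest stays on the stable one; the handler names and traffic fraction are illustrative, and a real setup would do this at the load balancer rather than in application code:

```python
import random

# Canary routing sketch: a small, configurable fraction of requests hits
# the new release. If the canary's error rate stays acceptable, the
# fraction is raised until the new release takes all traffic.

def handler_stable(request):
    return "ok-stable"

def handler_canary(request):
    return "ok-canary"

def route(request, canary_fraction=0.05, rng=random.random):
    # rng is injectable so the routing decision can be tested
    # deterministically.
    if rng() < canary_fraction:
        return handler_canary(request)
    return handler_stable(request)

# With the fraction at 0, everything stays on the stable release.
assert route({}, canary_fraction=0.0) == "ok-stable"
# Forcing the draw below the threshold sends the request to the canary.
assert route({}, canary_fraction=0.05, rng=lambda: 0.01) == "ok-canary"
```

The appeal of the pattern is that a bad release only ever sees a sliver of real traffic, which partially answers the quality question above: production itself becomes the final, bounded-risk test.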

There are other challenges related to how microservices are deployed and are run, which is the subject of the next post.

References
Kargar, M. J., & Hanifizade, A. (2018). Automation of regression test in microservice architecture. 2018 4th International Conference on Web Research (ICWR).

Monitoring in Microservices

One thing that is fundamentally different when comparing a legacy, traditional architecture to a distributed system such as microservices or SOA is that, with more things running in parallel, it is simply harder to make sense of what is going on. In fact, writing multi-threaded applications became more common with the advent of multi-core CPUs in the last decade, as a way to make better use of system resources and write more performant software. While the performance gains are great, running multi-threaded software comes with a caveat: it is harder to monitor, trace, debug and make sense of things.

There is yet another problem with monitoring when it comes to microservices. Traditionally, a monolith has a fixed set of executables and services that need to be monitored. Even in terms of scaling, it only scales vertically, meaning it runs on a bigger machine, and that doesn't change the way things are monitored: you keep an eye on one machine that has everything (hence the word monolith). In microservices, however, decomposing a software system may result in hundreds of microservices (Cinque et al., 2019), and each could be deployed to a separate box and scaled horizontally (i.e. run on multiple machines). Just by doing this, you now have tens if not hundreds of machines (whether virtual or physical) to keep an eye on.
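To illustrate the scale of the problem, here is a toy sweep over a fleet of instances that flags services that are down, or alive but not doing any work; a real monitor would poll HTTP health endpoints on each box, while the fleet here is an in-memory stand-in with made-up names:

```python
# Toy monitoring sweep: with hundreds of horizontally scaled instances,
# even the basic question "is it up, and is it actually doing work?"
# needs automation. Each instance reports liveness plus a work counter.

def sweep(instances):
    """Return the names of instances that are down or alive-but-idle."""
    unhealthy = []
    for name, status in instances.items():
        if not status["alive"] or status["processed_last_minute"] == 0:
            unhealthy.append(name)
    return sorted(unhealthy)

fleet = {
    "orders-1":   {"alive": True,  "processed_last_minute": 120},
    "orders-2":   {"alive": True,  "processed_last_minute": 0},   # up but idle
    "payments-1": {"alive": False, "processed_last_minute": 0},   # down
}
assert sweep(fleet) == ["orders-2", "payments-1"]
```

Note that "orders-2" is flagged even though it responds as alive: this is the distinction made earlier between a service merely running and a service actually doing work.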

References

Cinque, M., Corte, R. D., & Pecchia, A. (2019). Advancing monitoring in microservices systems. 2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW).