Tag: legacy

  • Legacy Modernization: Your Instincts Might Be Wrong

    I was originally going to write about legacy modernization in a single post, but it was long, and it made more sense to split it up. See the first part on the design patterns and anti-patterns.



    There are a lot of possible reasons why a legacy modernization project might end up using the Big Bang anti-pattern. To be totally clear, it’s not usually that the organization is doing things wrong: there are quite a few traps that can land a project in a difficult situation. We’ll explore a few of the more common ones, but for today we’re going to consider why the Big Bang approach seems to be everyone’s first instinct.

    Not Everyone Knows the Proven Pattern

    Not everyone who takes on a legacy modernization project is familiar with the established pattern. Your architect certainly should be, but in a lot of organizations (especially in government, where software development is not the primary mission) it is not reasonable to expect stakeholders, or even managers, to understand software architecture.

    The problem, for sure, is one of education. Ideally, you would start off the modernization project by making sure everyone was aware of the different types of approaches and the pros and cons of each.

    However, from the perspective of the consultant (which is how I usually find myself in these types of projects), it’s pretty rare to come in at the beginning. The usual situation is that a legacy modernization project has languished for months or even years before anyone admits that they need outside expertise. Put another way, consultants don’t get called in when the modernization project has gone well from the start.

    So, in the absence of experts in legacy modernization, the beginning of a project — when the high-level approach is chosen — tends to be dominated by people who aren’t aware of the patterns.

    The Big Bang is Simpler

    It’s harder to read code than to write it.

    — Joel Spolsky, “Things You Should Never Do, Part I”

    Let’s compare the two approaches at a surface level.

    The Strangler Fig necessarily requires working on more parts of the code, since it means modifying the legacy application, and as a result it carries a risk of disrupting a working system during implementation. Furthermore, modifying the legacy application means reading and changing code that has often been neglected for years.

    The Big Bang approach instead limits the changes only to the replacement system, and therefore it also limits the risk of disruption during implementation.

    The Big Bang is the simpler approach, and your instincts as a software developer ought to lean toward the less-complex solution.

    However, as we’ve discussed, the Big Bang trades immediate safety for a huge risk of catastrophic failure at change-over time. It’s one of the exceptions where taking the more complicated approach is worthwhile, but it’s almost undeniable that, on a surface level, the Big Bang is simpler.

    The Big Bang is How We Replace Most Other Things

    The Big Bang is also analogous to the way we replace most other things in life: if you want a new car, you don’t replace one part at a time like the Ship of Theseus; you procure a completely different car and dispose of the old one.

    The problem, though, is that custom software is not like a car. If we need an analogy, it’s probably more like a house: individual, decorated to your own tastes, and difficult to replace. When you don’t like your bathroom, it’s usually much better to remodel than to replace the entire house and go through the expensive, time-consuming process of buying, selling, packing up your life, and relocating.

    But no matter what analogy you subscribe to for software development or legacy modernization, replacing a legacy application is a huge task — bigger than writing a greenfield application. You have to copy business logic that has usually evolved over years. What’s more, the application itself is often tightly coupled, making it difficult to split off pieces for incremental modernization.
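
    As a purely hypothetical illustration of that coupling, here is a small Python sketch; the function, table, and column names are invented for the example. It shows the kind of routine in which business rules, database access, and notification side effects are interleaved in a single function, leaving no clean seam along which a shipping or notification service could be carved out without refactoring first.

        def send_email(address: str, body: str) -> None:
            """Stand-in for whatever notification integration the monolith uses."""
            print(f"email to {address}: {body}")

        def ship_order(db_conn, order_id: int) -> None:
            """A tightly coupled monolith routine (hypothetical example).

            Assumes a DB-API style connection (e.g. sqlite3). Business rules,
            persistence, and side effects are tangled together, which is exactly
            what makes it hard to lift any one concern out into its own service.
            """
            cursor = db_conn.execute(
                "SELECT status, weight_kg, customer_email FROM orders WHERE id = ?",
                (order_id,),
            )
            status, weight_kg, customer_email = cursor.fetchone()

            # Business rule buried right next to the SQL that feeds it.
            if status != "PAID":
                raise ValueError("Order must be paid before shipping")

            # Pricing rule depends directly on the shape of the database row.
            shipping_cost = 5.00 if weight_kg < 2 else 12.50

            db_conn.execute(
                "UPDATE orders SET status = 'SHIPPED', shipping_cost = ? WHERE id = ?",
                (shipping_cost, order_id),
            )
            db_conn.commit()

            # Notification side effect entangled with everything above.
            send_email(customer_email, f"Your order {order_id} has shipped")

    Extracting, say, the shipping-cost calculation into its own service means first untangling it from the SQL and the email call, and that untangling is the extra work an incremental approach asks you to take on up front.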

    We’ll come back to tight coupling in another post, but for now it matters because the obvious solution to such a problem is not to even try to split up the legacy application.


    Your instincts as a software engineer are an important part of crafting a solution. For small-scale changes, it can often be sufficient to choose the most obvious solution to the most obvious problem. That’s even a good maxim for large-scale problems, but with an important caveat: the larger the scope of your project, the more crucial it is to consider all of your alternatives, including the not-obvious or not-simple ones.


    Other posts in this series:

  • Legacy Modernization: Strangler Fig and Big Bang

    I started to write this as one post, but it was turning out to be a lot longer than I wanted to write in one sitting. So, I’m going to divide this up into three separate posts: the problem, the misaligned incentives, and thoughts on solutions. I’ll link the other parts here when they’re done.


    Modernization Projects

    I’ve been working in GovTech for a little while, and one of the things I find fascinating about the space is that a lot of the projects involve modernizing an existing system. I know not everyone agrees, but I enjoy learning a long-established system and the challenge of updating it to fit new requirements.

    I think a fair number of modernization projects begin for the wrong reasons. In my opinion, if the existing system meets the functional requirements (does it do its job?) and nonfunctional requirements (is it fast enough? maintainable?), then you don’t need a new system!

    In other words, putting your system in the cloud is not desirable for its own sake. Microservices architecture is a great way to meet some requirements at certain levels of resources, but it shouldn’t be the goal in itself!

    But if you’re being asked to consult on a legacy modernization project, it’s usually not at the start of the project, and it’s definitely not on a project that’s going well. Nobody asks for help when it’s smooth sailing. If you’re calling a consultant in on your project, then odds are you’re already in trouble.

    There are a lot of reasons why these types of projects might struggle, but in my opinion the main one is that they sound a lot easier than they really are. After all, one might think, “We created this software in the first place; it should be easy to fix it up.”

    I’ll probably write a lot more on other types of modernization pitfalls, but for now I want to focus on one specific issue: the high-level approach to modernizing a monolithic architecture and converting it into an equivalent microservices architecture.

    The Big Bang


    Your first instinct when trying to replace a legacy system might be to start completely fresh and kick off your application in an entirely new metaphorical universe with a Big Bang-style creation. Then you develop the new system for a while, until it does everything the old system does. And finally, you switch over from the old system to the new and never look back!

    Look, I get why the Big Bang approach is tempting. It’s less complex: you only have to develop on one code base, and there’s little risk of upsetting things during the pre-release phase. Your developers would undoubtedly rather write new code than read the old, so they’re also going to be pushing for this apparently-simple approach.

    Now, don’t get me wrong: I love the simple approach to things. I think that in most of software development, the less complex the solution, the easier it is to maintain and the more resilient it’s likely to be. If you want to do something the complicated way, there had better be a good reason.

    “If you do a big bang rewrite, the only thing you’re certain of is a big bang.” – Martin Fowler

    Many readers have probably noticed that the Big Bang is also a highly risky approach. For a project of any size, it’s going to be a lengthy process that delivers results only after months or years. It defers all user input until the end of that years-long development. And, from the user’s perspective, it replaces a large amount of business logic all at once; the chances of accidentally breaking business rules are very high. To top it off, troubleshooting defects is much harder when you make such sweeping changes at the same time.

    From the stakeholders’ perspective, too, it can be a massive risk to expect the collective will for change to persist for years at a time without a deliverable. People leave jobs, and new people have different priorities. The same people move around within the organization. And even the same people in the same positions will often change their minds over that much time, especially if progress isn’t visible.

    In short, the lack of iteration makes the Big Bang approach a huge risk.

    Strangler Fig


    The alternative is to use the Strangler Fig pattern: split your monolith into microservices iteratively in small pieces instead of all at once.

    The basic loop of the Strangler Fig approach is:

    1. Identify a piece of the monolith that can be separated from the rest.
    2. Create a microservice that addresses the business logic of that monolith piece.
    3. In the monolith, call the microservice instead of whatever you were doing before (a code sketch of this step follows the list).
    4. Remove the piece of the monolith that is no longer used.
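
    To make step 3 concrete, here is a minimal sketch in Python of what the call site inside the monolith might look like. Everything in it is hypothetical: the invoice example, the feature flag, the service URL, and the use of the requests library are assumptions for illustration, not a prescription. But it shows the shape of the change: the monolith routes the call to the new microservice while keeping the legacy code path as a fallback until the cutover is proven.

        import requests  # assumes the third-party 'requests' library is installed

        # Hypothetical feature flag; in practice this would likely come from
        # configuration, so the cutover can be reversed quickly if needed.
        USE_INVOICE_MICROSERVICE = True

        # Hypothetical internal URL for the newly extracted microservice.
        INVOICE_SERVICE_URL = "http://invoice-service.internal/invoices/total"

        def calculate_invoice_total_legacy(order: dict) -> float:
            """The original in-process logic, still living inside the monolith."""
            subtotal = sum(item["price"] * item["quantity"] for item in order["items"])
            return subtotal * (1 + order.get("tax_rate", 0.0))

        def calculate_invoice_total(order: dict) -> float:
            """Step 3 of the loop: the monolith's call site delegates outward.

            When the flag is on, the monolith calls the new service over HTTP;
            otherwise it falls back to the legacy implementation. Once the new
            service has proven itself, the legacy branch is deleted (step 4).
            """
            if USE_INVOICE_MICROSERVICE:
                response = requests.post(INVOICE_SERVICE_URL, json=order, timeout=5)
                response.raise_for_status()
                return float(response.json()["total"])
            return calculate_invoice_total_legacy(order)

    The same shape works whether the boundary is an HTTP request, a message on a queue, or a call into a shared library; what matters is that only one small slice of behavior changes per release, and the old path remains available until the new one has earned trust.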

    It’s easy to see that this approach is far more iterative and gets much more frequent feedback from users. Each release carries a risk of business disruption, but because you work on a much smaller piece of the application at a time, resolving any disruption should be far easier than troubleshooting the entire application at once.

    Just as importantly, you get tangible results on a more frequent basis, so stakeholders have visible progress to share and celebrate! You can announce your progress in organization-wide newsletters, demo for your stakeholders, and report the percentage of the legacy application that you’ve retired.

    In short, you’re trading some complexity in the development work — you have to refactor the monolith — for all of the benefits of iterative, Agile development. Personally, I think that for a project of any appreciable size, that decision is a no-brainer.

    And yet, a lot of organizations still make the easy-seeming choice at the outset of their project. What’s more, they often have difficulty adjusting their approach once their modernization efforts get bogged down. Next time we’ll look closely at the incentives that might push an organization in the wrong direction.


    If you find this topic interesting, I highly recommend Kill It With Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones) by Marianne Bellotti, especially if you find yourself in GovTech.

