Tag: architecture

  • Legacy Modernization: Do We Even Need to Do This?

    If you’re very, very lucky, you’re just at the beginning of your legacy modernization project: no development work has been done, and nothing has even been designed yet. Your first step is to carefully consider what you want and how to get there. If that’s where you are, well done! You’re already off to a better start than a lot of these projects.

    More often, though, you’re partway through your modernization effort and it’s struggling. As we’ve discussed, the main risks in such a project are losing momentum and running out of money, but plenty of other things can go wrong, some of which we’ve covered in other posts. Whatever the cause, something isn’t going well, and you’ve got to figure out what to do.

    In either scenario, your first decision has to be whether or not to modernize. Odd though it may sound, I don’t think the answer is always “yes.”

    I firmly believe that there’s nothing inherently wrong with using old software. Old software has already been adapted to meet user needs; if it’s old and still in use, that’s because it works. Neither upgrading to newer technologies nor moving to a public cloud is worthwhile as a goal in its own right. Legacy modernization is extremely expensive and not to be undertaken lightly.

    Is It Broken?

    The first question we should consider here is whether our legacy system is meeting all of our business needs. Odds are it’s not: something isn’t right about it, or (one hopes) we wouldn’t be considering modernization at all.

    Maybe the legacy system is too slow, has hit the limits of vertical scaling, and can’t be scaled horizontally. Maybe its on-premises infrastructure is being retired. Maybe everyone is tired of using a 1980s green-screen interface. Or maybe you have new compliance requirements that the system doesn’t satisfy.

    Whatever is going on, you should identify the particular requirements that are unmet and their importance to your mission. Write them down, and be specific. We’ll need that list in a moment.

    Can It Be Fixed?

    The second question is just as important: can we modify the existing system to meet our requirements?

    • Can we add an API and modern web interface on top of the legacy system? (A rough sketch of this idea follows the list.)
    • Can we split the data layer from the application/business layers, so that each can be scaled horizontally?
    • Can we lift-and-shift into a public cloud without rewriting the whole application? (NB: a lift-and-shift is never as easy as it first appears.)
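
    To make that first option concrete, here is a minimal sketch of a thin, read-only API facade in front of a legacy system. Everything in it is an assumption for illustration: Flask is just one convenient framework, and legacy_lookup_account stands in for however your legacy system actually exposes data (a read-only database view, a message queue, terminal emulation, or something else entirely).

        # Hypothetical sketch: a thin API facade in front of a legacy system.
        # Flask is an example choice; route and function names are made up.
        from typing import Optional

        from flask import Flask, abort, jsonify

        app = Flask(__name__)


        def legacy_lookup_account(account_id: str) -> Optional[dict]:
            """Stand-in for a call into the legacy system.

            In a real facade this might run a read-only query against the
            legacy database or drive the existing green-screen transaction;
            it's stubbed here so the sketch runs on its own.
            """
            fake_records = {"12345": {"account_id": "12345", "status": "ACTIVE"}}
            return fake_records.get(account_id)


        @app.route("/api/accounts/<account_id>")
        def get_account(account_id: str):
            record = legacy_lookup_account(account_id)
            if record is None:
                abort(404)  # unknown account
            return jsonify(record)


        if __name__ == "__main__":
            # A modern web front end (or other services) can now call this
            # API without knowing anything about the system behind it.
            app.run(port=8080)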

    The answer here is often that the existing application can’t be modified.

    • If it’s in COBOL, I understand it’s really hard to find COBOL developers, so the cost of simply maintaining the system may already be out of control.
    • If the infrastructure is going away and that’s out of your control, you’ll definitely need to move somewhere else, and that may well necessitate modernization.
    • Hitting the limits of scaling is very tough to remedy in place, so it’s one of the scenarios where I think modernization of some sort is frequently justified.

    Now, alongside your list of unmet requirements, write down what makes it impractical to upgrade your existing system.

    What If We’re Already Modernizing?

    Even if your modernization project is already in progress, you still need to do this analysis; in fact, it may be especially important then.

    I’m sure most of my readers understand the sunk cost fallacy (continuing a course of action just because of what you’ve already poured into it), but even so the warning bears repeating.

    A partially complete modernization project represents a large investment of money and effort. You’ve probably spent months or years of developer time, held a lot of stakeholder conversations, and worked hard to build the will to modernize in the first place.

    However, no matter what you’ve already invested, you should be weighing the cost and benefit of completing the project against abandoning the remainder and upgrading the existing software instead.

    This can be a hard analysis to complete: project timelines are notoriously imprecise, and that is doubly true in a project that involves, for example, understanding and replicating obscure or poorly understood functionality of a decades-old monolith.

    However, this analysis is critical to project success: the difference in cost between upgrading in place and completing a rewrite can be absolutely enormous. Even if you’re mostly sure you need to replace the system, a small chance of saving all that effort is worth an hour or two of analysis.
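
    If it helps, you can make the comparison concrete with a bit of back-of-the-envelope arithmetic. What follows is only a sketch, not a real costing model: the function and every figure in it are hypothetical placeholders for your own estimates, and it deliberately ignores discounting, risk of failure, and opportunity cost.

        # Back-of-the-envelope comparison: "finish the rewrite" vs. "upgrade in place".
        # All figures are hypothetical placeholders; substitute your own estimates.

        def total_cost(remaining_build_cost: float, annual_run_cost: float, years: int) -> float:
            """Remaining cost to deliver an option plus its running cost over a planning horizon."""
            return remaining_build_cost + annual_run_cost * years


        HORIZON_YEARS = 5

        finish_rewrite = total_cost(remaining_build_cost=2_000_000,
                                    annual_run_cost=300_000,
                                    years=HORIZON_YEARS)

        upgrade_in_place = total_cost(remaining_build_cost=400_000,
                                      annual_run_cost=500_000,
                                      years=HORIZON_YEARS)

        print(f"Finish the rewrite: {finish_rewrite:>12,.0f}")
        print(f"Upgrade in place:   {upgrade_in_place:>12,.0f}")
        # With these made-up numbers the upgrade wins (2.9M vs. 3.5M), but the
        # point is the exercise, not the answer.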


    Next time, we’ll cover choosing the architecture in a modernization project. We’ve mostly talked about the Strangler Fig versus Big Bang approaches, and that implies moving to a cloud-based microservices architecture. However, that’s far from the only reasonable approach, and it’s critically important that you move to the architecture that makes sense for your requirements and available developer resources.



  • Legacy Modernization: Strangler Fig and Big Bang

    I started to write this as one post, but it was turning out to be a lot longer than I wanted to write in one sitting. So I’m going to divide it into three separate posts: the problem, the misaligned incentives, and thoughts on solutions. I’ll link the other parts here when they’re done.


    Modernization Projects

    I’ve been working in GovTech for a little while, and one of the things I find fascinating about the space is that a lot of the projects involve modernizing an existing system. I know not everyone agrees, but I enjoy learning a long-established system and the challenge of updating it to fit new requirements.

    I think a fair number of modernization projects begin for the wrong reasons. If the existing system meets the functional requirements (does it do its job?) and the nonfunctional requirements (is it fast enough? maintainable?), then you don’t need a new system!

    In other words, putting your system in the cloud is not desirable for its own sake. A microservices architecture is a great way to meet certain requirements given certain levels of scale and resources, but it shouldn’t be the goal in itself!

    But if you’re a consultant being asked to help with a legacy modernization project, it’s usually not at the start of the project, and it’s definitely not on a project that’s going well. Nobody asks for help when it’s smooth sailing. If you’re calling a consultant in on your project, then odds are you’re already in trouble.

    There are a lot of reasons why these types of projects might struggle, but in my opinion the main one is that they sound a lot easier than they really are. After all, one might think, “We created this software in the first place; it should be easy to fix it up.”

    I’ll probably write a lot more on other types of modernization pitfalls, but for now I want to focus on one specific issue: the high-level approach to modernizing a monolithic architecture and converting it into an equivalent microservices architecture.

    The Big Bang


    Your first instinct when trying to replace a legacy system might be to start completely from scratch, kicking off your application in an entirely new metaphorical universe with a Big Bang-style act of creation. Then you develop the new system for a while until it does everything the old system does. And finally, you switch over from the old system to the new and never look back!

    Look, I get why the Big Bang approach is tempting. It’s less complex: you only have to develop on one code base, and there’s little risk of disrupting the existing system before release. Your developers would undoubtedly rather write new code than read the old, so they’re also going to be pushing for this apparently simple approach.

    Now, don’t get me wrong: I love the simple approach to things. I think that in most of software development, the less complex the solution, the easier it is to maintain and the more resilient it’s likely to be. If you want to do something the complicated way, there had better be a good reason.

    “If you do a big bang rewrite, the only thing you are guaranteed of is the big bang.” – Martin Fowler

    Many readers probably noticed that the Big Bang is also a highly risky approach. For a project of any size, it’s going to be a lengthy process that delivers results only after months or years. It defers all user input until the end of that years-long development. From the user’s perspective, it replaces a large amount of business logic all at once, so the chances of accidentally breaking business rules are very high. To top it off, troubleshooting defects is much harder when you make such sweeping changes in one go.

    From the stakeholders’ perspective, too, it’s a massive risk to expect the collective will for change to persist for years at a time without a deliverable. People leave jobs, and new people have different priorities. People move around within the organization. And even the same people in the same positions will often change their minds over that much time, especially if progress isn’t visible.

    In short, the lack of iteration makes the Big Bang approach a huge risk.

    Strangler Fig


    The alternative is to use the Strangler Fig pattern: split your monolith into microservices iteratively in small pieces instead of all at once.

    The basic loop of the Strangler Fig approach is:

    1. Identify a piece of the monolith that can be separated from the rest.
    2. Create a microservice that implements the business logic of that piece.
    3. In the monolith, call the new microservice instead of the old in-process code (a rough sketch of this step follows the list).
    4. Remove the piece of the monolith that is no longer used.
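
    Here is a minimal sketch of what step 3 might look like, assuming for illustration a monolith written in Python with the requests library available. The function names, the /invoice-totals endpoint, and the environment variables are all hypothetical; the point is the shape. Callers inside the monolith keep using the same function, but behind it the work is delegated to the new service, with the old code path kept as a fallback until you trust the replacement (at which point step 4 removes it).

        # Hypothetical sketch of step 3: the monolith delegates one carved-out
        # piece of business logic to the new microservice behind an unchanged
        # function signature.
        import os

        import requests  # assumes the requests library is available in the monolith

        INVOICE_SERVICE_URL = os.environ.get("INVOICE_SERVICE_URL", "http://invoices.internal:8080")
        USE_INVOICE_SERVICE = os.environ.get("USE_INVOICE_SERVICE", "false").lower() == "true"


        def _legacy_invoice_total(order: dict) -> float:
            # The original monolith logic; once the new service is trusted,
            # this is what step 4 deletes.
            return sum(line["qty"] * line["unit_price"] for line in order["lines"])


        def invoice_total(order: dict) -> float:
            """Called throughout the monolith; callers never know which implementation answers."""
            if USE_INVOICE_SERVICE:
                resp = requests.post(f"{INVOICE_SERVICE_URL}/invoice-totals", json=order, timeout=5)
                resp.raise_for_status()
                return resp.json()["total"]
            return _legacy_invoice_total(order)

    The feature flag is a design choice worth copying even if nothing else here fits your stack: it lets you cut over (and cut back) one piece at a time without a redeploy of the whole monolith.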

    It’s easy to see that this approach is far more iterative and gathers feedback from users much more frequently. Each release still carries some risk of business disruption, but because each one touches a much smaller piece of the application, resolving any disruption is far easier than troubleshooting the entire application at once.

    Just as importantly, you get tangible results on a more frequent basis, so stakeholders have visible progress to share and celebrate! You can announce your progress in organization-wide newsletters, demo for your stakeholders, and report the percentage of the legacy application that you’ve retired.

    In short, you’re trading some complexity in the development work — you have to refactor the monolith — for all of the benefits of iterative, Agile development. Personally, I think that for a project of any appreciable size, that decision is a no-brainer.

    And yet, a lot of organizations still make the easy-seeming choice at the outset of their project. What’s more, they often have difficulty adjusting their approach once their modernization efforts get bogged down. Next time we’ll look closely at the incentives that might push an organization in the wrong direction.


    If you find this topic interesting, I highly recommend Kill It With Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones) by Marianne Bellotti, especially if you find yourself in GovTech.

