Acquisitions. Data center consolidation. Technology refresh. Cloud or mobile initiatives. These are among the reasons companies undergo widespread upgrades to their IT infrastructure. Their motivation usually includes the promise of greater cost savings and faster, more agile operations to meet the needs of the business.
Yet, despite their efforts toward IT transformation, many IT teams fail to realize the intended ROI of such a move. That's because they neglect one key element in the mix: solving the legacy platform problem.
What is a Legacy Platform and Why is It a Problem for Today's Organization?
Legacy platforms might be systems and applications developed only a few years ago; in some industries, they may date back 20 years or more. Some are orphaned systems whose original owners and operating details are now unclear. Despite their age, however, many have been customized over time and still offer useful functions to the organization.
What makes such legacy systems a problem is their continued reliance on old, outdated infrastructure that has outlived its original vendor support contract. Such systems are often based on old database software, an old operating system, or even an outdated product or platform that is no longer available or no longer supported by the vendor. Some may even come from vendors that have since closed shop.
Unfortunately, the legacy platform's reliance on outdated components can become a major drag on the overall performance and intended ROI of any major IT infrastructure upgrade. This can lead to added costs and risk, such as the platform now being:
- Too hard to support. An older platform may no longer have ready expertise available—either in-house or externally—for troubleshooting or fast resolution when operational issues arise. If it fails, there may be no one left who knows why it failed or how to get it back up and running again.
- Too costly to support. If outside expertise is still needed to manage the system or resolve system issues, the organization can easily end up paying a premium to gain the assistance of a rare expert. Similarly, products can rack up significant extra costs for extended licenses and premium vendor support contracts after they've passed their original support timeline.
- Too risky to support. Today's organizations have increasingly moved to a standard, data center-wide approach to security and compliance. Many legacy platforms, however, remain siloed and removed from the latest security patches and processes. This creates added risk and organizational vulnerability. It's not uncommon to see an IT operations manager tell a legacy application owner that the organization's security requirements can no longer be met because the legacy application or platform simply doesn't support them.
- Neglected or forgotten until it's time to calculate project ROI. A once-useful legacy application may now run in the background, largely unnoticed until something goes wrong. Such applications may traverse multiple servers, a factor that makes them easy to miss in migration planning for large upgrade projects. Unfortunately, by not being considered in the original ROI calculations for a new platform or migration, such legacy systems can create their own long tail of technical debt: The organization still benefits from the new capabilities of the upgrade, but it soon finds itself spending a growing portion of the budget to support such legacy systems.
If It Ain't Broke, Why Fix It?
Despite the potential risks of leaving legacy applications as they are, organizations may still wonder, "Why fix the problem now?" Some hesitate to spend the money reengineering a legacy system when most of its bugs have already been worked out and it still delivers significant value. So, why bite the bullet now and take the steps needed to integrate the application or system into the new infrastructure?
What often drives the decision are the incentives to make the legacy application more cohesive and more workable with current and upcoming systems in the new infrastructure. The prospects of greater overall ROI, lower support costs and more compliant, secure operations are also tempting for the organization's bottom line.
How Legacy Apps and Platforms Get Left Behind
Prior to full-scale infrastructure upgrades, a group of very smart IT folks is often involved in detailed migration planning. Given that focus, how can so many legacy systems and platforms remain overlooked for update or migration? Sometimes it's a conscious decision to 'kick the can down the road' a little longer rather than incur the cost to migrate or retool an older system. But, often, other reasons are at play:
- Legacy system migration isn't cookie-cutter or simple. Whenever an organization migrates a system from one platform to another, unforeseen issues can arise. Migrating older systems on legacy platforms can create still more challenges. For one thing, such systems don't always lend themselves to common server-by-server migration methods, or to easily migrating their workloads to the same target as the rest of the organization's applications. They may not fit into a blanket approach to migration that's dedicated to a single target. That's because many legacy applications are more complex, often spanning multiple servers and even more than one platform. Their workloads need careful evaluation first to ascertain the best environment or target platform for them.
- Critical resources may not be available to aid in the migration. The legacy platform may require development team resources to aid in the migration process. Given their potentially conflicting priorities and projects, this factor alone can stall migration to a new IT infrastructure. As mentioned earlier, there may also be skills gaps, with no expertise in the legacy platform readily available. Such expertise can be important when it comes to migration details or potential gotchas to avoid. Similarly, missing application owners may no longer be available to verify or acceptance-test a potential migration.
- Company culture may resist migration of legacy systems. An organization's past experience with large-scale migrations may color its willingness to complicate what might be perceived as an already complex migration project. If past migrations caused what was perceived as undue business disruption, the business can become resistant to further migration plans. A company might prefer the relative stability of its current, legacy platform to the risks of potential disruption or upheaval from another migration.
No App Left Behind: Successful Migration Tips for Legacy Platforms
Despite these issues, organizations wanting to fully embrace an infrastructure upgrade project know that they are on borrowed time when it comes to their leftover legacy applications and systems. If such systems continue to perform a critical function for the company, the organization will soon be faced with three choices:
- Migrate the old application to the new platform.
- Start over: Rewrite the old application and retire the legacy system. This typically involves capturing old functionality within a completely new system.
- Shoehorn functionality from the old system into a product you already subscribe to or license.
Typically, organizations choose Option 1—to migrate the legacy system to a new platform—as their most economical choice. This is because old systems usually have unique functionality you can't easily buy off-the-shelf or shoehorn into an existing product. Similarly, starting over from scratch is one of the more expensive options of the three.
Once migration is the preferred method, the question becomes how best to proceed. Many legacy applications can simply be rehosted, moved from one platform to another in a lift-and-shift fashion. This is fairly straightforward and is often handled with a normal server/workload approach to migration.
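One way to apply this distinction in practice is to triage the application inventory up front, separating simple lift-and-shift candidates from applications that need deeper workload analysis. The sketch below is purely illustrative: the field names (`server_count`, `owner_known`, `os_supported`) and the triage criteria are assumptions for this example, not a formal methodology.

```python
# Illustrative triage sketch: separate lift-and-shift candidates from
# applications that need a tailored, workload-level migration plan.
# Field names and criteria here are assumptions, not an established standard.

def triage(app):
    """Return ('rehost', []) for simple lift-and-shift candidates, or
    ('analyze-workload', reasons) for apps needing deeper analysis."""
    reasons = []
    if app.get("server_count", 1) > 1:
        reasons.append("spans multiple servers")
    if not app.get("owner_known", True):
        reasons.append("no known application owner")
    if not app.get("os_supported", True):
        reasons.append("runs on an unsupported OS")
    return ("analyze-workload", reasons) if reasons else ("rehost", reasons)

# Hypothetical inventory entries
apps = [
    {"name": "timecard", "server_count": 1,
     "owner_known": True, "os_supported": True},
    {"name": "order-history", "server_count": 3,
     "owner_known": False, "os_supported": False},
]

for a in apps:
    decision, why = triage(a)
    print(a["name"], "->", decision, why)
```

Running a pass like this early keeps the straightforward rehosts moving on the normal migration track while flagging the complex cases for the careful evaluation described above.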
The challenges arise with a more complex type of migration. In this case, despite the issues and potential gaps, what's the best way to proceed?
Here are a few things we've learned:
- Treat each complex migration as you would a new deployment. This is a necessary mindset required to ensure each phase in the migration/deployment is carefully mapped.
- Think of the migration in terms of workloads, not individual servers.
- Assemble an integrated team for the migration that is separate from the main IT operations team. When Datalink assists organizations in this regard, we view the integrated migration team as a "Task Force" of experts who bring multiple disciplines to the table. This team should include members in the following roles:
- Business analyst - who understands how the legacy application impacts the business;
- Application development - because migration may involve code or configuration changes;
- Infrastructure personnel - who understand that the legacy system may span multiple servers, so a single server can't simply be migrated in isolation;
- QA support - who can develop a test plan and test mechanism to ensure the system, post-migration, continues to support all of its original functionality.
- Coordinate the planned migration with the development and business teams to work around their own business or upcoming release priorities. Efforts to personally engage these teams are preferable to bulk communications sent regarding the upcoming migration plans. Otherwise, potential disruptions and delays are more likely.
- Perform proactive research regarding the legacy application and its workload needs. This involves going to the legacy system itself and learning about how it operates as well as how it's been set up and configured.
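The "workloads, not individual servers" advice above can be made concrete with a simple inventory exercise: if two applications share a server, or one application spans several servers, those servers belong in the same migration wave. The sketch below is a minimal illustration of that grouping; the inventory records and server/application names are invented for the example, not drawn from any real tool.

```python
# Minimal sketch: group a server inventory into migration "workloads" so that
# every server a legacy application touches moves in the same wave.
# Inventory format and names are hypothetical examples.

from collections import defaultdict

# Each record: (server, application). One server may host several apps,
# and one legacy app may span several servers.
inventory = [
    ("srv-db-01", "order-history"),
    ("srv-app-01", "order-history"),
    ("srv-app-01", "invoicing"),   # shares a server with order-history
    ("srv-app-02", "invoicing"),
    ("srv-web-01", "intranet"),
]

def group_workloads(records):
    """Union servers that share an application into a single workload,
    then report the set of applications in each workload."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    app_servers = defaultdict(list)
    for server, app in records:
        app_servers[app].append(server)
    for servers in app_servers.values():
        for s in servers[1:]:
            union(servers[0], s)

    workloads = defaultdict(set)
    for server, app in records:
        workloads[find(server)].add(app)
    return [sorted(apps) for apps in workloads.values()]

for workload in sorted(group_workloads(inventory)):
    print(workload)
```

In this example, order-history and invoicing end up in one workload because they share a server, so their servers should be planned as a single migration unit; the intranet application stands alone. A real inventory would add dependencies such as shared databases and network links, but the principle is the same.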
Despite the short-term pain and upfront budget required to migrate such applications, the benefits can be just as great. Only by pursuing this path can organizations fully reap the intended financial gains and benefits of a large upgrade to their own IT infrastructure.
About the Author
Dennis Vickers joined Datalink in 2014. With over 30 years of experience in technical services, Dennis has been deeply involved with both the tactical and strategic operations of organizations. He has a unique understanding of not only how to implement and operate technology, but also how that technology integrates and supports organizations. His breadth of experience allows him to effectively lead teams charged with designing, developing and implementing complex information technology systems.
Prior roles have included VP of Enterprise Services at BEAR Data Solutions and President and Founder of ISPro, Inc. Dennis is also a professor at California Lutheran University, teaching courses on software development and database technology. He holds a Ph.D. in computer information systems from Nova Southeastern University.