Legacy systems: Too old to die?
Some legacy systems will likely never move to modern platforms, but agencies must get smarter about what to migrate and how.
Amid reports of laid-off workers unable to access their state’s legacy unemployment insurance computer systems, we’re revisiting this GCN feature from August 2015. It shows that when it comes to legacy systems, many of the same intractable issues persist -- and much of the same good advice still holds true.
Revelations of massive security breaches at the Office of Personnel Management in 2015 not only set millions of feds on edge; they also highlighted, yet again, agencies’ reliance on legacy IT systems, some of which are decades old.
The solution in most cases is to replace or at least upgrade them -- but that’s much easier said than done.
OPM Director Katherine Archuleta, who has since resigned, told congressional oversight panels in June of that year that a large share of the blame for the breaches belonged with the legacy systems on which her agency depends -- systems that were proving tough to modernize. OPM CIO Donna Seymour told the same lawmakers that it was impossible to encrypt data in some of those systems.
Some of the systems in question are more than 20 years old and written in Cobol, Seymour said. Getting them to the point at which they could be fully encrypted and accept other security measures, such as two-factor authentication, would require a full and very expensive rewrite of the software.
Beyond such improvements, simply maintaining existing IT systems is an expensive proposition for government agencies. A Professional Services Council survey of federal CIOs and chief information security officers found that, on average, 75 percent of IT budgets go to operations and maintenance (O&M) of existing infrastructure. That share is expected to shrink: the CIOs and CISOs said that three years from now they expect to put just over a third of their budgets into development, modernization and enhancement.
Some sectors are even worse off. For example, the Defense Department currently spends 80 percent of its IT budget on existing systems. And the Navy recently awarded a $9.1 million contract to Microsoft to support legacy Windows software such as XP. The deal could run through 2017 and eventually cost more than $30 million.
David Wennergren, a former DOD technology executive and now senior vice president of technology at the Professional Services Council, said upgrading legacy systems is a complex process for most agencies.
“You’ve got to have a strategic decision that it’s time to migrate off System A, and then [ask] what’s that migration plan going to look like and does everyone agree on that direction,” he said. “If you decide you want to build a new system, that also requires a different appropriation [from] the one that provides operations and maintenance dollars, so you’ve then got to go to Congress and convince them of the need.”
Alternatively, Wennergren said, organizations could take advantage of consumption-based models that allow them to use O&M funds, such as the cloud. Rather than build a wholly new system, agencies could hire a provider to deliver the service “and pay them by the drink,” he said. That way the onus is on the provider to determine whether a new system is needed to support the outsourcing contract -- and if so, the provider pays for it.
It’s a question of priorities, he added. A new Web-based front end might be enough to provide users with an efficient and modern experience, even though there’s a legacy system chugging away in the background. And out of 100 legacy systems at a given agency, half might be fine just the way they are while the other 50 are woefully out of date, leaving the agency with operating systems that are no longer supported and core functions that “are held together with duct tape,” he said.
“So you have systems where you either have a compelling opportunity or a compelling need that you have to deal with first,” Wennergren said. “If you can first understand what you have, then you can put together migration plans about how and why to move systems this year.”
The importance of an application audit
NASA has been one of the strongest proponents of the cloud for such migrations, and of hybrid solutions in particular.
Certain physical systems, such as supercomputers, must stay within NASA’s infrastructure, said Roopangi Kadakia, the agency’s Web services executive, at a recent cloud security conference hosted by GCN sister publication FCW. But by using the hybrid cloud, she added, “I can actually start building applications. I can take advantage of that data [produced by legacy systems] in different ways, in more innovative ways that wouldn’t be possible if we had to keep it all within our environment.”
Kadakia has also talked about how NASA’s flagship portal, NASA.gov -- with its 150 applications and some 200,000 pages of content -- took just 13 weeks to move, including an upgrade from the aging technology on which the site was previously hosted.
Moving NASA’s more than 64,000 applications to the cloud requires assessing the security risks, she said. The least risky approach is a staggered migration that moves some 10 percent of NASA’s publicly accessible websites to the cloud each year.
However, Ed Airey, product marketing director at Micro Focus, said migrating systems and applications is not the only way to improve them, and in some cases, it might not be necessary or even possible to do that, particularly when the platforms or the applications running on them are strategic to the organization.
“Platforms in many ways can be considered separate from the applications,” he said. “The applications themselves can retain the business rules and logic and the data itself, while being reconfigured to operate and interact with modern technologies such as Java and Microsoft’s .NET.”
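In code terms, that separation often takes the shape of a thin adapter: the legacy program keeps the business rules, and a modern interface sits in front of it. The Java sketch below is a minimal illustration using hypothetical names -- “LegacyEngine,” the “BENCALC” program and its fixed-width record layout are assumptions standing in for whatever bridge (JNI, a vendor connector, a message queue) an agency actually uses:

    // The modern interface that new Java- or .NET-era services code against.
    interface BenefitsService {
        double monthlyBenefit(String employeeId, int yearsOfService);
    }

    // Hypothetical stand-in for the call into the unchanged legacy program.
    interface LegacyEngine {
        String call(String program, String fixedWidthRecord);
    }

    // Adapter: translates modern types to and from the legacy record layout,
    // leaving the COBOL business logic untouched.
    class LegacyBenefitsAdapter implements BenefitsService {
        private final LegacyEngine engine;

        LegacyBenefitsAdapter(LegacyEngine engine) { this.engine = engine; }

        @Override
        public double monthlyBenefit(String employeeId, int yearsOfService) {
            // Assumed layout: 9-character ID padded right, 3-digit years field.
            String record = String.format("%-9s%03d", employeeId, yearsOfService);
            return Double.parseDouble(engine.call("BENCALC", record).trim());
        }
    }

The value of the pattern is that callers see only BenefitsService; if the underlying Cobol program is ever rewritten or replatformed, the adapter is the only code that has to change.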
The problem with trying to upgrade cornerstone, decades-old Cobol systems is that an agency has invested years of development effort in the applications built on them, and much of the organization’s business and mission success depends on those applications. So the first thing an agency must do is gain a full appreciation and understanding of how those applications work, Airey said. And that’s not always easy.
“In some cases, applications are very well documented, and [agencies] have the staff and resources in place to not only support the application but also to understand how the different business components fit together,” he said. “But as people retire or move on, and in some cases as the technology itself changes, that landscape becomes more complex.”
Much of the cost of NASA’s migration, Kadakia said, came from conducting an application audit to identify and mitigate critical vulnerabilities, some of which the applications’ users were not even aware of.
When agencies lack documentation or insight, change becomes risky because administrators don’t fully understand the implications of what they are about to do, Airey said. Because of that fear, they sometimes defer the changes and end up with a much bigger problem further down the road.
Other migration routes
A problem with many legacy applications is that they were written in a monolithic or vertical way, said Jason Andersen, vice president of business line management at Stratus Technologies. That approach makes it difficult to migrate the applications because they are not compatible with the current service-oriented IT architectures, in which applications tend to be spread across various tiers and services. Therefore, legacy applications -- particularly mission-critical ones -- often require a wholesale rewrite in order to migrate them.
One solution would be to also rework some of the infrastructure on which those applications depend. Instead of putting most of the reliability and security into the applications themselves, which was the old way of doing things, agencies could put that functionality into the infrastructure. It would cost a bit more, but agencies would save on the iterative testing and requalification that the rewritten and often significantly larger applications would require, Andersen said.
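One way to picture that trade-off in code: rather than weaving retry or failover logic through every application, a generic layer supplied by the platform can wrap calls uniformly. The Java sketch below illustrates only the idea of reliability-as-infrastructure; it is not any particular vendor’s product, and the names are illustrative:

    import java.util.function.Supplier;

    // Infrastructure-level retry wrapper; applications stay unchanged and
    // unaware of it. (An illustrative stand-in for platform-supplied
    // fault tolerance, not a real product API.)
    class ReliableInvoker {
        private final int maxAttempts;

        ReliableInvoker(int maxAttempts) {
            this.maxAttempts = Math.max(1, maxAttempts);
        }

        <T> T invoke(Supplier<T> operation) {
            RuntimeException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return operation.get();
                } catch (RuntimeException e) {
                    last = e; // transient failure: retry instead of failing the app
                }
            }
            throw last; // all attempts failed; surface the final error
        }
    }

    // Usage: the legacy application's call is wrapped, not rewritten.
    // String result = new ReliableInvoker(3).invoke(() -> legacyService.query(id));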
Another approach is to move the application to a more amenable infrastructure, but there are some potential pitfalls, he added. The application might have been written for an operating system that’s no longer supported or it might include functionality with special hooks or application programming interfaces that must be accommodated.
An evolving approach to upgrading or migrating applications is to only move certain parts of them, Andersen said. “Essentially, the application gets tweaked by putting the right API set in front of it, then you can move it piece by piece,” he added. “So you might move the user interface first, or the transaction or message queue, then save the hardest part for last, the one that could really bite you.”
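That piece-by-piece route amounts to a facade in front of the application: each capability is routed either to the legacy system or to its migrated replacement, and the routing shifts as pieces move. A minimal Java sketch, with hypothetical names:

    // Facade for incremental migration: callers see one interface while
    // individual operations are cut over from legacy to modern one at a time.
    interface CaseService {
        String lookupCase(String caseId);
        void submitCase(String caseId, String payload);
    }

    class MigrationFacade implements CaseService {
        private final CaseService legacy;
        private final CaseService modern;

        MigrationFacade(CaseService legacy, CaseService modern) {
            this.legacy = legacy;
            this.modern = modern;
        }

        @Override
        public String lookupCase(String caseId) {
            return modern.lookupCase(caseId); // reads were migrated first
        }

        @Override
        public void submitCase(String caseId, String payload) {
            // Writes -- the part "that could really bite you" -- stay on the
            // legacy path until the replacement has proved itself.
            legacy.submitCase(caseId, payload);
        }
    }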
That hardest part will happen only when an agency understands how everything works together and has a stable infrastructure in place. Andersen said that’s one of the reasons why there are still so many mainframes in government: Agencies elected to move the parts they could and left behind the pieces they didn’t want to mess with, so “they kind of did a hybrid migration, if you will.”
A preference for starting fresh
Stan Tyliszczak, staff vice president for technology integration and chief engineer at General Dynamics IT, said it would be less risky to migrate a legacy application as a whole because the various pieces of the application are working together as an ecosystem. Database applications, for example, rely on fairly high-speed connectivity between front and back ends, and if an agency were to separate those pieces -- perhaps by putting a wide-area network between them, with the kind of latencies that produces -- the application might wind up not working at all.
Even so, he admitted to a growing interest in what he called split solutions -- “such things as an analytical cloud that gives access to analysis tools that are tied into a data lake that has disparate sources around the world, not just your own, and you can choose the most appropriate tools for the job [and] can create very robust solutions. If you can have that kind of environment, it’s a different story, but we are only at the very front edges of deploying that kind of technology today.”
Given their druthers -- and budgets -- most agencies would probably prefer to develop applications from scratch in the cloud rather than migrate legacy applications. That’s what DOD IT professionals would do, according to a recent MeriTalk survey. More than half of the respondents said building new is the smarter way to go, versus just 18 percent who chose migration. Some 28 percent anticipated using a mix of both strategies.
Security concerns, the need to maintain data structure and the fact that the legacy applications were custom-built to DOD requirements were the chief reasons respondents gave for choosing migration over building new applications. However, the cost of migrating was a major concern.
Tyliszczak said the study shows that, given the choice, agencies would prefer to build something new so they would not have to deal with all the thorny issues that bubble beneath the surface with a legacy application. Migration is only advantageous when an application was developed recently and when migrating it is fairly easy and does not pose a big risk, he said.
In the end, agencies must make their own decisions about whether and how to migrate applications based on the best way to use scarce resources and constrained budgets. For that reason alone, some legacy applications might remain on dedicated, on-premises hardware or, at best, in virtualized environments with spruced-up, Web-capable front ends.
Wennergren said several things could happen given the tough financial environment that agencies operate in today. Budget pressures could prompt people to lead the charge toward change. But instead people often hunker down and protect what they have “because it’s easier to defend the programs of record, and that’s often why we tend to hold onto legacy stuff for too long,” he added.
That’s also a reason why government still spends so much on maintaining legacy systems. Perhaps the OPM breaches will be the final straw that pushes legacy issues ahead of other priorities. “It’s clear we’ve fallen behind on IT modernization,” Wennergren said, “and it’s clear that it has to be addressed.”