5 ways to subvert the risk of legacy IT
A comprehensive, centralized monitoring solution that can view both legacy and modern infrastructure in a single console can help agencies quickly spot trouble and resolve issues before customers are impacted.
In today’s always-on, always-connected age, government agencies face increasing pressure to make immediate and significant progress on digital initiatives, while minimizing the costly risk of downtime. Legacy systems, however, often hamper digital transformation and stand in the way of a full view of operational performance. These proprietary and decentralized systems don’t communicate with each other, making them a challenge to monitor in a comprehensive and efficient way.
In fact, according to the Washington Post, many of the IRS' 60 legacy IT systems have not received updates in decades, raising the risk of crippling downtime, as the agency learned in the 2018 Tax Day outage. In its review of the system crash, the Treasury Inspector General for Tax Administration cited inadequate processes for monitoring, reporting and responding to a system failure when it occurs; a lack of oversight and accountability for contractors who failed to meet service-level standards for monitoring and incident management; and a lack of redundancy or automatic failover, leaving a single point of failure.
The detailed report makes a compelling argument for more effective monitoring and faster agency response should such an event occur. The risk and cost of legacy IT can be high if it is not properly monitored. Consider these five ways agencies can avoid system downtime for their aging infrastructure while reducing the legacy barriers that could be preventing digital transformation.
1. Understand the composition of legacy environments. In most organizations, legacy systems perform vital tasks, making replacement infeasible. Over the years, however, many systems have become a mix of aging technology and modern approaches that use virtualization and cloud. This mishmash of old and new can create a disparate and decentralized infrastructure that includes virtual machines, hybrid cloud accounts, internet-of-things endpoints, physical and virtual networks and much more. Further, many of these components sit outside the control of the IT department, adding even more layers of opacity and complexity. Add in the challenge of proprietary systems, and it can be very difficult to get a holistic view of what is going on. To get a full picture of a legacy system’s health, availability and capacity, IT managers need a centralized monitoring solution that understands the variables for each piece of the environment.
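To make that idea concrete, here is a minimal sketch in Python -- with purely hypothetical component names and metrics -- of how a centralized catalog can record the type-specific variables for each piece of a mixed environment, so mainframes, virtual machines and cloud accounts can be reviewed side by side.

```python
# Illustrative sketch only: a simple inventory model in which each component
# carries the health variables that matter for its kind. All names and sample
# values below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ComponentRecord:
    name: str
    kind: str        # e.g. "mainframe", "vm", "cloud-account", "iot-endpoint"
    owner: str       # who is accountable (may sit outside central IT)
    metrics: dict = field(default_factory=dict)  # type-specific variables

inventory = [
    ComponentRecord("tax-batch-01", "mainframe", "ops",
                    {"cpu_busy_pct": 74, "batch_queue_depth": 120}),
    ComponentRecord("web-frontend-07", "vm", "app-team",
                    {"cpu_pct": 41, "mem_pct": 68}),
    ComponentRecord("payments-sandbox", "cloud-account", "finance-dev",
                    {"monthly_spend_usd": 1830, "untagged_resources": 9}),
]

# A single pass over the catalog gives the holistic view described above.
for c in inventory:
    print(f"{c.kind:<15} {c.name:<20} owner={c.owner:<12} {c.metrics}")
```

The point of the sketch is not the data structure itself but the discipline it represents: every component, however old, gets an owner and a defined set of variables that a central monitor can collect.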
2. Eliminate the siloed approach. Legacy systems perpetuate the silo challenge because their metrics are often confined to distinct environments rather than being passed up to IT leaders who can better align them to operational value. This culture of blindly following what has been done before, rather than breaking free of the siloed mindset, keeps organizations in a constant firefighting mode where strategic goals cannot be achieved. Operational processes, methodologies and tools that bridge silos will deliver high value, enabling greater innovation and agility.
3. Rein in tool sprawl. With the proliferation of silos comes endemic tool sprawl. As the IT organization reacts to problems with legacy systems, it’s common to see multiple monitoring products added on top of each silo. While these tools may have been added to lessen the risk of downtime, in reality they compound the problem by adding new layers of complexity and management without providing the comprehensive, clear view IT needs over the entire operation. To evolve to a more strategic state, these point monitoring tools must be consolidated into a smaller, more comprehensive set.
4. Automate to reduce human error. A Ponemon Institute report found that human error was the second most common cause of system failure -- and business downtime -- accounting for around 22 percent of all incidents. This suggests that a large share of outages are avoidable, and proper monitoring is one of the most important things agencies can do to reduce their susceptibility to them. When outages occur without warning, it’s vital to detect the failure quickly and identify the impacted systems. Organizations should have processes in place to rapidly mitigate the issue -- reducing downtime and service interruption. Comprehensive monitoring and automation can help. An automated approach ensures that infrastructure is well maintained and high performing -- across silos -- reducing cost and downtime risk.
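As a rough illustration of that kind of automation -- not any particular product’s API -- the Python sketch below polls hypothetical health endpoints and, when one fails, looks up the downstream systems it affects and raises an alert, so detection and impact analysis do not depend on a person noticing first.

```python
# Illustrative sketch: automated failure detection and impact lookup.
# The endpoints, dependency map and notify() stub are all hypothetical.
import urllib.request

DEPENDENCIES = {
    "auth-service": ["e-file-portal", "refund-tracker"],
    "db-gateway": ["e-file-portal"],
}

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200 in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def notify(message: str) -> None:
    # In practice this would page on-call staff or open a service-desk ticket.
    print("ALERT:", message)

def run_checks(endpoints: dict) -> None:
    for name, url in endpoints.items():
        if not is_healthy(url):
            impacted = DEPENDENCIES.get(name, [])
            notify(f"{name} is down; impacted systems: "
                   f"{', '.join(impacted) or 'none recorded'}")

# Example (hypothetical URL): run_checks({"auth-service": "https://auth.example.gov/health"})
```

Scheduled on a short interval, a check like this turns the manual, error-prone step of noticing an outage into a routine, repeatable process.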
5. Employ comprehensive operational visibility. Visibility into legacy systems must be integrated with the monitoring of modern systems and tied into common service desks and notification solutions for more rapid remediation. Behind every application lies the servers, networks, storage, clouds and virtual systems needed to deliver application workflow and services. And, as digital infrastructure evolves, so too does its complexity -- resulting in an often unwieldy mix of legacy and modern components, any of which could fail at any moment. The more disparate and decentralized systems are, the more frequently they may fail. To avoid this risk, effective monitoring is vital. By using a comprehensive, centralized monitoring solution that can view both legacy and modern infrastructure together in a single console, IT operations can spot the early warning signs of issues and resolve them before customers are impacted.
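Here is a minimal sketch of what that consolidated view might look like in code, with hypothetical collector functions standing in for real integrations with legacy and modern monitoring sources.

```python
# Illustrative sketch of a single-console view: per-silo collectors feed one
# merged status list, and anything unhealthy is handed to a service-desk hook.
# The collectors and forward_to_service_desk() are hypothetical placeholders.
from typing import Callable, Dict, List

def mainframe_status() -> Dict:
    # Placeholder for a collector that scrapes logs or SNMP from a legacy system.
    return {"system": "individual-master-file", "healthy": True}

def cloud_status() -> Dict:
    # Placeholder for a collector that queries a cloud provider's monitoring API.
    return {"system": "case-mgmt-cloud", "healthy": False, "detail": "elevated 5xx rate"}

def consolidated_view(collectors: List[Callable[[], Dict]]) -> List[Dict]:
    """Merge every collector's output into one list -- the 'single console'."""
    return [collect() for collect in collectors]

def forward_to_service_desk(record: Dict) -> None:
    # In practice: open a ticket and notify on-call staff.
    print("Ticket opened:", record)

for status in consolidated_view([mainframe_status, cloud_status]):
    if not status["healthy"]:
        forward_to_service_desk(status)
```

The design choice worth noting is the thin, uniform interface: each silo keeps its own collection method, but everything lands in one view and one remediation path.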
Legacy systems are a reality in even the most modern government IT environments, but they don’t have to put agencies at risk. With comprehensive, automated, real-time insight into legacy systems that integrates with hardware, application stacks (on premises or in the cloud), service desks and notification software, government IT can avoid the high cost of legacy IT while staving off downtime.