Considering RPA? Make sure you understand the security implications
Automation introduces new forms of risk, especially when accessing cloud-based data.
Robotic process automation is gaining traction in government after getting its start in private-sector enterprises. RPA applies elements of artificial intelligence to workflow processes, allowing employees to shift from routine tasks to higher-value work. In fact, the opportunities for improving efficiency in the public sector are so great that the federal government is now mandating that agencies adopt technologies like RPA.
But automation also introduces security risks agencies have not encountered before. It’s tempting to jump straight into implementing RPA, as the Office of Management and Budget would have agencies do, but first, federal IT decision-makers must get up to speed on RPA and ensure that the data security controls currently applied to humans extend to robots as well.
RPA drivers: Efficiency and risk management
Why the fascination with RPA? On its own, automation can reduce costs and improve efficiency. Add intelligence to the mix, as with RPA, and the value is greater, enabling continuous operation with reduced human error and improved information processing.
To that end, the federal government has identified RPA as part of the President’s Management Agenda. It represents the federal government’s push for overall IT infrastructure modernization and for securing data across both on-premises and cloud-based environments.
Clearly, there is more to RPA than automating routine tasks. RPA is still new; the technology is complex, and the terminology is not yet standardized. Here’s a quick overview of the basics and what’s needed for robots to take on human tasks securely.
RPA is designed to emulate some forms of work done by humans, usually routine tasks and processes. The higher the volume, and/or the simpler the task, the greater the value of RPA. It also adds value to the extent that it can perform in both cloud and legacy environments. Because it supports today’s technology upgrades as well as the extensive premises-based applications still in use across the public sector, RPA is an important bridge to the future.
There are two operating modes for RPA: unattended and attended. Attended operation applies to cases where a task or process cannot be fully automated, so the automated process or bot works in tandem with humans. Because humans can intervene at any time, these applications may seem to pose fewer security concerns.
On the other hand, because human labor is involved, the business value of attended operation is less significant than that of unattended operation. Plus, human involvement introduces a greater chance of error.
Unattended RPA, which offers end-to-end automation and can run at any time, frees up workers for high-value tasks. The catch? Unattended RPA presents a distinct challenge for IT personnel in terms of data security. Non-person entities such as software robots, devices and other automated technologies must have digital identities akin to those assigned to human users in order to access agency systems. Traditionally, identity management has applied to humans; supporting robots and other NPEs is quite new.
Fortunately, existing forms of PKI-based credentialing can be extended to robots in the form of software certificates that comply with federal requirements. Unlike a physical smartcard, which requires a user to be present, these digital identity certificates can be securely accessed through automation and can be stored in a FIPS-validated hardware security module.
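To make that concrete, here is a minimal sketch of how an unattended bot might present a software certificate when calling an agency API over mutual TLS. The endpoint, file paths and use of Python’s requests library are illustrative assumptions rather than a prescribed implementation; in practice, the private key would be kept in a FIPS-validated hardware security module or key-management service, not on disk.

```python
# Illustrative sketch: a bot authenticating with a PKI software certificate
# (mutual TLS). Endpoint and file paths are hypothetical placeholders.
import requests

AGENCY_API = "https://api.example-agency.gov/records"  # hypothetical endpoint
BOT_CERT = "/etc/rpa/bot-client.pem"   # the bot's X.509 certificate
BOT_KEY = "/etc/rpa/bot-client.key"    # the bot's private key (ideally HSM-backed)
CA_BUNDLE = "/etc/rpa/agency-ca.pem"   # agency CA chain for server verification

def fetch_records():
    # The server validates the bot's certificate during the TLS handshake,
    # giving this non-person entity a verifiable digital identity, much as
    # a smartcard does for a human user.
    response = requests.get(
        AGENCY_API,
        cert=(BOT_CERT, BOT_KEY),  # client certificate presented to the server
        verify=CA_BUNDLE,          # the bot also verifies the agency endpoint
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_records())
```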
Fully secure, unattended RPA sets a high threshold for meaningful adoption. On May 6, 2019, the Defense Logistics Agency demonstrated that a robot could obtain a PKI-based credential to gain access to DLA sites and operate around the clock. The proof of concept was a first for any government agency and showed that both attended and unattended RPA could be truly viable for government workflow automation.
Security implications of robotic automation
Automation introduces new forms of risk, especially when bots access cloud-based data. RPA brings distinct cyber risks and demands new approaches to ensure that bots are “who they say they are.” The risks can be even greater than when granting access to humans, since humans usually don’t work more than eight hours at a stretch. When access is granted to an unattended bot, which can run around the clock, the window for security risk is commensurately larger.
Agencies contemplating RPA technology should keep these potential security risks in mind:
- External threats, where a bad actor compromises a bot to gain access to sensitive data.
- Internal threats, where an employee or contractor manipulates or trains a bot for malicious purposes.
- Poor design, where a bot inadvertently exposes sensitive data -- personal information, voter registrations, financial details, etc. -- over unsecured channels such as the public internet or public Wi-Fi.
- Unsecure data management, where a bot accesses sensitive data but does not encrypt it before sending it to or from the cloud (a mitigation is sketched after this list).
- Network vulnerability, where a poorly designed robot enables hackers to remotely attack the network.
- Denial-of-service interruptions, where scheduled robot activities occur in such rapid succession that the network may be overwhelmed, causing a ripple effect of service disruptions and possible security breaches.
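To illustrate the unsecure data management item, here is a minimal sketch of a bot encrypting a record on the client side before it leaves for the cloud. The Python cryptography library, the sample record and the key handling are illustrative assumptions; a production bot would obtain its key from a hardware security module or key-management service rather than generating it locally.

```python
# Illustrative sketch: encrypt sensitive data before it is sent to or from the cloud.
from cryptography.fernet import Fernet

def encrypt_for_upload(record: bytes, key: bytes) -> bytes:
    # Symmetric, authenticated encryption: the ciphertext is safe to transmit
    # over channels or store in buckets the bot does not fully control.
    return Fernet(key).encrypt(record)

def decrypt_after_download(token: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, retrieved from a key-management service
    ciphertext = encrypt_for_upload(b"voter-id,12345,Jane Doe", key)
    assert decrypt_after_download(ciphertext, key) == b"voter-id,12345,Jane Doe"
```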
Agencies must take cyber risks like these into account when developing a data security strategy and evaluating technology partners for RPA programs. Only then can they confidently put the technology to work and begin to reap the benefits of robotics and automation.