Co-authored by Alan Radford from One Identity and Stuart Sharp from OneLogin
Organizations are under constant pressure to boost productivity and cut costs. To handle repetitive, mundane and time-consuming tasks and processes, companies are turning to Robotic Process Automation (RPA) solutions. While this automation can bring new levels of efficiency, it also invites risk, especially when tasks delegated to RPA require access to privileged systems and information. No organization wants to be in the news due to a cyberattack or a data breach. So, it’s crucial that these automated processes are executed securely. Consider these top five robotic process automation security risks and suggested mitigation steps you can take.
What is RPA?
Robotic process automation uses software tools to integrate, build and launch automated robots that can imitate human activities across digital tools. These RPA tools can navigate user interfaces, read, select and move data, click buttons, and be programmed to complete defined tasks that used to need human input. In essence, these robots become ‘digital workers.’
In many organizations, RPA can replace entire roles and processes typically handled by human workers. For example, payroll processing, data migration, HR onboarding, IT help desk tasks, sales quotes, scheduling and reporting are all processes and tasks that can be handled by bots.
Many RPA solutions don’t need specialized help from a software developer to implement. Instead of connecting tasks via APIs or scripts, some of these solutions simply record a user’s actions and repeat them via a frontend interface.
Since technical skills aren’t always necessary to implement these types of RPA solutions, the business is empowered to quickly automate rote processes. However, if implemented without thought to security, this can also introduce significant risk – and increase the chances of a costly cyberattack or breach. Robots need accounts to operate: attended robots will typically use the host user’s account, while unattended bots typically use their own assigned accounts. In either scenario, multiple accounts may also be in play.
Top Robotic Process Automation Security Risks
- Robots add a new layer of orchestration between the business and a process
Using an RPA solution introduces a new layer of orchestration that security teams need to monitor and maintain to protect against potential breaches. Though some organizations have the resources to develop proprietary RPA solutions, others will use an RPA solution from an external vendor.
Due to the nature of how bots are created and maintained, whoever designs and creates a bot can exert control over what the bot can do and how it behaves. This control can be exercised either through the bot itself, puppeteer-style, or through access to the bot’s account. Where bots themselves have a separation of duties, the bot developer does not. For example, there is typically a business separation of duties between accounts payable and accounts receivable. If different bots are performing these tasks, this is compliant on the surface – but only if the same developer cannot interact with both bots. To further illustrate, consider that to provide privileged credentials to a bot, the bot developer also needs to know those credentials. In either case, the developer becomes a new entity that needs to be safeguarded.
- Shadow IT implementations
Departments that independently implement RPA solutions without the involvement of IT security can leave organizations vulnerable to potential attacks. An IT security team can’t monitor, protect or defend against breaches on platforms of which they are not aware. In some cases, robotic identities are born in HR to take advantage of existing onboarding processes. However, new challenges arise from this approach where it is not clear which records represent real human beings and which represent robots. What date of birth does the robot have? Should it have an email address? Does it belong in the time off booking system? Should this HR record be considered when calculating average tenure in the company?
Put plainly: human resources was not designed for non-human resources.
- RPA bots that require privileged access
Some bots that work with high-risk systems and data require privileged access to log into CRMs, ERPs, databases and other systems. To access these systems, the bots need credentials. These credentials are typically either recorded in a credential repository on the bot platform or, worse, hard-coded into bot processes.
In other implementations, bots may be tasked with an additional step to grab shared credentials from a file, application or database. Hard-coded, shared, unsecured and unchanged credentials are the holy grail for cyberattackers, who can use these static credentials to gain access to applications and data. How bots store and access credentials is a huge RPA security risk, especially when they are tasked with accessing highly privileged and sensitive data. From an operational standpoint, if a credential expires, the bot stops working until the credential is restored.
- RPA bots move laterally across applications and systems
Most organizations use several platforms, networks, and applications to store data and complete necessary tasks. When RPA bots are programmed to complete multi-step tasks in a business process, they often have permissions and credentials to navigate across a set of these applications and systems. If a cyberattacker gains access to one of these accounts, the attacker can then move across applications and systems to which the bot has access and attempt to elevate privileges to access secured data.
- Operational friction
RPA solutions are designed with a focus on efficiency rather than security, which can lead to operational complications. For example, the way in which accounts are created and have rights delegated can prevent a bot from performing its task. This can be particularly challenging when onboarding or creating new bots. Streamlined IAM processes around joiner, mover and leaver events for bots remove friction from RPA programs and can increase your ROI.
Why these RPA security risks must be addressed
Imagine giving thousands of new employees the same credentials and unleashing them to access all secure databases, applications, devices and networks across an organization. What are the chances that one of those accounts will be compromised, leading to an expensive data breach or ransomware attack?
Often, for efficiency and profitability, organizations create these ‘new hire’ accounts using a copy-and-paste method. Scaling to thousands of bots to complete rote and time-consuming tasks is not out of the realm of possibility. This all-too-real scenario raises the question: how can organizations securely deploy digital workers and limit RPA security risks?
Companies that have implemented RPA without considering these security risks have added significantly to their potential attack surface. It’s just this type of situation that begs to be exploited by an attacker.
With RPA, is it possible to be 100-percent secure?
It is not possible to secure anything 100 percent. There is always something that can be attacked, and threat actors will always work to discover and exploit vulnerabilities. However, there are plenty of steps an organization can take to minimize and mitigate potential threats and vulnerabilities in RPA implementations.
Strategies to mitigate RPA security risks
Robots aren’t human but they need identity
Within every organization, there are roles and responsibilities that human employees occupy. Those employees are given credentials, access to systems and platforms, a scope of work that they are expected to fulfil, and managers that give directions and check on progress. Even though employees report to a manager, details such as passwords and account access are not shared.
Human workers can discern if something is out of bounds for their role, privileges or responsibilities. To ensure this, governance processes are put in place by the business. That way only the right users have the right access to the right resources at the right times.
In essence, human workers are given an identity, a role and access to systems and platforms that they need to complete their duties.
For many organizations, this means using identity governance, access management and privileged access management systems to help implement Zero Trust principles. Plus, they do this to keep track of who accesses and has permissions to specific networks, software applications and platforms.
Though this is how human workers have access to organizational applications, this story is often very different for robot workers.
In some organizations, there is a central department of developers that manage and implement robotic workers. In others, robotic workers are implemented by individual and disparate departments using various no-code RPA solutions.
In many of these implementations, none of the bots are given an identity and there is no governance, ownership or visibility over the information they move and use with internal systems.
In a unified identity security and Zero Trust environment, RPA bots are given identity so they can be authenticated, audited and accountable for their roles and actions.
Robots need credentials
Static credentials are a big security risk, especially when bots are tasked with accessing highly privileged and sensitive data.
For a human employee accessing a database or network device with highly sensitive data, they will typically log in to a privileged access management solution and request admin rights for a set amount of time. Once the human worker completes their task and logs out, the password is reset and there is an audit trail of that worker using and accessing the privileged information on a database or network device.
For bots, that story is often different. For a robot to use privileged credentials, someone must know the credentials in order to give them to the bot. In doing so, those usernames and passwords remain static; otherwise, the bot can break when the credentials change.
Unchanged and unsecured credentials are ripe targets for cyberattackers, who can use them to access valuable systems, networks, databases and applications.
However, anytime a password is reset, somebody must tell the bot what the new password is. Whoever does that can get stuck in a never-ending task of entering and resetting passwords. On top of that, individual accountability for who or what accesses privileged data can’t be enforced if the password is known by multiple identities (the bot and the bot developer, for example).
These issues are both mitigated by having a policy requiring bots to retrieve credentials from a privileged access management tool that actively manages the credential on an ongoing basis. This approach prevents both the bot owner and bot developer from knowing the password and removes operational blockers for the bots.
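The checkout/check-in pattern described above can be sketched as follows. This is a minimal, illustrative mock – the `MockPAMVault` class and its `checkout`/`checkin` methods are hypothetical stand-ins for a real PAM tool’s API, not any specific vendor’s interface:

```python
import secrets


class MockPAMVault:
    """In-memory stand-in for a PAM credential vault (illustrative only)."""

    def __init__(self):
        self._passwords = {}

    def checkout(self, account: str) -> str:
        # Issue the current password for the account, generating one on first use.
        if account not in self._passwords:
            self._passwords[account] = secrets.token_urlsafe(16)
        return self._passwords[account]

    def checkin(self, account: str) -> None:
        # Rotate the password on check-in, so the value the bot just used
        # is immediately invalidated and never worth stealing.
        self._passwords[account] = secrets.token_urlsafe(16)


def run_bot_task(vault: MockPAMVault, account: str) -> None:
    password = vault.checkout(account)  # bot fetches the credential at runtime
    try:
        # ... bot logs in and performs its task with `password` ...
        pass
    finally:
        vault.checkin(account)  # vault rotates the credential after use
```

Because the bot retrieves the credential only at runtime and the vault rotates it on check-in, neither the bot owner nor the bot developer ever needs to know the password.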
In addition, for bots working with applications that support modern authentication protocols such as SAML or OIDC, greater control can be achieved by managing them through the company’s Access Management system. Each bot can be granted a unique identity, which enables their access to all federated applications to be tracked, including immediately terminating access when the bot is no longer used. Similarly, the bot’s identity can be suspended when not required, preventing bad actors from gaining access.
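For context, a bot with its own identity would typically authenticate via the OAuth 2.0 client-credentials grant rather than a shared password. The sketch below only builds the standard token request; the endpoint URL, client ID and scope are placeholders, not values from any particular identity provider:

```python
from urllib.parse import urlencode


def build_token_request(token_url: str, client_id: str,
                        client_secret: str, scope: str):
    """Build an OAuth 2.0 client-credentials token request for a bot identity.

    Returns the endpoint URL and form-encoded body; a real bot would POST
    this body and use the returned short-lived access token in place of a
    static shared password.
    """
    body = urlencode({
        "grant_type": "client_credentials",  # machine-to-machine grant
        "client_id": client_id,              # the bot's unique identity
        "client_secret": client_secret,
        "scope": scope,
    })
    return token_url, body
```

Because each bot has its own client ID, its access to federated applications can be tracked, suspended or revoked centrally at the identity provider.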
Further benefits can be achieved by adding bots to a modern IDaaS system that offers dynamic risk analysis and enforcement. The risk engine will monitor the behavior of each individual bot over time and deny access when an access attempt using the bot’s credentials differs from normal operation. In this context, an unapproved or non-compliant change in robot behavior can be identified and blocked.
This can be achieved through a combination of dynamic risk analysis and MFA. Each bot can be assigned an MFA factor that belongs to the bot owner, whether that be email, SMS or an app-based push notification. The dynamic risk engine profiles normal bot access over time and MFA prompts are suppressed for known, low-risk access attempts. When a significant change in behavior occurs, such as when a bot logs in from a different geographic location, or using a different browser or tool, the dynamic risk score will increase, and MFA will be enforced. The bot owner will be notified, and if the attempt is legitimate, the bot owner can complete the MFA challenge which will add the authentication profile to the known low-risk whitelist. A bad actor trying to utilize the bot credentials will be unable to complete the MFA challenge and will not be able to gain access.
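The decision logic above can be sketched as a toy risk check. This is a deliberate simplification under stated assumptions: real risk engines weigh many more signals than the two attributes (`geo` and `client`) used here, and the function name is illustrative:

```python
def assess_bot_login(known_profile: set, attempt: dict, threshold: int = 1) -> dict:
    """Toy dynamic-risk check for a bot login attempt.

    `known_profile` is the set of (attribute, value) pairs learned from the
    bot's normal, low-risk access; each deviating attribute in `attempt`
    raises the risk score, and MFA is enforced once the score crosses
    `threshold`. (Illustrative only.)
    """
    deviations = sum(
        1 for key in ("geo", "client")
        if (key, attempt.get(key)) not in known_profile
    )
    return {"risk_score": deviations, "require_mfa": deviations >= threshold}
```

A usage example: a login matching the learned profile sails through, while the same credentials presented from a new location trigger the MFA challenge that only the bot owner can complete.

```python
known = {("geo", "US-east"), ("client", "rpa-runner-1.4")}
assess_bot_login(known, {"geo": "US-east", "client": "rpa-runner-1.4"})  # low risk
assess_bot_login(known, {"geo": "EU-west", "client": "rpa-runner-1.4"})  # MFA enforced
```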
Without a unified identity security platform that accounts for digital workers, there will forever be vulnerabilities because someone somewhere must know the credentials that the bots are using.
Instead of giving static credentials to a bot, an ideal state for RPA in terms of security is a closed loop identity management system that accounts for digital workers and allows bots to dynamically access privileged access management systems.
In that scenario, a bot takes an extra step to understand that it needs a credential and accesses a password vault to get that credential. Once the bot has finished its task and closed its session in a privileged environment, the password is reset so the bot no longer knows what the credentials are for a particular platform. Every time the bot needs privileged access within a closed loop system, it never uses the same credentials to access privileged data.
This closed loop identity management system offers greatly improved security and accountability to who or what accesses highly sensitive networks, databases and platforms.
Robots need governance
In the case of robot workers, someone must create each bot and tell it what to do. The bot developer must also tell the bot what credentials to use to access a database or platform. Additionally, if the organization applies Zero Trust principles, the bot needs to be authenticated. One of the cornerstones of a mature governance program is defining clear paths of ownership and accountability. In the context of RPA there are several owners at play, including:
- The manager of the bot developer
- The manager of the department in which the bot operates
- The owner of the RPA application
- The owner of the applications with which the bot interacts
- The bot owner
Developing a proper governance framework is a necessary step to establish oversight and accountability and to mitigate various RPA security risks. In any good governance framework, there is a process for assigning ownership. Where bots are concerned, the business needs to be able to allocate ownership, which will typically change as a bot moves from pre-production into production.
When bots are given an identity and governance framework, their access to resources can be limited, monitored and audited. In so doing, ongoing compliance of robotic activity can be proven.
RPA tools are excellent solutions for executing time-consuming processes and can deliver incredible efficiencies for organizations. However, if security is not part of the RPA strategy, organizations without a robust PAM and/or IGA solution invite risk by creating a significant cybersecurity exposure gap. In the honest pursuit of operational efficiency, they also open their organization to an increased potential for a successful breach or cyberattack.