Well, here we are, fast approaching the end of July. It’s hard to believe how quickly the time is flying right now. But through all of this, the Cyber Threat Landscape as we know it has not changed by any means. It still marches on, with new threat variants coming out, catalyzed by the COVID-19 pandemic.
With this, many new buzzwords are coming out as well. No need to go through them all here, but one of them is Artificial Intelligence, also affectionately known as “AI”.
Honestly, this is not a new term; it has been bandied about for decades, though it only started to make a real splash in the news headlines around a year ago. The key premise behind AI is to create applications that can mimic the thought and decision-making processes of the actual human brain.
Of course, the human brain is an extremely complicated organ, and truthfully, we will never fully understand 100% of how it all works. For that matter, we are probably only at 1%-2% of a complete understanding of it.
But as it relates to the field of Cybersecurity, one of the main areas where AI has been getting a lot of attention lately is task automation. For example, the IT Security teams of today are getting completely hammered with false positives, and because of that, they suffer from a psychological phenomenon known as “Alert Fatigue”. As a result, many of the real warnings and alerts that come through very often get ignored.
So, with AI, it is hoped that this entire task can be automated: filtering out the false positives so that the real alerts are much easier to triage. Another area where AI is being used is Penetration Testing. For example, if a Penetration Test is meant to be an exhaustive one, there are a lot of small, mundane tasks that an AI tool can help automate, so that the Red and Blue Teams can focus on the bigger picture.
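To make the alert-filtering idea above a bit more concrete, here is a minimal sketch in Python. The field names, scoring rules, and threshold are purely hypothetical illustrations, not any vendor's actual schema; a real product would use far richer signals and learned models rather than hand-written rules.

```python
# Minimal sketch of automated alert triage: score each alert and drop
# likely false positives so analysts only see the remainder.
# All field names and scoring weights here are made-up examples.

def score_alert(alert):
    """Return a numeric priority score for a single alert dict."""
    score = 0
    if alert.get("severity") == "high":
        score += 50
    if alert.get("source_seen_before"):
        score -= 20  # a repeatedly-seen benign source lowers priority
    if alert.get("matched_threat_intel"):
        score += 40
    return score

def triage(alerts, threshold=30):
    """Keep only alerts scoring at or above the threshold."""
    return [a for a in alerts if score_alert(a) >= threshold]

alerts = [
    {"severity": "low", "source_seen_before": True, "matched_threat_intel": False},
    {"severity": "high", "source_seen_before": False, "matched_threat_intel": True},
]
print(len(triage(alerts)))  # only the second alert survives: 1
```

The point is not the specific rules, but the shape of the workflow: the machine does the bulk filtering, and the humans triage only what scores above the line.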
Another area where AI is starting to be used is in modeling and predicting what the future Cyber Threat Landscape will look like. This is proving to be very beneficial, as it gives the IT Security team clues as to what the coming days and even months could look like down the road, giving them valuable time to prepare for the new threat variants that will be coming out.
Another offshoot of AI is a subfield known as “Robotic Process Automation”, or RPA for short. What exactly is it? Well, here is a technical definition:
“RPA is an application of technology, governed by business logic and structured inputs, aimed at automating business processes. Using RPA tools, a company can configure software, or a “robot,” to capture and interpret applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems. RPA scenarios range from something as simple as generating an automatic response to an email to deploying thousands of bots, each programmed to automate jobs in an ERP system.”
So first, it does not make use of robotic hardware, as is so popularly envisioned (though this could happen one day). Rather, as the definition states, the AI system is used to create specialized, software-based robots, also called “bots”. These exist only in the digital world, not the physical one. From here, anywhere from just a few bots to hundreds of thousands of them can be created, depending primarily upon the requirements and the complexity of the application that needs to be automated.
The use of these kinds of bots is not restricted to the world of Cybersecurity; they can be used in just about any industry where there are routine tasks that need to be automated. One place where this could be widely used is in production lines and supply chain management.
For example, imagine a car production facility. Obviously, there are many parts that need to go into a car, especially the Smart Car of today. Rather than having a human install, say, an electronic component, a robotic arm driven by these bots can do the task automatically. Although this sounds very advantageous, keep in mind that these bots are prone to Cyberattacks as well.
*Backdoors left open to the bots:
As mentioned in the definition, bots don’t just appear magically out of the blue. Rather, they have to be created from source code, such as Python. In the rush to deliver these bots, software development teams very often don’t test this source code for any security weaknesses or vulnerabilities. Very often they make use of off-the-shelf, open source APIs that have not been tested either. Because of this, many backdoors are left wide open for the Cyberattacker to penetrate. Not only are the company’s digital assets at complete risk, but the Cyberattacker can take control of these bots and even shut down an entire production plant. Heck, this can even lead to a massive Distributed Denial of Service (DDoS) style attack as well.
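One practical way to close this gap is to audit a bot's dependencies before it is ever deployed. The sketch below illustrates the idea in Python; the package names, version numbers, and advisory identifier are invented placeholders, and a real pipeline would query an actual vulnerability advisory database rather than a hard-coded table.

```python
# Sketch of a pre-deployment check that flags bot dependencies pinned to
# versions with known vulnerabilities. The packages, versions, and the
# advisory string below are made-up placeholders for illustration only.

KNOWN_VULNERABLE = {
    ("examplelib", "1.0.2"): "EXAMPLE-2020-0001 (remote code execution)",
}

def audit_dependencies(deps):
    """Return a list of (package, version, advisory) for risky pins."""
    findings = []
    for name, version in deps.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

bot_deps = {"examplelib": "1.0.2", "otherlib": "2.4.0"}
for name, version, advisory in audit_dependencies(bot_deps):
    print(f"BLOCK DEPLOY: {name}=={version} -> {advisory}")
```

Even a simple gate like this, run automatically in the build process, catches the untested off-the-shelf components before they reach production.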
*They are interconnected to many other digital systems:
Bots don’t exist in a world entirely of their own. They need to receive their specific instructions from somewhere, and because of that, they are interlinked with many other kinds of systems that give them this information. For example, bots are connected with both ERP and CRM systems (such as SAP and Salesforce, respectively). If these systems are not patched and kept up to date, the Cyberattacker can penetrate through them as well in order to gain access to the bots and take further control of them.
*They interact with very sensitive information:
Bots can also be used for processing the Personal Identifiable Information (PII) datasets of customers and employees alike. For example, rather than having a dedicated clerk manually calculate the payroll, bots can be used to automate this task as well. But if a Cyberattacker gains access here, these PII datasets can be easily hijacked and sold on the Dark Web.
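One simple mitigation for this risk is to mask PII before a bot logs or forwards a record, so that a compromised bot leaks redacted values rather than raw data. Here is a minimal sketch; the field names are hypothetical, and real systems would pair this with encryption and access controls.

```python
# Sketch of redacting PII fields before a bot handles a payroll record,
# so a compromised bot exposes masked values instead of raw PII.
# The field names used here are hypothetical examples.

PII_FIELDS = {"ssn", "bank_account", "date_of_birth"}

def redact(record):
    """Return a copy with PII fields masked, leaving other fields intact."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            out[key] = "***REDACTED***"
        else:
            out[key] = value
    return out

employee = {"name": "J. Smith", "ssn": "123-45-6789", "salary": 50000}
print(redact(employee))  # ssn is masked; name and salary pass through
```

Note that the original record is left untouched; the bot works from the redacted copy wherever the raw values are not strictly required.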
*The issues with Shadow Management:
Given how everything is now based in the Cloud as a SaaS offering, so are bots. Rather than having to create them from scratch, if your application is simple enough, you can purchase premade Bots and deploy them rather quickly. But in the rush to do this, these Bots very often do not get the approval of the IT Security team and are put to use before they have been tested. In fact, this often goes unnoticed until it is too late.
My Thoughts On This
When deploying an RPA system, you first need to adopt the mindset that the Bots are not isolated unto themselves; they interact with many other objects, devices, and components within the physical place of business. Therefore, you need to treat deploying Bots just like implementing an IT system in and of itself.
For example, when you procure and set up a firewall or a router, you probably test it out first, right? Well, the same is true of the Bots. In this regard, you need to take a top-down approach. First, take stock of what these Bots will be connected to. Once you have determined this, make sure those systems are up to date with the required software patches and firmware updates.
Then, once you have either created the Bots or bought a SaaS based package, you need to first run them in a sandboxed environment to make sure not only that they are doing the job they are supposed to do, but also that the interlinkages with the other systems are safe and secure.
Once the IT Security Team has validated all of this, you can move the Bots out into the production environment, using a gradual, phased-in approach. Also, make sure that all of the parties that will be impacted by the use of these Bots are involved in this process. It is very important to get their buy-in in order to keep Shadow Management from creeping in.
But apart from this, it is also important to keep in mind that since RPA systems and their parts are a part of AI, they are only going to function as well as the data inputs they receive. In other words, “Garbage In, Garbage Out”. Thus, you need to make sure that the information/data you are feeding them is constantly being optimized, so that the Bots perform their jobs to the expected level of accuracy.
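A practical counterpart to "Garbage In, Garbage Out" is to validate every record before a bot acts on it. The sketch below shows the idea; the schema (a part identifier and a quantity) is a made-up example for the car-production scenario discussed earlier.

```python
# Sketch of validating input data before a bot acts on it, so garbage
# records are rejected rather than silently processed.
# The schema (part_id, quantity) is a hypothetical example.

def validate_input(record):
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    if not isinstance(record.get("part_id"), str) or not record["part_id"]:
        problems.append("missing or invalid part_id")
    qty = record.get("quantity")
    if not isinstance(qty, int) or qty <= 0:
        problems.append("quantity must be a positive integer")
    return problems

good = {"part_id": "A-100", "quantity": 3}
bad = {"part_id": "", "quantity": -1}
print(validate_input(good))  # [] -- safe for the bot to process
print(validate_input(bad))   # two problems; the record is rejected
```

Records that fail validation get routed to a human for review instead of being fed to the Bots, keeping the "garbage" out of the automated pipeline.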