Wow, what a week this was.  You got it; I am talking about the Coronavirus.  I have never in my entire life seen so much volatility in the financial markets (except for the 2008 meltdown), so much hysteria at the grocery stores and in the news headlines, and above all, so many closures taking place. 

In fact, I just had a business lunch meeting this past Thursday, and the restaurant where we met was a pure ghost town.  Just a few weeks ago, you had to wait to get seated. 

Now, you have your pick of any seat you want.  The big story at the moment is the severe lack of toilet paper on grocery store shelves.  I don’t exactly get the obsession with this one, but I had better go out and get some soon. 

Of course, the Cybersecurity news headlines are also running rampant with all of the new threat variants that are coming out.

A lot of these are Phishing Emails, such as ones pushing unapproved and even untested Coronavirus vaccinations, and there is even Ransomware locking up Android wireless devices.  I was actually thinking of making this my blog theme for the weekend, but I decided against it. 

The reason for this is that there is already so much out there; you can simply Google it and read to your heart’s content.

Just don’t believe everything you read and hear; more than likely, the news headlines are pumped up so that they make the top rankings in the search engines.  In fact, I just wrote an extensive blog about this last weekend, so if I write any more about it, it will just be a repeat in a different way. 

So, for today’s blog, we are going to visit another hot topic:  Artificial Intelligence, or simply AI for short.

Ever since late last summer, AI has become all the rage in Cybersecurity.  Why is this so?  Well, people in our industry view it as a way to help filter out the false positives so that the overtaxed IT Security teams can react to what appears to be real. 

Also, AI has been a tool that can help predict what the future Cybersecurity Threat Landscape will look like.  In order to do this, one has to comb through mountains of datasets and analyze them for any known and hidden trends. 

This can take a human being days if not weeks to do.  But with AI, it can all be done in a matter of a few mere minutes.  So, these are the primary advantages of AI so far, but keep in mind, it is not yet a fully explored tool for the Cybersecurity industry, so there is a lot to be learned and applied about it. 
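Just to make this a little more concrete, here is a minimal sketch (in Python) of the kind of trend mining an AI tool might automate.  To be clear, this is not any vendor’s actual product; the log file and column names are purely hypothetical, and I am using an off-the-shelf anomaly detector to stand in for the much heavier machinery a real platform would use:

```python
# A minimal sketch of automated trend mining over a security event log.
# The file "daily_security_events.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical log: one row per day with counts of common event types.
events = pd.read_csv("daily_security_events.csv", parse_dates=["date"])
features = events[["failed_logins", "phishing_reports", "malware_detections"]]

# Fit an unsupervised anomaly detector over the historical counts.
model = IsolationForest(contamination=0.05, random_state=42)
events["anomaly"] = model.fit_predict(features)

# Days labelled -1 are the "hidden trends" a human would otherwise
# have to dig out of the raw logs by hand.
print(events.loc[events["anomaly"] == -1, ["date"] + list(features.columns)])
```

The point is simply that a scan like this over months of logs finishes in seconds, whereas a human doing the same thing by eye would need days.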

Many businesses are hoping that this will also help alleviate some of the worker shortage by automating the tasks just described.

Although AI is a great tool to have in your Cybersecurity arsenal, the fallacy in thinking is that it will be the next saving grace that thwarts the Cyberattacker 100% of the time.  But as we know, this is not the case.  Everybody and every business is at risk of falling victim; the trick is in being proactive and quickly remediating any sort of damage that has been done.

Cybersecurity industry professionals are starting to realize this, and it has been further substantiated by a study that was conducted by White Hat Security at the recent RSA Conference.  Their report is entitled the “AI and Human Element Security Sentiment Study”, and it can be downloaded at this link:

https://info.whitehatsec.com/Content-2020-AIHumanSecurityReport_LPNew.html?utm_source=website&utm_medium=0320-Website-AIHumanSecurityReport

There were 102 respondents in this research project, and here are some of their key findings:

*40% of them use AI to some degree or another for their overall Cybersecurity posture;

*45% of them claim that using AI has helped to some degree in filling the ranks of their respective IT Security teams;

*70% of them are using AI to help automate repetitive tasks;

*Because of the above, 40% of the respondents have felt an overall reduction in the stress level of their current job positions, and 65% of them also reported that they can now focus a lot better on combatting the real incoming threats;

*But despite the above, 30% of the respondents also felt that some sort of human involvement is still needed when it comes to using AI-specific tools.

My Thoughts On This

Overall, I think these findings are quite positive when it comes to using AI.  In fact, this is probably the first time I have really heard anything positive about it, and had it further quantified by a scientific study.  What I think is most heartening is that it has actually decreased the stress level of Cybersecurity workers, something which has never been heard of before (at least not yet).  

But keep in mind that, at least for now and for the visible future, AI will be primarily used for those areas previously described:  task automation and data mining.  It is still way too early to predict with any kind of certainty where it can be used effectively for other kinds of Cybersecurity applications.  Also remember that AI, while very sophisticated, is still a technological tool.

So, what does this mean?  It simply means that there is, and always will be, a need for human involvement to varying degrees.  One thing that I did not mention earlier is that in order for an AI tool to be effective, it must first have data (and lots of it) fed into it from various, trusted sources.

Once it has processed and digested all of this data, only then can an AI tool really become effective.  But, to keep it fully optimized, it needs to have datasets constantly fed into it, on a 24 X 7 X 365 basis.

This is where the human need comes into play.  Obviously, an AI tool simply cannot decide on its own which datasets to use at first; it takes a skilled professional to do that, depending upon the security requirements of the business. 

And it also takes this human being to make sure that the right kinds and right amounts of data are being pumped into it in order for the AI tool to be as robust as possible at any given moment in time.
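If it helps to picture it, here is a minimal sketch of that “constant feeding” idea: the human analyst curates the list of trusted sources, and a scheduled job keeps pushing fresh data into the tool.  The feed URLs and the ingest() function are hypothetical placeholders, not any real product’s API:

```python
# A minimal sketch of continuously feeding curated data into an AI tool.
# The feed URLs and the ingest() function are hypothetical placeholders.
import time
import requests

# Chosen by a skilled professional based on the business's security needs.
TRUSTED_FEEDS = [
    "https://example.com/threat-intel/daily.json",
    "https://example.com/siem/export/events.json",
]

def ingest(records):
    """Placeholder for handing records off to the AI tool's data store."""
    print(f"Ingested {len(records)} records")

def run_feed_cycle():
    # Pull from each analyst-approved source and push it into the tool.
    for url in TRUSTED_FEEDS:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        ingest(resp.json())

if __name__ == "__main__":
    # In production this would be a proper scheduler (cron, Airflow, etc.);
    # here it is just a loop that runs once an hour, around the clock.
    while True:
        run_feed_cycle()
        time.sleep(3600)
```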

Also, random Quality Control (QC) checks need to be done from time to time in order to make sure that the AI tool is delivering its promised set of goods to the IT Security team that is using it.  Although this kind of reporting can be done from the AI tool itself, it takes a human being to provide an unbiased view of it all. 
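A random QC check like that does not have to be anything fancy, either.  Here is a minimal sketch, assuming you can export the AI tool’s recent verdicts alongside the analyst’s own calls on the same alerts (the file, column names, and the 90% threshold are made up for illustration):

```python
# A minimal sketch of a random QC spot check on an AI tool's verdicts.
# The CSV, its columns, and the threshold are hypothetical examples.
import pandas as pd

# Expected columns: alert_id, ai_verdict, analyst_verdict
verdicts = pd.read_csv("ai_verdicts_vs_analyst.csv")
sample = verdicts.sample(n=min(50, len(verdicts)), random_state=7)

# How often does the tool agree with the human analyst on the same alerts?
agreement = (sample["ai_verdict"] == sample["analyst_verdict"]).mean()
print(f"AI/analyst agreement on spot check: {agreement:.0%}")

# If agreement drops below an agreed threshold, that is the signal that the
# tool is no longer delivering on its promises and needs a human review.
if agreement < 0.90:
    print("QC check failed - escalate to the IT Security team for review.")
```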

So, as I have always said, businesses should never rely solely upon technology in order to shore up their lines of defense, whether it is AI related or not.  The best security model makes use of both the human element and the technological advantages in order to mitigate the risks that are posed by the Cyberattacker.  By having both, there is also a sense of redundancy built in.

For example, if an AI tool does fail, the human component that was involved with it can pick up some of the slack until it is repaired or replaced.  By having both components involved, there is also a cross check that helps ensure the lines of defense at a business are as strong as they can be.

Remember, in Cybersecurity, it takes two to tango.  Well, really three if you include the Cyberattacker.  Without them, the industry would barely even exist.