There is no doubt that the Cybersecurity world today is filled with a lot of techno jargon.  In other words, there are a lot of fancy words thrown out there which, for the average person, can be difficult to understand.  For example, what is MTTD, or EDR, or IRP, or even DR?  Yes, they sound like just buzzwords, but they actually do mean something, and they have a significant role to play in beefing up the lines of defense for a business or corporation.

One of the latest buzzwords being bandied about now is the craze around Artificial Intelligence (AI) and Machine Learning (ML).  To be honest, these terms were hardly heard of last year, but in 2019 they have made such a splash that one gets the impression that these tools are already widely implemented.  But this is far from the reality. 

AI and ML are complex technologies to implement, especially when it comes to integrating them into legacy Security systems.  It also takes a specially trained individual with a background in these fields to fully understand their usage in attempting to model the Cyberthreat landscape.

AI and ML are also terms that are often used together, but they have not only different meanings but different applications as well.  So, let us try to differentiate the two. 

First, ML can be defined specifically as follows:

“Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.” 


Probably one of the best examples of this is futures and commodities trading.  Traders are always trying to predict what future market prices might be, but as we all know, given the limitations of our own heuristic abilities, a human being cannot do this in a short amount of time.  It would take days, even weeks, to sift through all of that market data just to predict what the next few days could look like. 

But with the help of Machine Learning, a system could digest all of this data in just a matter of a few minutes and make projections as to what the future could really hold in terms of market prices.  Of course, there is no guarantee that these predictions will be completely accurate, but there will be some level of accuracy. 

But the key with Machine Learning is that you have to keep feeding it lots of data from a lot of different sources, 24 X 7 X 365, so that it can learn from them, determine the trends, and from there make the appropriate predictions.  On a crude level, ML can be viewed more as “Garbage In, Garbage Out”: its predictions are only as good as the data it is fed.
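To make the trading example concrete, here is a minimal sketch of "learning from data": fit a straight-line trend to a short history of closing prices with ordinary least squares, then project the next day's price.  The price figures and function names are invented purely for illustration; a real system would use far richer models and far more data.

```python
# Toy illustration of learning a trend from data (hypothetical prices).

def fit_trend(prices):
    """Return (slope, intercept) of the least-squares line through
    the points (0, prices[0]), (1, prices[1]), ..."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict_next(prices):
    """Project the next point on the fitted trend line."""
    slope, intercept = fit_trend(prices)
    return slope * len(prices) + intercept

closes = [101.2, 102.8, 102.1, 104.0, 105.3]  # invented closing prices
print(round(predict_next(closes), 2))         # prints 105.9
```

The "learning" here is trivial, but the pattern is the same one the article describes: the model is only as good as the stream of data you keep feeding it.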

Artificial Intelligence, on the other hand, is more sophisticated than ML.  A specific definition of it is as follows:

“It is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.”


In other words, at a very simplistic level, it tries to mimic the decision-making process of an average human being.  Not only does it incorporate the “Garbage In, Garbage Out” functionality of ML, but it also tries to reason and draw conclusions as to which decision will produce the best possible outcome.

There are two forms of AI: weak and strong.  Examples of the former include the Virtual Personal Assistants (VPAs) that we use on our Smartphones, such as Siri (for iOS) and Cortana (from Microsoft). The latter tries to make decisions without the need for human intervention. 

Let’s go back to our example of the futures trader.  Now that they know what the next few days could possibly look like with the help of Machine Learning, the next question that comes to mind is: what kinds of trades should be made? Well, this is where the role of Artificial Intelligence comes into play.
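To make that hand-off from prediction to decision concrete, here is a deliberately crude sketch of the "reasoning" step, assuming a prediction is already in hand.  The function name, threshold, and prices are all invented for illustration; real decision-making systems weigh far more factors than one number.

```python
# Toy "reasoning" layer: turn a predicted price move into an action.

def decide(current, predicted, threshold=0.5):
    """Recommend an action based on the size of the expected move.
    A move smaller than the threshold is treated as noise."""
    move = predicted - current
    if move > threshold:
        return "buy"
    if move < -threshold:
        return "sell"
    return "hold"

print(decide(105.3, 105.9))   # prints buy
print(decide(100.0, 100.2))   # prints hold
```

The point of the sketch is the division of labor the article describes: ML supplies the projection, and the AI layer applies rules to it to reach a conclusion.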

Given just how powerful ML and AI can be, it is no wonder that the Cybersecurity industry is keen on using these tools as quickly as possible.  After all, everybody wants to know what the future will hold in terms of new threat variants and, given the sheer amount of “noise” that exists, what the best possible decisions are.

This level of optimism is further underscored by a recent study conducted by EY, in which the C-Suite was polled about their plans for using AI and ML for Cybersecurity purposes.  Here is what they found:

*87% of CISOs and CIOs are investing in AI initiatives this year;

*47% of them view China as the biggest obstacle to further advancements of AI and ML;

*50% of the respondents selected the United States as the country with the best long-term AI strategy;

*80% feel that the US government is most open to working with the Cybersecurity industry to adopt AI technology;

*87% of CIOs and CISOs completely or somewhat trust the technology;

*82% of the respondents expect that their businesses will be impacted by AI to some extent within the next three years.

My thoughts on this?

Well, it is certainly good news, at least based on this survey, that there is some level of buy-in from the CISOs and CIOs in Corporate America about the use of ML and AI.  However, as I have mentioned, there is still a long way to go, and although these technologies have started to make their mark, there is still much more room for further improvements and refinements.

At the present time, you will see AI and ML being used to help automate the repetitive tasks found in Penetration Testing and Threat Hunting.  Usually, long hours are required to conduct these tasks, and by incorporating ML and AI into them, the Threat Hunters and the Pen Testers will be able to devote more of their precious time to the more important tasks at hand.

In these instances, ML can be used to fully automate the mundane tasks, and AI can be used to help these teams predict what future Security holes and gaps could open up in the organization’s lines of defense.  But given all of the advantages that these tools possess, there is also a flip side to them.
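As one hedged illustration of automating a repetitive Threat Hunting chore, a script could score activity statistically and surface only the outliers for a human analyst to review.  The login counts, threshold, and function name below are invented for the example; production tools use far more sophisticated anomaly detection.

```python
# Toy anomaly flagging: surface hours with unusual login volume.
from statistics import mean, stdev

def flag_anomalies(counts, z_threshold=2.0):
    """Return the indices whose value deviates from the mean by more
    than z_threshold sample standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > z_threshold]

logins_per_hour = [12, 15, 11, 14, 13, 97, 12, 14]  # invented data
print(flag_anomalies(logins_per_hour))              # prints [5]
```

Even a trivial filter like this captures the value proposition from the paragraph above: the machine sifts the noise so the analyst only looks at the spike at hour 5.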

Many CISOs and CIOs are fearful that these tools can be used for nefarious purposes by the Cyberattacker, or worse yet, that they can be used to launch a large-scale insider attack.  These are certainly plausible concerns, but to my knowledge, nothing in this regard has happened yet.  But knock on wood, as the saying goes.

Personally, I think that 2019 will see the evolution and adoption of AI and ML into Cybersecurity, and the coming decade will witness its full usage.  But how well will they work?  Will they live up to the hype?  Only time will tell.

Finally, the link to the EY study can be seen here: