Before the COVID-19 pandemic hit, the Cybersecurity Threat Landscape was dominated by familiar threat variants such as Phishing, Malware, Trojan Horses, Spyware, Social Engineering, Ransomware, etc.
But given the last four years of the last Presidential Administration, and especially what we endured after this last Presidential Election was over, there is a new threat variant lurking out there. It is simply known as “Misinformation” or “Disinformation”.
Although this may not be a Cyber Threat directly per se, the vehicles that are used to deliver these falsehoods can make it a huge potential threat. Now, Disinformation is nothing new. It has been with us for a long time, especially in political circles.
In the past, before all of our technology was interconnected, whenever false news filtered through, we could always just ignore it, turn off the TV/Radio, etc.
But given the likes of Social Media, especially Twitter and Facebook, we are saturated by this on an almost daily basis. In fact, it has gotten so bad IMHO that it can be difficult to tell what is real and what is not. And when it comes to spreading false information via technology, Bots and Bot Networks have come heavily into play as a result.
I have written about Bots before (I think), but loosely put, they can be defined as follows:
“Bots, or Internet robots, are also known as spiders, crawlers, and web bots. While they may be utilized to perform repetitive jobs, such as indexing a search engine, they often come in the form of malware. Malware bots are used to gain total control over a computer.”
Bots can be used for both good and bad. For example, with the former, Google makes use of Bots to constantly crawl websites and their web pages in order to rank them accordingly, so that you get the best results when you type in a query.
But, with the latter, Bots can also be used to form a network between devices in order to deploy Malware to other devices in just a matter of minutes. In this regard, this is how false information can spread like wildfire.
So, how does one go about determining if what they are seeing on the Internet is a Bot Network that could potentially be used to spread Dis/Misinformation? Here are some telltale signs:
*There are a lot of connections to people and/or organizations:
Probably one of the best examples of this is Twitter. In order for an account to appear to have a sense of authority, it will typically have a lot of followers. Of course, how many followers it takes to make an account look this way is an extremely subjective call, as some people will tell you that all you need are just a few hundred or even a few thousand followers. Heck, there are some Twitter accounts that I have seen that have millions of followers. But if the account is real and legitimate, there will be some logic to who they are connected to. For example, a financial journalist will probably have the bulk, or majority, of their followers, as well as those whom they are following, be individuals and/or organizations that are also financially related to varying degrees. But of course, anybody can follow and be followed by whomever they want, so this is the first thing to look out for. For instance, if you find yourself on a political website trying to see what the latest happenings are, just take a quick peruse into all of this. If something seems out of the ordinary, then it could be a Bot Account that you are looking at.

Also, it is very important to keep in mind that once a Twitter account has been created, it takes a lot of time to build up a good listing of followers. But if the account is a Bot-based one, then very likely it will connect to other Twitter accounts that are also Bot-based, in order to grow its followers in a very short period of time. Also keep in mind that a Twitter account which is actually a Bot will engage in very little conversation with its followers. For instance, rather than being the first to respond to a Tweet, it may be the third or fourth down the rung in doing so.
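The two red flags just described (a huge follower count amassed far too quickly, paired with almost no real conversation) can be turned into a rough heuristic. Here is a minimal sketch in Python; the account records, field names, and thresholds are entirely hypothetical and for illustration only, not drawn from any real platform API or research-backed values:

```python
from datetime import date

# Hypothetical account records; the field names are illustrative only.
accounts = [
    {"handle": "fin_journalist", "created": date(2015, 3, 1),
     "followers": 12_000, "replies_sent": 4_300},
    {"handle": "hot_takes_99", "created": date(2024, 1, 10),
     "followers": 250_000, "replies_sent": 12},
]

def follower_velocity(acct, today=date(2024, 6, 1)):
    """Followers gained per day since the account was created."""
    age_days = max((today - acct["created"]).days, 1)
    return acct["followers"] / age_days

def looks_like_bot(acct):
    # Thresholds are arbitrary examples: an account that gained followers
    # at an implausible rate AND almost never replies to anyone matches
    # the pattern described above.
    return follower_velocity(acct) > 500 and acct["replies_sent"] < 50

for acct in accounts:
    print(acct["handle"], looks_like_bot(acct))
# → fin_journalist False
# → hot_takes_99 True
```

In a real analysis, you would also examine *who* the followers are (the "logic" of the connections mentioned above), which requires graph data this sketch does not model.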
*The content is not original enough:
Whenever an individual engages with others, or simply posts content, most of the time it is a combination of original thought, some retweets, liking other tweets, and even replying to tweets by their own followers. But with a Bot account, this is not the case. They exhibit the following traits:
*No original content is posted;
*Most of the content is just reposts of other tweets;
*There is no further engagement in dialogue with their followers.
Thus, Bots make use of specialized algorithms with which to post content. But the problem is that they are not yet sophisticated enough to actually vary that content. So, in this regard, it is quite likely that a Twitter account which is actually a Bot will have the same kinds of content repeated over and over again, albeit to varying degrees. Take a closer look at this. If you notice this kind of trend, then you could also be looking at a Bot network. This is a result of the Bot account making use of the same posting algorithms over and over again.
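The "same content, slightly varied" trait lends itself to simple text-similarity checks. The sketch below uses Python's standard-library `difflib.SequenceMatcher` to score how many pairs of posts in a feed are near-duplicates; the sample posts and the 0.8 threshold are made-up illustrations, not a production-grade detector:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical feed from a single account (illustrative text only).
posts = [
    "The election was rigged, share before they delete this!",
    "The election was rigged!! Share before they delete this",
    "Wow, the election was rigged, share before they delete this...",
    "Beautiful sunrise over the lake this morning.",
]

def similarity(a, b):
    """Ratio in [0, 1]; 1.0 means the two posts are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def near_duplicate_share(posts, threshold=0.8):
    """Fraction of post pairs that are near-duplicates of each other."""
    pairs = list(combinations(posts, 2))
    dupes = sum(1 for a, b in pairs if similarity(a, b) >= threshold)
    return dupes / len(pairs)

share = near_duplicate_share(posts)
print(f"{share:.0%} of post pairs are near-duplicates")  # high share = bot-like
```

A human feed mixing original thoughts, retweets, and replies would score low here; a feed churning out the same template with minor wording changes scores high.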
*The posting of content is predictable:
With a legitimate, human-based Twitter account, the times at which posts are put onto the feed will of course vary. For example, let’s go back once again to the example of the financial journalist. If he or she comes across a major news story, then they will post it. Or if there is something earth-shattering that could rattle the markets in the morning, they will of course post that as well. The bottom line here is that the timing of the content posting will vary greatly on a daily basis. But with a Bot-based account, there will hardly be any variability in the timings of the postings. It will post at the same time on an almost daily basis (perhaps give or take a day to show some degree of variability). Or, at the other extreme, these accounts could be posting constantly, 24 X 7 X 365. In other words, look at the posting schedule: variability is a good thing, uniformity is not. In this regard, look at the type of content that is being posted as well. For example, if almost 100% of the content is political in nature, stating that an election was rigged, then you can tell with almost certainty that it is geared toward brainwashing people into believing that it really was rigged, when it wasn’t, as we saw after this last Presidential Election.
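The schedule test above ("variability is a good thing, uniformity is not") can be quantified with nothing more than a standard deviation over posting hours. This is a minimal sketch with invented sample data; a fuller analysis would treat hour-of-day as a circular quantity rather than a plain number, which this simple version does not:

```python
from statistics import pstdev

# Hypothetical posting hours (0-23), one value per post over two weeks.
bot_hours = [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
human_hours = [7, 12, 23, 9, 18, 6, 14, 21, 8, 11, 16, 22, 10, 19]

def posting_variability(hours):
    """Population std-dev of posting hour; near zero = clockwork schedule."""
    return pstdev(hours)

print(f"bot:   {posting_variability(bot_hours):.2f}")    # 0.00, uniform and suspicious
print(f"human: {posting_variability(human_hours):.2f}")  # clearly varied, as expected
```

A score pinned at zero flags the "same time every day" pattern, while round-the-clock 24 X 7 X 365 posting would instead show up as an implausibly high post count per day.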
My Thoughts On This:
I have to put out one caveat here, and that is that there is a key difference between Disinformation and Misinformation. The former is information/data that is deliberately made not to be true, while the latter is information/data that has not been made intentionally false; in other words, there was no malicious intent originally involved. Whether we like it or not, this is a new area that Cybersecurity has to deal with, such as phony and fake websites that are designed to look like the real thing.
This is something that will not go away; Social Media is reportedly used six times more than any other vehicle for spreading Dis/Misinformation. IMHO, this will only get worse, especially as AI and ML take further root in our society. One of the best examples of this is that of Deepfakes.
This is where AI algorithms are used to create videos that look and sound like a real person, such as a politician asking for campaign donations in a commercial.
My best advice to you is that if all of this saturation becomes too much for you, then simply disconnect. I hardly ever look at the news online anymore, and if and when I do, I usually go to a news site that I have come to trust. But as it relates to Cyber, keep the above tips in mind, as Bots can also spread Malware as well.