
In some of my blog posts, and when I have interviewed guests on my podcast shows, the question of Artificial Intelligence (AI) always comes up.  When I have written about it, I have usually discussed how it is very often used to help the IT Security staff of a business or a corporation model the Cyber threat landscape much more quickly, and in much more detail, than a human being possibly could.

But when this topic arises on the podcast, my guests often take a dim and dark view of using Artificial Intelligence.  This is probably due to the fact that they have used AI tools far more than I ever have, and can see directly the converse of the benefits that it is supposed to bring.

Essentially, the concept of AI is to mimic the thought process of the human brain, and to use that ability to see and predict things that normal mathematical modeling tools cannot.  In a sense, AI can be used in just about every industry, but it has seen a great level of interest in the world of Cyber security, simply because the nature of threats changes almost every minute.

Well, as I was perusing the news headlines today for something to write about, I came across the use of AI again.  But rather than being used for good purposes, this piece brought out the sinister side of it, thus underscoring what my guests have said.  Apparently, researchers from New York University have been able to use AI to create a set of fingerprints that can perhaps even be used to spoof an entire Fingerprint Recognition System.

These researchers have developed what are known as “DeepMasterPrints”: realistic synthetic fingerprints that have the same quality of ridges that are visible when rolling an ink-covered fingertip on paper.  The exact details of this research can be seen at the link below:

https://arxiv.org/pdf/1705.07386.pdf

Just like in the Enrollment process of any Biometric System, the goal of using AI here is to create multiple images of a fingerprint, and use them in an attempt to spoof the Fingerprint Reader.  In other words, after these multiple images are created, they are then compiled into one master image, which can be used to try to gain access in just one attempt.
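To make this idea concrete, here is a minimal, purely illustrative Python sketch of a master-print attack, and not the researchers' actual code: the toy cosine-similarity matcher, the random feature vectors, and every function name below are assumptions of my own.  The key point is that a single synthetic template succeeds if it matches *any* enrolled user.

```python
import numpy as np

# Toy stand-in for a real fingerprint matcher: compares two fixed-length
# feature vectors and returns a cosine-similarity score.
def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def master_print_attack(master: np.ndarray,
                        enrolled: list[np.ndarray],
                        threshold: float) -> bool:
    # The "master" template wins if it matches ANY enrolled user above
    # the system's decision threshold -- one image, one attempt.
    return any(similarity(master, t) >= threshold for t in enrolled)

# Illustrative usage with random vectors standing in for real templates.
rng = np.random.default_rng(0)
enrolled_users = [rng.normal(size=128) for _ in range(1000)]
candidate_master = rng.normal(size=128)
print(master_print_attack(candidate_master, enrolled_users, threshold=0.35))
```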

Apparently, this is actually a step up for the researchers from the previous year, when they developed a series of fingerprint images called “MasterPrints”.  In that attempt, the researchers had to use an existing set of fingerprint images in order to create a spoofed image, and it was not successful.  But the “DeepMasterPrints” images are created entirely from scratch, using AI tools.
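The paper describes this generation step as “Latent Variable Evolution”: a trained generator network turns latent vectors into fingerprint images, and an evolutionary optimizer searches that latent space for images that match as many enrolled subjects as possible.  Below is a rough Python sketch of that loop under heavy assumptions: the generator, the matcher, and the simple hill-climbing search are all stand-ins of my own (the paper itself uses a trained GAN and the CMA-ES optimizer).

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z: np.ndarray) -> np.ndarray:
    # Stand-in for a trained GAN generator mapping a latent vector to an
    # image; here it is just a squashing function for illustration.
    return np.tanh(z)

def match_count(image: np.ndarray, enrolled: list[np.ndarray],
                threshold: float = 5.0) -> int:
    # Toy matcher: count how many enrolled templates the image "matches".
    return sum(float(np.dot(image, t)) >= threshold for t in enrolled)

def evolve_master_print(enrolled: list[np.ndarray],
                        latent_dim: int = 64,
                        population: int = 32,
                        steps: int = 200):
    # Simple hill climber standing in for the paper's evolutionary search:
    # keep the latent vector whose generated image matches the most users.
    best_z = rng.normal(size=latent_dim)
    best_score = match_count(generator(best_z), enrolled)
    for _ in range(steps):
        for z in best_z + 0.1 * rng.normal(size=(population, latent_dim)):
            score = match_count(generator(z), enrolled)
            if score > best_score:
                best_z, best_score = z, score
    return generator(best_z), best_score

enrolled_users = [rng.normal(size=64) for _ in range(100)]
image, matched = evolve_master_print(enrolled_users)
print(f"evolved image matches {matched} of {len(enrolled_users)} users")
```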

After these latter fingerprint images were created, they were then compared against the National Institute of Standards and Technology (NIST) ink-captured fingerprint dataset, as well as another dataset captured from the sensors of various Fingerprint Recognition Systems.  The results of this were astonishing (illustrated in the sketch after these bullets):

*At a 0.1% False Match Rate (FMR), the “DeepMasterPrints” were able to spoof up to 23% of the fingerprints in the datasets;

*At a 1% FMR, up to 77% of the fingerprints could theoretically be spoofed.
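To illustrate how these operating points work, here is a small Python sketch; all of the score distributions below are invented for illustration and are not the paper's data.  The decision threshold is set so that only 0.1% of impostor comparisons would pass, and the spoof rate is then the fraction of subjects a single master print still matches at that threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up impostor scores: similarity scores from comparing prints of
# DIFFERENT people. The threshold is set so only 0.1% of impostor
# comparisons would pass, i.e. a 0.1% False Match Rate (FMR).
impostor_scores = rng.normal(loc=0.2, scale=0.1, size=100_000)
threshold_01 = np.quantile(impostor_scores, 1 - 0.001)

# Made-up scores of one master-print-style image against every enrolled
# subject; the spoof rate is the fraction of subjects it matches.
master_scores = rng.normal(loc=0.35, scale=0.15, size=5_000)
spoof_rate = np.mean(master_scores >= threshold_01)
print(f"threshold @ 0.1% FMR: {threshold_01:.3f}, spoof rate: {spoof_rate:.1%}")
```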

The images of this can be seen at the source below:

(SOURCE:  https://www.zdnet.com/article/these-ai-generated-fake-fingerprints-can-fool-smartphone-security/)

My thoughts on this?

To me, I am somewhat astonished to see the degree of closeness between the fingerprint images created by the researchers and the other datasets just mentioned.

Although I have not yet read the research paper directly (and I plan to do so), this work shows that, given the advances in technology, even Biometric Systems could potentially be spoofed one day.  There have been studies in the past that attempted the same kind of work, but none of them really succeeded in actually spoofing a Biometric System.  This research is the closest that I know of that could have this ability.

But remember, the goal of these researchers is to spoof an entire Fingerprint Recognition System with a single synthetic image that matches up to 23% of enrolled fingerprints.  While this could be theoretically possible, Biometric Systems have also advanced, in that they require a complete image of the fingerprint to be captured before granting the end user access to whatever they need (whether it is to gain access to a building, to log in to their computer, etc.).

Also, the Biometric Systems of today still have one fail-safe: they require images to be collected from an end user who is actually alive, with a beating heart, and the sensors can pick up on this.  Given how quickly the advancements in AI could take place, even this could potentially be bypassed in the future, but there is still a long way to go before that actually happens.
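As a simple illustration of that fail-safe, here is a hedged Python sketch of a verification pipeline that gates matching on a liveness signal.  The Sample structure, the pulse flag, and the cosine matcher are all hypothetical stand-ins for what a real sensor and matcher would provide, not any vendor's actual API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Sample:
    features: np.ndarray   # extracted fingerprint features
    pulse_detected: bool   # stand-in for real liveness signals

def verify(sample: Sample, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    # Fail-safe first: reject outright unless the sensor saw signs of a
    # live finger, no matter how good the print itself is.
    if not sample.pulse_detected:
        return False
    score = float(np.dot(sample.features, enrolled) /
                  (np.linalg.norm(sample.features) * np.linalg.norm(enrolled)))
    return score >= threshold

rng = np.random.default_rng(4)
template = rng.normal(size=128)
live = Sample(features=template + 0.05 * rng.normal(size=128), pulse_detected=True)
spoof = Sample(features=template, pulse_detected=False)  # perfect copy, no pulse
print(verify(live, template), verify(spoof, template))   # True False
```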

But this underscores another important theme I have written about in the past: Biometric Technology should not be used as the only means of defense.  Rather, it should be part of a layered approach when fortifying the lines of defense for a business or a corporation.  It would be interesting to see this same kind of study done on the Iris of the eye, where much more detail would have to be replicated.

But for the time being, I don’t think that Biometric Systems can be easily spoofed, and they won’t be for a long time to come.  But I am not saying that this potential does not exist, either.  Finally, for those out there who are not familiar with the lingo of Biometric Technology, the FMR is simply a statistical measure of the probability that a Fingerprint Recognition System will accept an impostor as a legitimate individual, while the FRR (False Rejection Rate) is the probability that it will deny access to a legitimate individual.
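For readers who want those definitions in concrete terms, here is one last Python sketch, again with invented score distributions, showing how both error rates fall out of the same decision threshold:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented score distributions for illustration: genuine = same-person
# comparisons, impostor = different-person comparisons.
genuine_scores = rng.normal(loc=0.7, scale=0.1, size=10_000)
impostor_scores = rng.normal(loc=0.2, scale=0.1, size=10_000)

def error_rates(threshold: float) -> tuple[float, float]:
    frr = float(np.mean(genuine_scores < threshold))   # legit users rejected
    fmr = float(np.mean(impostor_scores >= threshold)) # impostors accepted
    return frr, fmr

# Raising the threshold lowers the FMR but raises the FRR, and vice versa.
for t in (0.4, 0.5, 0.6):
    frr, fmr = error_rates(t)
    print(f"threshold={t:.1f}  FRR={frr:.2%}  FMR={fmr:.2%}")
```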