Abstract - In recent years, machine learning algorithms have enabled countless innovations, such as recommendation systems, intelligent voice assistants, and self-driving cars. A large part of these innovations was made possible by deep learning, which has been experiencing a renaissance since the introduction of the ImageNet dataset in 2009 [3] and the subsequent "ImageNet Classification with Deep Convolutional Neural Networks" paper [4]. Deep learning methods are extremely useful in biometric authentication, such as fingerprint [5] or face recognition [6].
The wide use of machine learning, particularly in applications where physical security (e.g. self-driving cars) or safety (biometric authentication) is at stake, makes it very important to understand how ML algorithms can be attacked, in order to make these algorithms more robust against attacks.
In this paper we summarize the ideas presented in "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition" [2]. We discuss the theory behind deep learning methods, the details of face recognition systems, and methods of attack on such systems.
The attacks described in the original paper [2] target biometric systems, especially face recognition. The attack methods have to satisfy several requirements: they must be physically realizable and inconspicuous. Attack generation should also be universal, i.e. it should work regardless of the attacker's gender, age, or skin color.
The authors propose generating eyeglass frames which, when worn by an attacker, allow him or her to avoid being recognized as himself or herself, or to impersonate another person from the biometric system's database.
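At its core, the impersonation variant of this attack can be viewed as an optimization over a perturbation $r$ that is confined to the eyeglass-frame region of the input image $x$. The following is a sketch; the notation ($f$, $r$, $c_t$, the frame mask) is ours and simplifies the formulation in [2]:

\[
\operatorname*{arg\,min}_{r} \; \mathrm{softmaxloss}\bigl(f(x + r),\, c_t\bigr)
\quad \text{s.t. } r = 0 \text{ outside the frame mask},
\]

where $f$ denotes the face-recognition network and $c_t$ the target identity to impersonate. The dodging variant instead *maximizes* the loss for the attacker's own class, so that the system no longer assigns the attacker's true identity.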