Privacy-preserving indexing of large-scale biometric databases
Large-scale biometric deployments are quickly becoming ubiquitous, even though the centralised mass storage of biometric data raises legal issues and privacy concerns. Moreover, the computational workload of conventional exhaustive retrieval quickly becomes impractical for large systems, which necessitates research into algorithms for efficient biometric identification, i.e. biometric indexing. The feasibility of employing biometric indexing techniques in conjunction with privacy-preserving template protection schemes has yet to be explored.
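As a toy illustration of why indexing matters (hypothetical data: random 64-bit integers stand in for protected templates, and the bucketing strategy is an assumption for illustration, not a method proposed by this project), the sketch below compares exhaustive identification with a simple prefix-bucket index:

```python
import random

random.seed(0)

# Toy gallery of 10 000 random 64-bit "templates" standing in for
# enrolled (protected) biometric templates.
N = 10_000
gallery = [random.getrandbits(64) for _ in range(N)]

def hamming(a, b):
    # Hamming distance between two 64-bit templates.
    return bin(a ^ b).count("1")

def exhaustive_search(probe):
    # Conventional identification: compare against every enrolled template.
    comparisons = 0
    best, best_dist = None, 65
    for t in gallery:
        comparisons += 1
        d = hamming(probe, t)
        if d < best_dist:
            best, best_dist = t, d
    return best, comparisons

# Simple index: bucket templates by their 8-bit prefix, so a lookup
# scans only one bucket (about N/256 templates on average).
index = {}
for t in gallery:
    index.setdefault(t >> 56, []).append(t)

def indexed_search(probe):
    comparisons = 0
    best, best_dist = None, 65
    for t in index.get(probe >> 56, []):
        comparisons += 1
        d = hamming(probe, t)
        if d < best_dist:
            best, best_dist = t, d
    return best, comparisons

probe = gallery[1234]
best_full, full_cost = exhaustive_search(probe)
best_idx, idx_cost = indexed_search(probe)
print(full_cost, idx_cost)  # 10000 vs. a few dozen comparisons
```

The sketch uses an exact probe for simplicity; a real biometric index must additionally tolerate noisy probes whose bits may cross bucket boundaries, which is precisely where the research challenge lies.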
Detection of manipulated biometric images using machine learning methods and image forensics
Recent research has demonstrated the vulnerability of face recognition systems to attacks based on manipulated biometric images. If manipulated biometric images, e.g. morphed images or deep-fakes, infiltrate a biometric recognition system, accurate and reliable recognition cannot be guaranteed. The aim of this project is to develop reliable methods to detect manipulated face images. Deep learning methods will be employed to train appropriate classifiers, and the potential of forensic image analysis to reliably detect anomalies in facial images will be investigated.
Cryptographic primitives for GDPR compliance
In order to comply with the General Data Protection Regulation (GDPR), biometric operations should be performed over encrypted data. Homomorphic encryption and secure multi-party computation are considered the main cryptographic tools for this purpose. Unfortunately, integrating these tools may impact system operation. This ESR project will design privacy-preserving biometric identification and authentication primitives by leveraging innovative cryptographic techniques.
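As a minimal sketch of the additively homomorphic property such primitives rely on, the toy Paillier implementation below (tiny parameters, illustration only, entirely insecure, and not this project's actual design) shows two ciphertexts combined so that decryption yields the sum of the plaintexts, the basic building block for computing similarity scores over encrypted templates:

```python
import math
import random

# Toy Paillier keypair (tiny primes: illustration only, NOT secure).
p, q = 347, 349
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

# Modular inverse via 3-arg pow (Python 3.8+).
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server can accumulate a score without seeing the template.
a, b = 42, 17
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))  # 59
```

Real deployments would use a vetted library and 2048-bit (or larger) moduli; the point here is only that E(a)·E(b) mod n² decrypts to a+b.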
Evolutive and end-to-end learning for ASV anti-spoofing
While progress in anti-spoofing for speaker recognition has been rapid in recent years, most spoofing countermeasures still fail to generalise well to unseen forms of spoofing attack. This project will research the application of deep learning technologies to the anti-spoofing problem. Specifically, the research will investigate the use of attention mechanisms to help identify salient information, evolutive and end-to-end approaches, and combined solutions to the two related problems of spoofing detection and automatic speaker verification.
Biometric template protection of deep templates
Recent advances in DNN-based face and speaker biometric systems have shown a preference for training the DNNs on raw face and speaker data as opposed to using pre-extracted features. This introduces a new challenge to the development of suitable biometric template protection techniques to preserve the privacy of our biometric data.
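One classical point of reference for template protection is the fuzzy commitment scheme of Juels and Wattenberg, sketched below with toy parameters (a 5x repetition code over random binary templates; an illustration of the general idea, not the project's method for deep templates). A random key is bound to the template so that a slightly noisy probe still releases it, while the stored data reveal neither key nor template:

```python
import hashlib
import secrets

# Fuzzy commitment: bind a random key to a binary template so that a
# sufficiently similar probe releases the key again.
REP = 5  # 5x repetition code corrects up to 2 bit errors per key bit

def encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    # Majority vote within each repetition block.
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def enroll(template_bits):
    key = [secrets.randbelow(2) for _ in range(len(template_bits) // REP)]
    helper = [c ^ t for c, t in zip(encode(key), template_bits)]
    pseudonym = hashlib.sha256(bytes(key)).hexdigest()
    return helper, pseudonym  # stored data; neither reveals the template

def verify(helper, pseudonym, probe_bits):
    key = decode([h ^ p for h, p in zip(helper, probe_bits)])
    return hashlib.sha256(bytes(key)).hexdigest() == pseudonym

template = [secrets.randbelow(2) for _ in range(40)]  # 8 key bits
helper, pseudonym = enroll(template)

probe = list(template)
probe[3] ^= 1   # a slightly noisy fresh capture:
probe[17] ^= 1  # one bit error in each of two repetition blocks
print(verify(helper, pseudonym, probe))  # True: errors are corrected
```

Adapting such error-tolerant binding to real-valued, high-dimensional deep embeddings is exactly the kind of open problem this topic targets.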
Adversarial machine learning for combating the vulnerabilities of DNN-based biometric systems
Recent advances in DNN-based face and speaker biometric recognition systems indicate superior performance to traditional biometric recognition methods. We envision, therefore, that future state-of-the-art face and speaker biometric systems will be largely DNN-based. Likewise, we anticipate new challenges in dealing with the security aspects of such systems, including “obfuscation/evasion”, “impersonation”, “poisoning” and “trojan” attacks.
Quantifying privacy in mobile interaction
We seek an early-stage researcher pursuing a PhD qualification in the area of biometrics, deep learning, security, and privacy protection. The specific research topic of this position is to develop theory and methodologies to quantify privacy in the context of data (traditional biometrics as well as touch and movement patterns, soft biometrics, and context information) acquired through the user's interaction with mobile devices. The candidate is expected to carry out a systematic study of the state of the art. Multimodal datasets containing mobile user interaction data will be collected, and new methods and metrics to better quantify privacy will be proposed and applied to the biometric data under consideration in continuous authentication schemes.
Enhancing security in multimodal biometrics: e-learning and e-banking
We seek an early-stage researcher pursuing a PhD qualification in the area of biometrics, behaviour, and human-computer interaction. This ESR will focus on evaluating and improving security in two key applications of biometric authentication: e-Learning and e-Banking. Considering both application scenarios, the main objectives will be: 1) to analyse the main challenges and requirements for security and privacy in single- and multi-modal biometric systems, 2) to adapt existing techniques for template and attack protection, and 3) to develop new techniques for template and attack protection that address the limitations of existing ones.
Design of secure and privacy-preserving biometric authentication protocols using secure multi-party computation techniques
While biometric authentication systems provide important usability advantages, they are susceptible to serious privacy threats. The direct employment of cryptographic primitives appears to be one of the most robust approaches to the challenging problem of privacy preservation. Most state-of-the-art cryptographic protocols, however, were not designed with the inherent variability of biometric data in mind. In fact, cryptography tends to amplify small differences and is not error tolerant (e.g., hashing, AES, and RSA). In this ESR, we will examine how to guarantee privacy preservation in biometric authentication systems by employing advanced cryptographic methods such as secure multi-party computation (SMPC) primitives (e.g., homomorphic encryption) and verifiable computation (when biometric authentication is applied in a distributed setting).
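The error intolerance of conventional cryptographic primitives can be demonstrated in a few lines: two templates differing in a single bit (hypothetical byte strings standing in for real feature vectors) yield unrelated SHA-256 digests, so hashed templates cannot absorb the natural variability between captures:

```python
import hashlib

# Two "templates" that differ in a single bit, as two captures of the
# same biometric trait typically would.
template_enrol = bytes([0b10110010, 0b01101100, 0b11110000])
template_probe = bytes([0b10110011, 0b01101100, 0b11110000])  # 1-bit flip

h1 = hashlib.sha256(template_enrol).hexdigest()
h2 = hashlib.sha256(template_probe).hexdigest()

# The avalanche effect amplifies the one-bit difference: the digests
# share essentially nothing, so no similarity comparison is possible.
diff_hex = sum(a != b for a, b in zip(h1, h2))
print(h1 == h2, diff_hex)  # False, and most of the 64 hex chars differ
```

This is why error-tolerant constructions (SMPC, homomorphic encryption) are needed rather than a naive "hash the template" approach.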
Privacy-preserving biometric authentication and differential privacy
Privacy-preserving biometric authentication aims to guarantee accurate biometric authentication while providing strong privacy guarantees (e.g., preventing tracking, profiling of users, and leakage of sensitive information). Encryption and transformation mechanisms alone are not sufficient to avoid all leakage of information. Differential privacy is a framework that can be employed to provide strong, formal privacy guarantees. Although differential privacy and privacy-preserving biometric authentication have each received significant attention, they have been studied in isolation, and their interconnection remains largely unexplored. In this ESR, we will focus on employing differentially private mechanisms in the biometric authentication process. The goal is to achieve high accuracy in the authentication process while avoiding any leakage of information.
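As a sketch of the kind of mechanism involved, the snippet below applies the standard Laplace mechanism to a hypothetical counting query over authentication logs (the query, parameters, and names are illustrative assumptions, not the project's design):

```python
import math
import random

random.seed(1)

def laplace_noise(scale):
    # Inverse-transform sample from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_release(true_value, sensitivity, epsilon):
    # Laplace mechanism: noise with scale sensitivity/epsilon gives
    # epsilon-differential privacy for the released statistic.
    return true_value + laplace_noise(sensitivity / epsilon)

# Hypothetical counting query over authentication logs: how many users
# were accepted today (sensitivity 1: one user changes the count by 1).
true_count = 130
noisy_count = dp_release(true_count, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))  # near 130; any single user's presence is deniable
```

The research question is then how to spend such an epsilon budget inside the authentication pipeline itself without degrading recognition accuracy.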
Impacts of the GDPR on algorithms used for automated decision-making
The objective of this research is to determine the impacts of the GDPR rules on the transparency and fairness of algorithms used for automated decision-making. According to Articles 12 to 15 of the GDPR, individuals have the right to receive meaningful information about the logic under which automated decisions were made and to be informed about the consequences of these decisions. Do algorithms need to be explained to individuals? What is the impact of such an obligation on the design of algorithms used for automated decision-making? These are some of the essential questions that the research will investigate. The research will build on collaboration with computer scientists to understand the design and functioning of algorithms used in the context of anti-spoofing.
Combining the legal requirements of data protection by design and data protection by default with the technical development of next-generation biometric systems
The new legal framework for data protection requires that all new technical developments using personal data follow a data protection by design and by default approach. While a number of research projects have experimented with these concepts over the last five years, there has been very little systematic study of how these principles can be put into effect and effectively enforced. Given the nature of biometric data, the complexity of applying these principles needs special attention. This research will hence need to combine legal principles with technical approaches and explore how this combination affects the development of next-generation biometric systems and the acceptability of their use in society.
Conceptual framework for use and regulation of new biometric data
The Centre for Information Technology and Intellectual Property Rights (CiTiP) is hiring a legal researcher in biometrics and body-machine interactions who is seeking a PhD qualification. The Marie Curie fellow will investigate the need for new legal concepts and principles for the use of human characteristics, such as emotions, but also electrocardiograms (ECGs) and electroencephalograms (EEGs), in particular in an Internet of Things or more advanced robotics context. The research will look into the need for (new) norms and regulations, in particular in the field of privacy and data protection, but also from a fundamental rights point of view.
Catalogue of risks of biometric data use and assessment of protective technical measures
The Centre for Information Technology and Intellectual Property Rights (CiTiP) is hiring a legal researcher seeking a PhD qualification who is eager to investigate the risks of current and new biometrics in present and future applications. The Marie Curie fellow will investigate these risks and categorise them. This categorisation will be used to assess whether the legal framework is ready to cope with these risks and to update the obligatory risk assessments. The analysis will further be used to review current protective measures and to identify future protective measures for implementing privacy and data protection by design.