
Internship Master2 @ Inria Rennes: Membership Inference Attack

24 November 2021


Category: Internship


One of the wonders of machine learning is that it turns any kind of data into mathematical equations. Once you train a machine learning model on training examples, whether images, audio, raw text, or tabular data, what you get is a set of numerical parameters. In most cases, the model no longer needs the training dataset: it uses the tuned parameters to map new, unseen examples to categories or value predictions. You can then discard the training data and publish the model on GitHub, or run it on your own servers, without worrying about storing or distributing sensitive information contained in the training dataset.

Nevertheless, a type of privacy-leak oriented attack against ML systems, namely membership inference, makes it possible to detect whether a given data instance was used to train a machine learning model. In many cases, attackers can stage membership inference attacks without access to the machine learning model's parameters: they simply query the model and observe its output (soft decision scores or hard predicted labels). Membership inference raises severe security and privacy concerns when the target model has been trained on sensitive information. For example, identifying that a certain patient's clinical record was used to train an automatic diagnosis model reveals the patient's identity and relevant personal information. Moreover, such privacy risks might lead commercial companies that wish to leverage machine learning-as-a-service to violate privacy regulations. [VBE18] argues that membership inference attacks on machine learning models greatly increase the exposure of machine learning service providers to privacy leaks; under the GDPR (General Data Protection Regulation), they may face further legal issues related to breaches of private information in their business practices.
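
To make the threat concrete, here is a minimal sketch of a black-box membership test based on prediction confidence, in line with the score-based attacks mentioned above. It only assumes query access to a PyTorch classifier; the name target_model and the threshold tau are illustrative placeholders, not part of the project description.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def infer_membership(target_model, x, tau=0.9):
        """Guess membership per sample: True means 'was in the training set'.

        Intuition: models tend to be more confident on examples they were
        trained on, so a high maximum softmax score suggests membership.
        """
        target_model.eval()
        logits = target_model(x)                          # black-box query
        confidence = F.softmax(logits, dim=1).max(dim=1).values
        return confidence > tau                           # hard membership guess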

 

In this project, our plan is to implement and benchmark a typical membership inference attack algorithm, originally proposed in [LZ21]. While previous attacks infer membership from decision confidences, this study demonstrates that models exposing only hard class labels to adversaries are also highly vulnerable. In particular, the study develops two types of decision-based attacks, namely a transfer attack and a boundary attack. Empirical tests show that the method proposed in [LZ21] can even outperform previous state-of-the-art membership inference attacks.
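
As an illustration of the label-only setting, the sketch below estimates how robust the predicted label of a sample is to random perturbations, the intuition being that training members tend to lie farther from the decision boundary. This is only a simplified stand-in for the boundary attack of [LZ21]; target_model, the noise levels, and the number of trials are arbitrary choices made for the example.

    import torch

    @torch.no_grad()
    def robustness_score(target_model, x, n_trials=32,
                         sigmas=(0.01, 0.02, 0.05, 0.1)):
        """Fraction of random perturbations that leave the predicted label unchanged."""
        target_model.eval()
        base_label = target_model(x).argmax(dim=1)        # label-only queries
        score = torch.zeros(x.size(0), device=x.device)
        for sigma in sigmas:
            for _ in range(n_trials):
                noisy = x + sigma * torch.randn_like(x)
                same = target_model(noisy).argmax(dim=1) == base_label
                score += same.float()
        # Higher score = farther from the boundary = more likely a training member.
        return score / (len(sigmas) * n_trials)

A sample would then be declared a member if its score exceeds a threshold calibrated, for instance, on a shadow model trained by the adversary.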

A common defense, rooted in differential privacy, is to randomize the procedure by adding noise either to the inputs (the training dataset), to the process itself (the training of the model), or to the outputs (the trained model's parameters). We will focus on evaluating the trade-off between the loss of predictive performance and the gain in robustness against such membership inference attacks.
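
For instance, noise can be injected into the training process in the spirit of DP-SGD. The sketch below clips the batch-averaged gradient and adds Gaussian noise before each optimizer step; a faithful differentially private implementation would clip per-example gradients (e.g., with a library such as Opacus), and clip_norm and noise_std are illustrative values only.

    import torch

    def noisy_step(model, loss, optimizer, clip_norm=1.0, noise_std=0.1):
        """One training step with gradient clipping and Gaussian noise injection."""
        optimizer.zero_grad()
        loss.backward()
        # Bound the sensitivity of the update to the current batch.
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
        # Add calibrated Gaussian noise so individual examples are harder to trace.
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(noise_std * clip_norm * torch.randn_like(p.grad))
        optimizer.step()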

Candidates for this project are expected to have completed courses on Machine Learning and/or to have experience implementing Machine Learning algorithms in Python for practical data mining problems. In particular, proficiency with PyTorch is required for the project.

The internship takes place at Inria Rennes, campus universitaire de Beaulieu, Rennes, France.

Contact information: Teddy Furon (teddy.furon@inria.fr) and Yufei Han (yufei.han@inria.fr)

 

[LZ21] Zheng Li and Yang Zhang. Membership leakage in label-only exposures, 2021.
[VBE18] Michael Veale, Reuben Binns, and Lilian Edwards. Algorithms that remember: Model inversion attacks and data protection law. Social Science Research Network, 2018.