Announcement

Privacy-Preserving AI with Deep Federated Learning

6 December 2022


Category: Internship


The work started during the internship may later be continued as a PhD.

Topic: Privacy-Preserving AI with Deep Federated Learning

Keywords: deep neural networks, federated learning, privacy-preserving, non-IID data, image classification, face recognition, teacher-student model, knowledge distillation

 

Motivation and goal:

Federated Learning (FL) is an effective machine learning paradigm that allows multiple heterogeneous client devices to jointly train a unified global model without sharing their private data. With growing concerns about data privacy and the rapid increase in data volume, FL has become a very important research topic and has shown great potential to facilitate many real-world applications.

While traditional federated learning is capable of learning high-performing models, the problem of non-IID (non-independent and identically distributed) data statistics remains a practical and understudied challenge. Another crucial issue in FL is the reduction of communication cost, which is directly related to the problem of client heterogeneity.

The data distributions of clients often differ greatly, as clients independently collect data according to their own preferences. FL with deep neural networks relies on stochastic gradient descent (SGD), and IID sampling of the training data is important to ensure that the stochastic gradient is an unbiased estimate of the full gradient. In a non-IID FL setup, minimizing the local empirical loss is therefore fundamentally inconsistent with minimizing the global loss. Non-IID data leads to drifted local models, resulting in degraded performance and slow convergence of the global model.
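
To make this concrete, below is a minimal FedAvg-style sketch of one communication round in PyTorch (a hedged illustration only, not the specific algorithm of the cited works): each client minimizes its local empirical loss on its own, possibly non-IID, data, and the server averages the resulting weights. The names local_update, fedavg_round and client_loaders are assumptions introduced for this example.

import copy
import torch

def local_update(global_model, loader, epochs=1, lr=0.01):
    # Run SGD on one client's private data, starting from the current global model.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    # One round: aggregate client models, weighted by their local dataset sizes.
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    avg = {k: sum(sd[k].float() * (n / total) for sd, n in updates)
           for k in updates[0][0]}
    global_model.load_state_dict(avg)
    return global_model

When the client data distributions diverge, the locally optimized weights drift apart and the simple weighted average can degrade, which is exactly the non-IID issue described above.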

To address this heterogeneity challenge, most existing methods constrain the direction of the local model update so as to align the local and global optima [1]. They force the local models to stay consistent with the global model, but they perform rather simple model aggregation. Several works try to improve the efficiency of model aggregation, among which knowledge distillation (KD) has emerged as an effective solution. Knowledge distillation [2], a teacher-student paradigm originally designed for model compression, learns a lightweight student model using knowledge distilled from one or more powerful teachers. When used in FL to tackle client heterogeneity [3], KD techniques treat each client model as a teacher whose information is aggregated into the student (global) model to improve its generalization performance. However, all these methods ignore the incompatibility of local knowledge and induce forgetting of knowledge in the global model.
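
As an illustration of this teacher-student aggregation, the sketch below distills the averaged predictions of the client models (teachers) into the global model (student) on an unlabeled proxy dataset, in the spirit of [2, 3]. The proxy dataset (proxy_loader), the temperature T and the optimizer choice are assumptions of this sketch, not details taken from those papers.

import torch
import torch.nn.functional as F

def distill_round(student, teachers, proxy_loader, T=3.0, lr=1e-3, epochs=1):
    # Train the global (student) model to match the ensemble of client (teacher) predictions.
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for x, _ in proxy_loader:
            with torch.no_grad():
                # Average the softened teacher predictions (the aggregated local knowledge).
                teacher_probs = torch.stack(
                    [F.softmax(t(x) / T, dim=1) for t in teachers]).mean(0)
            student_log_probs = F.log_softmax(student(x) / T, dim=1)
            # KL divergence between the student and the aggregated teacher distributions.
            loss = F.kl_div(student_log_probs, teacher_probs,
                            reduction="batchmean") * T * T
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student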

The intern will study and evaluate state-of-the-art FL algorithms that address the non-IID data problem. One idea is to dynamically correct the local gradient drift in order to reduce its impact on the global objective, which should make the global model converge faster and reach better performance. Another idea is to improve knowledge distillation-based methods for more robust model aggregation by extracting richer knowledge from clients, such as gradient distributions or hard sample mining. As for applications, federated image classification in general or federated face recognition can be considered.
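
As one possible instantiation of the first idea, control-variate corrections in the style of SCAFFOLD add the difference between a global and a local correction term to every local SGD step, so that local updates stay aligned with the global objective. The sketch below is only illustrative: the control variates c_global and c_local (lists of tensors matching the model parameters) and their update rule, which is omitted here, are assumptions, and the method developed during the internship may differ.

import copy
import torch

def corrected_local_update(global_model, loader, c_global, c_local, lr=0.01, epochs=1):
    # Local training with drift-corrected gradient steps: g_i + c - c_i.
    model = copy.deepcopy(global_model)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            with torch.no_grad():
                for p, cg, cl in zip(model.parameters(), c_global, c_local):
                    p -= lr * (p.grad + cg - cl)
    return model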

 

Required skills:

- Good knowledge of deep learning and computer vision.

- Good programming skills in Python (with PyTorch or TensorFlow).

 

Place of work:

ETIS Laboratory, UMR 8051, ENSEA site, 6 av. du Ponceau, 95000 Cergy-Pontoise

Duration: 5-6 months

Start: Spring 2023

Remuneration: around 600 euros/month

 

Contact:

Son VU son.vu@ensea.fr

(Cover letter, CV, and transcript of records, in English or French)

 

References:

[1] L. Zhang, L. Shen, L. Ding, D. Tao, L. Duan, "Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning", CVPR 2022

[2] G. Hinton, O. Vinyals, J. Dean, "Distilling the Knowledge in a Neural Network", arXiv:1503.02531, 2015

[3] D. Li, J. Wang, "FedMD: Heterogenous Federated Learning via Model Distillation", NeurIPS Workshop 2019

[4] Q. Meng, F. Zhou, H. Ren, T. Feng, G. Liu, Y. Lin, "Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters", ICLR 2022

[5] J. Pourcel, N.-S. Vu, R. M. French, "Online Task-free Continual Learning with Dynamic Sparse Distributed Memory", ECCV 2022

[6] Awesome-Federated-Machine-Learning, GitHub repository:
https://github.com/innovation-cat/Awesome-Federated-Machine-Learning