PhD on explainable AI (XAI)
Candidate profile and practical details
We look forward to welcoming a student for a PhD in the field of explainable AI. The candidate should be motivated and hardworking. A background in computer vision, deep learning, and statistics is desirable. The PhD will take place at the GREYC laboratory in Caen (https://www.greyc.fr/en/home/).
It will be co-supervised by Frederic Jurie (email@example.com) and Loic Simon (firstname.lastname@example.org).
With the advent of highly efficient neural networks and their pervasive use in modern AI systems, the research community has questioned their reliability for high-stakes decision making [2]. More importantly, legal regulators have taken steps to ensure that users of AI decision systems have the right to obtain explanations from the systems under consideration. This is the case, for instance, in Europe through the General Data Protection Regulation (GDPR). Such regulation may well impede the integration and development of high-performance AI products unless the functioning of the underlying systems can be robustly explained. This is precisely the goal of so-called Explainable Artificial Intelligence (a.k.a. XAI) [1].
The need for interpretability of deep neural networks was recognized by the research community early on. The main line of research concerns post-hoc analysis, where a fully trained network is scrutinized so as to expose its inner workings. However, post-hoc explanation is far from sufficient, since it may take an entirely unreasonable decision-making process and present it as a convincingly reasonable one, and vice versa. This stumbling block was underlined in a high-impact article [2], which drew the reader's attention to how confounding factors, as well as biases in training datasets, can induce misinterpretations. The article advocates the necessity of intrinsic interpretability in XAI systems. Such systems, also referred to as explainable by design, must enforce an easy interpretation of their decision making from the start (as opposed to after the fact, as in post-hoc explanation). The topic of this PhD will be in this direction (XAI by design).
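To make the distinction concrete, a minimal sketch of a post-hoc explanation is given below: gradient-based saliency, which ranks input features of an already-trained model by how sensitive its output is to each of them. The model, weights, and data here are hypothetical stand-ins (a logistic model in place of a deep network, with the gradient written in closed form), chosen only so the example is self-contained; the announcement's point is that such after-the-fact attributions can look plausible without faithfully reflecting the decision process.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    """Post-hoc saliency: gradient of the positive-class score w.r.t. x.

    For f(x) = sigmoid(w.x + b), df/dx = f(x) * (1 - f(x)) * w, so the
    magnitude of each component ranks how much a small change in that
    input feature would move the model's score.
    """
    p = sigmoid(w @ x + b)
    return p * (1.0 - p) * w

# Hypothetical "trained" parameters and a single input to explain.
w = np.array([2.0, -1.0, 0.1])
b = 0.0
x = np.array([1.0, 1.0, 1.0])

s = saliency(w, b, x)
# The feature with the largest |gradient| is deemed most influential;
# here that is feature 0, and feature 1 pushes against the prediction.
most_influential = int(np.argmax(np.abs(s)))
```

An explainable-by-design system would instead constrain the model itself (e.g., to an inherently interpretable form) so that no such after-the-fact reconstruction is needed.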
[1] Arun Das and Paul Rad. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv preprint arXiv:2006.11371, 2020.
[2] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019.
(c) GdR 720 ISIS - CNRS - 2011-2020.