We are hiring a highly motivated intern to work on speech analysis using deep learning techniques.
Laboratory: Laboratoire Images, Signaux et Systèmes Intelligents (LiSSi) at Université Paris-Est Créteil (UPEC, formerly Université Paris 12)
Supervisors: Dr. Alice OTHMANI
N.B.: Other professors will be involved in this project.
Duration: 6 months
Starting Period: Between January and March
Subject: Major Depression Disorder recognition and assessment from speech
Requirements (minimum to desired):
• M1 in computer science, applied mathematics, or electrical engineering, with a focus on machine learning
• Experience in machine learning and data analysis
• Experience in signal processing
• Demonstrated record of high-performance scientific programming in Python
• Demonstrated analytical, verbal, and scientific writing skills
Depression is a mental disorder caused by several factors: psychological, social, or even physical. Psychological factors relate to permanent stress and the inability to cope successfully with difficult situations; social factors concern relationship struggles with family or friends; and physical factors cover head injuries. Depression manifests as a loss of interest in every exciting and joyful aspect of everyday life. Mood swings are temporary mental states that are a normal part of daily events, whereas depression is more permanent and, at its most severe levels, can lead to suicide. Depression is a mood disorder that can persist for eight months and beyond. According to the World Health Organization (WHO), 350 million people globally are diagnosed with depression. A recent study estimated the total economic burden of depression at 210 billion US dollars per year, caused mainly by increased absenteeism and reduced productivity in the workplace. In many cases, the affected person denies facing a mental disorder such as depression and therefore does not receive proper treatment.
Several works on automatic depression recognition and assessment are reported in the literature. Automatic depression recognition has grown increasingly popular since 2011 with the emergence of the eight successive editions of the Audio/Visual Emotion Challenge (AVEC). Typically, depressed individuals tend to change their facial expressions at a very slow rate and to pronounce flat sentences with stretched pauses. Therefore, to detect depression, two types of features are frequently used: facial geometry features and audio features, for their ability and consistency in revealing signs of depression.
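The audio side of such pipelines typically starts from spectral features such as MFCCs (as in the MFCC-based recurrent network work cited below). As a rough illustration only, a minimal MFCC extraction can be sketched in plain NumPy; the sample rate, frame sizes, and filter counts here are common defaults, not project specifications:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
    """MFCC sketch: frame -> window -> power spectrum -> mel filterbank -> log -> DCT."""
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate; keep the first n_mfcc coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T

# Example: one second of a synthetic 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(0.1 * np.sin(2 * np.pi * 440 * t), sr=sr)
```

In practice a library such as librosa would be used instead of hand-rolled code; the resulting per-frame coefficient matrix is the kind of input a recurrent model would consume.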
During this internship, innovative approaches based on deep learning will be developed to predict depression from speech. More details will be given during the interview.
Greenberg, P. E., Fournier, A. A., Sisitsky, T., Pike, C. T., & Kessler, R. C. (2015). The economic burden of adults with major depressive disorder in the United States (2005 and 2010). Journal of Clinical Psychiatry, 76, 155–162.
Pampouchidou, A., Simos, P., Marias, K., Meriaudeau, F., Yang, F., Pediaditis, M., & Tsiknakis, M. (2017). Automatic assessment of depression based on visual cues: A systematic review. IEEE Transactions on Affective Computing.
Ringeval, F., Schuller, B., Valstar, M., Cowie, R., Kaya, H., Schmitt, M., ... & Çiftçi, E. (2018, October). AVEC 2018 workshop and challenge: Bipolar disorder and cross-cultural affect recognition. In Proceedings of the 2018 Audio/Visual Emotion Challenge and Workshop (pp. 3–13). ACM.
E. Rejaibi, A. Komaty, F. Meriaudeau, S. Agrebi, A. Othmani, "MFCC-based Recurrent Neural Network for Automatic Clinical Depression Recognition and Assessment from Speech", submitted to Signal Processing: Image Communication. Published on arXiv: https://arxiv.org/abs/1909.07208
D. Kadoch, K. Bentounes, R. Alfred, E. Rejaibi, M. Daoudi, A. Hadid, A. Othmani, "Clinical Depression and Affect Recognition with EmoAudioNet", submitted to IEEE Transactions on Affective Computing. Soon to be added to arXiv.
Applications and further questions regarding the position should be addressed to:
(c) GdR 720 ISIS - CNRS - 2011-2020.