Internship: Unsupervised learning of an emotional latent representation in the domain of Food Science
6 November 2023
Category: Intern
Final-year / Master's internship proposal at CentraleSupélec (with a possible continuation as a Ph.D. thesis): Unsupervised learning of an emotional latent representation in the domain of Food Science.
#IA #machinelearning #FoodScience #computervision
Position: Master internship. This internship could lead to a Ph.D. position fully funded by the AiMotions project, on an extension of the same topic.
Duration: 5 to 6 months, starting in March or April 2024.
Location: CentraleSupélec campus of Rennes (France).
Affiliation: AIMAC team of the IETR laboratory (UMR CNRS 6164).
Supervisor: Catherine SOLADIE.
This master internship is part of the AiMotions project (2024-2028) funded by the French National Research Agency (ANR). AiMotions is an unprecedented joint contribution of Artificial Intelligence, Computer Vision, and Affective Computing to Food Science.
AiMotions aims to contribute to a better understanding of eating behaviours, via the analysis of emotions while eating. Emotional states influence eating behaviours including motivation to eat, food choice, or amount of food intake [1a, 1b]. However, understanding why and how emotions have a positive or negative impact on eating behaviours remains a vast topic of debate [2a]. One of the main reasons is that most studies in Food Science collect emotional feedback using only self-reported questionnaires [2b]. Having unbiased implicit data (physiological measures, brain activity, facial expressions), acquired in an ecological context, should improve the quality of the analysis and give new insights for both fundamental and applied research.
Description and objectives
To this end, AiMotions aims to develop a new AI-based framework to capture and analyse emotional responses elicited by food.
Adopting an unsupervised method to extract objective information on emotions is motivated by the following reasons:
- labelling facial expressions in videos requires tedious and error-prone annotation by experts who must manually extract and label the spatio-temporal information of the emotions in each frame;
- the performance gap between supervised and self-/unsupervised methods is closing [3].
The latest advances in unsupervised dimension-reduction (embedding) algorithms, such as Variational Autoencoders (VAE) [4a, 4b] or Dynamical VAE [4c], seem to fit perfectly; they will be investigated and tailored to meet the specificities of emotional reactions.
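As an illustration of the kind of model involved (a generic sketch, not the project's actual architecture; all dimensions and layer choices here are hypothetical), a minimal VAE in PyTorch encodes a flattened face image into a low-dimensional latent vector and is trained with the usual reconstruction-plus-KL objective:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: encodes a flattened face image into a low-dimensional
    latent vector (the candidate 'emotional' embedding) and decodes it back."""
    def __init__(self, input_dim=64 * 64, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sampling step differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction term + KL divergence to the standard normal prior
    recon_term = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl
```

Dynamical VAEs [4c] extend this picture with a temporal model over the latent variables, which is what makes them relevant for video sequences of emotional reactions.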
To address challenges specific to Food Science, such as occlusions (glass, …) and domain-specific facial movements (mastication), the goal is to enrich these AI systems: the detection of domain-specific objects and movements will be combined with the facial videos given as input to the unsupervised system, feeding it with more discriminating, relevant information. Trust in the model will be quantified by a confidence score, i.e., a performance indicator of the uncertainty of the model's predictions.
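The posting leaves the confidence score unspecified; one common recipe (an assumption here, not the project's chosen method) derives it from the spread of repeated stochastic predictions for the same input, e.g. from Monte-Carlo dropout or an ensemble:

```python
import numpy as np

def confidence_score(predictions):
    """Map the spread of repeated stochastic predictions (MC dropout,
    ensembles, ...) for one input to a score in (0, 1]:
    low variance across passes -> high confidence."""
    predictions = np.asarray(predictions, dtype=float)
    spread = predictions.std(axis=0).mean()  # average per-dimension std
    return 1.0 / (1.0 + spread)

# Simulated repeated predictions for the same input video
stable = [[0.8, 0.1]] * 10                     # model agrees with itself
noisy = [[0.8, 0.1], [0.2, 0.7], [0.5, 0.4]]   # model disagrees across passes

assert confidence_score(stable) > confidence_score(noisy)
```

Any monotone mapping from spread to score works; the point is that predictions that survive stochastic perturbations can be trusted more, which is useful when mastication or occlusions corrupt part of the input.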
The main result of this embedding process is a continuous emotional space into which the different inputs of a Food Science database will be projected. Even though the projected data are not labelled, their positions in the space indicate the emotional proximity or distance between sets of data. Using objective features ensures greater reliability of, and trust in, the resulting projections in the multidimensional space of emotions, and allows a posteriori interpretation and labelling of the space for Food Science experimentations.
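Concretely, once videos are embedded, unlabelled samples can be compared purely by distance in the latent space. A minimal sketch (the embeddings and sample names below are made up for illustration):

```python
import numpy as np

# Hypothetical 16-d latent embeddings, one per tasting video
rng = np.random.default_rng(0)
embeddings = {
    "sweet_snack_A": rng.normal(0.0, 0.1, 16),
    "sweet_snack_B": rng.normal(0.0, 0.1, 16),
    "bitter_drink": rng.normal(2.0, 0.1, 16),
}

def latent_distance(a, b):
    """Euclidean distance between two points of the emotional space."""
    return float(np.linalg.norm(a - b))

# Nearby points suggest similar emotional reactions, even without labels
d_similar = latent_distance(embeddings["sweet_snack_A"], embeddings["sweet_snack_B"])
d_different = latent_distance(embeddings["sweet_snack_A"], embeddings["bitter_drink"])
assert d_similar < d_different
```

A posteriori, a handful of labelled reference points (e.g. from questionnaires) would be enough to name the regions of such a space.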
The candidate will be pursuing the last year of a Master's or engineering degree, and should have good knowledge and practical skills in machine learning and/or image processing. Good command of Python is required; experience with PyTorch would be a plus. The candidate should also have good oral and written communication skills.
How to apply
Interested candidates should submit their transcripts, a detailed CV, 2 recommendation letters, and a cover letter to Catherine SOLADIE (email@example.com).
The intern will be supervised by Catherine SOLADIE and will join the AIMAC team of the IETR laboratory, located on the CentraleSupélec campus in Rennes (Brittany, France). CentraleSupélec offers on-campus accommodation.
The intern will benefit from the research environment of CentraleSupélec, in particular the computational resources of the Mésocentre.
The intern will receive the legal internship allowance of about €600/month.
[1a] Lagast, S., Gellynck, X., Schouteten, J. J., De Herdt, V., & De Steur, H. (2017). Consumers’ emotions elicited by food: A systematic review of explicit and implicit methods. Trends in Food Science & Technology, 69, 172-189. https://doi.org/10.1016/j.tifs.2017.09.006
[1b] Rantala, E., Balatsas-Lekkas, A., Sozer, N., & Pennanen, K. (2022). Overview of objective measurement technologies for nutrition research, food-related consumer and marketing research. Trends in Food Science & Technology, 125, 100-113. https://doi.org/10.1016/j.tifs.2022.05.006
[2a] Evers, C., Dingemans, A., Junghans, A. F., & Boevé, A. (2018). Feeling bad or feeling good, does emotion affect your consumption of food? A meta-analysis of the experimental evidence. Neuroscience & Biobehavioral Reviews, 92, 195-208. https://doi.org/10.1016/j.neubiorev.2018.05.028
[2b] Cardello, A. V., & Jaeger, S. R. (2021). Questionnaires should be the default method in food-related emotion research. Food Quality and Preference, 92, 104180. https://doi.org/10.1016/j.foodqual.2021.104180
[3] Schmarje, L., Santarossa, M., Schröder, S. M., & Koch, R. (2021). A survey on semi-, self- and unsupervised learning for image classification. IEEE Access. https://doi.org/10.1109/access.2021.3084358
[4a] Kingma, D. P., & Welling, M. (2014). Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR). https://doi.org/10.48550/arXiv.1312.6114
[4b] Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning (ICML). https://proceedings.mlr.press/v32/rezende14.pdf
[4c] Girin, L., Leglaive, S., Bie, X., Diard, J., Hueber, T., & Alameda-Pineda, X. (2021). Dynamical variational autoencoders: A comprehensive review. Foundations and Trends in Machine Learning. https://doi.org/10.1561/2200000089