
Announcement

22 October 2018

PhD: Design of a biomimetic event-based vision system for fall detection


Category: PhD candidate


The SEEING project proposes to investigate a novel paradigm of fall detection relying on event-based cameras associated with an adaptive mixed (analog/digital) neuromorphic processor hosted on reconfigurable circuits (Field Programmable Analog Array and FPGA).

 

Posture and fall detection have been a focus of research because of their potential economic impact. 3D sensors such as the Kinect are used to record spatial movements and to discriminate between several types of movements and behaviors. However, recently organized challenges suggest that such sensors are not suitable for recognizing certain postures, in particular for fall prevention in the elderly. Falls among the elderly are a significant public health problem: at least one senior in two experiences at least one fall. Pedrono et al. estimate that 37% of the 450,000 elderly people who fall every year are admitted to hospital, resulting in 12 to 14 days of hospitalization on average, and that 12,000 patients die after a fall or from its consequences. To address this problem, solutions with sensors attached to the patient have been explored, as well as off-body cameras and UWB micro-Doppler radar. Such systems are still limited by their ergonomics, their volume, invasion of privacy (due to image recording), acceleration sensitivity, power consumption, and their precision or ability to discriminate between situations.

In this context, the SEEING project proposes to investigate a novel paradigm relying on event-based cameras associated with an adaptive mixed (analog/digital) neuromorphic processor hosted on reconfigurable circuits (Field Programmable Analog Array and FPGA). Unlike classical cameras, which are synchronized to a clock signal, with each clock period producing a new set of data for every pixel, event-based cameras are asynchronous: pixels are independent of one another and send information only when there is a change in their field of view. They are thus particularly well adapted to surveillance tasks, as they record information only when and where a movement or change occurs. Data are therefore naturally compressed and the constraints on the processing stage are reduced. The output signal of event-based cameras is usually coded in the AER (Address Event Representation) format: events are transmitted immediately upon detection as a vector containing both the spatial and temporal coordinates of their occurrence. This representation requires specific processing that common computer vision tools do not support. Novel algorithms are emerging in the field of event-based data processing, but they are currently executed on conventional architectures. Visual tasks such as fall or behavior detection are difficult to implement on conventional architectures, though they are effortless for the human brain.
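The AER representation described above can be illustrated with a minimal sketch. The `Event` fields and the `accumulate` helper below are illustrative assumptions, not the project's actual data structures: each event carries a pixel address, a timestamp, and a polarity, and a simple time-window accumulation turns the sparse asynchronous stream into a frame-like map that conventional vision tools can consume.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One AER event: pixel address, timestamp, and polarity (assumed layout)."""
    x: int         # pixel column address
    y: int         # pixel row address
    t: float       # timestamp (e.g. microseconds)
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate(events: List[Event], width: int, height: int,
               t_start: float, t_end: float) -> List[List[int]]:
    """Sum event polarities falling in [t_start, t_end) into a 2D map."""
    frame = [[0] * width for _ in range(height)]
    for ev in events:
        if t_start <= ev.t < t_end:
            frame[ev.y][ev.x] += ev.polarity
    return frame

# Example: three events inside the time window, one outside it.
stream = [
    Event(x=1, y=2, t=10.0, polarity=+1),
    Event(x=1, y=2, t=20.0, polarity=+1),
    Event(x=0, y=0, t=50.0, polarity=-1),
    Event(x=3, y=3, t=200.0, polarity=+1),  # outside [0, 100)
]
frame = accumulate(stream, width=4, height=4, t_start=0.0, t_end=100.0)
```

Note that no clock drives this loop: events are processed in arrival order, which is what distinguishes event-based pipelines from frame-based ones.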

This project will include the following tasks:

Requested skills:

Location: ETIS Laboratory, UMR 8051, ENSEA site, 6 av du Ponceau, 95000 Cergy-Pontoise

 

PhD supervisor: Pr. Olivier ROMAIN (olivier.romain@u-cergy.fr)

Co-supervisors: Dr. Florian Kölbl (florian.kolbl@u-cergy.fr)

Dr. Camille Simon-Chane (camille.simon-chane@ensea.fr)

Please send your application (résumé and cover letter) by email with the subject line '[Candidature] SEEING'.

 

Topic description in French: https://www-etis.ensea.fr//fr/actualite/sujet-de-these-equipe-astre-375.html

 


(c) GdR 720 ISIS - CNRS - 2011-2018.