Meeting


Generative models: Control and (mis)Usage

Date: 31-05-2022
Location: online + Villejuif

Scientific themes:
  • A - Methods and models in signal processing
  • B - Image and Vision
  • T - Machine learning for signal and image analysis

Please note that, in order to guarantee access to the meeting rooms for all registrants, registration for the meetings is free but mandatory.



Registrations

77 GdR ISIS members and 24 non-members of the GdR are registered for this meeting.

Room capacity: 100 people.

Announcement

Since the advent of generative adversarial networks (Goodfellow et al., 2014) and variational autoencoders (Kingma and Welling, 2014), many neural architectures have been proposed to improve data-driven image synthesis. Once trained on a dataset, the most recent models can produce strikingly realistic images by sampling randomly in their latent space. Beyond the increasing quality of the resulting images, these models also offer more disentangled representations, which allow a deeper interpretation of their internal structure. In particular, some works have shown that moving latent codes along certain paths in the latent space produces variations of semantic attributes in the corresponding generated images.
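
For illustration, such editing by a latent walk can be sketched in a few lines. The generator G, the latent dimension and the semantic direction below are hypothetical placeholders (not a specific model from the literature); in practice G would be a pretrained generator and the direction would be estimated from the latent space by supervised or unsupervised analysis:

    # Minimal sketch of semantic editing by moving a latent code along a direction.
    # G, latent_dim and "direction" are hypothetical placeholders; a real setup
    # would load a pretrained generator (e.g. a GAN) and a learned direction.
    import torch

    latent_dim = 512
    G = torch.nn.Sequential(                       # dummy generator standing in
        torch.nn.Linear(latent_dim, 3 * 64 * 64),  # for a real pretrained model
        torch.nn.Tanh(),
    )

    z = torch.randn(1, latent_dim)              # random latent code
    direction = torch.randn(1, latent_dim)      # hypothetical semantic direction
    direction = direction / direction.norm()    # (e.g. "smile"), unit-normalized

    for alpha in (-3.0, 0.0, 3.0):              # walk along the direction
        edited = G(z + alpha * direction)       # the generated image's attribute
        image = edited.view(1, 3, 64, 64)       # should vary with alpha
        print(f"alpha={alpha}, image shape={tuple(image.shape)}")

Varying alpha traces a path in latent space along which, ideally, only the targeted semantic attribute changes while the rest of the image content is preserved.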

Such an ability to semantically edit images is useful for various real-world tasks, including artistic visualization, design, photo enhancement, inpainting and retouching, film post-production, and targeted data augmentation. These works are also of theoretical interest, as they help to better understand the internal structure of the models as well as the learning procedure.

This meeting aims to bring together researchers and students to discuss recent theoretical and practical advances in generative models. Topics include, but are not limited to:

  • neural architectures for image generation and editing
  • GAN inversion
  • disentangled representation
  • conditional image synthesis
  • style transfer
  • cross-modal content generation
  • image editing or generation with non-RGB images (infrared, X-ray, ultrasound...)
  • image editing with little data, unsupervised and self-supervised approaches
  • evaluation of image editing and generation
  • applications of image editing and generation with generative models

Two invited speakers have accepted to give a talk:

If you are interested in presenting your work at this meeting, please send an email to the organizers before May 12th with (1) the title of the presentation, (2) the list of authors, and (3) an abstract of the work.

Organizers:

The meeting will be held both online and face to face at 7, rue Guy Môquet, 94800 Villejuif (CNRS), depending on the health situation (Salle de conférence, Bâtiment L, level -1).

An access request is necessary for people from outside the Villejuif campus. This request must be made on the campus users' portal: https://extra.core-cloud.net/collaborations/SGVC/accueilUsager/SitePages/Accueil.aspx

Video conference link: to be determined with the final program.

Program

Important Note

The meeting will be held both online and on site in Villejuif (see venue details above), depending on the health situation.

An access request is necessary for people from outside the Villejuif campus, and must be made by the persons registered for the meeting. The video conference will be available to a larger audience.

Program (under construction)

- [invited] Symeon (Akis) Papadopoulos; DeepFakes: Technology, Methods, Trends and Challenges

Abstracts of the contributions

[invited] Symeon (Akis) Papadopoulos; DeepFakes: Technology, Methods, Trends and Challenges

DeepFakes pose an increasing risk to democratic societies as they threaten to undermine the credibility of audiovisual material as evidence of real-world events. The field of DeepFakes is evolving rapidly, with new generation methods constantly improving the quality, convincingness and ease of generation of synthetic content, and new detection methods aiming to detect as many cases of synthetic content generation and manipulation as possible. The talk will provide a short overview of the technology behind DeepFake content generation and detection, highlighting the main methods and tools available and discussing some ongoing trends. It will also briefly discuss the experience of the Media Verification (MeVer) team in developing, evaluating, and deploying a DeepFake detection service in the context of the AI4Media project.

[invited] Guillaume Le Moing;