Announcement
IEEE JSTSP Special Issue: Learning-Based Quality Prediction for Advanced Visual Media
February 9, 2022
Category: Journals
Call for Papers
IEEE Journal of Selected Topics in Signal Processing
Special Issue: Learning-Based Quality Prediction for Advanced Visual Media
Today, roughly 80% of internet traffic is dedicated to the transmission of images and videos, a share that is expected to keep growing over the next few years. At the same time, we are witnessing the emergence and adoption of advanced media modalities and novel visual representations, including high dynamic range, mixed reality, volumetric video, and point clouds. These modalities enable new forms of interactive and immersive experiences and open up new frontiers of user applications, while at the same time requiring novel processing and transmission techniques. Since, in most of these existing and envisioned applications, the human is the ultimate receiver or consumer of the visual content, human perception is critical for the optimization, acceptance, and success of these applications. In practice, visual quality is measured using computational models that estimate the perceived quality as experienced by a human viewer.
Unfortunately, the problem of designing reliable, robust, and universal computational visual quality models is still not solved satisfactorily. While some advances have been made for 2D images and videos, such models are currently out of reach for immersive and advanced media contents and applications. Whereas traditional computational approaches to visual quality are based on well-known principles of human visual perception or natural scene statistics, more recent ones make use of machine learning and artificial intelligence methods, including deep learning. These recent models achieve superior performance in predicting quality as perceived by humans; nevertheless, their interpretability and generalization remain a challenge.
This special issue aims to collect contributions on learning-based approaches for understanding and estimating the perceived visual quality of advanced visual media, such as light fields, point clouds, and immersive media in general. More specifically, we are interested in models and techniques that tackle the challenges of estimating the quality of these new advanced media contents.
Scope
Topics of interest include, but are not limited to:
- Representation learning for visual quality estimation;
- Deep learning for visual quality estimation;
- Large-scale datasets for quality assessment;
- Learning-based approaches for advanced visual media, including immersive systems and signals, point clouds, and volumetric video;
- Hybrid models combining first principles and deep learning for quality estimation;
- Image/video generation;
- Talking-head generation.
Important Dates
- Manuscript Submissions due: September 1, 2022
- First review due: November 1, 2022
- Revised manuscript due: January 3, 2023
- Second review due: March 15, 2023
- Final manuscript due: May 1, 2023
- Publication: July 1, 2023
Guest editors
- Aladine Chetouani (Lead GE, aladine.chetouani@univ-orleans.fr), PRISME Laboratory, Université d’Orléans, Orléans, France
- Sebastian Bosse (sebastian.bosse@hhi.fraunhofer.de), Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, Berlin, Germany
- Patrick Le Callet (patrick.lecallet@univ-nantes.fr), LS2N, Polytech’Nantes, Nantes Université, Nantes, France
- Mylene Farias (mylene@ene.unb.br), University of Brasília, Brasília, Brazil
- Johannes Ballé (jballe@google.com), Google Research, USA
- Jing Li (jing.li.univ@gmail.com), Alibaba Group, Beijing, China