With recent technological advances, the introduction of new image formats, and the availability, to both the general public and professionals, of hardware capable of handling ever higher resolutions, image and video compression remains a major topic of research and development. It has evolved in recent years by taking into account new fields and applications in order to meet new needs. It therefore seems timely to launch a series of meetings allowing novices and experts alike, junior as well as senior researchers, to discuss these advances and to pool the knowledge established to date.
This GDR ISIS meeting, the first in a series dedicated to this topic, proposes to take stock of the JPEG and MPEG standards by addressing the following points:
- Advances in still-image compression, covering recent and ongoing work within the JPEG committee
- Versatile Video Coding and the future of video compression (use of neural networks) (new MPEG-I VVC/H.266 standard since 1 July 2020)
- New data representations and compression tools for truly immersive video
Thursday 16 July at 9:00 a.m.
- Chaker Larabi, XLIM, firstname.lastname@example.org
- Didier Nicholson, EKTACOM, email@example.com
An overview of recent and ongoing JPEG standardisation activities
The JPEG standardisation committee has been active for the last three decades as a major contributor to the digital imaging ecosystem, in both consumer and professional markets.
In this talk we start with a quick overview of past JPEG standards, namely JPEG, JPEG 2000 and JPEG XR, and discuss their challenges and the reasons behind the success of those that are widely used, as well as why others are not.
We then move to the most recent image coding standards that have either been recently completed or are close to completion, namely JPEG XT, JPEG Systems, JPEG XS, JPEG XL and JPEG Pleno, providing an overview of their objectives, their complementarity to each other, their unique features and the reasons that led to their definition.
The talk will end with an overview of recent explorations ongoing in the JPEG Committee, namely JPEG Media Blockchain, JPEG AI and JPEG DNA, as well as the roadmap and the potential standards to which they might lead.
Touradj Ebrahimi is a Professor at EPFL and head of its multimedia signal processing group. He is also co-founder of RayShaper SA and the current Convenor of the JPEG standardisation committee.
He has been the recipient of various awards in his career of more than 25 years. In 2019 he won the First IEEE Star Innovator Award in Multimedia for his work on quality of experience in multimedia and its impact on multimedia standardisation. Prof. Ebrahimi is a Fellow of SPIE and IEEE.
VVC and how it paves the way for data driven methods in future video coding
Compressed video data are growing at a faster rate than ever before. Already today, video data make up by far the highest percentage of bits on the Internet and in mobile traffic. This demonstrates the need for even more efficient compression, going beyond the current state-of-the-art High Efficiency Video Coding (HEVC) standard. To master this demanding challenge, the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) started working together in the Joint Video Experts Team (JVET). As of July 2020, VVC has just been finalized and approved by the ITU-T as H.266. VVC has been designed to achieve significantly improved compression capability compared to previous standards such as HEVC, and at the same time to be highly versatile for effective use in a broadened range of applications. Key application areas for VVC include ultra-high-definition video (e.g., 4K or 8K resolution), video with a high dynamic range and wide colour gamut (e.g., with transfer characteristics specified in Rec. ITU-R BT.2100), and video for immersive media applications such as 360° omnidirectional video, in addition to the applications commonly addressed by prior video coding standards. Another aspect that marks an important milestone in video coding development is the introduction of simple compression algorithms based on machine learning. Machine-learning, or data-driven, methods are a promising research direction for the development of new tools in video compression. On the other hand, designing such tools so that they are applicable in standards widely used on multiple types of consumer devices is a challenging task. In this talk, we will outline some aspects of this problem, with a particular focus on the role of data-driven methods in the standardization process of the VVC standard.
Benjamin Bross received the Dipl.-Ing. degree in electrical engineering from RWTH Aachen University, Aachen, Germany, in 2008. In 2009, he joined the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, Berlin, Germany, where he is currently heading the Video Coding Systems group at the Video Coding & Analytics Department; in 2011, he became a part-time lecturer at the HTW University of Applied Sciences Berlin. Since 2010, Benjamin has been very actively involved in the ITU-T VCEG | ISO/IEC MPEG video coding standardization processes as a technical contributor, coordinator of core experiments and chief editor of the High Efficiency Video Coding (HEVC) standard [ITU-T H.265 | ISO/IEC 23008-2] and the emerging Versatile Video Coding (VVC) standard. In addition to his involvement in standardization, Benjamin is coordinating standard-compliant software implementation activities. This includes the development of an HEVC encoder that is currently deployed in broadcast for HD and UHD TV channels. Besides giving talks about recent video coding technologies, Benjamin Bross is an author or co-author of several fundamental HEVC- and VVC-related publications, and an author of two book chapters on HEVC and inter-picture prediction techniques in HEVC. He received the IEEE Best Paper Award at the 2013 IEEE International Conference on Consumer Electronics (ICCE-Berlin), the SMPTE Journal Certificate of Merit in 2014, and an Emmy Award at the 69th Engineering Emmy Awards in 2017 as part of the Joint Collaborative Team on Video Coding for its development of HEVC.
Jonathan Pfaff received his Diploma and his Dr. rer. nat. degree in Mathematics from Bonn University, Bonn, Germany, in 2010 and 2012, respectively. After a postdoctoral stay at Stanford University, he joined Fraunhofer HHI in 2015, where he is currently heading the research group Video Coding Technologies at the Video Coding & Analytics Department. He has successfully contributed to the Versatile Video Coding (VVC) standard.
Point Cloud Compression: searching for correlations in 2D vs 3D
Point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass-market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for presenting immersive volumetric data.
This talk introduces the technologies developed during the MPEG standardization process for defining an international standard for point cloud compression. The diversity of point clouds in terms of density led to the design of two approaches: the first, called V-PCC (Video-based Point Cloud Compression), projects the 3D space onto a set of 2D patches and encodes them using traditional video technologies. The second, called G-PCC (Geometry-based Point Cloud Compression), traverses the 3D space directly in order to build the predictors.
With the current V-PCC encoder implementation providing a compression ratio of 125:1, a dynamic point cloud of 1 million points could be encoded at 8 Mbit/s with good perceptual quality. For the second approach, the current implementation of a lossless, intra-frame G-PCC encoder provides a compression ratio of up to 10:1, and lossy coding of acceptable quality at ratios of up to 35:1.
By providing high-level immersiveness at currently available bandwidths, the two MPEG standards are expected to enable several applications such as six Degrees of Freedom (6 DoF) immersive media, virtual reality (VR) / augmented reality (AR), immersive real-time communication, autonomous driving, cultural heritage, and a mix of individual point cloud objects with background 2D/360-degree video.
Marius Preda is associate professor at "Institut MINES-Télécom" and Chairman of the 3D Graphics group of ISO MPEG. He contributed to various ISO standards with technologies in the fields of 3D graphics, virtual worlds and augmented reality and has received several ISO Certifications of Appreciation. Academically, he received a Degree in Engineering from Politehnica Bucharest, a PhD in Mathematics and Informatics from University Paris V and an eMBA from IMT Business School, Paris.