Announcement
FLAIR #2: textural and temporal information for semantic segmentation from multi-source optical imagery
4 July 2023
Category: Competitions and challenges
Following a successful first challenge, we are thrilled to present the FLAIR #2 challenge.
Participants will be tasked with developing innovative solutions that effectively harness textural information from single-date aerial imagery and temporal information from Sentinel-2 satellite time series to enhance semantic segmentation, domain adaptation, and transfer learning. Solutions should address the challenges of reconciling differing acquisition times and spatial resolutions, accommodating varying acquisition conditions, and handling the heterogeneity of semantic classes across locations.
Challenge information:
Codalab competition page: https://codalab.lisn.upsaclay.fr/competitions/13447
FLAIR#2 datapaper: https://arxiv.org/pdf/2305.14467.pdf
FLAIR project page: https://ignf.github.io/FLAIR/#FLAIR2
Github repository for baseline: https://github.com/IGNF/FLAIR-2-AI-Challenge
Data and baseline:
The FLAIR #2 dataset encompasses 20,384,841,728 annotated pixels of aerial imagery at a spatial resolution of 0.20 m, divided into 77,762 patches of size 512x512. The dataset also includes an extensive collection of satellite data: 51,244 acquisitions of Copernicus Sentinel-2 satellite images in total. For each area, a comprehensive one-year record of acquisitions has been gathered, offering valuable insights into the spatio-temporal dynamics and spectral characteristics of the land cover.
The dataset covers 50 spatio-temporal domains, encompassing 916 areas spanning 817 km². With 13 semantic classes (plus 6 not used in this challenge), this dataset provides a robust foundation for advancing land cover mapping techniques.
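As a quick sanity check, the quoted patch count and patch size are consistent with the stated total pixel count:

```python
# Verify the dataset figures quoted above: 77,762 patches of
# 512 x 512 pixels should account for the stated 20,384,841,728 pixels.
patches = 77_762
pixels_per_patch = 512 * 512
total_pixels = patches * pixels_per_patch
print(total_pixels)  # 20384841728
```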
The baseline of the challenge is the U-T&T model, a two-branch architecture that combines spatial and temporal information from very high-resolution aerial images and high-resolution satellite images into a single output. The U-Net architecture is employed for the spatial/texture branch, using a ResNet34 backbone model pre-trained on ImageNet. For the spatio-temporal branch, the U-TAE architecture incorporates a Temporal self-Attention Encoder (TAE) to explore the spatial and temporal characteristics of the Sentinel-2 time series data, applying attention masks at different resolutions during decoding. This model allows for the fusion of learned information from both sources, enhancing the representation of mono-date and time series data.
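The two-branch idea above can be illustrated with a deliberately simplified PyTorch sketch. This is not the U-T&T baseline itself (which uses a U-Net/ResNet34 spatial branch and a U-TAE temporal branch; see the GitHub repository): the class name, channel counts, and the attention-pooling stand-in for the TAE are all illustrative assumptions.

```python
# Hypothetical, minimal two-branch fusion model in the spirit of U-T&T.
# One branch encodes a single-date aerial patch; the other pools a
# Sentinel-2 time series with learned per-date attention weights
# (a crude stand-in for the Temporal self-Attention Encoder).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSegmenter(nn.Module):
    def __init__(self, aerial_ch=5, s2_ch=10, n_classes=13, hidden=16):
        super().__init__()
        # Spatial/texture branch: encodes the very-high-resolution aerial patch.
        self.spatial = nn.Sequential(
            nn.Conv2d(aerial_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Temporal branch: embeds each Sentinel-2 date, then scores it.
        self.temporal_embed = nn.Conv2d(s2_ch, hidden, 1)
        self.attn = nn.Conv2d(hidden, 1, 1)
        # Fusion head: concatenate both branches, predict class logits.
        self.head = nn.Conv2d(2 * hidden, n_classes, 1)

    def forward(self, aerial, s2_series):
        # aerial: (B, aerial_ch, H, W); s2_series: (B, T, s2_ch, h, w)
        feat_sp = self.spatial(aerial)                        # (B, hidden, H, W)
        b, t, c, h, w = s2_series.shape
        emb = self.temporal_embed(s2_series.view(b * t, c, h, w))
        emb = emb.view(b, t, -1, h, w)                        # (B, T, hidden, h, w)
        scores = self.attn(emb.reshape(b * t, -1, h, w)).view(b, t, 1, h, w)
        weights = torch.softmax(scores, dim=1)                # attention over dates
        feat_tmp = (weights * emb).sum(dim=1)                 # (B, hidden, h, w)
        # Upsample the coarse satellite features to the aerial resolution,
        # then fuse the two branches into a single segmentation output.
        feat_tmp = F.interpolate(feat_tmp, size=feat_sp.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return self.head(torch.cat([feat_sp, feat_tmp], dim=1))

model = TwoBranchSegmenter()
aerial = torch.randn(2, 5, 64, 64)     # one aerial patch per sample
s2 = torch.randn(2, 8, 10, 16, 16)     # 8 Sentinel-2 dates, 10 bands
out = model(aerial, s2)
print(out.shape)  # torch.Size([2, 13, 64, 64])
```

The key design point mirrored here is that the satellite features are computed at their native (coarser) resolution and only merged with the aerial features after upsampling, so each branch specializes in the information its sensor provides best.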
Prizes, participation and rules:
The winners of the challenge will receive prizes, with the announcement set to take place at the end of September. Here is a breakdown of the prize distribution: