Coding Theory and Its Applications to Data Storage, Communications and Security
Scientific themes:
- D - Telecommunications: compression, protection, transmission
We remind you that, in order to guarantee access to the meeting rooms for all registrants, registration for the meetings is free of charge but mandatory.
15 members of the GdR ISIS and 8 non-members of the GdR are registered for this meeting.
Room capacity: 90 people.
This seminar on the theory of error-correcting codes and its applications pursues the following main goals:
(i) to present the latest results in algebraic and graph-based coding;
(ii) to present the most relevant current applications of channel coding.
If you are willing to contribute to the seminar and give a talk related to one of the aforementioned subjects, please contact the organizers before November 15, 2019. The participation of PhD students and postdoctoral researchers is strongly encouraged.
- Iryna Andriyanova (ETIS, ENSEA-Université de Cergy-Pontoise-CNRS); firstname.lastname@example.org
- Charly Poulliat (IRIT, ENSEEIHT), email@example.com
9:30-10:15 Finite-length scaling of spatially coupled LDPC codes under window decoding over the binary erasure channel
Abstracts of the contributions
Finite-length scaling of spatially coupled LDPC codes under window decoding over the binary erasure channel
We analyze the finite-length performance of spatially coupled low-density parity-check (SC-LDPC) codes under window decoding over the binary erasure channel. In particular, we propose a refinement of the scaling law by Olmos and Urbanke for the frame error rate (FER) of terminated SC-LDPC ensembles under full belief propagation (BP) decoding. The proposed refined scaling law is based on modeling the decoding process as two independent Ornstein-Uhlenbeck processes, corresponding to the two decoding waves that propagate toward the center of the coupled chain for terminated SC-LDPC codes, and yields a very good prediction of the FER. We then extend the proposed scaling law to predict the performance of (terminated) SC-LDPC code ensembles under the more practical sliding window decoding. The proposed scaling law makes it easy to quantify the loss in performance with respect to full BP decoding for a given window size. Finally, we show how this framework can be applied to predict the bit error rate performance of SC-LDPC codes.
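The Ornstein-Uhlenbeck processes underlying the scaling law can be illustrated with a minimal Euler-Maruyama simulation. This is an illustrative sketch only, not the authors' scaling-law computation; the parameter names (theta, mu, sigma) and values are placeholders, not taken from the talk.

```python
import random

def ou_path(theta, mu, sigma, x0, dt, n_steps, rng):
    """Simulate one Ornstein-Uhlenbeck path with the Euler-Maruyama scheme:
    dX = theta * (mu - X) * dt + sigma * dW."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment, std sqrt(dt)
        x += theta * (mu - x) * dt + sigma * dw
        path.append(x)
    return path

rng = random.Random(0)
path = ou_path(theta=1.0, mu=0.0, sigma=0.3, x0=2.0, dt=0.01, n_steps=1000, rng=rng)
# The path relaxes from x0 toward the mean mu and then fluctuates around it;
# in the scaling law, hitting a threshold models a decoding wave dying out.
print(len(path), path[0])
```

In the refined scaling law, two such mean-reverting processes (one per decoding wave) are run independently, and the FER is tied to the probability that either process crosses a critical level before decoding completes.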
Non-binary turbo codes: design, simplified decoding and comparison with SoA codes
The 4th and 5th generations of mobile communication systems adopted binary turbo, LDPC and polar codes as forward error correction techniques. However, these codes do not seem to fully satisfy the foreseen requirements in terms of spectral efficiency and reliability of future communication systems (2030 and beyond), which are expected to support new applications such as holographic communications, autonomous vehicles, the tactile internet, etc. A first potential solution was provided a few years ago through the proposal of non-binary LDPC codes. They have been proven to outperform their binary counterparts, especially for short frame sizes and/or when coupled with high-order modulations, at the cost of largely increased complexity. Lately, similar studies have started to emerge for turbo codes, where the same conclusions regarding performance and complexity are drawn. In this work, we define a new structure of non-binary turbo codes associated with binary and high-order modulations, in which all constituent components are investigated and optimized. Moreover, we were able to reduce their decoding complexity through the proposal of a new simplified decoding algorithm. The resulting codes have been shown to outperform widely used codes, including those adopted in the latest standards.
Magic state distillation with error-correcting codes
One of the main obstacles to building a quantum computer is the fragility of quantum information: any unwanted interaction with the environment gives rise to the phenomenon of decoherence. In practice, all the hardware of a quantum computer is intrinsically faulty: not only the qubits, but also the gates and the measurement devices. To address this issue, one must resort to quantum fault-tolerance techniques. One way to achieve this is through magic-state distillation protocols. In these protocols, the gates of a quantum circuit that are particularly difficult to implement fault-tolerantly (such as T-gates) are replaced by the preparation of special quantum states called magic states which, together with quantum gates that are easier to implement fault-tolerantly, can perform the original hard-to-implement gate (the T-gate, for instance). These magic states are of course also noisy in practice, and magic-state distillation procedures aim at exploiting quantum error correction to improve their quality and finally obtain high-fidelity magic states. This moves the cost of implementing expensive quantum gates to the preparation of high-fidelity magic states. The codes required in this setting need to be highly structured in order to meet some rather stringent conditions. We will show that polar codes and Reed-Solomon codes can be used in this context and that they can improve the overhead, which is measured by the ratio of noisy input states to almost perfect output states.
LDPC codes for extractable source coding and application to 360 degree images
We consider a framework in which the data generated by several correlated sources are compressed and stored on a server, to be later successively requested by many clients. In this case, all the sources should be compressed jointly in order to minimize the amount of data stored on the server. But clients are only interested in subsets of the sources, and joint compression makes it difficult to access only a part of the sources without having to decode the whole coded stream. Therefore, we aim at developing an extractable source coding scheme in which the sources are jointly compressed at the server but one can easily extract only a subpart of the compressed data. The objective of extractable source coding is then to minimize both the storage rate on the server and the transmission rate needed to serve the users' requests. An application of this problem is 360 degree images, in which the data is recorded in every direction and the user can freely choose to look in any direction.
For this problem, we first provide the achievable storage and transmission rates for extractable source coding. We then show that rate-compatible Low-Density Parity-Check (LDPC) codes can be used to build a practical coding scheme with performance close to the achievable rates. Based on these results, we build a practical source coding scheme for 360 degree images. We show how to optimally select the model representing the correlation between blocks of the images, and how to quantize the model parameters by addressing the tradeoff between the rate cost of transmitting parameters and the decoding efficiency of a decoder with mismatched parameters. In order to use low-complexity binary LDPC codes, we transform the block pixels into bit planes, and show that the loss in performance is very small compared to coding at the pixel level. Finally, we evaluate the rate performance of our scheme, and show that it has a small loss with respect to the achievable storage and transmission rates provided by information theory results.
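The bit-plane transform mentioned above can be sketched as follows. This is a minimal illustration of the general idea (splitting 8-bit pixels into eight binary sequences so that a binary LDPC code can be applied per plane); the function names are ours, not from the talk.

```python
def to_bit_planes(pixels, n_bits=8):
    """Split a list of n_bits-wide pixel values into n_bits binary planes,
    from most significant (plane 0) to least significant."""
    return [[(p >> (n_bits - 1 - b)) & 1 for p in pixels]
            for b in range(n_bits)]

def from_bit_planes(planes):
    """Reassemble pixel values from their bit planes (inverse transform)."""
    n_bits = len(planes)
    return [sum(bit << (n_bits - 1 - b) for b, bit in enumerate(col))
            for col in zip(*planes)]

block = [0, 37, 128, 255]
planes = to_bit_planes(block)
assert from_bit_planes(planes) == block  # the transform is lossless
# Each plane is a binary sequence, so low-complexity binary LDPC codes
# can be applied plane by plane instead of coding at the pixel level.
```

The transform itself is lossless; the small performance loss mentioned in the abstract comes from coding the planes independently rather than modeling the pixel-level correlation directly.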
On high-rate non-binary LDPC codes with extremely low bit error rates
The focus of this talk is the design of high-rate LDPC codes achieving extremely low bit error rates. Particular interest is given to (c,d)-regular LDPC ensembles over GF(q). Our approach is based on the calculation of the average bit error rate of a given ensemble, by putting together the asymptotic and the finite-length analysis of non-binary LDPC codes.
Variable-length coding for zero-error information transmission
Zero-error information theory aims at designing coding schemes in which the probability of correct decoding is exactly one. The zero-error capacity of a channel was defined by Shannon (1956) as the maximum asymptotic rate that can be reached with zero-error codes. In general, the characterization of the zero-error capacity is a wide open problem, even if some particular cases are solved. In this work, I will propose a new approach based on variable-length codes. The first result provides a single-letter expression for the asymptotic performance of this family of variable-length codes. The second result proves the optimality of intermingled variable-length codes for an example where no finite-length code can reach the zero-error capacity. These results are also connected to the problem of channel coding in the finite block-length regime, and to automaton theory.
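The zero-error setting above can be made concrete with Shannon's classic example. A one-shot zero-error code is an independent set in the channel's confusability graph (two inputs are joined by an edge if some output can arise from both). The brute-force search below is our illustration of this standard fact, not the variable-length construction of the talk.

```python
from itertools import combinations

def max_independent_set(n, edges):
    """Brute-force the largest set of vertices with no edge between them:
    a largest one-shot zero-error code in the confusability graph."""
    adj = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all(frozenset(p) not in adj for p in combinations(cand, 2)):
                return list(cand)
    return []

# Confusability graph of Shannon's "noisy typewriter" with 5 inputs:
# input i can be received as i or i+1 (mod 5), so inputs i and i+1 are
# confusable and the graph is the 5-cycle (pentagon).
pentagon = [(i, (i + 1) % 5) for i in range(5)]
code = max_independent_set(5, pentagon)
print(code)  # → [0, 2]: two inputs that can never be confused
```

For the pentagon, one channel use supports only 2 zero-error messages, yet Shannon observed that two uses support 5, and Lovász (1979) showed the zero-error capacity is exactly log2(sqrt(5)) bits per use; this gap between single-shot and asymptotic behavior is what makes the zero-error capacity hard to characterize in general.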