TY - THES
TI - Guess & check codes for deletions, insertions, and synchronization
DO - https://doi.org/10.7282/t3-0r5j-8d93
PY - 2020
AB - Deletion and insertion errors occur in various communication and storage systems, and are often associated with some form of loss of synchronization. Constructing codes that correct deletions and insertions with optimal redundancy and efficient encoding and decoding remains, in general, an open problem. The study of deletion correcting codes has several applications, such as file synchronization and DNA-based storage. This thesis studies the problem of designing codes for deletions, insertions, and synchronization, and is divided into three parts.
In the first part, this thesis focuses on the problem of constructing codes that can correct $\delta$ deletions that are arbitrarily located in a binary string. The fundamental limits for this problem, derived by Levenshtein, show that the optimal number of redundant bits needed to correct $\delta$ deletions in an $n$-bit codeword is $\mathcal{O}(\delta \log n)$. Varshamov-Tenengolts (VT) codes, dating back to 1965, are zero-error single deletion $(\delta=1)$ correcting codes that have an asymptotically optimal redundancy of at most $\log(n+1)$ bits, and have linear time $\mathcal{O}(n)$ encoding and decoding algorithms. Finding similar codes for $\delta \geq 2$ deletions remains an open problem. Classical deletion correcting codes are required to correct all $\delta$ deletions with zero error. In our work, we relax the standard zero-error decoding requirement, and instead require correcting almost all $\delta$ deletions (a fraction that goes to $1$ with $n$). One of our main contributions is an explicit construction of a new family of codes, which we call Guess & Check (GC) codes, that can correct with high probability up to a constant number $\delta$ of deletions (or insertions). GC codes are systematic and have $c(\delta+1)\log k$ redundancy, where $k$ is the length of the information message and $c>\delta$ is a code parameter. Moreover, these codes have deterministic encoding and decoding algorithms that run in time polynomial in $k$. GC codes are, so far, the only existing {\em systematic} codes with logarithmic redundancy, which makes them suitable for file synchronization applications. We describe the application of GC codes to file synchronization, and highlight the resulting savings in terms of communication cost and number of communication rounds.
In the second part of this thesis, we study the problem of correcting deletions that are localized in certain parts of the codeword that are unknown a priori. This study is motivated by file synchronization in applications such as cloud storage, where large files are often edited by deleting and inserting characters in a relatively small part of the file (such as editing a paragraph). The model we study is one where $\delta$ deletions are localized in a window of size $w$ in the codeword. This model is a generalization of the bursty model, in which all deletions must be consecutive. Our main contribution in this part is constructing new explicit codes for the localized model that can correct, with high probability, $\delta \leq w$ deletions localized in a single window of size $w$, where $w=o(k)$ grows as a sublinear function of the length of the information message $k$. We thereby extend existing results in the literature, which study the problem for a window size fixed to $w=3$ or $w=4$. Furthermore, the encoding complexity of our codes is $o(k^2)$, and the decoding complexity is $\mathcal{O}(k^2)$.
In the third part of this thesis, we study the problem of coded trace reconstruction, which has applications to DNA-based storage. DNA-based storage systems introduce various challenges, such as errors in the stored data due to DNA breakages caused by chemical reactions, or errors resulting from the process of DNA sequencing. For instance, DNA sequencing using nanopores produces multiple traces (copies) of the data, which contain errors such as deletions. One solution to enhance the reliability of such systems is to code the stored data. In coded trace reconstruction, the goal is to design codes that allow reconstructing the data efficiently from a small number of traces.
We study the model where the traces are obtained as outputs of independent deletion channels, where each channel deletes each bit of the input codeword $x$ independently with probability $p$. Our main contribution in this part is designing novel codes for trace reconstruction with constant redundancy, which allow reconstructing a coded sequence from a constant number of traces in the regime where $p=\Theta(1/n)$. Our results improve on the state-of-the-art coded trace reconstruction algorithm, which requires logarithmic redundancy in $n$ for a similar regime where the number of deletions in each trace is fixed.
KW - Electrical and Computer Engineering
LA - English
ER -