Linear-time list recovery via expander codes
Brett Hemenway and Mary Wootters
June 17, 2016
Outline Introduction List recovery Expander codes List recovery of expander codes Conclusion
Our Results (one-slide version)
◮ Inner code + expander graph ⇒ Expander code
◮ [SS96, Spi96, Zem01, BZ02, BZ05, BZ06] Inner code has decent distance ⇒ so does the expander code, and it’s decodable in linear time!
◮ [HOW13] Inner code has decent locality ⇒ so does the expander code, and it’s locally decodable in sub-linear time!
◮ Moral? Inner code has a decent ⟨property⟩ ⇒ so does the expander code, and there’s an efficient algorithm!
◮ This work: Inner code has decent list-recoverability ⇒ so does the expander code, and it’s list-recoverable in linear time!
Our Results (two-slide version)
Inner code has decent list-recoverability ⇒ so does the expander code, and it’s list-recoverable in linear time!
◮ List-recoverable: decodable from uncertainty (instead of errors).
◮ Known linear-time list-recoverable codes have rate < 1/2.
◮ Expander codes can have rate 1 − ε!
◮ We can plug our codes into [Meir’14] and get the optimal* rate/error trade-off, for any rate.
Outline Introduction List recovery Expander codes List recovery of expander codes Conclusion
List Decoding
Received word (some positions corrupted): G O O D T I M E F O R P I E, with error symbols X, A, C, U, O, V, R, T appearing in the corrupted positions.
List decoding outputs every codeword close to the received word:
GOODTIMEFORPIE
GOATSATEALLPIE
List Recovery (From Erasures)
Now every position comes with a small list of possible symbols, and some positions may be erased (marked ?). (Figure: a table of per-position candidate lists.)
List recovery outputs every codeword consistent with the lists:
GOODTIMEFORPIE
MYWHATBIGTEETH
LIVEANDLETLIVE
List Recovery: Definition
Definition. C ⊆ Σ^N is (α, ℓ, L)-list-recoverable (from erasures) if:
◮ for any set of lists S_1, …, S_N so that at least αN of them have size ≤ ℓ,
◮ there are at most L codewords c ∈ C so that c_i ∈ S_i for all i.
◮ α ←− fraction of the codeword not erased
◮ ℓ ←− number of symbols in each slot
◮ L ←− number of codewords that match
Note: L ≥ ℓ.
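For intuition, the definition can be checked by brute force on a toy code. This is only an illustrative sketch, not anything from the talk: the parity code, parameters, and function names below are made up.

```python
import math
from itertools import combinations, product

def matching_codewords(code, lists):
    """All codewords c with c[i] in lists[i] at every position i."""
    return [c for c in code if all(c[i] in S for i, S in enumerate(lists))]

def is_list_recoverable(code, alphabet, alpha, ell, L):
    """Brute-force check that `code` is (alpha, ell, L)-list-recoverable from
    erasures.  Erased positions are modeled as lists equal to the whole
    alphabet.  It suffices to test exactly ceil(alpha*N) unerased positions:
    adding more small lists only removes matching codewords.
    Exponential time -- toy parameters only."""
    N = len(next(iter(code)))
    k = math.ceil(alpha * N)
    small = [set(s) for r in range(1, ell + 1)
             for s in combinations(sorted(alphabet), r)]
    for unerased in combinations(range(N), k):
        for choice in product(small, repeat=k):
            lists = [set(alphabet) for _ in range(N)]
            for pos, S in zip(unerased, choice):
                lists[pos] = S
            if len(matching_codewords(code, lists)) > L:
                return False
    return True

# Toy code: the [3,2] binary parity (even-weight) code.
parity_code = {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}
print(is_list_recoverable(parity_code, {0, 1}, alpha=1.0, ell=1, L=1))  # → True
```

With ℓ = 2 every list can be the whole alphabet {0,1}, so all four codewords match and this toy code needs L = 4, illustrating the note that L ≥ ℓ in general.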
List decoding concatenated codes: an application of list recovery
◮ A message is encoded with the outer code C_out; each symbol of the result is then encoded with the inner code C_in.
◮ To decode, first list decode each inner block of C_in, obtaining a small list of candidate symbols for every position of the outer codeword.
◮ Then list recover C_out from these lists, obtaining a list of candidate messages.
Concatenated code is list decodable.
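The two-stage decoder above can be sketched end-to-end on toy components. This is a hypothetical setup, not the talk's construction: the inner code is a 3-fold repetition code and the outer code is the [3,2] parity code, chosen only so the example fits on a slide.

```python
def hamming_dist(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_decode_inner(inner_codewords, received, radius):
    """Inner stage: every inner codeword within `radius` of the received block."""
    return {c for c in inner_codewords if hamming_dist(c, received) <= radius}

def list_recover_outer(outer_code, lists):
    """Outer stage: outer codewords consistent with every candidate list."""
    return [c for c in outer_code if all(c[i] in S for i, S in enumerate(lists))]

def decode_concatenated(outer_code, inner_enc, received_blocks, radius):
    """List decode a concatenated code: per-block inner list decoding gives
    one candidate list per outer position; the outer code is then list
    recovered from those lists."""
    decode_map = {cw: sym for sym, cw in inner_enc.items()}
    lists = [{decode_map[c]
              for c in list_decode_inner(set(inner_enc.values()), blk, radius)}
             for blk in received_blocks]
    return list_recover_outer(outer_code, lists)

# Toy instance: outer = [3,2] parity code over {0,1}, inner = 3-fold repetition.
outer = {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}
inner_enc = {0: (0,0,0), 1: (1,1,1)}
# Transmitted (0,1,1) -> blocks (0,0,0),(1,1,1),(1,1,1); channel adds errors:
received = [(0,0,1), (1,1,1), (1,0,1)]
print(decode_concatenated(outer, inner_enc, received, radius=2))
```

At radius 2 two blocks are ambiguous and the decoder outputs a list of two candidate outer codewords; at radius 1 every block decodes uniquely and the list shrinks to the transmitted codeword.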
List Recovery: Applications
◮ Related to list decoding [GI02, GI03, GI04]
◮ Compressed sensing [NPR12, GNP+13]
◮ Group testing [INR10]
◮ The erasure model is weaker than the error model (the erasure model was studied before, in [GI04])
Outline Introduction List recovery Expander codes List recovery of expander codes Conclusion
Tanner Codes [Tanner’81]
Given:
◮ a d-regular graph G with n vertices and N = nd/2 edges
◮ an inner code C_0 with block length d over Σ,
we get a Tanner code C.
◮ C has block length N and alphabet Σ.
◮ Codewords are labelings of the edges of G.
◮ A labeling is in C if the labels on the edges at each vertex form a codeword of C_0.
◮ (We fix an arbitrary ordering of the edges at each vertex.)
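The definition translates directly into a (very inefficient) enumeration of the Tanner code. The sketch below uses a made-up toy instance, K_4 with the [3,2] parity code as inner code, rather than the talk's example.

```python
from itertools import combinations, product

def tanner_codewords(vertices, edges, inner_code):
    """Enumerate the Tanner code: binary edge labelings whose restriction to
    each vertex's incident edges (in a fixed order) lies in the inner code."""
    incident = {v: [i for i, e in enumerate(edges) if v in e] for v in vertices}
    return [labeling for labeling in product((0, 1), repeat=len(edges))
            if all(tuple(labeling[i] for i in incident[v]) in inner_code
                   for v in vertices)]

# Toy instance: G = K4 (3-regular, n = 4, so N = 4*3/2 = 6 edges),
# inner code C0 = the [3,2] parity code (block length d = 3).
vertices = list(range(4))
edges = list(combinations(vertices, 2))
parity = {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}
C = tanner_codewords(vertices, edges, parity)
print(len(C))   # 8: here the Tanner code is exactly the cycle space of K4
```

Each labeling must satisfy all four vertex constraints simultaneously; the all-ones labeling, for instance, shows the word (1,1,1) at every vertex, which has odd parity and is rejected.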
Example [Tanner’81]
G is K_8, and C_0 is the [7, 4, 3] Hamming code. Then N = (8 choose 2) = 28 and Σ = {0, 1}.
A codeword of C is a labeling of the edges of G (red ↦ 0, blue ↦ 1); the 7 edges at each vertex form a codeword in the Hamming code:
(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1) ∈ C ⊆ {0, 1}^28
Encoding Tanner Codes
Encoding is easy!
1. Generate the parity-check matrix. Requires:
◮ the edge-vertex incidence matrix of the graph
◮ the parity-check matrix of the inner code
2. Calculate a basis for the kernel of the parity-check matrix.
3. This basis defines a generator matrix for the linear Tanner code.
4. Encoding is just multiplication by this generator matrix.
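Steps 1 and 2 can be sketched with plain GF(2) Gaussian elimination. This is an illustrative toy, not the paper's implementation: the K_4/parity-code instance and all function names are made up.

```python
from itertools import combinations

def gf2_kernel_basis(H):
    """Basis for the kernel of H over GF(2) (the rows of a generator matrix).
    H is a list of equal-length 0/1 rows; plain Gaussian elimination."""
    n = len(H[0])
    rows = [list(r) for r in H]
    pivots = {}                       # pivot column -> its row index in the RREF
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    basis = []
    for f in (c for c in range(n) if c not in pivots):   # one vector per free column
        v = [0] * n
        v[f] = 1
        for c, i in pivots.items():
            v[c] = rows[i][f]
        basis.append(v)
    return basis

# Step 1 on a toy instance (K4 + [3,2] parity inner code): one copy of the
# inner parity check per vertex, placed on that vertex's incident-edge columns.
edges = list(combinations(range(4), 2))
H0 = [[1, 1, 1]]                      # parity-check matrix of the inner code
H = []
for v in range(4):
    inc = [i for i, e in enumerate(edges) if v in e]
    for h0_row in H0:
        row = [0] * len(edges)
        for j, col in enumerate(inc):
            row[col] = h0_row[j]
        H.append(row)

# Step 2: a kernel basis is a generator matrix of the Tanner code.
G = gf2_kernel_basis(H)
print(len(G))   # dimension 3, so rate 1/2 for this toy code
```

Steps 3 and 4 then amount to multiplying a length-3 message vector by G over GF(2).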
Linearity
If the inner code C_0 is linear, so is the Tanner code C.
◮ C_0 = Ker(H_0) for some parity-check matrix H_0:
x ∈ C_0 ⇐⇒ H_0 x = 0
◮ So codewords of the Tanner code C are also defined by linear constraints:
y ∈ C ⇐⇒ H_0 (y|_Γ(v)) = 0 for all v ∈ G,
where y|_Γ(v) is the restriction of y to the edges incident to v.