  1. Bounds on achievable rates of sparse quantum codes over the quantum erasure channel
  Nicolas Delfosse (with Gilles Zémor)
  Institute of Mathematics - Univ. of Bordeaux - France
  Second Int. Conf. on Quantum Error Correction - QEC 11
  USC Los Angeles - December 5, 2011

  2. Capacity of a classical channel

  m (k bits) → Encoding → x (n bits) → Channel → x' (n bits) → Decoding → m' (k bits)

  ◮ The channel introduces errors → we add redundancy.
  ◮ What is the highest rate R = k/n with P_err → 0? It is the capacity of the channel.¹
  ◮ We want fast encoding and decoding → sparse codes → in compensation, a rate slightly below the capacity.
  ◮ With stabilizers of small weight, we can use degeneracy.

  ¹ C. Shannon - A mathematical theory of communication. The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October, 1948.

  3. Plan

  1. Capacity of the quantum erasure channel
     1.1 Capacity of the quantum erasure channel
     1.2 Stabilizer codes
     1.3 A combinatorial proof
  2. With sparse quantum codes
     2.1 Expected rank of a random sparse submatrix
     2.2 Achievable rates of sparse quantum codes over the QEC
  3. An application to percolation theory
     3.1 Kitaev's toric code and percolation
     3.2 Hyperbolic quantum codes
     3.3 Bound on the critical probability

  4. Capacity of the quantum erasure channel

  What is the highest rate R = k/n of quantum codes with P_err → 0?

  Theorem (Bennett, DiVincenzo, Smolin - 97). The capacity of the quantum erasure channel is 1 − 2p.

  Proved with no-cloning² → independent of the properties of the quantum codes.

  Goal: find a combinatorial proof and improve it for particular families of codes.

  ² C. H. Bennett, D. P. DiVincenzo, and J. A. Smolin - Capacities of Quantum Erasure Channels. Phys. Rev. Lett. 78, 3217–3220 (1997)

  5. Stabilizer codes

  ◮ S = ⟨S_1, ..., S_r⟩ is a stabilizer group of rank r.
  ◮ C(S) = Fix(S) is the stabilizer code.
  ◮ R = (n − r)/n is the rate of the stabilizer code.
  ◮ The syndrome of E ∈ P_n is σ(E) ∈ F_2^r such that: σ_i = 0 ⇔ E and S_i commute.
  → If E' ∈ S then E and EE' have the same effect.
  → We can measure the syndrome.

  Using the syndrome, we search for the most probable error.
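  To see the syndrome map concretely: in the binary symplectic representation, an n-qubit Pauli is a bit vector (x|z) with X = (1|0), Z = (0|1), Y = (1|1), and σ_i is the symplectic inner product of E with S_i. A minimal Python sketch (the two-generator toy code is a hypothetical example, not one from the talk):

    import numpy as np

    def symplectic_product(p, q):
        """Commutation bit of Paulis p = (x|z), q = (x'|z') over F_2:
        0 if they commute, 1 if they anticommute."""
        n = len(p) // 2
        return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2

    def syndrome(stabilizers, error):
        """sigma(E) in F_2^r: one commutation bit per generator S_i."""
        return np.array([symplectic_product(s, error) for s in stabilizers])

    # Toy example on n = 3 qubits with generators S_1 = XXI, S_2 = IZZ.
    S = np.array([[1, 1, 0, 0, 0, 0],   # XXI
                  [0, 0, 0, 0, 1, 1]])  # IZZ
    E = np.array([0, 0, 0, 0, 1, 0])    # Z on qubit 2: anticommutes with XXI
    print(syndrome(S, E))               # [1 0]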

  6. The quantum erasure channel

  Each qubit is erased independently with probability p:
  an erased qubit suffers a random Pauli error I, X, Y, Z, and the erased position is known.

  On n qubits: e ∈ F_2^n denotes the erased positions, and |ψ⟩ → E|ψ⟩ with Supp(E) ⊂ e (we write E ⊂ e).

  ◮ the erased positions are known: e ∈ F_2^n,
  ◮ the syndrome is known: σ(E) ∈ F_2^r,
  ◮ the error E ⊂ e is unknown.

  To correct the state, we search for an error E ⊂ e with syndrome σ.
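  The channel itself is equally easy to sample. A minimal sketch (my illustration; n = 10 and p = 0.25 are arbitrary parameters) drawing the erased positions e and a uniformly random Pauli E ⊂ e in the same (x|z) representation:

    import numpy as np

    rng = np.random.default_rng(0)

    def erasure_channel(n, p):
        """Sample (e, E): erased positions e and an error E with
        Supp(E) contained in e; a uniform pair (x_i, z_i) on an erased
        qubit is a uniform Pauli among I, X, Y, Z."""
        e = (rng.random(n) < p).astype(int)      # known erased positions
        x = rng.integers(0, 2, n) * e            # X-part, supported on e
        z = rng.integers(0, 2, n) * e            # Z-part, supported on e
        return e, np.concatenate([x, z])

    e, E = erasure_channel(n=10, p=0.25)
    # The decoder sees e and sigma(E); it succeeds with any E' subset of e
    # such that E'E lies in the stabilizer group (degeneracy).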

  7. A combinatorial bound

          ( I X Z Y Z )
      H = ( Z Z X I Z ) ,    e = (0 1 1 0 0)
          ( I Y Y Y Z )

  H_e denotes the submatrix formed by the columns of H selected by e, and H_ē the submatrix of the complementary columns.

  ◮ There are 4^2 errors E ⊂ e.
  ◮ There are 2^2 syndromes σ(E) with E ⊂ e, i.e. 2^{rank H_e} syndromes.
  ◮ There are 2 errors in each degeneracy class, i.e. 2^{rank H − rank H_ē} errors in each class.
  → Each syndrome is thus compatible with 4^2 / (2^2 · 2) = 2 degeneracy classes, which the decoder cannot distinguish → e cannot be corrected.

  Lemma. We can correct 2^{rank H − (rank H_ē − rank H_e)} errors E ⊂ e.
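  This counting can be checked mechanically. The sketch below (verification code written for this transcript, using the slide's matrix and the representation X = (1|0), Z = (0|1), Y = (1|1)) computes the three F_2 ranks and recovers 2^2 syndromes, 2 errors per class, and 2^{3 − (2 − 2)} = 8 correctable errors out of 4^2 = 16:

    import numpy as np

    def gf2_rank(M):
        """Rank of a binary matrix over F_2 by Gaussian elimination."""
        M, rank = M.copy() % 2, 0
        for col in range(M.shape[1]):
            pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]
            for r in range(M.shape[0]):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]
            rank += 1
        return rank

    PAULI = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}
    rows = ['IXZYZ', 'ZZXIZ', 'IYYYZ']
    X = np.array([[PAULI[c][0] for c in row] for row in rows])
    Z = np.array([[PAULI[c][1] for c in row] for row in rows])
    e = np.array([0, 1, 1, 0, 0], dtype=bool)

    H      = np.hstack([X,        Z       ])
    H_e    = np.hstack([X[:,  e], Z[:,  e]])
    H_ebar = np.hstack([X[:, ~e], Z[:, ~e]])
    r, re, rb = gf2_rank(H), gf2_rank(H_e), gf2_rank(H_ebar)
    print(re, rb, r)             # 2 2 3: 2^2 syndromes, 2^(3-2) errors per class
    print(2 ** (r - (rb - re)))  # 8 correctable errors, out of 4^2 = 16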

  8. A combinatorial bound

  Let (H_t) be a sequence of stabilizer matrices of rate R.

  Theorem (D., Zémor - 2011). If P_err → 0 then R ≤ 1 − 2p − g(p) ≤ 1 − 2p, where

    g(p) = lim sup E_p[ (rank H_ē − rank H_e) / n ].

  ◮ For general stabilizer codes, g(p) can be small (≈ 0).
  ◮ BUT for sparse matrices, this bound falls below the capacity.

  Goal: estimate g(p) for sparse matrices.
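  For any fixed stabilizer matrix, the expectation inside g(p) can be estimated by sampling erasure patterns. A minimal Monte Carlo sketch (continuing the previous snippet and reusing its gf2_rank and arrays X, Z; the trial count is arbitrary):

    rng = np.random.default_rng(1)

    def g_estimate(X, Z, p, trials=10_000):
        """Monte Carlo estimate of E_p[(rank H_ebar - rank H_e) / n]."""
        n, total = X.shape[1], 0
        for _ in range(trials):
            e = rng.random(n) < p
            H_e    = np.hstack([X[:,  e], Z[:,  e]])
            H_ebar = np.hstack([X[:, ~e], Z[:, ~e]])
            total += gf2_rank(H_ebar) - gf2_rank(H_e)
        return total / (trials * n)

    print(g_estimate(X, Z, p=0.25))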

  9. Rank of a random sparse matrix

  [Figure: H_e, the r × pn submatrix of H formed by the pn columns selected by e; sparse rows such as (Z X Z) and (Z Y X) are highlighted.]

  ◮ Typically: H_e is an r × np matrix.
  ◮ When np = r, the square matrix H_e has almost full rank → g(p) is close to 0.
  ◮ BUT for a sparse matrix H, there are αn null rows in H_e → g(p) > λ → bound on achievable rates.
  ◮ Similarly, there are βn identical rows of weight 1, ... → a more accurate bound.
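  The null-row count is easy to estimate: if every row of H has weight m, its restriction to H_e is null exactly when none of its m positions is erased, which happens with probability (1 − p)^m. A back-of-the-envelope sketch (the ratio r/n = 1/2 below is an illustrative, hypothetical value):

    def expected_null_row_fraction(m, p, r_over_n):
        """Expected fraction alpha of null rows of H_e, per qubit, when
        every row of H has weight m: a row avoids e with prob (1-p)^m."""
        return r_over_n * (1 - p) ** m

    print(expected_null_row_fraction(m=8, p=0.25, r_over_n=0.5))  # ~0.050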

  10. Achievable rates of sparse CSS codes

  Theorem (D., Zémor - 2011). Achievable rates of CSS(2, m) codes with d_X, d_Z ≥ 2δ + 1, over the quantum erasure channel of probability p, satisfy:

    R ≤ 1 − 2p − g(p) ≤ (1 − 2p) − (4/(mp)) · [ 1 − (1 − p)^m S_δ( p(1 − p)^{m−2} ) ]

  S_δ depends on the generating function for rooted subtrees in the m-regular tree.

  11. Achievable rates of sparse CSS codes

  [Figure: bounds on achievable rates of CSS(2,8) codes with δ = 0 (blue) and δ = 30 (black).]

  12. Kitaev's toric code and percolation

  [Figure: the stabilizers — each row of H_X is a weight-4 X-type star operator, each row of H_Z is a weight-4 Z-type plaquette operator. Figure: the toric code.]

  ◮ It is a CSS(2, 4) code.
  ◮ An erasure is problematic iff it covers a homological cycle.
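  The CSS(2,4) structure can be verified directly. Below is a standard construction sketch (mine, not code from the talk) of the toric code on an L × L torus: qubits sit on the 2L² edges, rows of H_X are vertex stars, rows of H_Z are plaquettes; the asserts check row weight 4, column weight 2, and the CSS commutation condition.

    import numpy as np

    def toric_code(L):
        n = 2 * L * L                                       # edges = qubits
        def h(x, y): return (x % L) * L + (y % L)           # horizontal edge id
        def v(x, y): return L * L + (x % L) * L + (y % L)   # vertical edge id
        HX = np.zeros((L * L, n), dtype=int)   # star at each vertex (x, y)
        HZ = np.zeros((L * L, n), dtype=int)   # plaquette at each face (x, y)
        for x in range(L):
            for y in range(L):
                s = x * L + y
                HX[s, [h(x, y), h(x, y - 1), v(x, y), v(x - 1, y)]] = 1
                HZ[s, [h(x, y), h(x + 1, y), v(x, y), v(x, y + 1)]] = 1
        return HX, HZ

    HX, HZ = toric_code(4)
    assert (HX.sum(axis=1) == 4).all() and (HZ.sum(axis=1) == 4).all()  # row weight 4
    assert (HX.sum(axis=0) == 2).all() and (HZ.sum(axis=0) == 2).all()  # column weight 2
    assert ((HX @ HZ.T) % 2 == 0).all()  # stars and plaquettes commute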
