Asymptotic properties of entanglement polytopes for large number of qubits and RMT
Adam Sawicki, Center of Theoretical Physics PAS
Joint work with Tomasz Maciążek
Setting
◮ System of L qubits, $\mathcal{H} = \mathbb{C}^2 \otimes \cdots \otimes \mathbb{C}^2$.
◮ We focus on pure states, i.e. points $[\psi]$ in $\mathbb{P}\mathcal{H}$.
◮ Problem: classification of pure-state entanglement.
Usual approach
◮ Stochastic local operations and classical communication (SLOCC).
◮ $K = SU(2)^{\times L}$ – local unitary operations.
◮ $G = K^{\mathbb{C}} = SL(2,\mathbb{C})^{\times L}$ – invertible SLOCC.
◮ Two states $[\phi_1]$ and $[\phi_2]$ are $G$-equivalent iff $[g.\phi_1] = [\phi_2]$ for some $g \in G$.
◮ Let $\mathcal{C}_\phi := [G.\phi]$.
Problems of the usual approach
◮ The number of classes $\mathcal{C}_\phi = [G.\phi]$ is infinite starting from four qubits.
◮ The number of parameters required to distinguish between classes $\mathcal{C}_\phi$ grows exponentially with the number of qubits.
◮ These parameters, e.g. invariant polynomials, typically lack physical meaning and are not measurable.
◮ We want a much more robust classification that organises the classes $\mathcal{C}_\phi$ into a finite number of families distinguishable by single-qubit measurements.
1-qubit RDMs
◮ $\rho_i([\phi])$ – the $i$-th one-qubit Reduced Density Matrix (RDM).
$$\mu([\phi]) = \left(\rho_1([\phi]) - \tfrac{1}{2}I,\ \ldots,\ \rho_L([\phi]) - \tfrac{1}{2}I\right)$$
◮ The ordered spectrum of $\rho_i([\phi]) - \tfrac{1}{2}I$ is given by
$$\sigma\!\left(\rho_i([\phi]) - \tfrac{1}{2}I\right) = \{-\lambda_i, \lambda_i\},\quad \lambda_i \in \left[0, \tfrac{1}{2}\right].$$
◮ The collection of spectra for $[\phi] \in \mathbb{P}(\mathcal{H})$:
$$\Psi : \mathbb{P}\mathcal{H} \to \left[0, \tfrac{1}{2}\right]^{\times L},\quad \Psi([\phi]) = \{\lambda_1, \lambda_2, \ldots, \lambda_L\}.$$
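As a concrete illustration, here is a minimal numpy sketch of the map $\Psi$ (function names are ours, not from the talk): each one-qubit RDM is obtained by a partial trace, and $\lambda_i$ is read off from the shifted spectrum.

```python
import numpy as np

def one_qubit_rdm(psi, i, L):
    """Reduced density matrix of qubit i for a pure state psi of L qubits."""
    t = np.moveaxis(psi.reshape([2] * L), i, 0).reshape(2, -1)  # qubit i first
    return t @ t.conj().T                                       # trace out the rest

def spectra_map(psi):
    """Psi([phi]) = (lambda_1, ..., lambda_L), lambda_i in [0, 1/2]."""
    L = int(np.log2(psi.size))
    psi = psi / np.linalg.norm(psi)
    return np.array([
        np.linalg.eigvalsh(one_qubit_rdm(psi, i, L) - 0.5 * np.eye(2)).max()
        for i in range(L)                    # spectrum is {-lambda_i, +lambda_i}
    ])

# GHZ state of 3 qubits: all one-qubit RDMs are maximally mixed, so Psi = (0, 0, 0)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(spectra_map(ghz))
```

In this convention $\lambda_i = 0$ corresponds to a maximally mixed one-qubit RDM and $\lambda_i = \tfrac{1}{2}$ to a pure one (product state).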
First Convexity Theorem
◮ $\Delta_{\mathcal{H}} := \Psi(\mathbb{P}\mathcal{H})$ is a convex polytope.
◮ Follows from the momentum map convexity theorem (Kirwan '84).
◮ Higuchi, Sudbery, Szulc '03 – this polytope is given by the intersection of
$$\forall i\quad \tfrac{1}{2} - \lambda_i \le \sum_{j \ne i} \left(\tfrac{1}{2} - \lambda_j\right)$$
with the cube $\left[0, \tfrac{1}{2}\right]^{\times L}$.
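A small helper (hypothetical, for illustration only) that tests whether a given spectrum vector satisfies the Higuchi–Sudbery–Szulc inequalities, i.e. lies in $\Delta_{\mathcal{H}}$:

```python
import numpy as np

def in_full_polytope(lam, tol=1e-12):
    """Higuchi-Sudbery-Szulc conditions: for every i,
    1/2 - lambda_i <= sum_{j != i} (1/2 - lambda_j), with lambda in [0, 1/2]^L."""
    lam = np.asarray(lam, dtype=float)
    if np.any(lam < -tol) or np.any(lam > 0.5 + tol):
        return False
    s = np.sum(0.5 - lam)
    return bool(np.all((0.5 - lam) <= s - (0.5 - lam) + tol))

print(in_full_polytope([0.0, 0.0, 0.0]))   # GHZ point: inside
print(in_full_polytope([0.5, 0.5, 0.0]))   # violates the inequality for i = 3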
Second Convexity Theorem
◮ $\mathcal{C}_\phi = [G.\phi]$.
◮ $\Delta_{\mathcal{C}_\phi} = \Psi(\mathcal{C}_\phi)$ is a convex polytope.
◮ Follows from the theorem of Brion '87.
◮ $\Delta_{\mathcal{C}_\phi}$ is called an Entanglement Polytope (EP).
◮ Introduced to QI in '12 by (AS, Oszmaniec, Kuś) and (Walter, Doran, Gross, Christandl).
◮ Although for $L \ge 4$ the number of classes $\mathcal{C}_\phi$ is infinite, the number of polytopes $\Delta_{\mathcal{C}_\phi}$ is always finite!
◮ Brion's theorem: finding EPs requires knowing the generating set of the covariants.
◮ This was solved only up to 4 qubits (Briand, Luque, J.-Y. Thibon 2003).
Entanglement polytopes for 2 and 3 qubits
Properties of Entanglement Polytopes
◮ Entanglement polytopes are typically not disjoint, $\Delta_{\mathcal{C}} \cap \Delta_{\mathcal{C}'} \ne \emptyset$.
◮ Example: $\Delta_{\mathcal{C}_{GHZ}} = \Delta_{\mathcal{H}}$, thus $\Delta_{\mathcal{C}_\phi} \subset \Delta_{\mathcal{C}_{GHZ}}$ for every $\mathcal{C}_\phi$.
◮ Entanglement polytopes can be regarded as entanglement witnesses.
Properties of Entanglement Polytopes
◮ EPs as entanglement witnesses: for $[\phi] \in \mathbb{P}(\mathcal{H})$ we give a list of polytopes that do not contain $\Psi([\phi])$.
◮ The decision-making power of EPs is determined by the volume of the region in $\Delta_{\mathcal{H}}$ where many EPs overlap.
◮ Problem: decision-making power of EPs for large $L$.
Properties of Entanglement Polytopes for large L?
◮ Finding entanglement polytopes, even for five qubits, is in fact intractable!
◮ We want to say something about EPs without finding them.
◮ For a polytope $\Delta_{\mathcal{C}_\phi}$ let $\lambda_{\mathcal{C}_\phi}$ be its point closest to the origin $0$.
◮ Our aim is to understand the distribution of $|\lambda_{\mathcal{C}_\phi}|^2$ in $\Delta_{\mathcal{H}}$ for a large number of qubits.
Procedure for finding $\lambda_{\mathcal{C}_\phi}$ for L qubits
◮ '15 T. Maciążek and AS – a procedure for finding $\lambda_{\mathcal{C}_\phi}$ using the momentum map results of Kirwan '84:
1. Construct the $L$-dimensional hypercube whose vertices have coordinates $\pm\tfrac{1}{2}$.
2. Choose $L$ out of the $2^L$ vertices and consider the plane $P$ containing the chosen points.
3. Find the point $p$ in $P$ closest to the origin $0$.
4. The point $p = \lambda_{\mathcal{C}_\phi}$ for some $\Delta_{\mathcal{C}_\phi}$ (a numerical sketch of this procedure follows below).
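A brute-force numpy sketch of steps 1–4 for small $L$ (helper names are ours; degenerate, affinely dependent vertex choices are not filtered out, and for larger $L$ the enumeration over vertex choices explodes combinatorially):

```python
import numpy as np
from itertools import combinations

def closest_point_to_origin(vertices):
    """Closest point to 0 in the affine plane through the given points
    (rows of `vertices`), found by least squares on p = v_last + B x."""
    v_last = vertices[-1]
    B = (vertices[:-1] - v_last).T                   # directions spanning the plane
    x, *_ = np.linalg.lstsq(B, -v_last, rcond=None)  # minimise ||v_last + B x||
    return v_last + B @ x

def candidate_lambdas(L, n_choices=None):
    """Closest points to 0 of planes through L chosen vertices of {-1/2, 1/2}^L."""
    cube = np.array([[(b >> k & 1) - 0.5 for k in range(L)] for b in range(2 ** L)])
    points = []
    for choice in combinations(range(2 ** L), L):
        points.append(closest_point_to_origin(cube[list(choice)]))
        if n_choices and len(points) >= n_choices:
            break
    return np.array(points)

print(candidate_lambdas(3, n_choices=5))
```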
Procedure for finding $\lambda_{\mathcal{C}_\phi}$ for L qubits
◮ For vectors $v_1, \ldots, v_k \in \mathbb{R}^n$ let
$$G(v_1, \ldots, v_k) = \begin{pmatrix} (v_1|v_1) & \cdots & (v_1|v_k) \\ \vdots & \ddots & \vdots \\ (v_k|v_1) & \cdots & (v_k|v_k) \end{pmatrix}$$
◮ $|G(v_1, \ldots, v_k)| := \det G(v_1, \ldots, v_k)$
$$|\lambda_{\mathcal{C}_\phi}|^2 = \frac{1}{4}\,\frac{|G(v_1, \ldots, v_L)|}{|G(v_1 - v_L, \ldots, v_{L-1} - v_L)|},$$
where $v_i$ are vectors with $\pm 1$ entries – Bernoulli vectors.
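A short cross-check (illustrative, not from the talk) that the Gram-determinant formula reproduces the squared distance from the origin to the plane through the corresponding $\pm\tfrac{1}{2}$ cube vertices:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
while True:
    V = rng.choice([-1.0, 1.0], size=(L, L))       # columns: Bernoulli vectors v_i
    D = V[:, :-1] - V[:, [-1]]                     # differences v_i - v_L
    det_full, det_diff = np.linalg.det(V.T @ V), np.linalg.det(D.T @ D)
    if abs(det_full) > 1e-9 and abs(det_diff) > 1e-9:   # skip degenerate draws
        break

lam_sq = 0.25 * det_full / det_diff                # the Gram-determinant formula

# Cross-check: squared distance from 0 to the plane through the cube
# vertices v_i / 2 (coordinates +-1/2), via least squares.
v_last = V[:, -1] / 2
x, *_ = np.linalg.lstsq(D / 2, -v_last, rcond=None)
p = v_last + (D / 2) @ x
print(lam_sq, p @ p)                               # the two numbers agree
```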
Example
$$d^2 = \frac{|G(v_1, v_2, v_3)|}{|G(v_1 - v_3, v_2 - v_3)|}$$
The model
◮ The vertices of the $L$-dimensional cube are 'uniformly distributed' on $S^{L-1}$ with $r^2 = L$.
◮ Let $v = (v_1, \ldots, v_L)^t \in \mathbb{R}^L$ be a Gaussian vector with independent $v_i \sim N(0,1)$:
$$v \sim \frac{\exp\left(-\tfrac{1}{2}\|v\|^2\right)}{\sqrt{(2\pi)^L}}$$
◮ The distribution of $v$ is isotropic; $\|v\|^2$ is $\chi^2_L$ with mean $L$ and $\sigma = \sqrt{2L}$.
◮ When $L \to \infty$ the ratio $\frac{\sqrt{2L}}{L} \to 0$.
◮ Problem: calculate the distribution of $\frac{|G(v_1, \ldots, v_L)|}{|G(v_1 - v_L, \ldots, v_{L-1} - v_L)|}$ for $v_i \sim N(0, I)$.
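A quick Monte Carlo illustration (ours, with illustrative sample sizes) of the concentration that motivates replacing cube vertices by Gaussian vectors: $\|v\|^2$ concentrates around $L$ with relative spread $\sqrt{2L}/L \to 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
for L in (13, 20, 200):
    norms_sq = np.sum(rng.standard_normal((20_000, L)) ** 2, axis=1)  # chi^2_L samples
    print(L, norms_sq.mean(), norms_sq.std(), norms_sq.std() / norms_sq.mean())
# mean ~ L, std ~ sqrt(2L); the relative spread sqrt(2L)/L shrinks as L grows
```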
Warm up
$$\frac{|G(v_1, \ldots, v_L)|}{|G(v_1, \ldots, v_{L-1})|}$$
◮ Let $v_i \sim N(0, I)$ and consider $G := G(v_1, \ldots, v_L)$.
◮ $G$ is a positive symmetric matrix distributed according to the Wishart distribution $W(L, I)$:
$$\frac{1}{2^{L^2/2}\,\Gamma_L\!\left(\tfrac{L}{2}\right)}\,|G|^{-\tfrac{1}{2}}\,e^{-\tfrac{1}{2}\operatorname{tr} G}$$
◮ $|G_{[L,L]}|$ – the $(L,L)$ minor of $G$.
$$\frac{|G(v_1, \ldots, v_L)|}{|G(v_1, \ldots, v_{L-1})|} = \frac{|G|}{|G_{[L,L]}|}$$
Warm up
◮ Cholesky decomposition: $G = TT^t$, where $T$ is a lower triangular matrix.
◮ Theorem (Bartlett): the $T^2_{ii}$ are independent random variables,
$$T^2_{ii} \sim \mathrm{Gamma}\left(\frac{L - i + 1}{2}, \frac{1}{2}\right)$$
◮ $\mathrm{Gamma}(\alpha, \beta)$ is the probability distribution with density
$$f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x}$$
◮ We are interested in $T^2_{LL}$:
$$\frac{|G|}{|G_{[L,L]}|} = T^2_{LL} \sim \mathrm{Gamma}\left(\frac{1}{2}, \frac{1}{2}\right)$$
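A numerical sanity check (a sketch with numpy/scipy; the sample sizes are ours) that the squared last Cholesky entry, i.e. $|G|/|G_{[L,L]}|$, indeed follows $\mathrm{Gamma}(\tfrac{1}{2}, \tfrac{1}{2})$, the $\chi^2_1$ law:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
L, n_samples = 8, 10_000
ratios = np.empty(n_samples)
for k in range(n_samples):
    V = rng.standard_normal((L, L))        # columns v_i ~ N(0, I)
    G = V.T @ V                            # Wishart W(L, I)
    T = np.linalg.cholesky(G)              # G = T T^t, T lower triangular
    ratios[k] = T[-1, -1] ** 2             # = det G / det G_[L,L]

# Bartlett: T_LL^2 ~ Gamma(1/2, rate 1/2), i.e. scale 2 in scipy's convention
print(stats.kstest(ratios, stats.gamma(a=0.5, scale=2.0).cdf))
```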
Finding the distribution of $|\lambda_{\mathcal{C}_\phi}|^2$
◮ Since the determinant is unchanged under adding multiples of one column (row) to another,
$$|G(v_1, \ldots, v_{L-1}, v_L)| = |G(v_1 - v_L, \ldots, v_{L-1} - v_L, v_L)|$$
◮ Let $G' := G(v_1 - v_L, \ldots, v_{L-1} - v_L, v_L)$.
◮ Thus
$$\frac{|G(v_1 - v_L, \ldots, v_{L-1} - v_L, v_L)|}{|G(v_1 - v_L, \ldots, v_{L-1} - v_L)|} = \frac{|G'|}{|G'_{[L,L]}|}$$
◮ What is the distribution of $G'$?
Finding the distribution of $|\lambda_{\mathcal{C}_\phi}|^2$
$$G' = A^t G A$$
◮ $A$ is the lower triangular matrix
$$A = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ -1 & -1 & \cdots & -1 & 1 \end{pmatrix}$$
◮ Theorem: assume $G \sim W(L, I)$; then $A^t G A \sim W(L, \Sigma)$, where $\Sigma := A^t A$.
◮ $G'$ is distributed according to the Wishart distribution $W(L, \Sigma)$.
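A tiny algebraic check (ours) that the Gram matrix of the transformed vectors is indeed the congruence $A^t G A$ with the matrix $A$ above:

```python
import numpy as np

L = 6
A = np.eye(L)
A[-1, :-1] = -1.0                     # lower triangular, last row (-1, ..., -1, 1)
Sigma = A.T @ A

rng = np.random.default_rng(3)
V = rng.standard_normal((L, L))       # columns v_i ~ N(0, I)
G = V.T @ V                           # Gram matrix, ~ W(L, I)

# Columns of V @ A are (v_1 - v_L, ..., v_{L-1} - v_L, v_L), so their Gram
# matrix G' equals A^t G A; for Wishart G this is distributed as W(L, Sigma).
G_prime = (V @ A).T @ (V @ A)
print(np.allclose(G_prime, A.T @ G @ A))    # True (an algebraic identity)
```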
Finding the distribution of $|\lambda_{\mathcal{C}_\phi}|^2$
◮ $W(L, \Sigma)$ is the Gram matrix of vectors $w_i \sim N(0, \Sigma)$.
◮ Conclusion: for vectors $v_i \sim N(0, I)$ and vectors $w_i \sim N(0, \Sigma)$ the distributions of
$$\frac{|G(v_1, \ldots, v_L)|}{|G(v_1 - v_L, \ldots, v_{L-1} - v_L)|} \quad\text{and}\quad \frac{|G(w_1, \ldots, w_L)|}{|G(w_1, \ldots, w_{L-1})|}$$
are the same.
◮ We need to calculate the distribution of $\frac{|G(w_1, \ldots, w_L)|}{|G(w_1, \ldots, w_{L-1})|}$ for $w_i \sim N(0, \Sigma)$.
Finding the distribution of $|\lambda_{\mathcal{C}_\phi}|^2$
◮ Let $RR^t$ be the Cholesky decomposition of $\Sigma = A^t A$.
◮ Let $TT^t$ be the Cholesky decomposition of $G \sim W(L, I)$.
◮ $H = R T T^t R^t \sim W(L, \Sigma) \sim G(w_1, \ldots, w_L)$.
$$\frac{|H|}{|H_{[L,L]}|} = (RT)^2_{LL} = R^2_{LL}\, T^2_{LL} = \frac{1}{L}\, T^2_{LL},\qquad \text{since } R^2_{LL} = \tfrac{1}{L}.$$
◮ $T^2_{LL} \sim \mathrm{Gamma}\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$, and with the factor $\tfrac{1}{4}$ from the $\pm\tfrac{1}{2}$ cube, $|\lambda_{\mathcal{C}_\phi}|^2 = \tfrac{1}{4L}\, T^2_{LL}$, hence
$$|\lambda_{\mathcal{C}_\phi}|^2 \sim \mathrm{Gamma}\left(\frac{1}{2}, 2L\right)$$
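A quick check (ours) of the scale factor $R^2_{LL} = 1/L$ and of the resulting mean $1/(4L)$ of the $\mathrm{Gamma}(\tfrac{1}{2}, 2L)$ law:

```python
import numpy as np
from scipy import stats

for L in (5, 20, 200):
    A = np.eye(L)
    A[-1, :-1] = -1.0
    R = np.linalg.cholesky(A.T @ A)        # Sigma = R R^t, R lower triangular
    print(L, R[-1, -1] ** 2, 1.0 / L)      # R_LL^2 equals 1/L

# Hence |lambda|^2 = (1/4) R_LL^2 T_LL^2 = T_LL^2 / (4L), T_LL^2 ~ Gamma(1/2, 1/2),
# i.e. |lambda|^2 ~ Gamma(1/2, rate 2L), whose mean is (1/2) / (2L) = 1/(4L).
print(stats.gamma(a=0.5, scale=1 / (2 * 200)).mean())   # = 1/800 for L = 200
```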
How does it match Bernoulli?
[Figure: normalized histograms ('counts (normed)') of $\ln(\|\lambda_C\|^2)$ for $L = 13$, $L = 20$ and $L = 200$.]
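A Monte Carlo sketch (ours; sample sizes are illustrative) in the spirit of this comparison: draw Bernoulli $\pm 1$ vectors, compute $\ln |\lambda|^2$ via the Gram-determinant formula, and compare with samples from the Gaussian-model prediction $\mathrm{Gamma}(\tfrac{1}{2}, 2L)$; the KS statistic gives a rough quantitative measure of the match.

```python
import numpy as np
from scipy import stats

def sample_log_lambda_sq(L, n, rng):
    """ln of (1/4) det G(v_1..v_L) / det G(v_1-v_L,..,v_{L-1}-v_L)
    for random +-1 (Bernoulli) vectors; degenerate draws are skipped."""
    out = []
    while len(out) < n:
        V = rng.choice([-1.0, 1.0], size=(L, L))
        D = V[:, :-1] - V[:, [-1]]
        s_full, ld_full = np.linalg.slogdet(V.T @ V)
        s_diff, ld_diff = np.linalg.slogdet(D.T @ D)
        if s_full > 0 and s_diff > 0:
            out.append(np.log(0.25) + ld_full - ld_diff)
    return np.array(out)

rng = np.random.default_rng(4)
L = 20
log_lam = sample_log_lambda_sq(L, 5000, rng)
gauss_prediction = np.log(rng.gamma(shape=0.5, scale=1 / (2 * L), size=5000))
print(stats.ks_2samp(log_lam, gauss_prediction))
```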
Conclusions
◮ The closest points to the origin of the EPs accumulate at distance $O\!\left(\tfrac{1}{\sqrt{L}}\right)$ from the origin.
◮ The mean of $|\lambda_{\mathcal{C}_\phi}|^2$ is $\tfrac{1}{4L}$.
◮ The usefulness of EPs might be limited for large $L$!
◮ Distillation of linear entropy does not help.