1. Relative generalized Hamming weights of one-point algebraic geometric codes: an application to secret sharing
   INdAM meeting: International meeting on numerical semigroups, Cortona, September 10th, 2014.
   Diego Ruano, http://people.math.aau.dk/~diego/
   (Joint work with Olav Geil, Stefano Martin, Ryutaroh Matsumoto, Yuan Luo)

2. Reference
   O. Geil, S. Martin, R. Matsumoto, D. Ruano, Y. Luo: "Relative generalized Hamming weights of one-point algebraic geometric codes". To appear in IEEE Transactions on Information Theory (available at arXiv:1403.7985).
   ◮ O. Geil, S. Martin: Aalborg University, Denmark.
   ◮ R. Matsumoto: Tokyo Institute of Technology, Japan.
   ◮ Y. Luo: Shanghai Jiao Tong University, China.

3. Ramp secret sharing schemes
   A ramp secret sharing scheme with t-privacy and r-reconstruction is an algorithm that,
   1. given an input s ∈ F_q^ℓ,
   2. outputs a vector x ∈ F_q^n, the vector of shares that we want to distribute among n players,
   such that, given a collection of shares {x_i | i ∈ I}, I ⊆ {1, ..., n},
   1. one has no information about s if #I ≤ t,
   2. one can recover s if #I ≥ r.
   We shall always assume that t is largest possible and that r is smallest possible such that the above hold.

4. Example: Ramp Shamir's scheme
   ◮ s = (s_0, ..., s_{ℓ−1}) ∈ F_q^ℓ a secret
   ◮ n participants
   ◮ Reconstruction r = k, privacy t = k − ℓ.
   Choose f_ℓ, f_{ℓ+1}, ..., f_{k−1} ∈ F_q at random and set
   f = s_0 + s_1 X + ··· + s_{ℓ−1} X^{ℓ−1} + f_ℓ X^ℓ + ··· + f_{k−1} X^{k−1} ∈ F_q[X].
   ◮ Shares: f(x_1), ..., f(x_n), with x_i ∈ F_q and x_i ≠ x_j for i ≠ j.
   ◮ Privacy and reconstruction follow from Lagrange interpolation.
   Disadvantage: note that q ≥ n.
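
A minimal runnable sketch of this ramp Shamir scheme, with toy parameters q = 13, n = 10, ℓ = 2, k = 5 chosen only for illustration (they are not values from the talk):

```python
import random

q, n = 13, 10        # field size and number of players (needs q >= n)
l, k = 2, 5          # secret length and degree bound: privacy t = k - l, reconstruction r = k

def poly_mul(a, b):
    # Multiply two polynomials (coefficient lists, lowest degree first) over F_q.
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    return res

def share(secret):
    # f = s_0 + s_1 X + ... + s_{l-1} X^{l-1} + f_l X^l + ... + f_{k-1} X^{k-1}
    coeffs = list(secret) + [random.randrange(q) for _ in range(k - l)]
    xs = range(1, n + 1)                 # n distinct evaluation points of F_q
    return [(x, sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q) for x in xs]

def reconstruct(points):
    # Lagrange interpolation from any k shares, expanded into coefficient form.
    pts = points[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pts):
        num, denom = [1], 1
        for m, (xm, _) in enumerate(pts):
            if m != j:
                num = poly_mul(num, [(-xm) % q, 1])    # multiply by (X - x_m)
                denom = denom * (xj - xm) % q
        scale = yj * pow(denom, q - 2, q) % q          # Fermat inverse, q prime
        coeffs = [(c + scale * nc) % q for c, nc in zip(coeffs, num)]
    return coeffs[:l]                                  # the secret is the l low-order coefficients

secret = [7, 3]
shares = share(secret)
assert reconstruct(shares[:k]) == secret               # any k shares recover the secret
```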

5. Chen et al. ramp secret sharing schemes
   ◮ Consider a secret s ∈ F_q^ℓ
   ◮ C_2 = ⟨v_1, ..., v_{k_2}⟩ ⊊ C_1 = ⟨v_1, ..., v_{k_2}, v_{k_2+1}, ..., v_{k_1}⟩ ⊆ F_q^n
   ◮ Set L = ⟨v_{k_2+1}, ..., v_{k_1}⟩, so C_1 = C_2 ⊕ L (direct sum)
   ◮ ℓ = dim(L) = dim(C_1/C_2) = k_1 − k_2
   The n shares are the n coordinates of
   x = c_2 + ψ(s) = a_1 v_1 + ··· + a_{k_2} v_{k_2} + s_1 v_{k_2+1} + ··· + s_ℓ v_{k_1} ∈ C_1,
   with a_1, ..., a_{k_2} ∈ F_q chosen at random.
   Algebraically:
   1. s is represented by the coset ψ(s) + C_2 in C_1/C_2,
   2. there are q^ℓ different cosets in C_1/C_2, and each coset has q^{k_2} representatives.
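
A small sketch of this share-generation step, assuming an illustrative pair of nested codes over F_5 (the generator vectors below are made up, not taken from the talk):

```python
import random

q = 5
# C_1 = <v_1, ..., v_4> in F_5^6 and C_2 = <v_1, v_2>, so l = k_1 - k_2 = 2.
V = [
    [1, 0, 0, 0, 1, 1],   # v_1
    [0, 1, 0, 0, 1, 2],   # v_2   (C_2 = <v_1, v_2>)
    [0, 0, 1, 0, 1, 3],   # v_3
    [0, 0, 0, 1, 1, 4],   # v_4   (L = <v_3, v_4>, C_1 = C_2 (+) L)
]
k2 = 2
n = len(V[0])

def share(secret):
    # x = a_1 v_1 + ... + a_{k2} v_{k2} + s_1 v_{k2+1} + ... + s_l v_{k1}
    a = [random.randrange(q) for _ in range(k2)]   # random coefficients hide which coset representative is used
    coeffs = a + list(secret)
    return [sum(c * v[i] for c, v in zip(coeffs, V)) % q for i in range(n)]

print(share([3, 1]))   # one coordinate per player; the secret is the coset x + C_2 in C_1/C_2
```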

6. How much information is leaked?
   Bounds for privacy and reconstruction (Chen et al.):
   1. r ≤ n − d(C_1) + 1
   2. t ≥ d(C_2^⊥) − 1
   One can be more precise with the first relative generalized Hamming weight (RGHW):
   M_1(C_1, C_2) = min{ wt(c) : c ∈ C_1 \ C_2 } ≥ d(C_1).
   Exact privacy and reconstruction (Kurihara, Matsumoto et al.):
   1. r = n − M_1(C_1, C_2) + 1
   2. t = M_1(C_2^⊥, C_1^⊥) − 1

7. A more precise definition of the information leaked
   Privacy and reconstruction: a ramp secret sharing scheme has (t_1, ..., t_ℓ)-privacy and (r_1, ..., r_ℓ)-reconstruction if t_1, ..., t_ℓ are chosen largest possible and r_1, ..., r_ℓ are chosen smallest possible such that:
   1. an adversary cannot obtain m q-bits of information about s from any t_m shares,
   2. it is possible to recover m q-bits of information about s from any collection of r_m shares.
   In particular, one has t = t_1 and r = r_ℓ.
   Exact values (Kurihara, Matsumoto et al. and Geil et al.):
   1. r_m = n − M_{ℓ−m+1}(C_1, C_2) + 1
   2. t_m = M_m(C_2^⊥, C_1^⊥) − 1
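
Assuming the two RGHW sequences M_m(C_1, C_2) and M_m(C_2^⊥, C_1^⊥) are already known, the privacy and reconstruction vectors follow directly from these formulas; a small helper, with made-up numeric inputs:

```python
# Given M_m(C_1, C_2) and M_m(C_2^perp, C_1^perp) for m = 1, ..., l, compute
# the privacy vector (t_1, ..., t_l) and reconstruction vector (r_1, ..., r_l).
def ramp_parameters(n, M_C1_C2, M_C2d_C1d):
    l = len(M_C1_C2)
    r = [n - M_C1_C2[l - m] + 1 for m in range(1, l + 1)]   # r_m = n - M_{l-m+1}(C_1, C_2) + 1
    t = [M_C2d_C1d[m - 1] - 1 for m in range(1, l + 1)]     # t_m = M_m(C_2^perp, C_1^perp) - 1
    return t, r

# Illustrative numbers only: n = 10, l = 3.
t, r = ramp_parameters(10, M_C1_C2=[5, 6, 7], M_C2d_C1d=[4, 5, 6])
print(t, r)   # t = [3, 4, 5], r = [4, 5, 6]; overall t = t_1 = 3 and r = r_l = 6
```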

8. RGHW
   For D ⊆ F_q^n, Supp(D) = { i ∈ {1, ..., n} : there exists c ∈ D with c_i ≠ 0 }.
   Example: |Supp(⟨(0,0,1,1,0), (0,1,0,1,1)⟩)| = |{2, 3, 4, 5}| = 4.
   Minimum Hamming weight: d(C) = min{ wt(c) = |Supp(c)| : c ∈ C \ {0} }.
   The m-th generalized Hamming weight: d_m(C) = min{ |Supp(D)| : D ⊆ C, dim(D) = m }.
   The m-th relative generalized Hamming weight (RGHW):
   M_m(C_1, C_2) = min{ |Supp(D)| : D ⊆ C_1, dim(D) = m, D ∩ C_2 = {0} }.
   In particular, d(C) = d_1(C) and M_m(C, {0}) = d_m(C).
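
For very small codes these quantities can be checked by brute force straight from the definitions; a sketch over F_2 for d(C_1) and M_1(C_1, C_2), with illustrative generator matrices (not from the talk):

```python
# Brute-force check of d(C_1) and M_1(C_1, C_2) = min{ wt(c) : c in C_1 \ C_2 } over F_2.
from itertools import product

def span(gen):
    # All codewords generated over F_2 by the rows of gen.
    n = len(gen[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(gen)):
        words.add(tuple(sum(a * g[i] for a, g in zip(coeffs, gen)) % 2 for i in range(n)))
    return words

def wt(c):
    return sum(1 for x in c if x != 0)   # wt(c) = |Supp(c)|

# Illustrative generators: C_2 subset of C_1 in F_2^5.
G1 = [(1, 0, 0, 1, 1), (0, 1, 0, 1, 0), (0, 0, 1, 0, 1)]
G2 = [(1, 0, 0, 1, 1)]
C1, C2 = span(G1), span(G2)

d_C1 = min(wt(c) for c in C1 if any(c))
M1 = min(wt(c) for c in C1 - C2)
print(d_C1, M1)   # always M_1(C_1, C_2) >= d(C_1)
```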

9. Schemes based on MDS codes
   Let C_1, C_2 be MDS codes (e.g. Reed-Solomon): then C_1^⊥, C_2^⊥ are also MDS and
   ◮ M_m(C_1, C_2) = d_m(C_1) = n − k_1 + m
   ◮ M_m(C_2^⊥, C_1^⊥) = d_m(C_2^⊥) = k_2 + m
   Privacy and reconstruction: M_m(C_2^⊥, C_1^⊥) = n − M_{ℓ−m+1}(C_1, C_2) + 1, so
   r_m = k_2 + m and t_m = k_2 + m − 1; hence t_m = r_m − 1, t_{m+1} = t_m + 1, and t = t_1 = k_2, r = r_ℓ = k_1.
   Since r − t = k_1 − k_2 = ℓ, the scheme is optimal. However, when the number of participants is large compared to the field size we cannot assume C_1 and C_2 to be MDS.
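
A quick numeric check of these formulas with illustrative parameters n = 10, k_1 = 6, k_2 = 3 (so ℓ = 3), e.g. nested Reed-Solomon codes over F_11:

```python
# For nested MDS codes: M_m(C_1, C_2) = n - k_1 + m and M_m(C_2^perp, C_1^perp) = k_2 + m,
# hence r_m = k_2 + m and t_m = k_2 + m - 1.  Parameters are illustrative only.
n, k1, k2 = 10, 6, 3
l = k1 - k2

r = [n - (n - k1 + (l - m + 1)) + 1 for m in range(1, l + 1)]   # r_m = n - M_{l-m+1}(C_1, C_2) + 1 = k_2 + m
t = [(k2 + m) - 1 for m in range(1, l + 1)]                     # t_m = M_m(C_2^perp, C_1^perp) - 1 = k_2 + m - 1

print(t, r)             # t = [3, 4, 5], r = [4, 5, 6]
print(r[-1] - t[0], l)  # r - t = k_1 - k_2 = l, the optimal gap
```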
