A Challenge Code for Maximizing the Entropy of PUF Responses

Olivier Rioul¹, Patrick Solé¹, Sylvain Guilley¹,² and Jean-Luc Danger¹,²

¹ LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France. Email: firstname.lastname@telecom-paristech.fr
² Secure-IC S.A.S., 15 Rue Claude Chappe, Bât. B, ZAC des Champs Blancs, 35510 Cesson-Sévigné, France. Email: firstname.lastname@secure-ic.com

Presenter contact: sylvain.guilley@telecom-paristech.fr
Outline

- Entropy of PUF
  - Concept
  - Estimation
  - Prevision
- Running example: the Loop-PUF (LPUF)
  - SRAM PUF example
  - L-PUF into details [CDGB12]
- Theory
  - Definitions
  - Results
  - Main result
  - Beyond n bits
- Conclusions

2 / 30 Sylvain Guilley A Challenge Code for Maximizing the Entropy of PUF Responses June 23, 2016
Entropy of PUFs

i.i.d. PUF 1, PUF 2, ..., PUF M: PUFs are instantiations of blueprints by a fab plant.
After fabrication (estimation P̂)

Figure: response distributions of two PUFs, (a) and (b), over the 2^128 possible responses 0x00...00 through 0xff...ff (per-response probabilities on the order of 2^−128).

Which PUF is the most entropic?

Recall H = − Σ_{c = 0x00...00}^{0xff...ff} P(R = PUF(c)) log P(R = PUF(c)).
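The entropy formula above can be evaluated numerically on toy distributions (a minimal sketch; the example distributions are illustrative, not the actual 128-bit PUF responses):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum p*log2(p) of a response distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([1.0]))          # a deterministic PUF: zero entropy
print(entropy_bits([1 / 16] * 16))  # uniform over 16 responses: 4 bits
```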
Before fabrication

Stochastic model. Active discussion at ISO sub-committee 27.
Non-delay PUF: SRAM PUF

Figure: the challenge c (decoded from log₂-many address bits) selects one of n memory elements elt. 1, elt. 2, ..., elt. n; the response is B_c.

Amount of entropy: n bits.
Delay PUF: core delay element

Figure: delay element i, with top-path delays d_i^{T1}, d_i^{T2} and bottom-path delays d_i^{B1}, d_i^{B2}; input y_{i−1} = x_i, output y_i = x_{i+1}; the challenge bit c_i selects which paths are used.

Same idea as in other delay PUFs, like the arbiter PUF, etc.
Let d(c_i) be the corresponding delay. As time is an extensive physical quantity:

d(c_i) = d_i^{T1} + d_i^{B2} = d_i^{TB}  if c_i = −1,
d(c_i) = d_i^{B1} + d_i^{T2} = d_i^{BT}  if c_i = +1.

The delays d_i^{TB} and d_i^{BT} are modeled as i.i.d. normal random variables selected at fabrication [PDW89].

Figure: Monte-Carlo simulation (with 500 runs) of the delays in a chain of 60 basic buffers implemented in a 55 nm CMOS technology.
Delay PUF: Loop PUF

Figure: a loop of N controllable delay elements, with top delays d_{1,1}, ..., d_{1,N} and bottom delays d_{0,1}, ..., d_{0,N}, selected by challenge bits C_1, ..., C_N; an oscillator and a frequency measurement produce the ID.

Amount of entropy: > n?

Nota bene: here, d(c) is expressed in number of clock cycles.
LPUF is not self-contained: it needs a protocol.

Loop-PUF: challenge c ∈ {±1}^n, response B_c = sign(Σ_{i=1}^n c_i Δ_i) ∈ {±1}, where the Δ_i are i.i.d. normal random variables.

input: challenge c
output: response B_c
1: set challenge c
2: measure d_1 ← ⌊N Σ_{i=1}^n d(c_i)⌋
3: set challenge −c
4: measure d_2 ← ⌊N Σ_{i=1}^n d(−c_i)⌋
5: return B_c = sign(d_1 − d_2)

Algorithm 1: Protocol to get one bit out of the LPUF.
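Algorithm 1 can be sketched in simulation (a minimal behavioral model: the nominal delay, dispersion scale, and N are hypothetical values, not measurements from the paper):

```python
import numpy as np

rng = np.random.default_rng(2016)
n = 8
nominal = 100.0                         # nominal per-element delay (hypothetical units)
delta = rng.normal(scale=5.0, size=n)   # i.i.d. dispersion, fixed at fabrication

def measure(c, N=1000):
    """Accumulated delay over N periods for challenge c, truncated to clock cycles."""
    return int(N * np.sum(nominal + c * delta))

def loop_puf_bit(c):
    """Algorithm 1: differential measurement between c and its complement -c."""
    c = np.asarray(c)
    d1 = measure(c)
    d2 = measure(-c)
    return 1 if d1 - d2 >= 0 else -1

c = [1, -1, 1, 1, -1, 1, -1, -1]
b = loop_puf_bit(c)
# Complementing the challenge flips the response bit.
assert loop_puf_bit([-ci for ci in c]) == -b
```

The differential measurement d_1 − d_2 cancels the nominal delay, so the sign depends only on the dispersion term Σ c_i Δ_i, matching the ideal response model.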
Our result: SRAM-PUF vs Loop-PUF

Figure: for n = 8, entropy comparison between the SRAM-PUF (0 ≤ M ≤ 2^{n−1}) and the LPUF (0 ≤ M ≤ n).
Challenge

Definition. A challenge c is a vector of n control bits c = (c_1, c_2, ..., c_n) ∈ {±1}^n. Let Δ_1, Δ_2, ..., Δ_n be i.i.d. zero-mean normal (Gaussian) variables characterizing the technological dispersion. The bit response to challenge c is defined as

B_c = sign(Δ_c) ∈ {±1}   (1)

where

Δ_c = c_1 Δ_1 + c_2 Δ_2 + · · · + c_n Δ_n.   (2)
Challenge code

Definition. A challenge code C is a set of M n-bit challenges that form an (n, M) binary code. We shall identify C with the M × n matrix of ±1's whose rows are the challenges. The M codewords and their complements are used to challenge the PUF elements. The corresponding identifier is the M-bit vector

B = (B_c)_{c ∈ C}.   (3)

The entropy of the PUF responses is denoted by H = H(B).
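Under these definitions, computing the identifier B from a challenge code can be sketched as follows (the code matrix here is a Hadamard (4, 4) example; the dispersion values Δ_i are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
delta = rng.normal(size=n)   # technological dispersion, fixed per device

# A (4, 4) challenge code C: each row is one +/-1 challenge (here a Hadamard matrix).
C = np.array([[ 1,  1,  1,  1],
              [ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [ 1, -1, -1,  1]])

# Identifier B = (B_c): one sign bit per codeword, B_c = sign(c . delta).
B = np.where(C @ delta >= 0, 1, -1)
print(B)
```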
Orthant probabilities

Let X_1, X_2, ..., X_n be zero-mean, jointly Gaussian (not necessarily independent) and identically distributed. As a prerequisite to the derivations that follow, we wish to compute the orthant probability P(X_1 > 0, X_2 > 0, ..., X_n > 0). The probabilities associated with other sign combinations can easily be deduced from it using the symmetry properties of the Gaussian distribution. Since the value of the orthant probability does not depend on the common variance of the random variables, we may assume without loss of generality that each X_i has unit variance: X_i ∼ N(0, 1). The orthant probability then depends only on the correlation coefficients

ρ_{i,j} = E(X_i X_j)  (i ≠ j).   (4)
Some lemmas

Lemma (Quadrant probability of a bivariate normal)
P(X_1 > 0, X_2 > 0) = 1/4 + arcsin(ρ_{1,2}) / (2π).   (5)

Lemma (Orthant probability of a trivariate normal)
P(X_1 > 0, X_2 > 0, X_3 > 0) = 1/8 + (arcsin ρ_{1,2} + arcsin ρ_{2,3} + arcsin ρ_{1,3}) / (4π).   (6)

Lemma (No closed formula exists for n > 3...)
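The bivariate quadrant formula (5) can be checked by Monte Carlo (a quick numerical sanity check; ρ = 0.5 is an arbitrary choice):

```python
import numpy as np

rho = 0.5
closed_form = 0.25 + np.arcsin(rho) / (2 * np.pi)   # = 1/4 + 1/12 = 1/3

rng = np.random.default_rng(0)
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
estimate = np.mean((x[:, 0] > 0) & (x[:, 1] > 0))
print(closed_form, estimate)   # both close to 0.3333
```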
Main Result: Hadamard Codes

We have M response bits, so H(B) ≤ M bits. When is it possible to reach the maximum value H(B) = M bits?

Theorem. H(B) = M implies M ≤ n. Moreover, H(B) = M = n bits if and only if C is a Hadamard (n, n) code.

Proof. H(B) = M means that all bits B_c are independent, i.e., all the Y_j = Σ_{i=1}^n c_i^{(j)} X_i are independent (equivalently, uncorrelated), i.e., all M (n-bit) challenges c^{(j)} are orthogonal.
Hadamard Codes

n orthogonal binary ±1 vectors form a Hadamard code:

n = 1: C = (1), H = 1 bit;

n = 2: C = ( 1 1 ; 1 −1 ), H = 2 bits;

n = 3: no Hadamard code! But any (3, 3) code is equivalent to

C = ( 1 1 1 ; −1 1 1 ; 1 −1 1 ),

for which the correlation matrix Σ = (1/3) CC^t = ( 1 1/3 1/3 ; 1/3 1 −1/3 ; 1/3 −1/3 1 ) gives

H = −6 (1/8 + arcsin(1/3)/(4π)) log (1/8 + arcsin(1/3)/(4π)) − 2 (1/8 − 3 arcsin(1/3)/(4π)) log (1/8 − 3 arcsin(1/3)/(4π)) ≈ 2.875 < 3 bits.
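The n = 3 entropy above can be reproduced numerically from the trivariate orthant lemma (a direct evaluation of the two distinct orthant probabilities and their multiplicities):

```python
import math

# Orthant probabilities for the (3, 3) code, from the trivariate lemma:
p1 = 1 / 8 + math.asin(1 / 3) / (4 * math.pi)        # 6 response patterns
p2 = 1 / 8 - 3 * math.asin(1 / 3) / (4 * math.pi)    # 2 response patterns
assert abs(6 * p1 + 2 * p2 - 1) < 1e-12              # probabilities sum to 1

H = -6 * p1 * math.log2(p1) - 2 * p2 * math.log2(p2)
print(round(H, 3))   # 2.875
```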