Revisiting P.S.M. Lowerbound

[Figure: PSM protocol diagram — inputs x_1, ..., x_K ∈ X and y_1, ..., y_K ∈ Y are mapped, via shared randomness r_1, r_2, r_3, ..., to messages a_1, ..., a_J ∈ M_A and b_1, ..., b_L ∈ M_B.]

Communication: log |M_A| + log |M_B|

Mechanism:
• Lowerbound the number of r's
• Lowerbound the size of each image set
• Upperbound the size of the overlap between two image sets

Assumption 1 on f:
• f non-degenerate:
  • x ≠ x′ ⇒ f(x, ·) ≠ f(x′, ·)
  • Similarly for y ≠ y′

Consequence:
• r is one-to-one

Useful Edge (x, y):
• f(x, y) = f(x̄, y)
• x̄: x with last bit inverted

Assumption 2 on f:
• Half of the edges are useful

Consequence:
• Image set has half of f's edges
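Assumption 2 can be sanity-checked empirically: for a uniformly random boolean f, flipping the last bit of x changes f(x, y) with probability 1/2, so about half of all edges (x, y) are useful. A small illustrative sketch (not from the talk):

```python
import random

def useful_fraction(f, k):
    """Fraction of edges (x, y) with f(x, y) == f(x with last bit flipped, y)."""
    total = useful = 0
    for x in range(2 ** k):
        for y in range(2 ** k):
            total += 1
            if f(x, y) == f(x ^ 1, y):  # x ^ 1 flips the last bit of x
                useful += 1
    return useful / total

random.seed(0)
k = 4
table = {(x, y): random.randint(0, 1) for x in range(2 ** k) for y in range(2 ** k)}
f = lambda x, y: table[(x, y)]
print(useful_fraction(f, k))  # concentrates around 0.5 for a random f
```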
Revisiting P.S.M. Lowerbound

[Figure: three overlap patterns between image sets, drawn over edges (x, y), (x̄, y) under randomness r vs r̃ — "Trivial Overlaps", "Non-trivial Overlaps", and "Unaccounted Overlaps".]

Complementary Similar Rectangles: X′ ◦ 0 and X′ ◦ 1 in the truth table of f

Assumption 3 on f:
• Size of complementary similar rectangles ≤ 2 · 2^k

Implication:
• Reveal all inputs not required to be private
• Potentially higher communication cost
Counterexample to P.S.M. Lowerbound

L(x) := { T_0 · (x[1 : k−1] ◦ 0),  if x[k] = 0
        { T_1 · (x[1 : k−1] ◦ 1),  if x[k] = 1

f(x, y) = ⟨L(x), y⟩,    T_0, T_1, T_0 + T_1 : full rank

• PSM for ⟨·, ·⟩
• Communication: 2k + 2 bits

[Figure: protocol diagram — A holds (x, R), computes L(x), and sends M_A; B holds (y, R) and sends M_B; C outputs ⟨L(x), y⟩.]
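The counterexample composes L with a PSM for the inner product ⟨·, ·⟩ over GF(2). A sketch of the folklore inner-product PSM (my reconstruction, not necessarily the talk's exact scheme): with shared randomness (r, s, z), Alice sends x ⊕ r plus one masked bit, Bob sends y ⊕ s plus one masked bit, for 2k + 2 bits in total.

```python
import secrets

def inner(u, v):
    """Inner product of two bit-vectors over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) % 2

def xor(u, v):
    return [a ^ b for a, b in zip(u, v)]

def psm_inner_product(x, y, k):
    # Shared randomness: r, s in {0,1}^k and a masking bit z.
    r = [secrets.randbelow(2) for _ in range(k)]
    s = [secrets.randbelow(2) for _ in range(k)]
    z = secrets.randbelow(2)
    # Alice's message: a and alpha (k + 1 bits).
    a = xor(x, r)
    alpha = inner(a, s) ^ inner(r, s) ^ z
    # Bob's message: b and beta (k + 1 bits).
    b = xor(y, s)
    beta = inner(r, b) ^ z
    # Referee: <x,y> = <a,b> ^ <a,s> ^ <r,b> ^ <r,s>, since x = a^r and y = b^s.
    return inner(a, b) ^ alpha ^ beta

k = 8
x = [secrets.randbelow(2) for _ in range(k)]
y = [secrets.randbelow(2) for _ in range(k)]
assert psm_inner_product(x, y, k) == inner(x, y)
```

Privacy holds because a and b are one-time-padded and the extra bits are masked by the shared bit z.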
New Proof for a Communication Lowerbound

• x ∈ X, y ∈ Y, R ∈ {0, 1}*
• f : X × Y → Z
• Perfect correctness
• Perfect privacy

[Figure: A holds (x, R) and sends M_A; B holds (y, R) and sends M_B; C outputs f(x, y).]
Key idea of the proof

[Figure: PSM diagram — inputs x_0, ..., x_J and y_0, ..., y_K mapped to messages a_0, ..., ã_J ∈ M_A and b_0, ..., b̃_K ∈ M_B.]

• µ ∼ X × Y
• Sample (X, Y) from µ together with randomness R
• A second pair (X′, Y′) maps to the same messages under randomness R′
Main Result

Theorem
Let f : X × Y → Z be non-degenerate and let µ be a distribution on X × Y. Then,
    PSM(f) ≥ log(1/α(µ)) + H_∞(µ) − log(1/β(µ)) − 1.

α(µ) := Volume of disjoint, similar rectangles
      := max { min(µ(R_1), µ(R_2)) : R_1, R_2 similar, disjoint }

[Figure: two similar rectangles R_1, R_2 in the truth table of f.]

H_∞(µ) := Min-entropy of µ

β(µ) := Volume of useful edges
      := Pr[(X, Y) ≠ (X′, Y′) | f(X, Y) = f(X′, Y′)]
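For small f, the quantities H_∞(µ) and β(µ) can be computed by brute force. A sketch, reading (X′, Y′) as an independent sample from µ (one plausible reading of the definition above):

```python
import math
from fractions import Fraction
from itertools import product

def min_entropy(mu):
    """H_inf(mu) = -log2 of the largest point probability."""
    return -math.log2(max(mu.values()))

def beta(f, mu):
    """Pr[(X,Y) != (X',Y') | f(X,Y) = f(X',Y')] for independent samples from mu."""
    same_value = diff_pair = Fraction(0)
    for (p1, pr1), (p2, pr2) in product(mu.items(), repeat=2):
        if f(*p1) == f(*p2):
            same_value += pr1 * pr2
            if p1 != p2:
                diff_pair += pr1 * pr2
    return diff_pair / same_value

# Toy example: uniform mu over a 2x2 domain with f = AND.
dom = [(x, y) for x in (0, 1) for y in (0, 1)]
mu = {p: Fraction(1, 4) for p in dom}
f = lambda x, y: x & y
print(min_entropy(mu))  # → 2.0
print(beta(f, mu))      # → 3/5
```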
Special cases

Theorem (Boolean function)
For non-degenerate f : X × Y → {0, 1},
    PSM(f) ≥ 2(log |X| + log |Y|) − log M − 3.

M := max { |R_1| : R_1, R_2 similar, disjoint }

Proof.
Use µ: the uniform distribution. □

Corollary (Random function)
For a random, boolean f : {0, 1}^k × {0, 1}^k → {0, 1}, w.h.p.,
    PSM(f) ≥ 3k − 2 log k − 1.

Proof.
W.h.p., M ≤ k^2 · 2^k. □

Theorem (Explicit functions)
∃ { f_k : {0, 1}^k × {0, 1}^k → {0, 1} } for which PSM(f_k) ≥ 3k − O(log k).

Proof.
It suffices to sample f_k from a poly(k)-wise independent distribution. □
Conditional Disclosure of a Secret (C.D.S.)

• x ∈ X, y ∈ Y
• s ∈ {0, 1}
• h : X × Y → {0, 1}
• R ∈ {0, 1}*
• Perfect correctness
• Perfect privacy

[Figure: A holds (x, s, R) and sends M_A; B holds (y, R) and sends M_B; C knows (x, y) and learns s iff h(x, y) = 1.]

• Useful applications: unconditionally private information retrieval (P.I.R.), priced O.T., secret sharing for graph-based access structures, attribute-based encryption

• Communication lowerbounds:
  • Ω(log k) for several explicit predicates (Gay et al., CRYPTO, 2015)
  • k − o(k) for some non-explicit predicate (Applebaum et al., CRYPTO, 2017)
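For the inner-product predicate there is a simple linear CDS using k + 1 bits: Alice one-time-pads s·x with a shared random vector r, and Bob reveals ⟨r, y⟩. This sketch is a folklore construction, not necessarily the talk's:

```python
import secrets

def inner(u, v):
    """Inner product of two bit-vectors over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) % 2

def cds_inner_product(x, y, s, k):
    """Referee knows (x, y); learns secret bit s iff <x, y> = 1. Uses k + 1 bits."""
    r = [secrets.randbelow(2) for _ in range(k)]   # shared randomness
    a = [(s & xi) ^ ri for xi, ri in zip(x, r)]    # Alice: s*x + r  (k bits)
    b = inner(r, y)                                # Bob:   <r, y>   (1 bit)
    return inner(a, y) ^ b                         # referee computes s * <x, y>

k = 8
s = secrets.randbelow(2)
x = [secrets.randbelow(2) for _ in range(k)]
y = [secrets.randbelow(2) for _ in range(k)]
assert cds_inner_product(x, y, s, k) == (s if inner(x, y) == 1 else 0)
```

When ⟨x, y⟩ = 0 the message a is uniform and b is determined by (a, y), so the referee learns nothing about s.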
C.D.S. Lowerbound

Theorem
For predicate h : X × Y → {0, 1},
    CDS(h) ≥ 2 log |h^{−1}(0)| − log M − log |X| − log |Y| − 1.

|h^{−1}(0)| := Number of 0-inputs of h

M := Size of the largest 0-monochromatic rectangle of h

[Figure: an all-0 rectangle in the truth table of h.]
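For tiny predicates, every quantity in the theorem above can be computed exhaustively. The sketch below finds the largest 0-monochromatic rectangle by enumerating row subsets (exponential, but fine for k ≤ 3) and evaluates the bound, which is still trivial at this scale but shows how the terms interact:

```python
import math
from itertools import combinations, product

def largest_zero_rectangle(h, X, Y):
    """Size of the largest all-0 rectangle X' x Y' in h's truth table (brute force)."""
    best = 0
    for i in range(1, len(X) + 1):
        for rows in combinations(X, i):
            # Keep only the columns that are 0 on every chosen row.
            cols = [y for y in Y if all(h(x, y) == 0 for x in rows)]
            best = max(best, len(rows) * len(cols))
    return best

def cds_bound(h, X, Y):
    zeros = sum(1 for x, y in product(X, Y) if h(x, y) == 0)
    M = largest_zero_rectangle(h, X, Y)
    return (2 * math.log2(zeros) - math.log2(M)
            - math.log2(len(X)) - math.log2(len(Y)) - 1)

k = 2
dom = list(product((0, 1), repeat=k))
h = lambda x, y: sum(a & b for a, b in zip(x, y)) % 2   # inner product
print(cds_bound(h, dom, dom))  # slightly negative at k = 2; the bound bites as k grows
```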
C.D.S. Lowerbound: Special Cases

Corollary (CDS for Inner Product)
For the predicate h(x, y) = ⟨x, y⟩, x, y ∈ {0, 1}^k,
    CDS(h) ≥ k − 3 − o(1).

Remarks:
• Tight bound.
• Previous bound: Ω(log k).

Corollary (CDS for Random Predicate)
For a random predicate h : {0, 1}^k × {0, 1}^k → {0, 1}, w.h.p.,
    CDS(h) ≥ k − 4 − o(1).
Summary

We revisited the P.S.M. lowerbound of Feige, Kilian, Naor (FKN) (STOC, 1994) and proved the following results:
• Counterexample: an f whose P.S.M. communicates only 2k + 2 bits.