Proof Sketch

Lemma
For large enough constant k, there exists a randomized reduction from MAX 3-SAT(.99, .97) on n variables and O(n) clauses to MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses, such that:
• YES instances reduce to YES instances with probability ≥ 2^{−n/k}.
• NO instances reduce to NO instances with probability ≥ 1 − 2^{−n}.

• MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses can be converted to MAX 3-SAT(1, 1 − Ω_k(1)) on n′ = O_k(n) variables and clauses.
• Run the above reduction 2^{n/k} · n^2 times.
• Run the 2^{o(n′)} algorithm on the MAX 3-SAT(1, 1 − Ω_k(1)) produced instances, and output YES if the algorithm outputs YES on any of the instances.
• Total running time: 2^{n/k} · n^2 · 2^{o(n′)} = 2^{n/k + o(n)} ≤ 2^{δn} for large enough constant k.
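The repetition count in the sketch is chosen so that, on a YES instance, at least one of the 2^{n/k} · n^2 independent runs preserves the YES property except with vanishing probability. A quick numerical sanity check (the parameter values below are illustrative, not from the talk):

```python
import math

def failure_prob(n, k):
    """Probability that none of the 2^(n/k) * n^2 independent runs of the
    randomized reduction preserves a YES instance, when each run succeeds
    with probability at least 2^(-n/k)."""
    p = 2.0 ** (-n / k)                  # per-run success probability (lower bound)
    trials = int(2 ** (n / k) * n ** 2)
    # (1 - p)^trials <= exp(-p * trials) ~= exp(-n^2)
    return math.exp(-p * trials)

# With n = 30, k = 5: p = 2^-6 and trials = 2^6 * 900, so p * trials = 900.
print(failure_prob(30, 5))  # at most e^-900: astronomically small
```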
Derandomization using samplers

• One-sided derandomization using samplers. We use LLL to handle the completeness case.
PCPs and Perfect Completeness
Definition of PCPs

PCP_{c,s}[r, q] with proof size n:
• YES (x ∈ L): ∃ Π, Pr_i[Q_i(Π) = 1] ≥ c
• NO (x ∉ L): ∀ Π, Pr_i[Q_i(Π) = 1] ≤ s

(Figure: the verifier's r random bits select one of m = 2^r local checks Q_1, …, Q_m, each reading q bits of the proof Π_1, …, Π_n.)
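For toy parameters, the two quantities in the definition can be computed by brute force. The sketch below is our own illustration (the predicates are hypothetical, not from the talk); it finds max_Π Pr_i[Q_i(Π) = 1]:

```python
from itertools import product

def best_acceptance(n, queries):
    """Brute-force the quantity in the PCP definition: max over proofs
    Pi in {0,1}^n of Pr_i[Q_i(Pi) = 1].  Exponential in n -- toy sizes only."""
    best = 0.0
    for proof in product([0, 1], repeat=n):
        accepted = sum(1 for Q in queries if Q(proof))
        best = max(best, accepted / len(queries))
    return best

# Hypothetical toy instance: m = 4 local checks on a 3-bit proof.
queries = [
    lambda p: p[0] ^ p[1] == 0,
    lambda p: p[1] ^ p[2] == 1,
    lambda p: p[0] ^ p[2] == 0,
    lambda p: p[0] == 0,
]
print(best_acceptance(3, queries))  # 0.75: no proof satisfies all four checks
```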
PCP results

• PCP theorem [ALMSS]: For some constant s < 1,
  NTIME[O(n)] ⊆ PCP_{1,s}[O(log n), O(1)]
• Almost-linear proofs [Ben-Sasson, Sudan] and [Dinur]:
  NTIME[O(n)] ⊆ PCP_{1,s}[log n + O(log log n), O(1)]
• Linear-sized PCP with long queries [BKKMS’13]:
  NTIME[O(n)] ⊆ PCP_{1,1/2}[log n + O_ε(1), n^ε], with a O_ε(n) proof size.
Linear-Sized PCP conjecture

Conjecture (Linear-sized PCP conjecture)
NTIME[O(n)] has linear-sized PCPs, i.e. NTIME[O(n)] ⊆ PCP_{1,s}[log n + O(1), O(1)] for some constant s < 1.
Our Question

• What is the role of completeness in PCPs? Can one build better PCPs with imperfect completeness?
• Can we convert an imperfect-completeness PCP to a perfect-completeness PCP in a blackbox manner?
Ways to transfer gap

• One can just apply the best known PCPs for NTIME[O(n)], for example
  MAX 3-SAT(.99, .97) ∈ PCP_{1,1−Ω(1)}[log n + O(log log n), O(1)]
• Bellare, Goldreich, and Sudan [1] studied many such black-box reductions between PCP classes. Their result for transferring the gap to 1:
  PCP_{c,s}[r, q] ≤_R PCP_{1,rs/c}[r, qr/c].
Our Result

Gap-Transfer theorem
We show a blackbox way to transfer a PCP with imperfect completeness to one with perfect completeness, incurring a small loss in query complexity while maintaining the other parameters of the original PCP.

From now on, we will take (c, s) = (9/10, 6/10). Let L have a PCP with c = 0.9, s = 0.6, and total verifier queries = m. We will show how to build a new proof system (specifying proof bits and verifier queries) for L that has completeness 1 and soundness < 1.
A Robust Circuit using Thresholds

(Figure: the m local checks C_1, …, C_m on proof bits Π_1, …, Π_n feed into a tree of fan-in-O(1) Thr_0.8 gates — (Thr_0.8)_1, …, (Thr_0.8)_{m/2} at the first layer — repeated for log m layers up to a single output gate.)

We can derandomize this using samplers.
Increasing fraction of 1’s

(Figure: the threshold circuit in the completeness case. At the inputs C_1, …, C_m the fraction of 0’s is < .1; after the first Thr_0.8 layer it is < .1/2; after layer i it is < .1/2^i; at the top gate the fraction of 0’s is 0.)
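The per-layer claim can be sanity-checked numerically: if each Thr_0.8 gate samples d inputs (d a constant; the value 40 below is illustrative) and the current fraction of 0's is at most 1/10, a gate wrongly outputs 0 only when its sample contains more than 0.2·d zeros — a small binomial tail event (under random wiring; the samplers in the talk make this deterministic):

```python
from math import comb

def tail(d, p, t):
    """Pr[Binomial(d, p) >= t]: the chance that a fan-in-d gate sees at
    least t zeros when each sampled input is 0 with probability p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(t, d + 1))

d = 40                      # illustrative constant fan-in
zeros = 0.1                 # completeness case: at most 10% of inputs are 0
# A Thr_0.8 gate outputs 0 iff more than 0.2 * d of its d inputs are 0.
bad = tail(d, zeros, int(0.2 * d) + 1)
print(bad < zeros / 2)      # True: the fraction of 0's (roughly) halves per layer
```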
Maintaining fraction of 1’s

(Figure: the threshold circuit in the soundness case. At the inputs C_1, …, C_m the fraction of 1’s is < 6/10, and at every subsequent Thr_0.8 layer the fraction of 1’s remains < 7/10.)
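The soundness direction admits the same style of check: when each sampled input is 1 with probability below 6/10, the chance that a Thr_0.8 gate nevertheless fires is a small binomial tail, so the fraction of 1's cannot climb past 7/10 (fan-in 40 is again an illustrative choice, assuming random wiring):

```python
from math import comb, ceil

def fire_prob(d, ones):
    """Pr that a fan-in-d Thr_0.8 gate outputs 1 when each sampled input
    is 1 with probability `ones` (i.e. at least 0.8 * d sampled 1's)."""
    t = ceil(0.8 * d)
    return sum(comb(d, k) * ones**k * (1 - ones)**(d - k) for k in range(t, d + 1))

d = 40                        # illustrative constant fan-in
p = fire_prob(d, 6 / 10)      # soundness case: fewer than 6/10 checks pass
print(p < 7 / 10)             # True: fraction of 1's stays below 7/10 ...
print(p < 6 / 10)             # True: ... and in fact does not grow layer to layer
```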
Final PCP

(Figure: the threshold circuit, with the claimed value of every gate now included in the proof.)

In a single query, we will verify all included gates:
• check whether each gate’s output is consistent with its inputs,
• and check that the top gate evaluates to 1.
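The single-query check can be sketched as follows — a toy structural sketch with our own names and fan-in 2 (with fan-in 2, Thr_0.8 degenerates to AND, so this illustrates only the bookkeeping, not the amplification): the proof stores every gate's claimed value; the verifier checks one leaf, then walks to the root, reading each gate's O(1) inputs and verifying consistency.

```python
def thr_layers(leaves, fanin):
    """Honest prover: compute every Thr_0.8 layer of the circuit, bottom up."""
    layers, cur = [], leaves
    while len(cur) > 1:
        cur = [int(sum(cur[j * fanin:(j + 1) * fanin]) >= 0.8 * fanin)
               for j in range(len(cur) // fanin)]
        layers.append(cur)
    return layers

def verify_path(i, leaves, layers, leaf_ok, fanin):
    """Single query of the new verifier: check leaf i (q proof bits), then
    walk to the root, reading each gate's fanin inputs and checking that
    its claimed value is consistent; finally the top gate must claim 1."""
    if not leaf_ok(i, leaves[i]):
        return False
    values = leaves
    for layer in layers:                 # log m layers => O(log m) extra reads
        b = i // fanin
        block = values[b * fanin:(b + 1) * fanin]
        if layer[b] != int(sum(block) >= 0.8 * len(block)):
            return False
        values, i = layer, b
    return values[0] == 1

leaves = [1] * 8                         # YES case: all m = 8 checks pass
layers = thr_layers(leaves, 2)
print(all(verify_path(i, leaves, layers, lambda i, v: v == 1, 2)
          for i in range(8)))            # True: honest proof always accepted
```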
Parameters of the Reduction

This gives us a PCP that has the following properties:
• Completeness: 1
• Soundness: 9/10
• Queries: q + O(log m) = q + O(r)
• Randomness complexity: r (stays the same)
• Size: O(m)
Main theorem

Theorem
For all constants c, s, s′ ∈ (0, 1) with s < c, we have that
PCP_{c,s}[r, q] ⊆ PCP_{1,s′}[r + O(1), q + O(r)].

Theorem
We have a similar “randomized reduction” between PCP classes, where the new randomness and query complexities have better dependence on the initial r, q:
PCP_{c,s}[r, q] ≤_R PCP_{1,s′}[r + O(1), q + O(log r)].
Comparison to Best-Known PCPs

We get the following result for NTIME[O(n)]:

Corollary
For all constants c, s, s′, if NTIME[O(n)] ⊆ PCP_{c,s}[log n + O(1), q], then NTIME[O(n)] ⊆ PCP_{1,s′}[log n + O(1), q + O(log n)].

While the current best known linear-sized PCP is:
NTIME[O(n)] ⊆ PCP_{1,s}[log n + O_ε(1), n^ε].
Conclusion

• Our results imply that building linear-sized PCPs with minimal queries for NTIME[O(n)] and perfect completeness should be nearly as hard (or easy!) as linear-sized PCPs with minimal queries for NTIME[O(n)] and imperfect completeness.
• We show the equivalence of Gap-ETH under perfect and imperfect completeness, i.e. Max-3SAT with perfect completeness has 2^{o(n)} randomized algorithms iff Max-3SAT with imperfect completeness has 2^{o(n)} algorithms.
Open Problems

• A query reduction on our result for PCPs, using [Dinur], gives that:

Corollary
If NTIME[O(n)] ⊆ PCP_{c,s}[log n, O(1)], then NTIME[O(n)] ⊆ PCP_{1,s′}[log n + O(log log n), O(1)].

This is what one gets using the current PCPs for NTIME[O(n)]. Can one prove that
PCP_{c,s}[log n + O(1), O(1)] ⊆ PCP_{1,s′}[log n + o(log log n), O(1)]?
• Can we derandomize the reduction from Gap-ETH without perfect completeness to Gap-ETH?
• Blackbox reductions to get better parameters for MAX k-CSP? Currently we know hardness of MAX k-CSP(1, 2^{O(k^{1/3})}/2^k) for satisfiable instances, whereas for unsatisfiable instances the gap is 2k/2^k (which is tight up to constant factors).