proofs of proximity for distribution testing

Same problem, different object... and access... and distance. Does it really make a difference?

The setting:
∙ Known domain (here [n] = {1, . . . , n})
∙ x is now a distribution, let’s call it D ∈ ∆([n])
∙ Property Π ⊆ ∆([n]), proximity parameter ε ∈ (0, 1]
∙ Sample access to D
∙ Decide with high probability: Is D ∈ Π, or δ_TV(D, Π) > ε?
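The total variation distance δ_TV used above is easy to compute when both distributions are given explicitly; a minimal sketch (the probability-vector representation is our choice):

```python
def tv_distance(P, Q):
    """Total variation distance between two distributions on [n],
    given as probability vectors: delta_TV(P, Q) = (1/2) * sum_i |P[i] - Q[i]|."""
    return 0.5 * sum(abs(p - q) for p, q in zip(P, Q))
```

For instance, two disjoint point masses are at distance 1, and a distribution is at distance 0 from itself.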
different types of proofs

NP distribution testers: a deterministic algorithm T with
∙ sample access to D ∈ ∆([n])
∙ explicit access to ε > 0 and a proof π, such that:
  * For every D ∈ Π, there exists a proof π s.t. T^D(ε, π) = 1
  * For every D with δ_TV(D, Π) > ε and any “proof” π, Pr[T^D(ε, π) = 0] ≥ 2/3

MA distribution testers: NP distribution testers that are allowed to toss coins

IP distribution testers: MA distribution testers that interact with a prover
some questions

This is all very nice, but:
∙ Are proof-augmented testers stronger than standard testers?
∙ If so, to what extent? Polynomially better? Exponentially better? Large classes?
∙ What are the most important resources? Randomness? Interaction? Private coins?
∙ What can and cannot be achieved with each proof system?
functions vs distributions
first example
support size

Consider the support size problem:
SuppSize_{≤n/2} = { D ∈ ∆([n]) : |supp(D)| ≤ n/2 }

This is a hard problem (it requires Ω(n/log(n)) samples [Val11])... unless a prover is giving us support! Or rather, a prover is specifying supp(D)... Then we only need O(1/ε) samples to detect whether D is ε-far from SuppSize_{≤k}.

Caveat: this requires a long proof (O(n log n) bits)
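The protocol sketched above is simple enough to write out. A hedged sketch (the function names and the sample-count constant are ours): the proof is the claimed support S with |S| ≤ k, and the verifier rejects if it ever sees a sample outside S. If D is ε-far from every distribution supported on at most k elements, then D places mass more than ε outside any such S, so each sample escapes S with probability > ε.

```python
import math
import random

def np_support_tester(sample_D, proof_S, k, eps):
    """Sketch of the NP tester for SuppSize<=k.
    sample_D() draws one sample from D; proof_S is the claimed support (the proof)."""
    if len(proof_S) > k:          # proof is not even syntactically valid
        return 0
    S = set(proof_S)
    m = math.ceil(2 / eps)        # O(1/eps) samples; the constant is illustrative
    for _ in range(m):
        if sample_D() not in S:   # witness that supp(D) is not contained in S
            return 0
    return 1
```

Completeness is perfect: when supp(D) ⊆ S, no sample can land outside S. For soundness, each sample escapes with probability > ε, so the accept probability is below (1 − ε)^{2/ε} ≤ e^{−2} < 1/3.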
on long proofs – functions

The proof length is a key complexity measure for proofs of proximity. For functions, linear-length proofs completely trivialize the model! Why?

(How to check that function f has property Π)
∙ The tester has explicit access to the proof π
∙ If π = f, it can directly check whether π ∈ Π
∙ Hence, it boils down to testing that f is identical to π, which can easily be done using O(1/ε) queries... for functions
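The O(1/ε) identity test for functions is just point sampling; a minimal sketch (the sample-count constant is ours, chosen for soundness 2/3):

```python
import math
import random

def function_identity_test(f, pi, n, eps):
    """Accept if f agrees with the proof pi at every queried point.
    If f and pi differ on more than an eps-fraction of [n], some query
    hits a disagreement with probability >= 2/3."""
    for _ in range(math.ceil(2 / eps)):
        i = random.randrange(n)       # uniform query into the domain
        if f(i) != pi[i]:
            return 0
    return 1
```

Each query hits a disagreement with probability > ε, so ⌈2/ε⌉ queries miss all of them with probability at most (1 − ε)^{2/ε} ≤ e^{−2} < 1/3.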
on long proofs – distributions

For distribution testing, testing identity is much harder: O(√n/ε²) samples...

...or even O(‖D^{−max}_{−ε/16}‖_{2/3}) [VV17], where ‖·‖_{2/3} denotes the ℓ_{2/3} quasi-norm, and D^{−max}_{−ε/16} is the distribution obtained by removing the maximal element of D as well as a maximal set of elements of total mass ε/16

...or perhaps O(κ_D^{−1}(1 − cε)) [BCG17], where c > 0 is a constant, and κ_D is the K-functional between ℓ_1 and ℓ_2 with respect to the distribution D

But wait, how can the proof fully describe the distribution? The description of D ∈ ∆([n]) may be very large (even infinite...). Luckily, it suffices to send a granular approximation D_approx of D.

What if D ∈ Π, but D_approx is close to, yet not in, Π? We can use a tolerant tester to make sure it rules the same way.
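One concrete way to make D_approx "granular" is to round every probability onto the grid {0, 1/gran, 2/gran, ...}. The rounding scheme below (largest-remainder) is our illustration, not necessarily the construction intended on the slide; it keeps the result a genuine distribution, encodable as n integers below gran, i.e. O(n log gran) bits, while moving each probability by at most 1/gran.

```python
def granular_approx(probs, gran):
    """Sketch: round a distribution to multiples of 1/gran via the
    largest-remainder method.  The output sums to exactly gran/gran = 1
    and is within n/gran of the input in TV distance."""
    scaled = [p * gran + 1e-9 for p in probs]   # tiny slack against float error
    floors = [int(s) for s in scaled]
    leftover = gran - sum(floors)               # units of mass still to place
    by_remainder = sorted(range(len(probs)),
                          key=lambda i: scaled[i] - floors[i], reverse=True)
    for i in by_remainder[:leftover]:           # top up the largest remainders
        floors[i] += 1
    return [f / gran for f in floors]
```

With gran = Θ(n/ε) this yields a D_approx that is O(ε)-close to D in TV distance, at proof length O(n log(n/ε)) bits, consistent with the O(n log n) caveat for constant ε.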
what about short proofs?

So far we saw that:
∙ any property can be tested using O(√n/ε²)-ish samples
∙ there exists a (hard) property that is testable via O(1/ε) samples

Can we get significant savings via short (sublinear) proofs? Yes!

Theorem. There exists a property that requires Ω̃(√n) samples to test, yet, for every β, given a proof of length Õ(n^β), it can be tested using O(n^{1−β}) samples.

But can we do better? Not much... (not without interaction)
Distribution testers are not only non-adaptive w.r.t. the samples, but also w.r.t. the proof Thus, testers can emulate all possible proofs reusing the samples! Since there are 2 p possible proofs, we need to amplify the soundness to assure no error occurs w.h.p. To this end, we invoke the tester O p times, increasing the sample complexity to O p s . limitations of non-interactive proofs of proximity Lemma For every Π and MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω( SAMP (Π)) The idea 17
but also w.r.t. the proof Thus, testers can emulate all possible proofs reusing the samples! Since there are 2 p possible proofs, we need to amplify the soundness to assure no error occurs w.h.p. To this end, we invoke the tester O p times, increasing the sample complexity to O p s . limitations of non-interactive proofs of proximity Lemma For every Π and MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω( SAMP (Π)) The idea Distribution testers are not only non-adaptive w.r.t. the samples, 17
Thus, testers can emulate all possible proofs reusing the samples! Since there are 2 p possible proofs, we need to amplify the soundness to assure no error occurs w.h.p. To this end, we invoke the tester O p times, increasing the sample complexity to O p s . limitations of non-interactive proofs of proximity Lemma For every Π and MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω( SAMP (Π)) The idea Distribution testers are not only non-adaptive w.r.t. the samples, but also w.r.t. the proof 17
Since there are 2 p possible proofs, we need to amplify the soundness to assure no error occurs w.h.p. To this end, we invoke the tester O p times, increasing the sample complexity to O p s . limitations of non-interactive proofs of proximity Lemma For every Π and MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω( SAMP (Π)) The idea Distribution testers are not only non-adaptive w.r.t. the samples, but also w.r.t. the proof Thus, testers can emulate all possible proofs reusing the samples! 17
To this end, we invoke the tester O p times, increasing the sample complexity to O p s . limitations of non-interactive proofs of proximity Lemma For every Π and MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω( SAMP (Π)) The idea Distribution testers are not only non-adaptive w.r.t. the samples, but also w.r.t. the proof Thus, testers can emulate all possible proofs reusing the samples! Since there are 2 p possible proofs, we need to amplify the soundness to assure no error occurs w.h.p. 17
limitations of non-interactive proofs of proximity

Lemma. For every Π and every MA distribution tester for Π with proof length p and sample complexity s, it holds that p · s = Ω(SAMP(Π)).

The idea: distribution testers are non-adaptive not only w.r.t. the samples, but also w.r.t. the proof. Thus, a tester can emulate all possible proofs, reusing the same samples! Since there are 2^p possible proofs, we need to amplify the soundness to ensure that, w.h.p., no error occurs; to this end, we invoke the tester O(p) times, increasing the sample complexity to O(p · s).
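The emulation behind the lemma can be phrased as code: run a soundness-amplified tester on one shared batch of samples against every possible p-bit proof, and accept iff some proof convinces it. A hedged sketch (the interface is ours):

```python
import itertools

def emulate_without_proof(amplified_tester, samples, p):
    """Turn an MA tester (amplified so that each fixed proof errs with
    probability well below 2^-p) into a standard, proof-free tester:
    reuse the same samples for all 2^p candidate proofs and accept iff
    some proof is accepted.  A union bound over the 2^p proofs keeps the
    total soundness error below 1/3, at the cost of the O(p)-times-larger
    sample batch that the amplification requires."""
    for proof in itertools.product((0, 1), repeat=p):
        if amplified_tester(samples, proof) == 1:
            return 1
    return 0
```

Since the resulting proof-free tester uses O(p · s) samples, any property Π admitting such an MA tester satisfies p · s = Ω(SAMP(Π)).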