Spectrum Sensing: Problem Statement

- Directional transmission: a band B ⊂ [0, 2π) of directions is available for transmission
- Angular resolution δ ≤ B
- Subsets of B are used sequentially by the transmitter (receiver)
- Inspection of a subset results in a signal-plus-noise measurement:
  Y_a = a^T (W + Z),  Z ~ N(0, δσ² I),  a, W ∈ {0, 1}^{B/δ},  ||W||_0 = K
- Minimize E{τ_ε}
Measurement-Dependent Noisy Search
Measurement-Dependent Noisy Search

- Unknown parameter: W ∈ {0, 1}^{B/δ}, ||W||_0 = 1
- Actions A(t) ∈ A ⊂ {0, 1}^{B/δ}, chosen sequentially
- Y(t) = A(t)(W + Z) = A(t)W + Ẑ
  - Observation noise variance increases with |A(t)|

  time:        1     ...  τ − 1
  sample:      A(1)  ...  A(τ − 1)
  observation: Y(1)  ...  Y(τ − 1)
  declaration: Ŵ = d(Y^{τ−1}, x^{τ−1}),  error event 1{Ŵ ≠ W}

Objective: find τ, A(0), ..., A(τ − 1), and d(·) that minimize E[τ] s.t. Pe ≤ ε

- Numerical solution via a dynamic programming equation
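As a concrete illustration of this observation model, the sketch below simulates a query whose noise variance grows with the query width |A(t)| (all names and parameter values here are my own, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 64          # B / delta: number of resolution bins
sigma2 = 1.0         # per-unit-width noise variance (sigma^2, with delta = 1/n_bins)

# Unknown target: one-hot vector W with ||W||_0 = 1
W = np.zeros(n_bins)
W[rng.integers(n_bins)] = 1.0

def measure(a):
    """Query a subset a in {0,1}^n_bins; noise variance scales with |a|."""
    width = a.sum()                                    # |A(t)|
    z = rng.normal(0.0, np.sqrt(width * sigma2 / n_bins))
    return float(a @ W) + z                            # Y = a^T W + Z_hat

# Querying half the band: the signal term is 1 iff the target falls in the
# queried set, but the noise is stronger than for a singleton query.
a_half = np.zeros(n_bins); a_half[: n_bins // 2] = 1.0
a_single = np.zeros(n_bins); a_single[0] = 1.0
print(measure(a_half), measure(a_single))
```

The width-dependent noise is the key modeling point: wider queries carry more signal coverage but pay for it in variance.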
Simpler Questions of General Consequence

- Role of the allowable action set A
  - Designing A can significantly reduce the overhead
    - Even though the noise variance increases linearly with |a|!
- Selecting A(t) based on past observations (a feedback scheme) or off-line (non-adaptively)?
  - What is the adaptivity gain?
  - Feedback policies are computationally expensive
Role of Measurements

- Role of the allowable action set A
  - Advantages of group testing
  - If A contains only singletons (||A(t)||_0 = 1) ⇒ search time O(B/δ)
  - If A includes intervals, it can be O(log(B/(δε)))
- Observation: if Y_a = X + Z with X = 1{object in a} and Z ~ N(0, σ²), then E[τ] ≈ log(B/(δε)) / I(X; Y_a)
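The rule of thumb above can be evaluated numerically. The sketch below (a hypothetical illustration; function names and parameter values are mine) computes I(X; Y_a) for a Bernoulli input over a Gaussian channel by quadrature and turns it into a rough search-time estimate:

```python
import numpy as np

def mutual_info_bern_gauss(q, sigma):
    """I(X;Y) in bits for X ~ Ber(q), Y = X + N(0, sigma^2), by numerical integration."""
    grid = np.linspace(-8.0, 9.0, 4001)
    dy = grid[1] - grid[0]
    def phi(mu):
        return np.exp(-((grid - mu) ** 2) / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    p0, p1 = phi(0.0), phi(1.0)
    py = (1 - q) * p0 + q * p1
    # I(X;Y) = sum_x p(x) * integral of p(y|x) log2 [p(y|x)/p(y)] dy
    i0 = np.sum(p0 * np.log2(np.maximum(p0, 1e-300) / np.maximum(py, 1e-300))) * dy
    i1 = np.sum(p1 * np.log2(np.maximum(p1, 1e-300) / np.maximum(py, 1e-300))) * dy
    return (1 - q) * i0 + q * i1

# Rough search-time estimate: E[tau] ~ log2(B/(delta*eps)) / I(X; Y_a)
B_over_delta, eps, sigma = 1024, 1e-3, 1.0
I = mutual_info_bern_gauss(0.5, sigma)
print(np.log2(B_over_delta / eps) / I)
```

At low noise I approaches 1 bit per query, recovering the noiseless bisection rate; as σ grows, I shrinks and the estimate blows up accordingly.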
Adaptivity Gain

- Selecting A(t) based on past observations (a feedback scheme) is computationally expensive
- Critical to quantify the adaptivity (feedback) gain: E[τ_ε^{na}] − E[τ_ε^{*}]
- Asymptotic analysis as B/δ grows
  - Qualitative difference when B grows versus when δ shrinks
    - When B grows, the overall noise variance grows
    - The overall noise is constant even when 1/δ grows
  - Need for a fairly tight non-asymptotic analysis
Our Contributions: Main Take-aways (general K, K = 1)

- Searching with codebooks with feedback over a stateful channel (K = 1)
  - Reduces the non-adaptive case to known IT problems
  - Adaptive strategy as a variant of a feedback code
- Non-asymptotic achievability analysis for an adaptive scheme
  - Sorted Posterior Matching (sortPM) search strategy
- Characterize the adaptivity gain in two distinct asymptotic regimes with B/δ → ∞
  - Fixed search interval and increasing resolution (initial access)
  - Fixed resolution and increasing search interval (primary-user detection)
Code to Search
Non-asymptotic Converse for Non-adaptive Search

- Searching via coding over a stateful channel
- Reduces the non-adaptive case to a known IT problem:
  Y = X_q + Z_q,  X_q ~ Ber(q),  Z_q ~ N(0, (qB/δ) σ²)

  E[τ_ε^{NA}] ≥ [(1 − ε) log(B/δ) − h(ε)] / C_BPSK(q*, σ √(q* B/δ))
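A quick numerical check of this converse. In the sketch below (my own naming; the Monte-Carlo estimate and parameter values are illustrative, not from the talk), `c_bpsk` estimates the mutual information of the displayed Bernoulli-input Gaussian channel, standing in for the C_BPSK term:

```python
import numpy as np

rng = np.random.default_rng(1)

def c_bpsk(q, sigma, n=200_000):
    """Monte-Carlo estimate (bits) of I(X;Y) for X ~ Ber(q), Y = X + N(0, sigma^2)."""
    x = (rng.random(n) < q).astype(float)
    y = x + rng.normal(0.0, sigma, n)
    def lik(mu):
        # Gaussian likelihood up to a common constant, which cancels in the ratio
        return np.exp(-((y - mu) ** 2) / (2 * sigma**2))
    p_cond = lik(x)                                    # p(y | x)
    p_marg = (1 - q) * lik(0.0) + q * lik(1.0)         # p(y)
    return np.mean(np.log2(p_cond / p_marg))

def h2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Converse: E[tau_NA] >= ((1 - eps) log2(B/delta) - h(eps)) / C_BPSK(q, sigma*sqrt(q*B/delta))
B_over_delta, eps, sigma = 1024, 1e-2, 0.1
q = 0.5                                  # in the talk, q* is the optimized query fraction
C = c_bpsk(q, sigma * np.sqrt(q * B_over_delta))
print(((1 - eps) * np.log2(B_over_delta) - h2(eps)) / C)
```

Note how the effective noise level fed to the capacity term grows like √(qB/δ): wider bands make each non-adaptive query less informative, inflating the lower bound.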
Non-adaptive and Adaptive Search Strategies

Non-adaptive Strategy:

- Fix the number of samples τ = T
- Select T such that E{Pe} ≤ ε
- For all t ≤ T, query a random set a with |a| = q* B/δ, where q* is optimized
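A minimal simulation of such a fixed-length random-codebook strategy with a maximum-likelihood declaration (my own sketch; the decoder and parameter choices are illustrative assumptions, not the talk's exact construction):

```python
import numpy as np

rng = np.random.default_rng(2)

def nonadaptive_search(n_bins, T, q, sigma2, target):
    """Query T random sets of expected size q*n_bins, then declare by max likelihood."""
    A = (rng.random((T, n_bins)) < q).astype(float)        # random codebook
    widths = A.sum(axis=1)                                  # |A(t)| per query
    noise = rng.normal(0.0, np.sqrt(widths * sigma2 / n_bins))
    y = A[:, target] + noise                                # Y(t) = A(t)^T W + Z_hat
    # Log-likelihood of each candidate bin i given all T observations
    ll = np.array([
        -np.sum((y - A[:, i]) ** 2 / (2 * widths * sigma2 / n_bins))
        for i in range(n_bins)
    ])
    return int(np.argmax(ll))

n_bins, target = 32, 7
est = nonadaptive_search(n_bins, T=400, q=0.5, sigma2=0.5, target=target)
print(est == target)
```

With T fixed in advance there is no feedback: the codebook is drawn before any observation, which is exactly what the converse above bounds.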
Sorted Posterior Matching (sortPM) Strategy:

- Consider the posterior ρ(t) := (P{W = e_i | A(0 : t − 1), Y(0 : t − 1)})_{i ∈ Ω}
- Declare i as the target if ρ_i(t) ≥ 1 − ε, i ∈ Ω
- Otherwise, query the bins to the left of the median of the sorted posterior
  - Observe the (noisy) Y
  - Update the posterior via Bayes' rule
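The sortPM loop described above can be sketched as follows (a simplified illustration under the earlier one-target Gaussian model; the stopping rule, names, and parameters are my own reading of the slide, not verified code from the authors):

```python
import numpy as np

rng = np.random.default_rng(3)

def sort_pm(n_bins, sigma2, eps, target, max_steps=10_000):
    """Sorted posterior matching: query the top-posterior bins holding ~half the mass."""
    rho = np.full(n_bins, 1.0 / n_bins)            # uniform prior rho(0)
    for t in range(max_steps):
        best = int(np.argmax(rho))
        if rho[best] >= 1 - eps:                   # confident enough: declare
            return best, t
        # Sort bins by posterior mass; query left of the median of the sorted posterior
        order = np.argsort(rho)[::-1]
        cum = np.cumsum(rho[order])
        k = int(np.searchsorted(cum, 0.5)) + 1     # smallest prefix with >= half the mass
        a = np.zeros(n_bins); a[order[:k]] = 1.0
        # Observe Y = a^T W + Z_hat; noise variance grows with the query width |a|
        var = k * sigma2 / n_bins
        y = a[target] + rng.normal(0.0, np.sqrt(var))
        # Bayes' rule: posterior ∝ prior * p(y | target in / out of the queried set)
        lik = np.exp(-((y - a) ** 2) / (2 * var))  # mean is 1 for queried bins, 0 otherwise
        rho = rho * lik
        rho /= rho.sum()
    return int(np.argmax(rho)), max_steps

est, steps = sort_pm(n_bins=64, sigma2=0.5, eps=1e-3, target=11)
print(est, steps)
```

Sorting before bisecting is the whole trick: the queried set adapts its width to how concentrated the posterior already is, trading signal coverage against the width-dependent noise.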
SortPM: Upper Bound

Theorem [Lalitha, Ronquillo and J. '17]. Under sortPM, we have

  E[τ_SPM] ≤ min_α [ log(B/(δε)) + max{log log(B/δ), log log(1/ε)} ] / [ 1 − h(Q((σ² α B/δ)^{−1/2})) ] + K(α),

where h(p) = p log(1/p) + (1 − p) log(1/(1 − p)) and K(·) is a non-increasing function.

- The analysis is based on a Lyapunov drift
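To get a feel for the bound, the rate term can be evaluated numerically. This sketch is my own: it drops the additive K(α) term, since the slide only states that K(·) is non-increasing, and all parameter values are illustrative:

```python
import numpy as np
from math import erf

def q_func(x):
    """Gaussian tail Q(x) = P(N(0,1) > x)."""
    return 0.5 * (1.0 - erf(x / np.sqrt(2.0)))

def h2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def sortpm_rate_term(B_over_delta, eps, sigma2, alphas=None):
    """First (rate) term of the sortPM upper bound, minimized over alpha; K(alpha) omitted."""
    if alphas is None:
        alphas = np.linspace(0.05, 1.0, 20)
    num = np.log2(B_over_delta / eps) + max(np.log2(np.log2(B_over_delta)),
                                            np.log2(np.log2(1 / eps)))
    best = np.inf
    for a in alphas:
        p = q_func((sigma2 * a * B_over_delta) ** -0.5)    # hard-decision crossover prob
        denom = 1.0 - h2(p)                                # per-query information rate
        if denom > 0:
            best = min(best, num / denom)
    return best

print(sortpm_rate_term(1024, 1e-3, 0.01))
```

Smaller α (narrower queries) keeps the effective noise, and hence the crossover probability, low; the omitted K(α) term is what penalizes taking α too small.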
Corollary [Lalitha, Ronquillo and J. '17]. Relying on hard-detected output symbols, the asymptotic adaptivity gain for B/δ → ∞ is:

  lim_{δ → 0} [τ_opt^{NA} − E[τ_opt^{A}]] / log(B/δ) = 1 / C_BPSK(q*, Bσ²) − 1,

  lim_{B → ∞} [τ_opt^{NA} − E[τ_opt^{A}]] / [(B/δ) log(B/δ)] ≥ σ² δ log e.
Prior Work: Measurement-Independent Noise

- Generalized binary search [Burnashev and Zigangirov '74]
- Channel coding over a DMC with feedback [Burnashev '75], [Yamamoto and Itoh '79], ..., [Naghshvar, Wigger and J '13]
- Posterior matching [Shayevitz and Feder '11]
- Bisection search with noisy responses [Horstein '63], [Waeber, Frazier, Henderson '13]
Generalizations

- General noise model: Y(t) = A(t)W + Ẑ,  Ẑ = f(Z, A(t))
- Fixed (hierarchical) beam patterns
Generalizations and On-going Work

- Search for multiple targets (K > 1)
  - Noisy sequential group testing [Atia and Saligrama '12]; mapped to an OR MAC [Kaspi, Shayevitz, J '15]
  - Factor of 1/K in rate, where K bounds (is) the number of targets
    - ‡ Case of an adder channel
- Dynamic case: W(t)
  - Results generalize to an unknown but constant speed (cuts the rate by half)
- Beyond Gaussian
  - Similar results for binary symmetric noise (hard decoding) [Kaspi, Shayevitz, J '14]
Empirical Network Parameter Tuning

Network performance as a function of network parameters: f : X → ℝ.

Assumptions:
- X is the set of network parameters and protocols
- f(x) is the network performance; f(x₁) and f(x₂) are "correlated"
- f is observed with noise: y = f(x) + η(x), with η non-persistent noise

Goal: design a sequential strategy for selecting n query points x₁, ..., x_n to identify a global optimizer of f.

Performance measures:
- Simple regret: S_n = f(x*) − f(x_n*)
- Cumulative regret: R_n = Σ_{t=1}^{n} [f(x*) − f(x_t)]  [bandit]
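The two regret measures can be illustrated on a toy performance surface (everything here, the surface f, the noise level, and the uniform query strategy, is a hypothetical example, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(4)

# A toy performance surface over a discrete parameter grid
X = np.linspace(0.0, 1.0, 101)
f = lambda x: -(x - 0.63) ** 2               # unknown network-performance function
f_star = f(X).max()                          # f(x*): best achievable performance

# Query points chosen by some sequential strategy; here just uniform exploration
queries = rng.choice(X, size=50)
observations = f(queries) + rng.normal(0.0, 0.01, size=queries.shape)  # y = f(x) + eta

# Recommendation after n queries: the point with the best observed value
x_rec = queries[np.argmax(observations)]

simple_regret = f_star - f(x_rec)                  # S_n = f(x*) - f(x_n*)
cumulative_regret = np.sum(f_star - f(queries))    # R_n = sum_t [f(x*) - f(x_t)]
print(simple_regret, cumulative_regret)
```

Simple regret judges only the final recommendation, while cumulative regret charges every exploratory query, which is why the bandit setting forces a different exploration-exploitation balance.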
Questions?
Experiment Design
Design of Experiments [Blackwell '51]

- M mutually exclusive hypotheses: H_i ⇔ {θ = i}, i = 1, 2, ..., M
- Prior ρ(0) = [ρ_1(0), ..., ρ_M(0)], ρ_i(0) = P(θ = i)