  1. Model-based adaptive spatial sampling for occurrence map construction
     N. Peyrard and R. Sabbadin
     CompSust’09 - Cornell University - June 2009 – p. 1

  2. Mapping spatial processes in environmental management
     Mapping pest occurrence
     • Building a pest occurrence map in order to eradicate
     • Observations are costly
     • Errors in mapping are also costly
     [Figure: yearly pest occurrence maps, panels P2001–P2004 and Y2002–Y2004]

  3. Mapping spatial processes in environmental management
     Different problems depending on the nature of the observations
     • Data visualization
       - Complete observations (everywhere)
       - Perfect observations (no errors/missing data)
       ⇒ How to visualize the data?
     • Map reconstruction
       - Complete observations
       - Noisy observations
       ⇒ How to reconstruct the “true” map?
     • Sampling and map construction
       - Incomplete observations (not everywhere)
       - Noisy observations
       ⇒ Where to observe? / How to reconstruct?

  4. Mapping spatial processes in environmental management
     How to design an efficient spatial sampling method to estimate an occurrence (0/1) map when:
     • the process to map has spatial structure
     • observations are imperfect/incomplete
     • sampling is costly
     • the process does not evolve during the sampling period

  5. Overview of the proposed approach
     Optimization approach for designing spatial sampling policies
     The Hidden Markov Random Field model is used for:
     • Representing current uncertain knowledge about the map to reconstruct
     • Updating knowledge after observations
     • Defining a unique criterion for
       - map reconstruction from observed data
       - spatial sampling action selection

  6. Optimal sampling problem
     • Hidden variable X
     • Sampling action a
     • Observation model p(Y = o | x, a)
     Question: How to reconstruct the hidden variable X using sampling actions?
     1. Hidden variable model
     2. Updated model after sampling result
     3. Hidden variable reconstruction
     4. Sampling action optimization

  7. Spatial sampling optimization
     The hidden variable x is a map
     ⇒ The sampling optimization problem has to be revisited
     Question: How to reconstruct the hidden map x using sampling actions?
     1. Hidden map model
     2. Updated model after sampling result
     3. Hidden map reconstruction
     4. Sampling action optimization

  8. Pairwise Markov random field (1)
     • Multiple interacting variables
     • Independence given neighborhood
     ⇒ Pairwise Markov random field
     Question: How to reconstruct the hidden map x using sampling actions?
     1. Hidden map model
     2. Updated model after sampling result
     3. Hidden map reconstruction
     4. Sampling action optimization

  9. Pairwise Markov random field (2)
     • Multiple interacting variables
     • Independence given neighborhood ⇒ Pairwise Markov random field
     • Interaction graph G = (V, E)
     • ψ_i: “weights” on states of vertex i
     • ψ_ij: “strength” of correlations between neighbor vertices
     • Z: normalizing constant / partition function

         P(x) = (1/Z) ∏_{i∈V} ψ_i(x_i) ∏_{(i,j)∈E} ψ_ij(x_i, x_j)
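The factorized distribution above can be computed exactly by brute force on a tiny graph. This is an illustrative sketch: the graph, the uniform vertex weights, and the agreement-rewarding edge potential are hypothetical choices, not taken from the talk.

```python
import itertools
import math

V = [0, 1, 2]         # vertices
E = [(0, 1), (1, 2)]  # edges of the interaction graph G = (V, E)

def psi_i(i, xi):
    """Vertex potential: weight on states of vertex i (uniform here)."""
    return 1.0

def psi_ij(xi, xj, beta=1.0):
    """Edge potential: rewards agreement between neighboring vertices."""
    return math.exp(beta) if xi == xj else 1.0

def unnormalized(x):
    """prod_i psi_i(x_i) * prod_(i,j) psi_ij(x_i, x_j)."""
    p = 1.0
    for i in V:
        p *= psi_i(i, x[i])
    for (i, j) in E:
        p *= psi_ij(x[i], x[j])
    return p

# Partition function Z: sum of unnormalized weights over all 2^|V| binary maps.
Z = sum(unnormalized(x) for x in itertools.product([0, 1], repeat=len(V)))

def P(x):
    """P(x) = (1/Z) * prod_i psi_i(x_i) * prod_(i,j) psi_ij(x_i, x_j)."""
    return unnormalized(x) / Z
```

With this edge potential the two uniform maps (0,0,0) and (1,1,1) receive the highest, equal probability, which is the spatial-structure effect the model is meant to capture.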

  10. Hidden Markov random field (1)
     Hidden variables
     • a ∈ {0, 1}^|V|: subset of V selected for sampling
     • Independent observations:

         P(o | x, a) = ∏_{i∈V} P_i(o_i | x_i, a_i)

     Question: How to reconstruct the hidden map x using sampling actions?
     1. Hidden map model
     2. Updated model after sampling result
     3. Hidden map reconstruction
     4. Sampling action optimization

  11. Hidden Markov random field (2)
     Hidden variables
     • a ∈ {0, 1}^|V|: subset of V selected for sampling
     • Independent observations:

         P(o | x, a) = ∏_{i∈V} P_i(o_i | x_i, a_i)

     Updated Markov random field (Bayes’ theorem):

         P(x | o, a) = (1/Z) ∏_{i∈V} ψ′_i(x_i, o_i, a_i) ∏_{(i,j)∈E} ψ_ij(x_i, x_j)

     where ψ′_i(x_i, o_i, a_i) = ψ_i(x_i) P_i(o_i | x_i, a_i)
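The Bayes update above only touches the vertex potentials: each observation is absorbed into ψ′_i, while the edge potentials ψ_ij are unchanged. A minimal sketch, with a hypothetical symmetric noise model (error rate eps) standing in for P_i(o_i | x_i, a_i):

```python
def psi_i(xi):
    """Prior vertex potential (uniform here; hypothetical stand-in)."""
    return 1.0

def obs_model(oi, xi, ai, eps=0.1):
    """P_i(o_i | x_i, a_i): correct with prob 1 - eps if site i is sampled
    (a_i = 1), uninformative if it is not (a_i = 0)."""
    if ai == 0:
        return 0.5  # an unsampled site carries no information
    return (1.0 - eps) if oi == xi else eps

def psi_i_prime(xi, oi, ai):
    """psi'_i(x_i, o_i, a_i) = psi_i(x_i) * P_i(o_i | x_i, a_i)."""
    return psi_i(xi) * obs_model(oi, xi, ai)

# Observing o_i = 1 at a sampled site tilts the potential toward x_i = 1:
w0, w1 = psi_i_prime(0, 1, 1), psi_i_prime(1, 1, 1)  # 0.1 vs 0.9
```

The same factorized form as the prior is preserved, so any inference machinery for the original MRF applies unchanged to the updated one.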

  12. Hidden map reconstruction (1)
     Local (MPM) reconstruction:

         x*_i = argmax_{x_i} P_i(x_i | o, a), ∀ i ∈ V

     Question: How to reconstruct the hidden map x using sampling actions?
     1. Hidden map model
     2. Updated model after sampling result
     3. Hidden map reconstruction
     4. Sampling action optimization

  13. Hidden map reconstruction (2)
     Local (MPM) reconstruction:

         x*_i = argmax_{x_i} P_i(x_i | o, a)

     Value of the reconstructed map: expected number of well-classified sites in x*

         V_MPM(o, a) = ∑_{i∈V} max_{x_i} P_i(x_i | o, a)
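Both the MPM map and its value are read directly off the per-site posterior marginals. A sketch, where the marginals are made-up numbers rather than the output of an inference run:

```python
def mpm_reconstruct(marginals):
    """x*_i = argmax_{x_i} P_i(x_i | o, a) at each site i."""
    return [max((0, 1), key=lambda s: m[s]) for m in marginals]

def v_mpm(marginals):
    """V_MPM(o, a) = sum_i max_{x_i} P_i(x_i | o, a):
    expected number of correctly classified sites in x*."""
    return sum(max(m[0], m[1]) for m in marginals)

# Hypothetical posterior marginals P_i(x_i | o, a) for three sites:
marginals = [{0: 0.9, 1: 0.1}, {0: 0.4, 1: 0.6}, {0: 0.2, 1: 0.8}]
x_star = mpm_reconstruct(marginals)  # [0, 1, 1]
value = v_mpm(marginals)             # 0.9 + 0.6 + 0.8 = 2.3
```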

  14. Sampling action optimization (1)
     • a ∈ {0, 1}^|V|: subset of V selected for sampling
     • Independent observations o ∈ {0, 1}^|V|
     ⇒ How to optimize the choice of a?
     Question: How to reconstruct the hidden map x using sampling actions?
     1. Hidden map model
     2. Updated model after sampling result
     3. Hidden map reconstruction
     4. Sampling action optimization

  15. Sampling action optimization (2)
     • a ⊆ V: subset selected for sampling
     • Independent observations o as result
     ⇒ How to optimize the choice of a?

         U(a) = −c(a) + ∑_o P(o | a) V(o, a)

         a* = argmax_a U(a)

     • The computation of a* is hard (NP-hard)!
     • Exact optimization is only feasible for small problems; otherwise an approximation is needed
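The source of the hardness is visible in a brute-force implementation: U(a) sums over all observation vectors, and a* maximizes over all actions, giving 2^|V| × 2^|V| work. The cost, observation, and value functions below are hypothetical stand-ins just to make the search structure concrete on a 3-site toy problem.

```python
import itertools

n = 3                    # |V| for the toy problem
cost_per_sample = 0.3    # hypothetical per-site sampling cost

def c(a):
    """Sampling cost: proportional to the number of sampled sites."""
    return cost_per_sample * sum(a)

def P_obs(o, a):
    """Stand-in P(o | a): uniform over outcomes at sampled sites,
    o_i fixed to 0 at unsampled sites."""
    k = sum(a)
    consistent = all(o[i] == 0 for i in range(n) if a[i] == 0)
    return (0.5 ** k) if consistent else 0.0

def V(o, a):
    """Stand-in map-quality value: a sampled site is classified correctly
    with prob 0.9, an unsampled one with prob 0.5."""
    return sum(0.9 if a[i] else 0.5 for i in range(n))

def U(a):
    """U(a) = -c(a) + sum_o P(o | a) V(o, a)."""
    return -c(a) + sum(P_obs(o, a) * V(o, a)
                       for o in itertools.product([0, 1], repeat=n))

a_star = max(itertools.product([0, 1], repeat=n), key=U)
```

With these numbers each extra sample gains 0.4 in expected value for 0.3 in cost, so the exhaustive search selects every site; flipping the cost above 0.4 would flip that conclusion.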

  16. Approximate spatial sampling (1)
     Approximate the computation of

         a* = argmax_a [ −c(a) + ∑_o P(o | a) V_MPM(o, a) ]

     • Explore cells where initial knowledge is most uncertain: marginal P_i(x_i | o, a) closest to 1/2

         ã = argmax_a [ −c(a) + ∑_{i : a_i = 1} min( P_i(X_i = 1), P_i(X_i = 0) ) ]

     • Computing the marginals is itself NP-hard
     ⇒ approximation using the belief propagation (sum-product) algorithm
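For a fixed sampling budget, the criterion above reduces to ranking cells by min(P_i(X_i = 1), P_i(X_i = 0)), which peaks at marginals near 1/2. A sketch under that assumption; the marginals would come from belief propagation in practice, but here they are made-up numbers:

```python
def select_cells(marginals_p1, budget):
    """Pick `budget` cells maximizing sum of min(P_i(X_i=1), P_i(X_i=0)),
    i.e. the cells whose marginal is closest to 1/2."""
    scores = [(min(p, 1.0 - p), i) for i, p in enumerate(marginals_p1)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:budget])

# Hypothetical prior marginals P_i(X_i = 1) for five cells:
p1 = [0.05, 0.48, 0.9, 0.55, 0.99]
chosen = select_cells(p1, budget=2)  # cells 1 and 3 are the most uncertain
```

Note this greedy ranking is exact for the simplified criterion only when the cost c(a) depends solely on the number of sampled cells, which is a common but implicit assumption.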

  17. Approximate spatial sampling (2)
     The approximation results from simplifying assumptions:
     • Sampling actions are reliable
     • No passive observations
     • The joint probability is approximated by one with independent factors

  18. Adaptive spatial sampling (1)
     Idea:
     • Sampling locations are not chosen once and for all before the sampling campaign
     • Intermediate observations are taken into account to design the next sampling step
     • A cell may be visited more than once

  19. Adaptive spatial sampling (2)
     • A sampling strategy δ is a tree
     • A trajectory in δ: τ = (a_1, o_1, …, a_K, o_K)

     Value of a leaf:

         U(τ) = −∑_{k=1}^{K} c(a_k) + V_MPM(o_0, o_1, …, o_K, a_0, a_1, …, a_K)

     Value of a strategy:

         V(δ) = ∑_τ P(τ | δ) U(τ)
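The strategy value V(δ) can be evaluated by recursing over the tree: at each action node, average the children's values weighted by the observation probabilities. A sketch with a hypothetical two-step tree and made-up probabilities (assumed independent of history for simplicity):

```python
def strategy_value(node, p_obs):
    """V(delta) = sum over trajectories tau of P(tau | delta) * U(tau).
    `node` is ('leaf', U_tau) or ('action', a, {o: child});
    `p_obs(a, o)` gives P(o | a)."""
    if node[0] == 'leaf':
        return node[1]
    _, a, children = node
    return sum(p_obs(a, o) * strategy_value(child, p_obs)
               for o, child in children.items())

# Tiny example: act a1; if o = 1, act a2 as well (numbers are hypothetical).
tree = ('action', 'a1', {
    0: ('leaf', 1.0),
    1: ('action', 'a2', {0: ('leaf', 2.0), 1: ('leaf', 0.5)}),
})
p = lambda a, o: 0.5
v = strategy_value(tree, p)  # 0.5*1.0 + 0.5*(0.5*2.0 + 0.5*0.5) = 1.125
```

The recursion makes the adaptivity explicit: different observation outcomes lead to different subtrees, hence different subsequent sampling actions.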

  20. Heuristic adaptive spatial sampling
     • Exact computation is PSPACE-hard!
     ⇒ Heuristic algorithm:
     • online computation
     • approximate method for static sampling at each step

  21. Concluding remarks
     • A framework for spatial sampling optimization:
       - based on Hidden Markov random fields
       - different map quality criteria
       - extended to “adaptive” sampling
     • Problems too complex for exact resolution
     ⇒ Heuristic solution based on approximate computation of the marginals
     • Empirical validation on simulated problems:
       - Comparison of SSS (static) and ASS (adaptive) spatial sampling with classical sampling methods (random sampling, ACS)
       - Markov random field parameters learned from real data
       - ASS > SSS > classical methods

  22. Ongoing work
     • Exact algorithms for small problems (Usman Farrokh): combining variable elimination and tree search
     • “Random sets + kriging” approach (Mathieu Bonneau): development of a dedicated approximate method and comparison to the HMRF approach
     • PhD thesis on adaptive spatial sampling for weeds mapping at the scale of an agricultural area (Sabrina Gaba, INRA-Dijon)
     • Future? ⇒ Spatial partially observed Markov decision processes

  23. Questions?
     Thanks for listening

  24. Contents
     1. Optimal sampling of a hidden random variable
     2. Defining optimal spatial sampling problems
     3. Approximate computation of an optimal strategy
     4. Evaluation of the proposed method on simulated data
