In The Name of God

Field-Wide Estimation of Soil Moisture Using Compressive Sensing
Hosein Pourshamsaei
Supervisor: Dr. Amin Nobakhti
Electrical Engineering Department, Sharif University of Technology
15/05/2018

Contents
- Importance of moisture estimation
- Compressive Sensing (CS)
- Applying CS theory to the moisture estimation problem
- Data sets for numerical experiments
- Different approximations for solving the CS problem
- Comparison of different algorithms
- Novel sensor placement algorithm
- Conclusion and future works

Importance of Moisture Estimation
- Moisture monitoring plays an essential role in decision making for precision agriculture:
  - Saving water in irrigation
  - Severe effects of water stress on crop yield
  - Irrigating at the right time and in the right quantity
- Methods:
  - Regular moisture sensor installation over the field: cost and maintenance issues
  - Remote sensing methods: fine resolution at arbitrary times is not achievable at reasonable cost
  - Estimation theories

Compressive Sensing (CS)
- An effective tool for reconstructing sparse signals.
- An ℓ₀-norm optimization problem:
      ŷ = arg min ‖y‖₀  subject to  z = Φy
  - z: measurement vector (M × 1)
  - Φ: measurement matrix (M × N)
  - y: sparse signal (N × 1)
- CS is valid for compressible signals. (A sketch of the practical ℓ₁ relaxation follows below.)
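The ℓ₀ problem above is combinatorial; the approximations compared later in this talk start from its ℓ₁ relaxation (basis pursuit), which can be solved as a linear program. A minimal sketch of that relaxation, for illustration only (not the thesis code; the sizes and random seed are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(Phi, z):
    """Solve min ||y||_1 s.t. z = Phi @ y via the split y = y_pos - y_neg."""
    N = Phi.shape[1]
    c = np.ones(2 * N)                    # objective: sum(y_pos) + sum(y_neg)
    A_eq = np.hstack([Phi, -Phi])         # Phi @ (y_pos - y_neg) = z
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=(0, None))
    return res.x[:N] - res.x[N:]

# Toy check: a 1-sparse signal in R^10 from 6 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 10))
y_true = np.zeros(10); y_true[2] = 1.0
print(np.round(l1_recover(Phi, Phi @ y_true), 3))  # typically matches y_true
```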
Applying CS Theory to the Moisture Estimation Problem
- Moisture data are not sparse, but they are spatially correlated.
- The factors affecting moisture are nearly constant over time.
- So, with proper sorting of the data, they are sparse in the frequency domain.
- The DCT (Discrete Cosine Transform) is used in this project.

Applying CS Theory to the Moisture Estimation Problem
- Consider y = Ψα, where Ψ is the IDCT matrix.
- Consider Φ as a matrix with M rows and N columns in which each row contains one 1 and N − 1 zeros (each row reads one sensor location).
- Modified formulation of CS for moisture estimation:
      α̂ = arg min ‖α‖₁  subject to  z = ΦΨα
      ŷ = Ψα̂
- Precondition (coherence of the pair Φ, Ψ):
      μ(Φ, Ψ) = √N · max_{1≤k,j≤N} |⟨φ_k, ψ_j⟩|
  For our case: μ(Φ, Ψ) = 1.
- For reconstructing the signal with the ℓ₁ approximation:
      M ≥ C · μ²(Φ, Ψ) · K · log N
  where K is the sparsity level and C is a small constant.

Data Sets for Numerical Experiments
- TIN-based Real-time Integrated Basin Simulator (tRIBS).
- Peacheater Creek Watershed: a land area of 64 km² located in the northeastern corner of Oklahoma.
[Figure: simulated moisture percent map of the Peacheater Creek watershed]

Data Sets for Numerical Experiments
- Data are sorted to enhance sparsity in the frequency domain.
- Investigating sorting methods is not the purpose of this project.
- A good method is coarse-grained monotonic ordering. (A small sorting demo follows below.)
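The effect of ordering on DCT-domain sparsity can be illustrated with synthetic data. This is my stand-in demo, not the tRIBS data set: uniform random values play the role of the field, and the 1%-of-peak threshold is an arbitrary compressibility proxy; exact counts vary with the seed.

```python
import numpy as np
from scipy.fft import dct, idct

N, M = 512, 60
Psi = idct(np.eye(N), norm='ortho', axis=0)        # y = Psi @ alpha (IDCT basis)

rng = np.random.default_rng(1)
sensor_rows = rng.choice(N, size=M, replace=False) # M random sensor locations
Phi = np.zeros((M, N)); Phi[np.arange(M), sensor_rows] = 1.0  # one 1 per row

moisture = 40 + 30 * rng.random(N)                 # stand-in field (percent)
a_raw    = dct(moisture, norm='ortho')             # DCT of unsorted data
a_sorted = dct(np.sort(moisture), norm='ortho')    # monotonic ordering first

def n_significant(a, frac=0.01):
    """Count DCT coefficients above frac of the peak magnitude."""
    return int(np.sum(np.abs(a) > frac * np.abs(a).max()))

print(n_significant(a_raw), n_significant(a_sorted))  # sorting shrinks support
```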
15/05/2018 Coarse-Grained Monotonic Ordering Coarse-Grained Monotonic Ordering 9/26 10/26 Dependence of the results Coare-grained monotonic Coare-grained monotonic Exact ordering Exact ordering on nature of the field and ordering ordering possibility of well-sorting data in all conditions is out of scope of the project. We simply assume that values are sorted exactly. Although exact ordering is not necessary for sparsity in frequency domain. Ref: Wu, X., Wu, Y., Liu, M., & Zheng, L. (2011). In-Situ Soil Moisture Sensing: Efficient Random Sensor Placement and Field Estimation using Compressive Sensing . Paper presented at the 7th International Conference on Wireless Communications, Networking and Mobile Computing, Wuhan, China. Weighted 𝑚 � Norm Approximation Different Approximations for Solving CS Problem 11/26 12/26 Simple 𝑚 � norm is not a good choice in some Simple 𝑚 � norm Approximation: examples: 𝛽 � = arg min 𝛽 � subject to 𝑧 = ΦΨα � x=[0 1 0] � , Φ = 2 1 1 2 , then y= Φ x=[1 1] T Weighted 𝑚 � norm Approximation: 1 1 FOCUSS Algorithm Solution with 𝑚 � norm approximation: 𝐲 � =[1/3 0 1/3] T Weighted 𝑚 � norm Approximation: Orthogonal Matching Pursuit (OMP) Algorithm 𝑦 � = arg min W𝑦 � subject to 𝑧 = Φ𝑦 � 1 , 𝑦 � � ≠ 0 𝑥 � = � 𝑦 � � ∞, 𝑦 � � = 0 3
Weighted ℓ₁-Norm Approximation
- The weighting matrix depends on the (unknown) solution, so an iterative method is used:
  1. Set wᵢ⁽⁰⁾ = 1 for i = 1, …, n; set l = 0.
  2. Solve the weighted ℓ₁ minimization problem:
         x⁽ˡ⁾ = arg min ‖W⁽ˡ⁾x‖₁  subject to  z = Φx
  3. Update the weights:
         wᵢ⁽ˡ⁺¹⁾ = 1 / (|xᵢ⁽ˡ⁾| + ε)
  4. Terminate on convergence or when l reaches a specified number of iterations; otherwise, increment l and go to step 2.
- The value of ε in step 3 should be chosen slightly smaller than the expected nonzero magnitudes of x̂.

FOCUSS Algorithm
- Using the ℓ₂ norm:
      x̂ = arg min ‖x‖₂  subject to  z = Φx
- This problem has a unique closed-form solution:
      x̂ = Φ⁺z
  where Φ⁺ denotes the Moore–Penrose pseudoinverse.
- This solution is not appropriate for sparse signals, but weighted optimization can improve the results.

FOCUSS Algorithm
- FOcal Underdetermined System Solver (FOCUSS):
      x̂ = W · arg min ‖q‖₂  subject to  z = ΦWq
- An iterative algorithm (a sketch follows below):
  1. For initialization, set x₀ = Φ⁺z.
  2. Compute the weighting matrix: W_k = diag(x_{k−1}).
  3. Compute x_k = W_k (ΦW_k)⁺ z.
  4. Increment k and repeat steps 2 and 3 until convergence occurs.
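A minimal sketch of the FOCUSS iteration above, assuming the steps as stated on the slide; the small ε guard is my addition to keep zeroed weights from freezing the iteration, and the recovery check is probabilistic (it typically succeeds for a signal this sparse):

```python
import numpy as np

def focuss(Phi, z, n_iter=20, eps=1e-8):
    x = np.linalg.pinv(Phi) @ z                # step 1: minimum-l2 initialization
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)           # step 2: W_k = diag(x_{k-1})
        x = W @ (np.linalg.pinv(Phi @ W) @ z)  # step 3: x_k = W_k (Phi W_k)^+ z
    return x

# 2-sparse signal in R^30 from 10 random measurements.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((10, 30))
x_true = np.zeros(30); x_true[[3, 17]] = [2.0, -1.0]
print(np.round(focuss(Phi, Phi @ x_true), 3)[[3, 17]])  # typically [2, -1]
```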
Orthogonal Matching Pursuit (OMP) Algorithm
- OMP is a greedy, iterative algorithm. At each iteration, the column of Φ that is most strongly correlated with the remaining part of z is chosen; its contribution to z is then subtracted off, and the algorithm iterates on the residual.
- If the main signal is K-sparse, after K iterations the algorithm recovers it properly.

Orthogonal Matching Pursuit (OMP) Algorithm
1. Initialize the residual r₀ = z, the index set Λ₀ = ∅, the matrix of chosen atoms Φ₀ = ∅, and the iteration number t = 1.
2. Find the index λ_t by solving the following simple optimization problem:
       λ_t = arg max_{j=1,…,N} |⟨r_{t−1}, φ_j⟩|
3. Augment the index set and the matrix of chosen atoms:
       Λ_t = Λ_{t−1} ∪ {λ_t},   Φ_t = [Φ_{t−1}  φ_{λ_t}]
4. Solve a least-squares problem to obtain a new signal estimate:
       s_t = arg min_s ‖z − Φ_t s‖₂
5. Calculate the new approximation of the data and the new residual:
       a_t = Φ_t s_t,   r_t = z − a_t
6. Increment t and return to step 2 if t < K.
7. The estimate x̂ has nonzero entries at the indices listed in Λ_t; the value of x̂ at index λ_j equals the j-th component of s_t.
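A compact sketch of the recursion above (a generic illustration, not the thesis code); the RMSE printed at the end anticipates the comparison criterion introduced next:

```python
import numpy as np

def omp(Phi, z, K):
    """Recover a K-sparse signal from z = Phi @ y by greedy atom selection."""
    M, N = Phi.shape
    r, idx = z.copy(), []                                # step 1
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Phi.T @ r))))    # step 2: best atom
        s, *_ = np.linalg.lstsq(Phi[:, idx], z, rcond=None)  # step 4
        r = z - Phi[:, idx] @ s                          # step 5: new residual
    y = np.zeros(N); y[idx] = s                          # step 7: embed estimate
    return y

# 3-sparse signal in R^40 from 15 random measurements.
rng = np.random.default_rng(3)
Phi = rng.standard_normal((15, 40))
y_true = np.zeros(40); y_true[[5, 9, 20]] = [1.0, -2.0, 0.5]
y_hat = omp(Phi, Phi @ y_true, K=3)
print("RMSE:", np.sqrt(np.mean((y_hat - y_true) ** 2)))  # ~0 on success
```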
Comparison of Different Algorithms
- Comparison criteria:
  - RMSE:
        RMSE = √( (1/N) · Σᵢ (x̂ᵢ − xᵢ)² )
  - Recovery percent: the ratio of values that are recovered perfectly to the number of all values. A value is considered perfectly recovered if the error between the estimated value and the real value is below 1%.
  - Computational time.

Comparison of Different Algorithms
[Figure: recovery percent and RMSE vs. number of sensors (50–350) for the ℓ₁-norm, weighted ℓ₁-norm, FOCUSS, and OMP methods; estimated moisture data using 200 randomly placed sensors for (a) ℓ₁-norm, (b) weighted ℓ₁-norm, (c) FOCUSS, (d) OMP]
- FOCUSS is not a proper algorithm for this problem.
- The main difference between the algorithms appears when only a few sensors are used.
- The results depend on both the number and the location of the sensors.

Comparison of Different Algorithms

  Method              Computational time (s)
  ℓ₁-norm               76
  Weighted ℓ₁-norm     788
  FOCUSS                74
  OMP                   71

- Time is not critical in many real applications; still, it can be important in some situations, especially in very large-scale fields.
- In sum, OMP is the best method for most situations.

Sensor Placement
- Random sensor placement is not efficient when the number of sensors is not high enough.
- High variations cannot be estimated well by random sensor placement.
- One approach: divide the whole data set into clusters and allocate sensors to each cluster in proportion to its variance.

Novel Sensor Placement Algorithm
1. Place the first sensor randomly. Set k = 1.
2. Solve the following optimization problem:
       α̂ = arg min ‖α‖₁  subject to  z = ΦΨα,   ŷ = Ψα̂
3. Find the location which has the worst estimation:
       i = arg max_i |y(i) − ŷ(i)|
4. Place the next sensor at location i.
5. Increment k and go to step 2 if k < the number of sensors.
(A sketch of this loop follows after the results figure below.)

Comparison of the Results
[Figure: real vs. estimated moisture data (moisture percent vs. numbered index) using 70 sensors for three placement methods: (a) random, (b) clustered, (c) novel approach]
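A sketch of the greedy placement loop above. Assumptions worth flagging: it takes any CS solver with the signature `cs_solve(Phi_eff, z) -> alpha_hat` (e.g., `l1_recover` from the earlier sketch), and it needs the full field y to score the worst location, so it is an offline design tool for the simulation setting, not an online estimator.

```python
import numpy as np
from scipy.fft import idct

def place_sensors(y, n_sensors, cs_solve):
    """Greedy placement: y is the full (sorted) field; returns sensor indices."""
    N = len(y)
    Psi = idct(np.eye(N), norm='ortho', axis=0)      # IDCT basis, y = Psi @ alpha
    rng = np.random.default_rng(0)
    locs = [int(rng.integers(N))]                    # step 1: first sensor random
    while len(locs) < n_sensors:                     # step 5: loop until budget
        Phi = np.zeros((len(locs), N))
        Phi[np.arange(len(locs)), locs] = 1.0        # one 1 per placed sensor
        alpha = cs_solve(Phi @ Psi, Phi @ y)         # step 2: solve CS problem
        y_hat = Psi @ alpha
        locs.append(int(np.argmax(np.abs(y - y_hat))))  # steps 3-4: worst point
    return locs

# Hypothetical usage (heavy: one LP per added sensor):
# locs = place_sensors(np.sort(moisture), 70, l1_recover)
```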