This reduces to a generalized eigenvalue problem, i.e. to finding - PowerPoint PPT Presentation


  1.  This reduces to a generalized eigenvalue problem, i.e. to finding generalized eigenvectors of the following form, with the lowest eigenvalues: Ly = λDy.  This technique is called “Laplacian Eigenmaps” since the matrix L is called the (graph) Laplacian matrix, which is commonly used in spectral graph theory.  This technique is due to Belkin and Niyogi – their paper “Laplacian Eigenmaps for Dimensionality Reduction and Data Representation” (Neural Computation, 2003).
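The generalized eigenvalue step above can be sketched in a few lines (a minimal illustration assuming NumPy and SciPy; the toy affinity matrix and the function name are ours, not from the slides):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(W, dim):
    """Embed points into `dim` dimensions from a symmetric affinity matrix W.

    Solves the generalized eigenvalue problem L y = lambda D y and keeps the
    eigenvectors with the smallest nonzero eigenvalues (the eigenvector for
    eigenvalue 0 is the trivial constant vector, so it is discarded).
    """
    D = np.diag(W.sum(axis=1))    # degree matrix
    L = D - W                     # (unnormalized) graph Laplacian
    # eigh solves the symmetric generalized problem; eigenvalues come ascending
    eigvals, eigvecs = eigh(L, D)
    return eigvecs[:, 1:dim + 1]  # skip the trivial constant eigenvector

# Toy example: a 4-node path graph embedded into 1D; the embedding
# coordinates preserve the ordering of the nodes along the path.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = laplacian_eigenmaps(W, 1)
```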

  2. [Image not recoverable; source: https://people.cs.pitt.edu/~milos/courses/cs3750/lectures/class17.pdf]  On this slide, N refers to the number of nearest neighbors per point (the other distances are set to infinity). The parameter σ for the Gaussian kernel needs to be selected carefully, especially if N is high.

  3.  In these applications, the projections are extremely noisy.  Prior to the ordering algorithm, a denoising step is required to prevent erroneous ordering.  The denoising step can be performed using Principal Components Analysis (PCA).

  4.  Repeat steps 1 to 6 for all p x 1 patches from image J (in a sliding window fashion).  Since we take overlapping patches, any given pixel will be covered by multiple patches (as many as p different patches).  Reconstruct the final projection by averaging the output values that appear at any pixel.
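The overlap-and-average reconstruction described above can be sketched as follows (an illustrative NumPy implementation in a 1D setting; the function name and toy check are ours, not from the slides):

```python
import numpy as np

def average_overlapping_patches(patches, positions, length, p):
    """Reconstruct a 1D signal from overlapping p x 1 patches.

    Each pixel is covered by up to p different patches; the final value at a
    pixel is the average of all the patch values that land on it.
    """
    out = np.zeros(length)
    count = np.zeros(length)
    for patch, pos in zip(patches, positions):
        out[pos:pos + p] += patch     # accumulate every patch's contribution
        count[pos:pos + p] += 1       # how many patches cover each pixel
    return out / np.maximum(count, 1)

# Toy check: patches cut from a clean signal average back to the signal
signal = np.arange(10.0)
patches = [signal[i:i + 3] for i in range(8)]   # sliding-window 3 x 1 patches
recon = average_overlapping_patches(patches, range(8), 10, 3)
```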

  5.  Assuming a Gaussian noise model, the noise variance can be semi-automatically estimated from the regions in the projections which are completely blank (modulo noise).  An algorithm similar to the patch-based PCA denoising (seen in CS 663 last semester) can be employed.  Given a small patch in a projection vector, we can find small patches similar to it at spatially distant locations within the same projection vector or in projection vectors acquired from other angles.

  6.  Setting the derivative of E[(α − kβ)²] w.r.t. k to 0 (where β = α + η is the noisy eigen-coefficient and η is noise with E(η²) = σ², independent of α), we get k = E(α²) / (E(α²) + σ²).  How should we estimate E(α²)?  Recall: since we are dealing with L similar patches, we can assume (approximately) that the l-th eigen-coefficient of each of those L patches is very similar, so E((β_i^l)²) = E((α_i^l)²) + σ², and hence E((α^l)²) ≈ (1/L) Σ_{i=1}^{L} (β_i^l)² − σ².  This may be a negative value, so we set E((α^l)²) = max(0, (1/L) Σ_{i=1}^{L} (β_i^l)² − σ²).
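The shrinkage rule above can be sketched as follows (a minimal NumPy sketch; the array layout and function name are our assumptions):

```python
import numpy as np

def shrink_eigencoefficients(beta, sigma):
    """Wiener-style shrinkage of noisy eigen-coefficients.

    beta: (L, K) array; row i holds the K eigen-coefficients of the i-th of
    L mutually similar noisy patches.  sigma: noise standard deviation.
    Returns the denoised coefficients alpha_hat = k * beta.
    """
    # E[(alpha^l)^2] estimated from the L similar patches; clipped at zero
    # because the empirical mean of beta^2 minus sigma^2 can be negative.
    e_alpha2 = np.maximum(0.0, np.mean(beta ** 2, axis=0) - sigma ** 2)
    k = e_alpha2 / (e_alpha2 + sigma ** 2)   # shrinkage factor per coefficient
    return beta * k

beta = np.array([[1.0, 2.0],
                 [1.2, 1.8]])
clean = shrink_eigencoefficients(beta, 0.0)     # no noise: coefficients kept
heavy = shrink_eigencoefficients(beta, 100.0)   # huge noise: shrunk toward 0
```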

  7.  In this case, the image f is in 3D and each tomographic projection is in 2D.  By the Fourier slice theorem, the 2D Fourier transform of a 2D projection in direction d is equal to the central slice through the 3D Fourier transform of f.  But in 3D we actually have some more information – which is absent in 2D! Ajit Rajwade

  8.  The planes of projection in directions d_i and d_j will intersect in a common line c_ij.  Hence their corresponding central slices through the Fourier volume will also have a common line.  The common line can be determined by a search in the Fourier space!

  9. [Figure: central Fourier planes corresponding to directions d_i and d_j, with r_i = F(R_di f) and r_j = F(R_dj f); the intersection of the planes in the Fourier space is the common line.]  The directions of projection are unknown, but the common line can be found by searching over pairs of directions in the Fourier space and finding the best match.

  10.  Consider the following equation: R_i c_ij = b_ij = (cos φ_ij, sin φ_ij, 0)^T, where φ_ij is the angle made by c_ij with the local X axis, c_ij is the common line expressed in the global coordinate system, and b_ij is the common line expressed in the local coordinate system of the plane of projection in direction d_i.  The angles are obtained by matching the Fourier slices: (φ_ij, φ_ji) = argmin_{φ, φ'} || r_i(φ) − r_j(φ') ||².

  11.  We see that R_i c_ij = b_ij = (cos φ_ij, sin φ_ij, 0)^T, and hence ⟨b_ij, b_ik⟩ = ⟨R_i c_ij, R_i c_ik⟩ = ⟨c_ij, c_ik⟩ = cos(φ_ij − φ_ik).  Consider the common lines c_12, c_23, c_31.  Stack them as the rows of C = (c_12, c_23, c_31)^T and let M = C C^T, so that

      M = [ 1                  cos(φ_21 − φ_23)   cos(φ_12 − φ_13) ]
          [ cos(φ_21 − φ_23)   1                  cos(φ_31 − φ_32) ]
          [ cos(φ_12 − φ_13)   cos(φ_31 − φ_32)   1                ]

  12.  The matrix C is rank 3 in the case that the common lines are not coplanar.  M = C C^T is positive semi-definite with unit diagonal entries.  To obtain C from M, take the eigendecomposition M = U D U^T and set C = U D^0.5 (up to a rotation/reflection ambiguity).
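The factorization of M can be sketched as follows (a NumPy illustration; the function name and the random sanity check are our assumptions):

```python
import numpy as np

def factor_gram(M):
    """Recover C from M = C C^T (up to an orthogonal ambiguity).

    Uses the eigendecomposition M = U diag(d) U^T and returns
    C = U diag(sqrt(d)).
    """
    d, U = np.linalg.eigh(M)
    d = np.clip(d, 0.0, None)   # guard against tiny negative eigenvalues
    return U @ np.diag(np.sqrt(d))

# Sanity check: the recovered factor reproduces the Gram matrix exactly,
# even though C itself is only determined up to rotation/reflection.
rng = np.random.default_rng(2)
C = rng.standard_normal((3, 3))
M = C @ C.T
C_hat = factor_gram(M)
```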

  13.  We have the following relation: R_i C_i = B_i, where C_i is a 3 × (Q−1) matrix of the common lines involving the i-th projection, expressed in the global coordinate system, and B_i is a 3 × (Q−1) matrix of the same common lines expressed in the local coordinate system of the i-th plane.

  14.  To estimate R_i, minimize E(R_i) = || R_i C_i − B_i ||_F² such that R_i R_i^T = I.  Solution: consider the SVD B_i C_i^T = U S V^T; then R_i = U V^T.  See class handout for more details of this solution!
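This orthogonal-Procrustes solution can be sketched as follows (a NumPy illustration; the function name and the synthetic sanity check are ours, not from the slides):

```python
import numpy as np

def estimate_rotation(C_i, B_i):
    """Orthogonal Procrustes: the R minimizing ||R C_i - B_i||_F subject to
    R R^T = I is obtained from the SVD of B_i C_i^T = U S V^T as R = U V^T."""
    U, _, Vt = np.linalg.svd(B_i @ C_i.T)
    return U @ Vt

# Sanity check: recover a known orthogonal matrix from 5 synthetic
# common-line directions expressed in global (C) and local (B) coordinates.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # plays the role of R_i
C = rng.standard_normal((3, 5))                   # lines, global coordinates
B = Q @ C                                         # same lines, local coords
R_hat = estimate_rotation(C, B)
```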

  15.  For all pairs of directions (d_i, d_j), find the angle φ_ij.  For the i-th direction, assemble the matrix B_i and the matrix M_i using the knowledge of the angles φ_ij.  Use eigenvalue decomposition on M_i to get C_i.  Use the SVD method to obtain R_i from C_i and B_i.  Repeat this for all i.  That gives us the directions of tomographic projection!

  16. Ref: Candes, Romberg and Tao, “Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information”, IEEE Transactions on Information Theory, Feb 2006.  Consider a piecewise constant signal f (having length n).  Suppose coefficients of f are measured through the measurement matrix Φ, yielding a vector y of only m << n measurements.  If m ≥ C · |T| · log(n), where |T| is the number of jumps in f and C is some constant, then with overwhelming probability the solution to the following problem is exact: min_f TV(f) such that y = Φf, where y ∈ R^m, Φ ∈ R^{m×n}, f ∈ R^n, m << n.

  17.  Define the vector h as follows: h(x) = f(x) − f(x − 1).  Then H(u) = (1 − e^{−j2πu/N}) F(u).  So we will do the following: h* = argmin_h ||h||_1 such that A h = y, where A = I_C D^{−1} Ψ; here Ψ is the DFT matrix (so Ψh gives the Fourier coefficients of h), D is the diagonal matrix for which D(u, u) = 1 − e^{−j2πu/N}, and I_C is the matrix of size |C| × N which selects only those frequency coefficients belonging to the set C.
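The DFT relation between f and its difference signal h can be checked numerically (a NumPy sketch with a circular difference; the random signal and names are ours):

```python
import numpy as np

# Verify the relation between a length-N signal f and its first-difference
# signal h(x) = f(x) - f(x-1) (taken circularly):
#     H(u) = (1 - exp(-j 2 pi u / N)) F(u)
N = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(N)
h = f - np.roll(f, 1)                  # circular first difference
u = np.arange(N)
D = 1.0 - np.exp(-2j * np.pi * u / N)  # diagonal of the difference filter
F = np.fft.fft(f)
H = np.fft.fft(h)
# By the shift theorem, H equals D * F element-wise.
```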

  18.  Consider Σ = diag(λ_1, λ_2, ..., λ_n) and Γ = ΦV, with rows γ_1, γ_2, ..., γ_m.  We minimize w.r.t. t_j the Frobenius norm || E_j − t_j t_j^T ||_F², where E_j is the residual left after subtracting the contributions Σ_{i≠j} t_i t_i^T of the other rows, and t_i = (γ_{i,1} λ_1, γ_{i,2} λ_2, ..., γ_{i,n} λ_n)^T.  t_j t_j^T is a rank one matrix: we want a rank one matrix that approximates E_j as closely as possible in the Frobenius sense.  The solution lies in SVD!

  19.  This method does not directly target Φ but instead considers the Gram matrix D^T D, where D = ΦΨ with all columns unit-normalized.  The aim is to design Φ in such a way that the Gram matrix resembles the identity matrix as much as possible; in other words, we want Ψ^T Φ^T Φ Ψ ≈ I.
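Forming this normalized Gram matrix can be sketched as follows (a NumPy illustration with random Φ and Ψ; the sizes and names are our assumptions):

```python
import numpy as np

def normalized_gram(Phi, Psi):
    """Gram matrix of the effective dictionary D = Phi @ Psi, after
    unit-normalizing the columns of D; the design goal is to make this
    matrix as close to the identity as possible."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)   # unit-normalize each column
    return D.T @ D

rng = np.random.default_rng(3)
Phi = rng.standard_normal((20, 64))   # sensing matrix, m = 20 << n = 64
Psi = rng.standard_normal((64, 64))   # sparsifying dictionary
G = normalized_gram(Phi, Psi)
# The largest off-diagonal magnitude of G is the mutual coherence of D;
# designing Phi amounts to pushing these off-diagonal entries toward zero.
```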

  20.  We seek the rank one matrix t_j t_j^T minimizing || E_j − t_j t_j^T ||_F², where E_j is the residual after subtracting Σ_{i≠j} t_i t_i^T and t_i = (γ_{i,1} λ_1, γ_{i,2} λ_2, ..., γ_{i,n} λ_n)^T.  We want a rank one matrix that approximates E_j as closely as possible in the Frobenius sense.  The solution lies in SVD: write E_j = U S U^T = Σ_k S_kk u_k u_k^T.  Assuming that S_11 is the largest singular value, t_j = √S_11 · u_1.
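The SVD-based rank one approximation can be sketched as follows (a NumPy illustration for a symmetric positive semi-definite E_j; the function name and sanity check are ours):

```python
import numpy as np

def best_rank_one_symmetric(E):
    """Best rank-one approximation t t^T of a symmetric PSD matrix E in the
    Frobenius sense: t = sqrt(S_11) * u_1 from the leading SVD component."""
    U, S, _ = np.linalg.svd(E)
    return np.sqrt(S[0]) * U[:, 0]   # S is sorted descending, so S[0] = S_11

# Sanity check: for a matrix that is already rank one, t t^T reproduces it
v = np.array([1.0, 2.0, -1.0])
E = np.outer(v, v)
t = best_rank_one_symmetric(E)
```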
