Some essential linear algebra

◮ V : a complex (or real) vector space
  $\langle u \mid v \rangle$ : an inner (scalar) product on V
  $\|u\| = \sqrt{\langle u \mid u \rangle}$ : the norm (length)
  $d(u, v) = \|u - v\|$ : the distance (metric) induced by the inner product
  $\langle u \mid v \rangle\, v$ : the projection of u onto the line defined by v (if $\|v\| = 1$)

◮ Important properties (inequalities)
  1. Cauchy-Schwarz: $|\langle u \mid v \rangle| \le \|u\| \cdot \|v\|$
  2. Minkowski: $\|u + v\| \le \|u\| + \|v\|$
  3. Parallelogram law: $\|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2$

◮ Definition: A family of vectors $E = \{e_1, e_2, \dots\}$ is
  ◮ an orthogonal system (OS) in V if $\langle e_k \mid e_\ell \rangle = 0$ for $k \ne \ell$
  ◮ an orthonormal system (ONS) in V if $\langle e_k \mid e_\ell \rangle = \delta_{k,\ell}$ for all $k, \ell$
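These properties are easy to sanity-check numerically. Below is a minimal numpy sketch (not part of the original slides); the dimension, seed, and tolerances are arbitrary, and the inner product follows the slides' convention of conjugating the second argument.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=8) + 1j * rng.normal(size=8)   # two arbitrary complex vectors
v = rng.normal(size=8) + 1j * rng.normal(size=8)

def ip(a, b):
    """Inner product <a|b> with conjugation on the second argument."""
    return np.sum(a * np.conj(b))

def norm(a):
    return np.sqrt(ip(a, a).real)

# 1. Cauchy-Schwarz: |<u|v>| <= ||u|| * ||v||
assert abs(ip(u, v)) <= norm(u) * norm(v) + 1e-12
# 2. Minkowski (triangle inequality): ||u + v|| <= ||u|| + ||v||
assert norm(u + v) <= norm(u) + norm(v) + 1e-12
# 3. Parallelogram law: ||u + v||^2 + ||u - v||^2 = 2||u||^2 + 2||v||^2
assert np.isclose(norm(u + v)**2 + norm(u - v)**2, 2 * norm(u)**2 + 2 * norm(v)**2)
```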
The finite-dimensional case

◮ For V finite-dimensional with orthonormal basis $E = \{e_1, e_2, \dots, e_n\}$, the standard inner product is given by
  $\langle u \mid v \rangle = \bigl\langle \sum_{k=1}^n u_k e_k \mid \sum_{\ell=1}^n v_\ell e_\ell \bigr\rangle = \sum_{k=1}^n \sum_{\ell=1}^n u_k \overline{v_\ell}\, \langle e_k \mid e_\ell \rangle = \sum_{k=1}^n u_k \overline{v_k}$

◮ In terms of E one has
  $u_k = \langle u \mid e_k \rangle$, i.e. $u = \sum_{k=1}^n \langle u \mid e_k \rangle\, e_k$   (expansion in the basis E)
  $\langle u \mid v \rangle = \sum_{k=1}^n \langle u \mid e_k \rangle \langle e_k \mid v \rangle$   (inner product)
  $\|u\|^2 = \sum_{k=1}^n |\langle u \mid e_k \rangle|^2$   (norm, length)

◮ Geometrically: $u_k\, e_k = \langle u \mid e_k \rangle\, e_k$ is the projection of u onto the line defined by $e_k$
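A short numerical illustration of these formulas (not from the slides): an orthonormal basis of C^n is taken as the columns of a unitary matrix obtained from a QR factorization, and the coefficients, the expansion, and the norm identity are checked.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# An orthonormal basis {e_1, ..., e_n} of C^n: columns of a unitary matrix from QR
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
u = rng.normal(size=n) + 1j * rng.normal(size=n)

# Coefficients u_k = <u|e_k>  (np.vdot conjugates its first argument)
coeffs = np.array([np.vdot(Q[:, k], u) for k in range(n)])

# Expansion: u = sum_k <u|e_k> e_k
assert np.allclose(sum(coeffs[k] * Q[:, k] for k in range(n)), u)

# Norm: ||u||^2 = sum_k |<u|e_k>|^2
assert np.isclose(np.vdot(u, u).real, np.sum(np.abs(coeffs)**2))
```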
Change of basis

◮ If $F = \{f_1, f_2, \dots, f_n\}$ is another ONS of V w.r.t. $\langle\,.\mid.\,\rangle$, then
  $f_k = \sum_{j=1}^n \langle f_k \mid e_j \rangle\, e_j$  ($1 \le k \le n$)   and   $e_j = \sum_{k=1}^n \langle e_j \mid f_k \rangle\, f_k$  ($1 \le j \le n$)

◮ Here $U = \bigl[\langle e_j \mid f_k \rangle\bigr]_{1 \le j,k \le n}$ is a unitary matrix, i.e.
  $U^{-1} = \bigl[\langle f_k \mid e_j \rangle\bigr]_{1 \le k,j \le n} = U^\dagger$,
  where $U^\dagger$ is the conjugate transpose of U (also called the adjoint)

◮ Transformation of the coefficients
  $\langle u \mid e_j \rangle = \sum_{k=1}^n \langle u \mid f_k \rangle \langle f_k \mid e_j \rangle$   ($1 \le j \le n$)
  $\langle u \mid f_k \rangle = \sum_{j=1}^n \langle u \mid e_j \rangle \langle e_j \mid f_k \rangle$   ($1 \le k \le n$)
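A sketch of the coefficient transformation, again with numpy and two orthonormal bases built from QR factorizations (all names and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Two orthonormal bases E and F of C^n (columns of unitary matrices)
E, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
F, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
u = rng.normal(size=n) + 1j * rng.normal(size=n)

# U[j, k] = <e_j|f_k>, conjugation on the second argument
U = np.array([[np.vdot(F[:, k], E[:, j]) for k in range(n)] for j in range(n)])
assert np.allclose(U @ U.conj().T, np.eye(n))       # U is unitary: U^{-1} = U†

coeff_E = np.array([np.vdot(E[:, j], u) for j in range(n)])   # <u|e_j>
coeff_F = np.array([np.vdot(F[:, k], u) for k in range(n)])   # <u|f_k>

# <u|f_k> = sum_j <u|e_j> <e_j|f_k>
assert np.allclose(coeff_F, coeff_E @ U)
```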
Very important example: the Discrete Fourier Transform

◮ $V = \mathbb{C}^N$ with its usual inner product

◮ the standard basis $E_N$:
  $e_j = (0, \dots, 0, 1, 0, \dots, 0)^t$   ($0 \le j < N$)

◮ the DFT basis $F_N$, with $\omega_N = e^{2\pi i / N}$:
  $f_j = \frac{1}{\sqrt{N}} \bigl(1,\ \omega_N^j,\ (\omega_N^j)^2,\ \dots,\ (\omega_N^j)^{N-1}\bigr)^t
       = \frac{1}{\sqrt{N}} \bigl(\omega_N^{j \cdot 0},\ \omega_N^{j \cdot 1},\ \omega_N^{j \cdot 2},\ \dots,\ \omega_N^{j \cdot (N-1)}\bigr)^t$   ($0 \le j < N$)

◮ the DFT matrix $U_N$ and its inverse
  $U_N = \frac{1}{\sqrt{N}} \bigl[\omega_N^{j \cdot k}\bigr]_{0 \le j,k < N}$,
  $U_N^{-1} = \frac{1}{\sqrt{N}} \bigl[\omega_N^{-j \cdot k}\bigr]_{0 \le j,k < N}$
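A small numpy sketch constructing $U_N$ as defined above and checking that it is unitary; the comparison with numpy's FFT (which uses the opposite sign convention and no $1/\sqrt{N}$ normalization) is an extra consistency check, not part of the slides.

```python
import numpy as np

def dft_matrix(N):
    """Unitary DFT matrix U_N = (1/sqrt(N)) [omega_N^(j*k)] with omega_N = exp(2*pi*i/N)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

N = 8
U = dft_matrix(N)

# U_N is unitary: its inverse is (1/sqrt(N)) [omega_N^(-j*k)] = conj(U_N) = U_N†
assert np.allclose(U @ U.conj().T, np.eye(N))

# Relation to numpy's FFT: np.fft.fft uses exp(-2*pi*i*j*k/N) and no normalization
v = np.random.default_rng(3).normal(size=N)
assert np.allclose(np.sqrt(N) * (U.conj() @ v), np.fft.fft(v))
```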
DFT_4 and DFT_6

◮ DFT_4
  $U_4 = \frac{1}{2}\bigl[i^{j \cdot k}\bigr]_{0 \le j,k < 4}
       = \frac{1}{2}\begin{pmatrix} i^0 & i^0 & i^0 & i^0 \\ i^0 & i^1 & i^2 & i^3 \\ i^0 & i^2 & i^4 & i^6 \\ i^0 & i^3 & i^6 & i^9 \end{pmatrix}
       = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix}$

◮ DFT_6, with $\omega_6 = e^{2\pi i/6}$
  $U_6 = \frac{1}{\sqrt{6}}\bigl[\omega_6^{k \cdot \ell}\bigr]_{0 \le k,\ell < 6}
       = \frac{1}{\sqrt{6}}\begin{pmatrix}
         \omega_6^0 & \omega_6^0 & \omega_6^0 & \omega_6^0 & \omega_6^0 & \omega_6^0 \\
         \omega_6^0 & \omega_6^1 & \omega_6^2 & \omega_6^3 & \omega_6^4 & \omega_6^5 \\
         \omega_6^0 & \omega_6^2 & \omega_6^4 & \omega_6^0 & \omega_6^2 & \omega_6^4 \\
         \omega_6^0 & \omega_6^3 & \omega_6^0 & \omega_6^3 & \omega_6^0 & \omega_6^3 \\
         \omega_6^0 & \omega_6^4 & \omega_6^2 & \omega_6^0 & \omega_6^4 & \omega_6^2 \\
         \omega_6^0 & \omega_6^5 & \omega_6^4 & \omega_6^3 & \omega_6^2 & \omega_6^1
       \end{pmatrix}$
  where the powers of $\omega_6$ are
  $\omega_6^0 = 1,\ \omega_6^1 = \tfrac{1 + i\sqrt{3}}{2},\ \omega_6^2 = \tfrac{-1 + i\sqrt{3}}{2},\ \omega_6^3 = -1,\ \omega_6^4 = \tfrac{-1 - i\sqrt{3}}{2},\ \omega_6^5 = \tfrac{1 - i\sqrt{3}}{2}$
DFT_7

◮ DFT_7
  $U_7 = \frac{1}{\sqrt{7}}\bigl[\omega_7^{j \cdot k}\bigr]_{0 \le j,k < 7} \approx
  \begin{pmatrix}
    0.378 & 0.378 & 0.378 & \cdots & 0.378 \\
    0.378 & 0.236 + 0.296i & -0.084 + 0.368i & \cdots & 0.236 - 0.296i \\
    0.378 & -0.084 + 0.368i & -0.341 - 0.164i & \cdots & -0.084 - 0.368i \\
    0.378 & -0.341 + 0.164i & 0.236 - 0.296i & \cdots & -0.341 - 0.164i \\
    0.378 & -0.341 - 0.164i & 0.236 + 0.296i & \cdots & -0.341 + 0.164i \\
    0.378 & -0.084 - 0.368i & -0.341 + 0.164i & \cdots & -0.084 + 0.368i \\
    0.378 & 0.236 - 0.296i & -0.084 - 0.368i & \cdots & 0.236 + 0.296i
  \end{pmatrix}$
  with
  $\omega_7 = e^{2\pi i/7} = 0.62349\ldots + 0.781831\ldots i$   and   $\frac{1}{\sqrt{7}}\,\omega_7 = \frac{1}{\sqrt{7}}\,e^{2\pi i/7} = 0.235657\ldots + 0.295505\ldots i$
Orthogonal transforms

Other important orthogonal transforms used in image processing:
◮ DCT: Discrete Cosine Transform
◮ HWT: Hadamard-Walsh Transform
◮ KLT: Karhunen-Loève Transform
◮ DWT: Discrete Wavelet Transform
Optimal approximation: the projection theorem

◮ Theorem
  V : a vector space with inner product $\langle\,.\mid.\,\rangle$ and norm $\|.\|$
  U : a finite-dimensional subspace of V
  $\{e_1, e_2, \dots, e_n\}$ : an orthonormal basis of U

  Then: for each $v \in V$ there exists a unique element $u_v \in U$ which minimizes the distance $d(v, u) = \|v - u\|$ over $u \in U$. This element is the orthogonal projection of v onto U,
  $u_v = \sum_{k=1}^n \langle v \mid e_k \rangle\, e_k$,   (∗)
  and the decomposition
  $v = \underbrace{(v - u_v)}_{\in\, U^\perp} + \underbrace{u_v}_{\in\, U}$
  is unique.
Optimal approximation: the projection theorem

◮ Proof. Define $u_v$ as in (∗). Then for $1 \le \ell \le n$
  $\langle v - u_v \mid e_\ell \rangle = \bigl\langle v - \textstyle\sum_{k=1}^n \langle v \mid e_k \rangle e_k \,\big|\, e_\ell \bigr\rangle = \langle v \mid e_\ell \rangle - \textstyle\sum_{k=1}^n \langle v \mid e_k \rangle \langle e_k \mid e_\ell \rangle = 0$,
  that is: $v - u_v \in U^\perp$.
  If $u \in U$ is any element, then $u_v - u \in U$, hence $\langle v - u_v \mid u_v - u \rangle = 0$.
  But then (Pythagoras!)
  $\|v - u\|^2 = \|v - u_v\|^2 + \|u_v - u\|^2 \ge \|v - u_v\|^2$,
  with equality if and only if $u = u_v$.  □
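The theorem is easy to illustrate numerically. Below is a minimal sketch (not from the slides): it builds an orthonormal basis of a random 3-dimensional subspace U of C^6, forms the projection (∗), and checks both the orthogonality of v − u_v to U and the minimality against random competitors in U.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n = 6, 3
# Orthonormal basis {e_1, ..., e_n} of a subspace U of C^dim (QR of random vectors)
E, _ = np.linalg.qr(rng.normal(size=(dim, n)) + 1j * rng.normal(size=(dim, n)))
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# Orthogonal projection (*): u_v = sum_k <v|e_k> e_k
u_v = sum(np.vdot(E[:, k], v) * E[:, k] for k in range(n))

# v - u_v is orthogonal to U
assert np.allclose([np.vdot(E[:, k], v - u_v) for k in range(n)], 0)

# u_v minimizes ||v - u|| over u in U (checked against random elements of U)
for _ in range(100):
    u = E @ (rng.normal(size=n) + 1j * rng.normal(size=n))
    assert np.linalg.norm(v - u_v) <= np.linalg.norm(v - u) + 1e-12
```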
Another important consequence (same scenario as before)

◮ Bessel's inequality
  For $v \in V$ and any $N \ge 0$, with $v_N = \sum_{k=1}^N \langle v \mid e_k \rangle\, e_k$ one has
  $\|v_N\|^2 = \sum_{k=1}^N |\langle v \mid e_k \rangle|^2 \le \|v\|^2$,
  because $v - v_N \perp \{e_1, \dots, e_N\}$
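A quick numerical check of Bessel's inequality (assumed setup: an ONS given by a few orthonormal columns from a QR factorization, so it does not span the whole space):

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n = 10, 4    # ONS of n < dim vectors, so it is not a full basis
E, _ = np.linalg.qr(rng.normal(size=(dim, n)) + 1j * rng.normal(size=(dim, n)))
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)

coeffs = np.array([np.vdot(E[:, k], v) for k in range(n)])   # <v|e_k>
partial_sums = np.cumsum(np.abs(coeffs)**2)                  # ||v_N||^2 for N = 1..n

# Bessel: every partial sum is bounded by ||v||^2
assert np.all(partial_sums <= np.vdot(v, v).real + 1e-12)
```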
What is a Hilbert space?

◮ H : vector space with scalar product $\langle\,.\mid.\,\rangle$ and norm $\|.\|$
  $E = \{e_0, e_1, \dots\} = \{e_n\}_{n \in \mathbb{N}}$ : an ONS in H
  F : the subspace of all finite linear combinations of elements of E

◮ Theorem: The following properties are equivalent
  1. For all $u \in H$, if $u_N = \sum_{k=0}^N \langle u \mid e_k \rangle e_k$, then
     $\lim_{N \to \infty} \|u - u_N\| = 0$.
     This is written as $u = \sum_{k=0}^\infty \langle u \mid e_k \rangle e_k$
  2. For all $u, v \in H$:
     $\langle u \mid v \rangle = \sum_{k=0}^\infty \langle u \mid e_k \rangle \langle e_k \mid v \rangle$
  3. For all $u \in H$:
     $\|u\|^2 = \sum_{k=0}^\infty |\langle u \mid e_k \rangle|^2$
What is a Hilbert space?

◮ Theorem (ctd.)
  4. For all $u \in H$: if $\langle u \mid e_k \rangle = 0$ for all $k \in \mathbb{N}$, then $u = 0$
  5. F is dense in H, i.e. for any $u \in H$ and $\varepsilon > 0$ there is an $f \in F$ such that $\|u - f\| < \varepsilon$
  6. $F^\perp = \{0\}$

  If these properties hold, H is called a (separable) Hilbert space, and E is a Hilbert basis of H

◮ Examples are the spaces $\ell^2$, $L^2([0,a))$, $L^2(\mathbb{R})$ of square-summable sequences and square-integrable functions
The examples

◮ $\ell^2$, the space of square-summable sequences, has (among others) the Hilbert basis of "unit vectors"
  $\delta_k = (\delta_{k,j})_{j \in \mathbb{Z}}$   ($k \in \mathbb{Z}$)

◮ $L^2([0,a))$, the space of square-integrable functions over a finite interval $[0,a)$, has (among others) the Hilbert basis of complex exponentials
  $\omega_k(t) = \frac{1}{\sqrt{a}}\, e^{2\pi i k t / a}$   ($k \in \mathbb{Z}$)
  or the basis of real harmonics $\sqrt{2/a}\,\cos(2\pi k t / a)$ ($k \ge 1$) and $\sqrt{2/a}\,\sin(2\pi \ell t / a)$ ($\ell \ge 1$), together with the constant function $1/\sqrt{a}$

◮ A Hilbert basis of the space $L^2(\mathbb{R})$ of square-integrable functions over $\mathbb{R}$ is not obvious! Such bases will appear naturally in wavelet theory!

◮ From an algebraic point of view all these spaces are "the same" (i.e., they are isomorphic)
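As a concrete illustration of property 1 (norm convergence of the generalized Fourier expansion) in $L^2([0,a))$, here is a rough numerical sketch. It approximates the L² inner product by a Riemann sum; the interval length a, the square-wave test function, and the grid size are arbitrary choices and not from the slides.

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, a, 4000, endpoint=False)    # grid on [0, a)
dt = t[1] - t[0]
u = np.where(t < a / 2, 1.0, -1.0)               # square wave as test function

def ip(f, g):
    """Riemann-sum approximation of the L^2([0,a)) inner product <f|g>."""
    return np.sum(f * np.conj(g)) * dt

def partial_sum(u, K):
    """Truncated expansion sum_{|k|<=K} <u|w_k> w_k with w_k(t) = exp(2*pi*i*k*t/a)/sqrt(a)."""
    s = np.zeros(len(t), dtype=complex)
    for k in range(-K, K + 1):
        w_k = np.exp(2j * np.pi * k * t / a) / np.sqrt(a)
        s += ip(u, w_k) * w_k
    return s

for K in (1, 5, 25, 125):
    err = np.sqrt(ip(u - partial_sum(u, K), u - partial_sum(u, K)).real)
    print(f"K = {K:4d}   ||u - u_K|| = {err:.4f}")   # decreases as K grows
```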
Computing in Hilbert bases

◮ If $E = \{e_k\}_{k \in \mathbb{N}}$ is a Hilbert basis of H, then for $u, v \in H$
  1. generalized Fourier expansion:
     $u = \sum_{k \in \mathbb{N}} \langle u \mid e_k \rangle\, e_k$
  2. inner product:
     $\langle u \mid v \rangle = \sum_{k \in \mathbb{N}} \langle u \mid e_k \rangle \langle e_k \mid v \rangle$
  3. norm (length, energy):
     $\|u\|^2 = \sum_{k \in \mathbb{N}} |\langle u \mid e_k \rangle|^2$

  ... The best of all possible worlds ...
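Continuing the numerical $L^2([0,a))$ sketch above, rules 2 and 3 say that inner products and norms can be computed from the Fourier coefficients alone (up to truncation error here, since only finitely many coefficients are used); the test functions and cutoff K are again arbitrary assumptions.

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, a, 4000, endpoint=False)
dt = t[1] - t[0]
ip = lambda f, g: np.sum(f * np.conj(g)) * dt     # Riemann-sum L^2 inner product

u = np.where(t < a / 2, 1.0, -1.0)                # square wave
v = t * (a - t)                                   # a smooth test function

K = 200
ks = np.arange(-K, K + 1)
basis = np.exp(2j * np.pi * np.outer(ks, t) / a) / np.sqrt(a)   # rows are w_k(t)
cu = np.array([ip(u, w) for w in basis])          # <u|w_k>
cv = np.array([ip(v, w) for w in basis])          # <v|w_k>

# 2. inner product from coefficients: <u|v> = sum_k <u|w_k> <w_k|v>
print(ip(u, v), np.sum(cu * np.conj(cv)))
# 3. norm from coefficients: ||u||^2 = sum_k |<u|w_k>|^2
print(ip(u, u).real, np.sum(np.abs(cu)**2))
```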