Modulation & Coding for the Gaussian Channel


1. Modulation & Coding for the Gaussian Channel
Trivandrum School on Communication, Coding & Networking, January 27-30, 2017
Lakshmi Prasad Natarajan, Dept. of Electrical Engineering, Indian Institute of Technology Hyderabad
lakshminatarajan@iith.ac.in

2. Digital Communication
Convey a message from transmitter to receiver in a finite amount of time, where the message can assume only finitely many values.
- 'Time' can be replaced with any resource: space available in a compact disc, number of cells in flash memory.
(Picture courtesy: brigetteheffernan.wordpress.com)

3. The Additive Noise Channel
- Message m
  - takes finitely many, say M, distinct values
  - usually, but not always, M = 2^k for some integer k
  - assume m is uniformly distributed over {1, ..., M}
- Time duration T
  - the transmit signal s(t) is restricted to 0 ≤ t ≤ T
- Number of message bits k = log_2 M (not always an integer)
The receiver observes the noisy waveform r(t) = s(t) + n(t), where n(t) is additive noise.
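To make the channel model concrete, here is a minimal simulation sketch in Python. It assumes sampled waveforms on a uniform time grid, a binary antipodal scheme, and white Gaussian noise with an illustrative per-sample variance; none of these specifics come from the slides.

```python
import numpy as np

# Minimal sketch of the additive noise channel r(t) = s(t) + n(t),
# with waveforms represented by samples on a uniform grid.
rng = np.random.default_rng(0)

T = 1.0                       # signal duration (illustrative)
n_samples = 1000
t = np.linspace(0.0, T, n_samples, endpoint=False)

M = 2                         # binary antipodal scheme: s_1(t) = +A, s_2(t) = -A
A = 1.0
waveforms = np.stack([A * np.ones_like(t), -A * np.ones_like(t)])

m = rng.integers(1, M + 1)    # message m is uniform over {1, ..., M}
s = waveforms[m - 1]          # transmitted waveform s_m(t)
sigma2 = 0.1                  # noise variance per sample (illustrative)
r = s + rng.normal(scale=np.sqrt(sigma2), size=s.shape)   # received waveform r(t)
```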

4. Modulation Scheme
- The transmitter & receiver agree upon a set of waveforms {s_1(t), ..., s_M(t)} of duration T.
- The transmitter uses the waveform s_i(t) for the message m = i.
- The receiver must guess the value of m given r(t).
- We say that a decoding error occurs if the guess m̂ ≠ m.
Definition: An M-ary modulation scheme is simply a set of M waveforms {s_1(t), ..., s_M(t)}, each of duration T.
Terminology:
- Binary: M = 2, modulation scheme {s_1(t), s_2(t)}
- Antipodal: M = 2 and s_2(t) = -s_1(t)
- Ternary: M = 3, Quaternary: M = 4

5. Parameters of Interest
- Bit rate R = log_2(M) / T bits/sec
- Energy of the i-th waveform E_i = ‖s_i(t)‖² = ∫_0^T s_i²(t) dt
- Average energy E = ∑_{i=1}^{M} P(m = i) E_i = (1/M) ∑_{i=1}^{M} ‖s_i(t)‖²
- Energy per message bit E_b = E / log_2 M
- Probability of error P_e = P(m ≠ m̂)
Note: P_e depends on the modulation scheme, the noise statistics and the demodulator.
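As an illustration of these definitions, the sketch below computes E_i, E and E_b numerically from sampled waveforms. The Riemann-sum integration and the example antipodal scheme are assumptions made for illustration, not part of the slides.

```python
import numpy as np

def energy(s, dt):
    """Energy ||s(t)||^2 = integral of s^2(t) dt, approximated by a Riemann sum."""
    return float(np.sum(s ** 2) * dt)

def scheme_parameters(waveforms, T):
    """Rate, per-waveform energies, average energy and E_b of a modulation scheme.

    waveforms: array of shape (M, n_samples); equiprobable messages assumed.
    """
    M, n_samples = waveforms.shape
    dt = T / n_samples
    R = np.log2(M) / T                          # bit rate in bits/sec
    E_i = np.array([energy(s, dt) for s in waveforms])
    E = E_i.mean()                              # average energy, since P(m = i) = 1/M
    E_b = E / np.log2(M)                        # energy per message bit
    return R, E_i, E, E_b

# Usage: binary antipodal scheme s_1(t) = +A, s_2(t) = -A over 0 <= t <= T.
T, A, n = 1.0, 2.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
wf = np.stack([A * np.ones_like(t), -A * np.ones_like(t)])
print(scheme_parameters(wf, T))                 # R = 1 bit/s, E_i = A^2 * T for each waveform
```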

6. Example: On-Off Keying, M = 2
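As a worked example (assuming the usual on-off keying waveforms s_1(t) = 0 and s_2(t) = A cos(2π f_c t) for 0 ≤ t ≤ T with an integer number of carrier cycles, which the slide does not spell out): E_1 = 0 and E_2 = ∫_0^T A² cos²(2π f_c t) dt = A²T/2, so the average energy is E = (E_1 + E_2)/2 = A²T/4, and since log_2 M = 1, the energy per message bit is E_b = E = A²T/4.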

7. Objectives
1. Characterize and analyze a modulation scheme in terms of energy, rate and error probability.
   - What is the best/optimal performance that one can expect?
2. Design a good modulation scheme that performs close to the theoretical optimum.
Key tool: signal space representation
- Represent waveforms as vectors: the 'geometry' of the problem
- Simplifies performance analysis and modulation design
- Leads to efficient modulation/demodulation implementations

8. Outline
1. Signal Space Representation
2. Vector Gaussian Channel
3. Vector Gaussian Channel (contd.)
4. Optimum Detection
5. Probability of Error

9. References
- J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, Wiley, 1965.
- G. D. Forney and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2384-2415, Oct. 1998.
- D. Slepian and H. O. Pollak, "Prolate spheroidal wave functions, Fourier analysis and uncertainty I," The Bell System Technical Journal, vol. 40, no. 1, pp. 43-63, Jan. 1961.
- H. J. Landau and H. O. Pollak, "Prolate spheroidal wave functions, Fourier analysis and uncertainty III: The dimension of the space of essentially time- and band-limited signals," The Bell System Technical Journal, vol. 41, no. 4, pp. 1295-1336, July 1962.

10. Outline
1. Signal Space Representation
2. Vector Gaussian Channel
3. Vector Gaussian Channel (contd.)
4. Optimum Detection
5. Probability of Error

11. Goal
Map the waveforms s_1(t), ..., s_M(t) to M vectors in a Euclidean space R^N, so that the map preserves the mathematical structure of the waveforms.

12. Quick Review of R^N: N-Dimensional Euclidean Space
R^N = { (x_1, x_2, ..., x_N) | x_1, ..., x_N ∈ R }
Notation: x = (x_1, x_2, ..., x_N) and 0 = (0, 0, ..., 0)
Addition properties:
- x + y = (x_1, ..., x_N) + (y_1, ..., y_N) = (x_1 + y_1, ..., x_N + y_N)
- x - y = (x_1, ..., x_N) - (y_1, ..., y_N) = (x_1 - y_1, ..., x_N - y_N)
- x + 0 = x for every x ∈ R^N
Multiplication properties:
- a x = a (x_1, ..., x_N) = (a x_1, ..., a x_N), where a ∈ R
- a (x + y) = a x + a y
- (a + b) x = a x + b x
- a x = 0 if and only if a = 0 or x = 0

13. Quick Review of R^N: Inner Product and Norm
Inner product:
- ⟨x, y⟩ = ⟨y, x⟩ = x_1 y_1 + x_2 y_2 + ... + x_N y_N
- ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩ (distributive law)
- ⟨a x, y⟩ = a ⟨x, y⟩
- If ⟨x, y⟩ = 0 we say that x and y are orthogonal
Norm:
- ‖x‖ = √(x_1² + ... + x_N²) = √⟨x, x⟩ denotes the length of x
- ‖x‖² = ⟨x, x⟩ denotes the energy of the vector x
- ‖x‖² = 0 if and only if x = 0
- If ‖x‖ = 1 we say that x is of unit norm
- ‖x - y‖ is the distance between two vectors
Cauchy-Schwarz inequality:
- |⟨x, y⟩| ≤ ‖x‖ ‖y‖
- Or equivalently, -1 ≤ ⟨x, y⟩ / (‖x‖ ‖y‖) ≤ 1
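A brief numerical sketch of these definitions using NumPy; the example vectors are arbitrary and chosen only for illustration.

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, 0.0, 3.0])

inner = np.dot(x, y)                 # <x, y> = x_1*y_1 + ... + x_N*y_N
norm_x = np.linalg.norm(x)           # ||x|| = sqrt(<x, x>)
energy_x = norm_x ** 2               # ||x||^2, the "energy" of x
distance = np.linalg.norm(x - y)     # ||x - y||, distance between the vectors

# Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y||
assert abs(inner) <= norm_x * np.linalg.norm(y) + 1e-12
```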

14. Waveforms as Vectors
The set of all finite-energy waveforms of duration T and the Euclidean space R^N share many structural properties.
Addition properties:
- We can add and subtract two waveforms: x(t) + y(t), x(t) - y(t)
- The all-zero waveform 0(t) = 0 for 0 ≤ t ≤ T is the additive identity: x(t) + 0(t) = x(t) for any waveform x(t)
Multiplication properties:
- We can scale x(t) using a real number a and obtain a x(t)
- a (x(t) + y(t)) = a x(t) + a y(t)
- (a + b) x(t) = a x(t) + b x(t)
- a x(t) = 0(t) if and only if a = 0 or x(t) = 0(t)

15. Inner Product and Norm of Waveforms
Inner product:
- ⟨x(t), y(t)⟩ = ⟨y(t), x(t)⟩ = ∫_0^T x(t) y(t) dt
- ⟨x(t), y(t) + z(t)⟩ = ⟨x(t), y(t)⟩ + ⟨x(t), z(t)⟩ (distributive law)
- ⟨a x(t), y(t)⟩ = a ⟨x(t), y(t)⟩
- If ⟨x(t), y(t)⟩ = 0 we say that x(t) and y(t) are orthogonal
Norm:
- ‖x(t)‖ = √⟨x(t), x(t)⟩ = √(∫_0^T x²(t) dt) is the norm of x(t)
- ‖x(t)‖² = ∫_0^T x²(t) dt denotes the energy of x(t)
- If ‖x(t)‖ = 1 we say that x(t) is of unit norm
- ‖x(t) - y(t)‖ is the distance between two waveforms
Cauchy-Schwarz inequality:
- |⟨x(t), y(t)⟩| ≤ ‖x(t)‖ ‖y(t)‖ for any two waveforms x(t), y(t)
We want to map s_1(t), ..., s_M(t) to vectors s_1, ..., s_M ∈ R^N so that the addition, multiplication, inner product and norm properties are preserved.
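The waveform inner product and norm can be approximated by sampling and a Riemann sum, as in the sketch below; the sample grid and the two test sinusoids are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def waveform_inner(x, y, dt):
    """Approximate <x(t), y(t)> = integral_0^T x(t) y(t) dt by a Riemann sum."""
    return float(np.sum(x * y) * dt)

def waveform_norm(x, dt):
    """Approximate ||x(t)|| = sqrt(integral_0^T x^2(t) dt)."""
    return float(np.sqrt(waveform_inner(x, x, dt)))

# Usage: two sinusoids at different integer multiples of 1/T are orthogonal over [0, T].
T, n = 1.0, 10_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
x = np.cos(2 * np.pi * 3 * t / T)
y = np.cos(2 * np.pi * 5 * t / T)
print(waveform_inner(x, y, dt))    # ~ 0 (orthogonal waveforms)
print(waveform_norm(x, dt) ** 2)   # ~ T/2, the energy of x(t)
```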

16. Orthonormal Waveforms
Definition: A set of N waveforms {φ_1(t), ..., φ_N(t)} is said to be orthonormal if
1. ‖φ_1(t)‖ = ‖φ_2(t)‖ = ... = ‖φ_N(t)‖ = 1 (unit norm)
2. ⟨φ_i(t), φ_j(t)⟩ = 0 for all i ≠ j (orthogonality)
The role of orthonormal waveforms is similar to that of the standard basis e_1 = (1, 0, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_N = (0, 0, ..., 0, 1).
Remark: Say x(t) = x_1 φ_1(t) + ... + x_N φ_N(t) and y(t) = y_1 φ_1(t) + ... + y_N φ_N(t). Then
⟨x(t), y(t)⟩ = ⟨∑_i x_i φ_i(t), ∑_j y_j φ_j(t)⟩ = ∑_i ∑_j x_i y_j ⟨φ_i(t), φ_j(t)⟩ = ∑_i x_i y_i = ⟨x, y⟩
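A short numerical check that the waveform inner product reduces to the Euclidean inner product of the coefficient vectors; the particular orthonormal set (scaled cosines at distinct frequencies) and the coefficient values are assumed examples.

```python
import numpy as np

T, n = 1.0, 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

# An orthonormal set (assumed for illustration): phi_k(t) = sqrt(2/T) * cos(2*pi*k*t/T)
N = 3
phi = np.stack([np.sqrt(2 / T) * np.cos(2 * np.pi * k * t / T) for k in range(1, N + 1)])

x_vec = np.array([1.0, -2.0, 0.5])    # coefficients of x(t)
y_vec = np.array([0.3, 1.0, 2.0])     # coefficients of y(t)
x_t = x_vec @ phi                     # x(t) = sum_i x_i phi_i(t)
y_t = y_vec @ phi

# The waveform inner product matches the Euclidean inner product of the coefficients.
waveform_inner = np.sum(x_t * y_t) * dt
print(waveform_inner, np.dot(x_vec, y_vec))   # approximately equal
```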

17. Example

18. Orthonormal Basis
Definition: An orthonormal basis for {s_1(t), ..., s_M(t)} is an orthonormal set {φ_1(t), ..., φ_N(t)} such that
s_i(t) = s_{i,1} φ_1(t) + s_{i,2} φ_2(t) + ... + s_{i,N} φ_N(t)
for some choice of s_{i,1}, s_{i,2}, ..., s_{i,N} ∈ R.
- We associate s_i(t) → s_i = (s_{i,1}, s_{i,2}, ..., s_{i,N})
- A given modulation scheme can have many orthonormal bases.
- The map s_1(t) → s_1, s_2(t) → s_2, ..., s_M(t) → s_M depends on the choice of orthonormal basis.
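Although the slide does not state it explicitly, orthonormality implies that the coefficients can be recovered by projection, s_{i,j} = ⟨s_i(t), φ_j(t)⟩. Below is a sketch under that assumption, with sampled waveforms and an illustrative cosine/sine basis.

```python
import numpy as np

def to_vector(s, basis, dt):
    """Map a waveform s(t) to its coefficient vector over an orthonormal basis.

    Uses s_j = <s(t), phi_j(t)> = integral s(t) phi_j(t) dt, via a Riemann sum.
    """
    return np.array([np.sum(s * phi_j) * dt for phi_j in basis])

# Usage with a cosine/sine pair (assumed basis, an integer number of carrier cycles in T).
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
fc = 10 / T
basis = np.stack([np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t),
                  np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)])

s = 3.0 * basis[0] - 1.5 * basis[1]    # a waveform with known coefficients
print(to_vector(s, basis, dt))         # ~ [3.0, -1.5]
```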

19. Example: M-ary Phase Shift Keying
Modulation scheme:
- s_i(t) = A cos(2π f_c t + 2πi/M), i = 1, ..., M
- Expanding s_i(t) using cos(C + D) = cos C cos D - sin C sin D:
  s_i(t) = A cos(2πi/M) cos(2π f_c t) - A sin(2πi/M) sin(2π f_c t)
Orthonormal basis:
- Use φ_1(t) = √(2/T) cos(2π f_c t) and φ_2(t) = √(2/T) sin(2π f_c t), so that
  s_i(t) = A √(T/2) cos(2πi/M) φ_1(t) - A √(T/2) sin(2πi/M) φ_2(t)
- Dimension N = 2
Waveform to vector:
  s_i(t) → ( √(A²T/2) cos(2πi/M), -√(A²T/2) sin(2πi/M) )
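A small sketch that computes the M constellation vectors from the waveform-to-vector map above; the function name is illustrative, and M = 8 is chosen to preview the next slide.

```python
import numpy as np

def psk_constellation(M, A=1.0, T=1.0):
    """Signal-space vectors of M-ary PSK under the two-dimensional basis above.

    s_i = (sqrt(A^2*T/2)*cos(2*pi*i/M), -sqrt(A^2*T/2)*sin(2*pi*i/M)), i = 1, ..., M.
    """
    amp = np.sqrt(A ** 2 * T / 2)
    i = np.arange(1, M + 1)
    return np.column_stack([amp * np.cos(2 * np.pi * i / M),
                            -amp * np.sin(2 * np.pi * i / M)])

# Usage: the 8-PSK constellation; all points lie on a circle of radius sqrt(A^2*T/2),
# so every waveform has the same energy A^2*T/2.
points = psk_constellation(M=8, A=1.0, T=1.0)
print(np.round(points, 3))
print(np.linalg.norm(points, axis=1) ** 2)   # each ~ A^2*T/2 = 0.5
```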

20. 8-ary Phase Shift Keying
