
RADIO SYSTEMS - ETI 051
Lecture no: 7 - Channel Coding
Contents: Overview, Block codes, Convolution codes, Fading channel and interleaving
Ove Edfors, Department of Electrical and Information Technology (Ove.Edfors@eit.lth.se)
2010-04-27, Ove Edfors/Johan Kåredal - ETI 051


OVERVIEW

Channel coding: Basic types of codes

Channel codes are used to add protection against errors in the channel. Coding can be seen as a way of increasing the distance between the transmitted alternatives, so that a receiver has a better chance of detecting the correct one in a noisy channel.

We can classify channel codes in two principal groups:
• BLOCK CODES - encode data in blocks of k bits, using code words of length n.
• CONVOLUTION CODES - encode data in a stream, without breaking it into blocks, creating code sequences.
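As a toy illustration (not from the slides) of how added redundancy protects against channel errors, consider a rate-1/3 repetition code: a trivial block code with k = 1 and n = 3, where the two valid code words 000 and 111 are maximally far apart.

```python
# Rate-1/3 repetition block code (k = 1, n = 3): each information bit is
# repeated three times; a majority vote at the receiver corrects any
# single bit error per code word.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

tx = encode([1, 0, 1])      # [1, 1, 1, 0, 0, 0, 1, 1, 1]
rx = list(tx)
rx[4] ^= 1                  # one channel bit error in the second code word
assert decode(rx) == [1, 0, 1]
```

The price of this protection is that three channel bits are sent per information bit, which is exactly the bandwidth trade-off discussed later in the lecture.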

Channel coding: Information and redundancy

Original source data → Source coding (e.g. a speech coder) → "pure information" without redundancy → Channel coding → "pure information" with structured redundancy.

Redundancy is available in almost all "natural" data, such as text, music, images, etc. Electronic circuits do not have the power of the human brain and need more structured redundancy to be able to decode "noisy" messages. The structured redundancy added in the channel coding is often called parity or check sum.

Channel coding: Information and redundancy, cont.

EXAMPLE: Is the English language protected by a code, allowing us to correct transmission errors? When receiving the following sentence, with errors marked by '-':

"D- n-t w-rr- -b--t ---r d-ff-cult--s -n M-th-m-t-cs. - c-n -ss-r- --- m-n- -r- st-ll gr--t-r."

it can still be "decoded" properly. What does it say, and who is quoted? There is something more than information in the original sentence that allows us to decode it properly: redundancy.

Channel coding: Illustration of code words

Assume that we have a block code, which consists of k information bits per n-bit code word (n > k). Since there are only 2^k different information sequences, there can be only 2^k different code words. Among the 2^n different binary sequences of length n, only 2^k are valid code words in our code. This leads to a larger distance between the valid code words than between arbitrary binary sequences of length n, which increases our chance of selecting the correct one after receiving a noisy version.

Channel coding: Illustration of decoding

If we receive a sequence that is not a valid code word, we decode to the closest one. Using this "rule" we can create decision boundaries, like we did for signal constellations. One thing remains: what do we mean by closest? We need a distance measure!

Channel coding: Distances

The distance measure used depends on the channel over which we transmit our code words (if we want the rule of decoding to the closest code word to give a low probability of error). Two common ones:
• Hamming distance - measures the number of bits being different between two binary words. Used for binary channels with random bit errors.
• Euclidean distance - the same measure we have used for signal constellations. Used for AWGN channels.

Channel coding: Coding gain

When applying channel codes we decrease the Eb/N0 required to obtain some specified performance (BER). This coding gain depends on the code and the specified performance. It translates directly to a lower requirement on received power in the link budget.

[Figure: BER versus Eb/N0 [dB] for un-coded and coded transmission; at the specified BER_spec, the horizontal gap between the two curves is the coding gain G_code.]

NOTE: Eb denotes energy per information bit, even for the coded case. We will look at this in more detail later!

Channel coding: Bandwidth

When introducing coding we have essentially two ways of handling the increased number of (code) bits that need to be transmitted:
1) Accept that the raw bit rate will increase the required radio bandwidth proportionally. This is the simplest way, but may not be possible, since we may have a limited bandwidth available.
2) Increase the signal constellation size to compensate for the increased number of bits, thus keeping the same bandwidth. Increasing the number of signal constellation points will decrease the distance between them. This decrease in distance will have to be compensated by the introduced coding.

BLOCK CODES
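A minimal sketch of the Hamming distance for binary words (a hypothetical helper, not code from the lecture):

```python
# Hamming distance: the number of positions in which two equal-length
# binary words differ.

def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

assert hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]) == 2
assert hamming_distance([0, 0, 0], [1, 1, 1]) == 3
```

For AWGN channels one would instead compare the received signal points to the constellation points by Euclidean distance, exactly as for uncoded signal constellations.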

Channel coding: Linear block codes

Code rate:

  R = (bits in)/(bits out) = k/n

The encoding process of a linear block code can be written as

  x = G u

where
• u is the k-dimensional information vector,
• G is the n × k-dimensional generator matrix,
• x is the n-dimensional code word vector.

The matrix calculations are done in an appropriate arithmetic. We will primarily assume binary codes and modulo-2 arithmetic (XOR): 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0.

Channel coding: Some definitions

Hamming weight: w(x) = the number of ones in x.

Hamming distance: d(x_i, x_j) = w(x_i ⊕ x_j).

Minimum distance of a code:

  d_min = min_{i ≠ j} d(x_i, x_j) = min_{i ≠ j} w(x_i ⊕ x_j)

The minimum distance of a code determines its error correcting performance in non-fading channels. Note: the textbook sometimes uses the name "Hamming distance of the code" (d_H) to denote its minimum distance.

Channel coding: Encoding example

For a specific (n,k) = (7,4) code we encode the information sequence 1 0 1 1 as

  x = G u = [1 0 1 1 0 1 0]^T

with the generator matrix

      [ 1 0 0 0 ]
      [ 0 1 0 0 ]
      [ 0 0 1 0 ]
  G = [ 0 0 0 1 ]
      [ 1 1 0 1 ]
      [ 1 0 1 1 ]
      [ 0 1 1 1 ]

If the information is directly visible in the code word, we say that the code is systematic. In addition to the k information bits, there are n − k = 3 parity bits.

Channel coding: Encoding example, cont.

Encoding all possible 4-bit information sequences gives:

  Information   Code word        Hamming weight
  0 0 0 0       0 0 0 0 0 0 0    0
  0 0 0 1       0 0 0 1 1 1 1    4
  0 0 1 0       0 0 1 0 0 1 1    3
  0 0 1 1       0 0 1 1 1 0 0    3
  0 1 0 0       0 1 0 0 1 0 1    3
  0 1 0 1       0 1 0 1 0 1 0    3
  0 1 1 0       0 1 1 0 1 1 0    4
  0 1 1 1       0 1 1 1 0 0 1    4
  1 0 0 0       1 0 0 0 1 1 0    3
  1 0 0 1       1 0 0 1 0 0 1    3
  1 0 1 0       1 0 1 0 1 0 1    4
  1 0 1 1       1 0 1 1 0 1 0    4
  1 1 0 0       1 1 0 0 0 1 1    4
  1 1 0 1       1 1 0 1 1 0 0    4
  1 1 1 0       1 1 1 0 0 0 0    3
  1 1 1 1       1 1 1 1 1 1 1    7

This code has a minimum distance of 3 (the minimum code word weight of a linear code, excluding the all-zero code word). This is a (7,4) Hamming code, capable of correcting one bit error.
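A minimal Python sketch of this (7,4) encoding. The matrix below is the systematic generator matrix implied by the slide's code word table (stored as 7×4, acting on a 4-bit information vector u via x = G u); `encode` is a hypothetical helper name.

```python
from itertools import product

# Systematic generator matrix of the slide's (7,4) Hamming code,
# G = [I; P]: the top 4 rows pass the information bits through,
# the bottom 3 rows produce the parity bits.
G = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]

def encode(u):
    # x_i = sum_j G[i][j] * u[j]  (mod 2), i.e. x = G u in modulo-2 arithmetic
    return [sum(g * b for g, b in zip(row, u)) % 2 for row in G]

assert encode([1, 0, 1, 1]) == [1, 0, 1, 1, 0, 1, 0]  # the slide's example

# For a linear code, d_min equals the minimum Hamming weight over all
# non-zero code words; here it is 3, so one bit error can be corrected.
d_min = min(sum(encode(list(u))) for u in product([0, 1], repeat=4) if any(u))
assert d_min == 3
```

Enumerating all 2^4 information vectors this way reproduces the code word table above, including the minimum weight of 3.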
