This is the fifth assignment from my Masters Applied Digital Information Theory course. It has never been published anywhere, and I, as the author and copyright holder, license this assignment under a customized CC-BY-SA: anyone may share, copy, republish, and sell it on the condition that they state my name as the author and notify readers that the original and open version is available here.
The previous (4th) assignment demonstrated 1-bit error detection using (3,4) parity codes. This assignment demonstrates 1-bit error correction using (7,4) Hamming codes. For the (3,4) parity codes we grouped 3 bits per block, performed an exclusive-or (XOR) over each block to obtain a parity bit, and appended it as the 4th bit of the block. The (7,4) Hamming codes take a different approach: we first group 4 bits per block, then derive 3 Hamming code bits from the 4 bits of each block and append them, so that each block contains 7 bits. Suppose there are 4 bits as follows:
To get the Hamming code bits we perform the following calculation:
b5=b1⊕b2⊕b3, b6=b1⊕b2⊕b4, b7=b1⊕b3⊕b4
Those bits will be added to the block:
Table 1 gives the complete list:
Table 1. Coding Table
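As an illustration, below is a minimal Fortran 95 sketch of this encoding step (the subroutine name and the 0/1 integer-array representation are my own choices for this sketch, not taken from the assignment's code):

! Encode one 4-bit block into a (7,4) Hamming block.
! Bits are stored as integers 0/1; XOR is done with the intrinsic ieor().
subroutine hamming_encode(data4, block7)
  implicit none
  integer, intent(in)  :: data4(4)   ! b1..b4
  integer, intent(out) :: block7(7)  ! b1..b7
  block7(1:4) = data4
  block7(5) = ieor(ieor(data4(1), data4(2)), data4(3))  ! b5 = b1 xor b2 xor b3
  block7(6) = ieor(ieor(data4(1), data4(2)), data4(4))  ! b6 = b1 xor b2 xor b4
  block7(7) = ieor(ieor(data4(1), data4(3)), data4(4))  ! b7 = b1 xor b3 xor b4
end subroutine hamming_encode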
On the receiver side the Hamming code bits are reconstructed from the first 4 received bits of each block and compared with the received Hamming code bits. Since each code bit is produced from a different subset of the data bits (see the equations above), we can determine on which bit the error occurred and correct it. For example, if the receiver's b5, b6, and b7 all differ from the transmitter's, the error is on b1; if only b5 and b6 differ, the error is on b2; if only b5 and b7 differ, the error is on b3; if only b6 and b7 differ, the error is on b4; if only b5 differs, the error is on b5; if only b6, then b6; and if only b7, then b7. Table 2 lists the complete syndromes:
Table 2. Syndrome and Correction List
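The correction step can be sketched in Fortran 95 as follows (again an illustration under the same assumptions, not the assignment's actual code): recompute b5..b7 from the received data bits, XOR them with the received code bits to obtain the syndrome, and flip the bit that Table 2 indicates.

! Correct at most one bit error in a received 7-bit block, in place.
subroutine hamming_correct(block7)
  implicit none
  integer, intent(inout) :: block7(7)
  integer :: s5, s6, s7, pos
  ! Syndrome: 1 means the recomputed code bit differs from the received one.
  s5 = ieor(block7(5), ieor(ieor(block7(1), block7(2)), block7(3)))
  s6 = ieor(block7(6), ieor(ieor(block7(1), block7(2)), block7(4)))
  s7 = ieor(block7(7), ieor(ieor(block7(1), block7(3)), block7(4)))
  pos = 0
  if (s5==1 .and. s6==1 .and. s7==1) pos = 1  ! b5,b6,b7 differ -> error on b1
  if (s5==1 .and. s6==1 .and. s7==0) pos = 2  ! b5,b6 differ    -> error on b2
  if (s5==1 .and. s6==0 .and. s7==1) pos = 3  ! b5,b7 differ    -> error on b3
  if (s5==0 .and. s6==1 .and. s7==1) pos = 4  ! b6,b7 differ    -> error on b4
  if (s5==1 .and. s6==0 .and. s7==0) pos = 5  ! only b5         -> error on b5
  if (s5==0 .and. s6==1 .and. s7==0) pos = 6  ! only b6         -> error on b6
  if (s5==0 .and. s6==0 .and. s7==1) pos = 7  ! only b7         -> error on b7
  if (pos > 0) block7(pos) = ieor(block7(pos), 1)  ! flip the erroneous bit
end subroutine hamming_correct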
2. Memoryless Errors (Skew Tent Map)
In this simulation we will (1) generate a chaotic binary sequence from a memoryless source, (2) perform Hamming coding, (3) simulate transmission through a noisy channel, and (4) perform error correction on the receiver side. We will compute the practical probabilities of incorrect decoding and compare them with the theoretical value PI = 1 - PC, where PC = 7p(1-p)^6 + (1-p)^7 is the probability of correct decoding: a 7-bit block decodes correctly when it contains no error, with probability (1-p)^7, or exactly one error, with probability 7p(1-p)^6, since the code corrects at most one bit. For example, at p = 0.1 this gives PC = 7(0.1)(0.9)^6 + (0.9)^7 ≈ 0.850, so PI ≈ 0.150. We will also compare the error probability of the binary sequence before and after correction. We use the critical point c = 0.499999 for generating the memoryless source and c = 1-p for generating the error sequence, with the initial chaotic value x(1) = 0.333333 for both and various error probabilities p for the error sequence (noise). Generating the source is the same as in assignment 2, and generating the Hamming codes is similar to the 4th assignment except that we append 3 extra bits to each block based on the initial 4 bits, making 7 bits. As in the previous assignment we XOR the generated memoryless source with the error sequence to obtain the received sequence. Table 3 shows the simulation result for 1,000,000 (one million) blocks (N), i.e. 7,000,000 bits, with error probability up to 0.4.
Table 3. Simulation Result of 1,000,000 Blocks (N) (7,000,000 Bits) with Error Probability up to 0.4
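For reference, here is a minimal Fortran 95 sketch of the source and noise generation, assuming the same skew tent map and thresholding at c as in assignment 2 (the subroutine name is my own):

! Generate n bits by iterating the skew tent map and thresholding at c.
! For the source use c = 0.499999 (roughly fair bits); for the error
! sequence use c = 1 - p, so a 1 (an error) occurs with probability p.
subroutine skew_tent_bits(n, c, x, bits)
  implicit none
  integer, intent(in)             :: n
  double precision, intent(in)    :: c
  double precision, intent(inout) :: x       ! chaotic state, start at 0.333333
  integer, intent(out)            :: bits(n)
  integer :: i
  do i = 1, n
     if (x < c) then
        bits(i) = 0
        x = x / c                             ! left branch of the tent
     else
        bits(i) = 1
        x = (1.0d0 - x) / (1.0d0 - c)         ! right branch of the tent
     end if
  end do
end subroutine skew_tent_bits

The channel itself is then one line per bit: received(i) = ieor(coded(i), noise(i)).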
3. Markov-type Errors (PWL Map)
Similar to section 2, but this time we generate the error sequence with a Markov-type, or piecewise linear (PWL), map. The theoretical probability of incorrect decoding is again PI = 1 - PC, and the probability of correct decoding is:
PC = P(1)p2(1-p1)^5 + 5P(0)p1p2(1-p1)^4 + P(0)(1-p1)^5 p1 + P(0)(1-p1)^6
Here P(0) and P(1) are the stationary probabilities of an error-free and an erroneous bit, and p1 and p2 are the 0-to-1 and 1-to-0 transition probabilities; the four terms correspond to a single error at the first bit of a block, a single error at one of the five middle bits, a single error at the last bit, and no error at all. We use the same critical point and initial chaotic value. To generate the error sequence we use the PWL map with various p2 values and p1 = ((1-c)/c)p2, where c = 1-p. Table 4 shows the theoretical results for 1,000,000 (one million) blocks (N), i.e. 7,000,000 bits, with error probability up to 0.4; Table 5 the simulation results; Table 6 the error probability before decoding; and Table 7 the error probability after decoding.
Table 4. Theoretical Result
Table 5. Simulation Result
Table 6. Error Probability Before Decoding
Table 7. Error Probability After Decoding
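As with the skew tent map, a Fortran 95 sketch of the PWL map is given below. The four-branch layout shown here is one standard construction that keeps the uniform density invariant precisely when p1 = ((1-c)/c)p2; the assignment's actual code may arrange the branches differently.

! One iteration of a piecewise linear (PWL) Markov map.
! State 0 is x in [0,c), state 1 is x in [c,1]; the emitted bit is the
! current state, so p1 and p2 act as the 0->1 and 1->0 transition
! probabilities of the error sequence.
subroutine pwl_step(x, c, p1, p2, bit)
  implicit none
  double precision, intent(inout) :: x
  double precision, intent(in)    :: c, p1, p2
  integer, intent(out)            :: bit
  if (x < c) then
     bit = 0
     if (x < c*(1.0d0 - p1)) then
        x = x / (1.0d0 - p1)                              ! stay in [0,c)
     else
        x = c + (x - c*(1.0d0 - p1))*(1.0d0 - c)/(c*p1)   ! jump to [c,1)
     end if
  else
     bit = 1
     if (x < c + (1.0d0 - c)*p2) then
        x = (x - c)*c/((1.0d0 - c)*p2)                    ! jump to [0,c)
     else
        x = c + (x - c - (1.0d0 - c)*p2)/(1.0d0 - p2)     ! stay in [c,1)
     end if
  end if
end subroutine pwl_step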
Figure 1 shows that the error probability before and after decoding fluctuates across different error probability values (p) and source types (memoryless and the various p2 values), but they have one thing in common: below p < 0.3 the error probability after decoding decreases, and unfortunately it rises above that point. For the memoryless source the incorrect decoding probability is acceptable when p < 0.1 but not recommended above it. The PWL map with p2 = 0.1 shows the lowest incorrect decoding among the p2 values. For the other values there is a crossover point at p = 0.2: below it, lower p2 gives lower incorrect decoding, and vice versa.
5. Source Code
The source code is written in Fortran 95, which was said to be a good programming language for mathematics in the old days; the first listing is for the memoryless source, and the second is for the PWL (Markov) source.