Functional safety - error detection codes - parity and checksum

Description of error detection codes and error correction codes

The Hamming distance and the residual error rate differ between the various methods used to detect and correct transmission errors. The coding methods presented in the first part of this paper are:

  • error detection by parity
  • error detection by CHECKSUM
  • error detection by CRC
  • error detection and correction by HAMMING codes

The first two methods are described below. CRC and Hamming error-detecting / error-correcting codes are described in another article.

1 Parity

1.1. Principle of parity

The parity code is a modulo-2 sum of the information bits.

  • Parity is said to be even when the parity code equals the modulo-2 sum of the information bits.
  • Parity is said to be odd when the parity code equals the complement of that sum.

This method of control is used in RS-232 serial transmission and in microprocessors.

Before each transmission of a word, an extra digit, called the parity bit, is added. After transmission, a single error changes the parity value and makes the error detectable.
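The principle above can be sketched in Python (a minimal illustration; the function names are ours, not from the source):

```python
def parity_bit(bits, even=True):
    """Return the parity bit for a sequence of 0/1 information bits.

    Even parity: the bit equals the modulo-2 sum of the information bits,
    so the complete word (data + parity) contains an even number of 1s.
    Odd parity: the complement of that sum.
    """
    s = sum(bits) % 2          # modulo-2 sum of the information bits
    return s if even else 1 - s

def check_word(bits_with_parity, even=True):
    """True if the received word (data + parity bit) passes the check."""
    expected = 0 if even else 1
    return sum(bits_with_parity) % 2 == expected

data = [1, 0, 1, 1, 0, 1, 0, 0]    # 8 information bits
word = data + [parity_bit(data)]   # append the even-parity bit
assert check_word(word)            # no error: the check passes
word[3] ^= 1                       # flip one bit (single transmission error)
assert not check_word(word)        # the single error is detected
```

Note that flipping two bits of `word` would restore even parity, which is exactly the undetected (even-order) error case discussed next.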

1.2. Detection of errors - Residual Errors

Let "p" be the probability of error of a single digit, and "n" the word length.

The probability of having a single error is

P(1) = P[1st digit wrong and the (n-1) following digits valid] + P[1st digit valid, 2nd digit wrong and the (n-2) following digits valid] + ...

P(1) = p(1-p)^(n-1) + (1-p)p(1-p)^(n-2) + (1-p)^2 p(1-p)^(n-3) + ...

P(1) = n p (1-p)^(n-1)

Similarly, for k errors we obtain:

P(k) = C(n,k) p^k (1-p)^(n-k)

where C(n,k) is the binomial coefficient.

Parity detects all odd-order errors, with a power of detection PD:

PD = P(1) + P(3) + ... + P(2k-1)

Undetected errors are the even-order errors, giving a probability of non-detection PND:

PND = P(2) + P(4) + ... + P(2k)

These values are functions of p and must be calculated for the environment in question.
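As a quick numerical check of PD and PND (a sketch; the bit error rate p = 1e-3 is an arbitrary assumption for illustration):

```python
from math import comb

def p_k(n, p, k):
    """Probability of exactly k erroneous digits in an n-digit word,
    each digit being wrong independently with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 9, 1e-3   # 8 data bits + 1 parity bit; assumed bit error rate
PD  = sum(p_k(n, p, k) for k in range(1, n + 1, 2))  # odd errors: detected
PND = sum(p_k(n, p, k) for k in range(2, n + 1, 2))  # even errors: undetected
print(PD, PND)
```

For small p the sums are dominated by their first terms: PD ≈ n·p and PND ≈ C(n,2)·p², which is why parity is acceptable only when p is small.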

Assume a frame consisting of 8 information bits and one parity check bit (n = 9). By changing an even number of bits e ∈ {2, 4, 6, 8} in the frame, we obtain another frame whose errors are not detected by the parity check.

The residual error rate can then be calculated; the probability of residual error equals the sum of the even-order error probabilities:

R = R1 × R2

with:

R1 = P(2) + P(4) + P(6) + P(8), the probability of an undetected error on the data;

R2 = the probability that the frame delimiters are correct. With single parity there are two delimiters (start and end of frame), giving a probability q × q = q^2.
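The residual error rate R = R1 × R2 can be evaluated numerically (a sketch; the values p = 1e-4 for the bit error rate and q = 1 − 1e-4 for a delimiter being correct are assumptions, not from the source):

```python
from math import comb

def p_k(n, p, k):
    """P(exactly k wrong digits) in an n-digit word, bit error rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, q = 1e-4, 1 - 1e-4   # assumed bit error rate and delimiter probability
n = 9                   # 8 data bits + 1 parity bit

R1 = sum(p_k(n, p, k) for k in (2, 4, 6, 8))  # undetected (even) errors
R2 = q * q                                    # both frame delimiters correct
R = R1 * R2
print(R)
```

The result is dominated by the double-error term P(2) ≈ C(9,2)·p², illustrating that R shrinks quadratically with the channel bit error rate.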




2 CHECKSUM

There are several methods for computing a CHECKSUM, such as:

  • the modified sum control method,
  • arithmetic addition of the contents of the message,
  • interleaved parity codes.

This last method is detailed below.

2.1. Principle of the CHECKSUM

In the interleaved-parity CHECKSUM, a message of M digits is written as an array of "L" rows and "C" columns (M = C·L). An (L+1)-th row and a (C+1)-th column are added to this array, constructed so that every word read horizontally or vertically has even parity.

The method counts the number of "1"s contained in each row (column). If this number is even, the state "0" is assigned at the intersection of that row (column) and the last column (row); if it is odd, the state "1". This operation is performed for each row and each column. The digit located at the intersection of the last row and the last column is chosen to ensure the parity of the entire message.

For example, a message of M = 49 digits (7 words of 7 bits):








A set of 64 digits (49 information digits and 15 check digits) is transmitted and received serially over the transmission channel. On reception, the table is rebuilt. A single error flips the parity check of its row and of the corresponding column; detection and correction are therefore both possible.
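The construction and single-error correction described above can be sketched as follows (a minimal illustration; the function names and the sample message are ours):

```python
def encode(rows):
    """Add a parity column to each row and a parity row to each column;
    the corner bit makes the overall parity of the table even."""
    enc = [r + [sum(r) % 2] for r in rows]            # (C+1)-th column
    enc.append([sum(col) % 2 for col in zip(*enc)])   # (L+1)-th row
    return enc

def locate_single_error(table):
    """Return (row, col) of a single flipped bit, found as the crossing
    of the failing row parity check and failing column parity check."""
    bad_rows = [i for i, r in enumerate(table) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*table)) if sum(c) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None   # no error, or a multiple error that cannot be corrected

msg = [[1, 0, 1, 1, 0, 1, 0],    # 7 words of 7 bits (M = 49 digits)
       [0, 1, 1, 0, 0, 1, 1],
       [1, 1, 0, 0, 1, 0, 1],
       [0, 0, 1, 1, 1, 1, 0],
       [1, 0, 0, 1, 0, 0, 1],
       [0, 1, 0, 1, 1, 0, 0],
       [1, 1, 1, 0, 0, 1, 1]]
table = encode(msg)            # 8 x 8 = 64 digits transmitted
table[2][4] ^= 1               # inject a single transmission error
err = locate_single_error(table)
table[err[0]][err[1]] ^= 1     # correct it at the located crossing
assert table == encode(msg)
```

Flipping two bits in the same row would leave that row's parity intact while breaking two column checks, which is why a double error is detectable but not correctable.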

A double or quadruple error is detected but cannot be corrected; a triple error is not always detected.

This code is more powerful than simple parity but requires additional hardware resources. A message of originally M digits (M = C·L) requires C + L + 1 check digits, giving a redundancy ratio DR = (C + L + 1)/M:

For 7 data bits,    M = 49,     DR = 15/49 = 0.31

For 15 data bits,   M = 225,    DR = 31/225 = 0.14

2.2. Detection coverage of a CHECKSUM - Residual Errors

The power of detection (coverage) of a CHECKSUM is calculated differently depending on the number of bytes: either with an exact "main" method or with a probabilistic method.

2.2.1. Main Method

This method is based on an exact count of all the combinations of "N" words of "P" bits whose sum equals the sum "S" obtained on the "N" words of the sequence containing the original information.

The theoretical power of detection PD is defined as the ratio (expressed as a percentage) between the number of errors NDET detected by the control and the total number of errors NTOT that can occur on the information sequence under control.

The total number NTOT is the number of combinations that the (P × N) bits of the sequence can take, which gives for N bytes:

NTOT = 2^(P·N)

NDET can be determined from the number of undetected errors NNDET, where NDET + NNDET = 2^(P·N).


NNDET corresponds to the number of combinations of "N" bytes whose sum is identical to the sum "S" obtained on the bytes of the error-free information sequence.

The power of detection can then be expressed as:

PD = 100 × NDET / NTOT = 100 × (1 − NNDET / 2^(P·N))

Enumerating all the combinations of "N" bytes of an information sequence whose sum is "S" leads to the following formulas:

for S = 0 and P = 8 bits (the number of bits in a byte):

PD = 100 × (1 − 1/2^(8N)) ≈ 100 %

for S ∈ [k(2^P − 1), (k+1)(2^P − 1)], with integer k ∈ [0, N − 1 − INT(N/2^P)] and the convention 0! = 1


Numerical calculation is possible only for small values of N (N < 40). Beyond that, the "probabilistic" method must be used.
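For very small N the count behind the main method can be done by brute-force enumeration rather than closed formulas (a sketch; the function name is ours, and the count includes the error-free sequence itself):

```python
from itertools import product

def coverage(N, P, S):
    """Brute-force 'main' method: count the N-word sequences
    (P bits per word) whose sum equals S, i.e. the configurations
    a simple-sum CHECKSUM cannot tell apart, then derive PD."""
    n_ndet = sum(1 for words in product(range(2**P), repeat=N)
                 if sum(words) == S)
    n_tot = 2**(P * N)                  # NTOT = 2^(P*N)
    return 100 * (1 - n_ndet / n_tot)   # PD as a percentage

# Tiny example with N = 2 bytes: for S = 0 only the all-zero sequence
# matches, so PD = 100 * (1 - 1/2^16), consistent with the S = 0 formula.
print(coverage(2, 8, 0))
print(coverage(2, 8, 255))   # 256 pairs (a, 255-a) share this sum
```

The runtime grows as 2^(P·N), which makes concrete why enumeration is hopeless beyond small N and a probabilistic approximation is needed.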

2.2.2. "Probabilistic" Method

With the "probabilistic" method, each byte is treated as a random variable; a given configuration is a sequence of "N" such random variables.

Assuming the probability distribution is the same for each byte (mean value m, variance σ²) and "N" is large (N > 50), the distribution of the sum "S" of the random variables is very nearly normal (it follows the normal law used in statistics), with mean Nm and variance Nσ².


The probability of an integer value "s" is:

Pr[S = s] ≈ (1/√(2πNσ²)) exp(−(s − Nm)²/(2Nσ²))

with m = (2^P − 1)/2 and σ² = (2^(2P) − 1)/12 (bytes assumed uniformly distributed over [0, 2^P − 1]).

The probability is maximum for s = Nm:

Pr[S = Nm] ≈ 1/√(2πNσ²)

Applied to memories with 8-bit data words (P = 8) and for a given data capacity, we get the following results:

For P = 8: m = 127.5, σ² = 5461.25

N = 512 o     Pr[S = 65280]     ≈   0.0002386     (coverage 99.976 %)

N = 1 ko      Pr[S = 130560]    ≈   0.0001688     (coverage 99.983 %)

N = 16 ko     Pr[S = 2088960]   ≈   0.000042      (coverage 99.9958 %)

N = 32 ko     Pr[S = 4177920]   ≈   0.0000298     (coverage 99.997 %)
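These figures can be reproduced directly from the peak-probability formula Pr[S = Nm] ≈ 1/√(2πNσ²) (a sketch assuming, as above, bytes uniformly distributed over [0, 2^P − 1]):

```python
from math import sqrt, pi

P = 8                       # bits per byte
m  = (2**P - 1) / 2         # mean of a uniform random byte: 127.5
s2 = (2**(2 * P) - 1) / 12  # variance of a uniform random byte: 5461.25

for N in (512, 1024, 16 * 1024, 32 * 1024):
    pr_max = 1 / sqrt(2 * pi * N * s2)   # Pr[S = N*m], peak of the density
    coverage = 100 * (1 - pr_max)        # detection coverage in %
    print(N, int(N * m), round(pr_max, 7), round(coverage, 4))
```

Since pr_max falls off as 1/√N, doubling the memory size improves the coverage only by a factor of √2 in the undetected-error term.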