
# 1.5: Coding Theory


In many people’s minds, “codes” and “cryptography” are inextricably linked, and they might be hard-pressed to tell you the difference. In fact, the two topics are vastly different, as is the mathematics involved in them.

Coding theory is the study of encoding information into different symbols. When someone uses a code in an attempt to make a message that only certain other people can read, this becomes cryptography. Cryptographers study strategies for ensuring that a code is difficult to “break” for those who don’t have some additional information. In coding theory, we ignore the question of who has access to the code and how secret it may be. Instead, one of the primary concerns becomes our ability to detect and correct errors in the code.

Codes are used for many purposes in which the information is not intended to be secret. For example, computer programs are transformed into long strings of binary data that a computer interprets as instructions. When you text a photo to a friend, the pixel and colour information are converted into binary data to be transmitted through radio waves. When you listen to an .mp3 file, the sound frequencies of the music have been converted into binary data that your computer decodes back into sound frequencies.

Electronic encoding is always subject to interference, which can cause errors. Even when a coded message is physically etched onto a device (such as a DVD), scratches can make some parts of the code illegible. People don’t like it when their movies, music, or apps freeze, crash, or skip over something. To avoid this problem, engineers use codes that allow our devices to automatically detect and correct minor errors that may be introduced.

In coding theory, we learn how to create codes that allow for error detection and correction, without requiring excessive memory or storage capacity. Although coding theory is not a focus of this course, designs can be used to create good codes. We will learn how to make codes from designs, and what makes these codes “good.”

Example $$\PageIndex{1}$$

Suppose we have a string of binary information, and we want the computer to store it so we can detect if an error has arisen. There are two symbols we need to encode: 0 and 1. If we just use 0 for 0 and 1 for 1, we’ll never know if a bit has been flipped (from 0 to 1 or vice versa). If we use 00 for 0 and 01 for 1, then if the first bit gets flipped we’ll know there was an error (because the first bit should never be 1), but we won’t notice if the second was flipped. If we use 00 for 0 and 11 for 1, then we will be able to detect an error, as long as at most one bit gets flipped, because flipping one bit of either code word will result in either 01 or 10, neither of which is a valid code word. Thus, this code allows us to detect an error. It does not allow us to correct an error, because even knowing that a single bit has been flipped, there is no way of knowing whether a 10 arose from a 00 with the first bit flipped, or from a 11 with the second bit flipped. We would need a longer code to be able to correct errors.
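The two-bit code from the example can be sketched in a few lines of Python (the function names here are illustrative, not from the text):

```python
# The code from the example: 0 -> 00, 1 -> 11.
VALID_CODEWORDS = {"00", "11"}

def encode(bit: str) -> str:
    """Encode a single bit by repeating it: '0' -> '00', '1' -> '11'."""
    return bit * 2

def has_error(received: str) -> bool:
    """Detect a single flipped bit: '01' and '10' are not valid codewords."""
    return received not in VALID_CODEWORDS

# Flipping one bit of either codeword produces "01" or "10",
# so a single error is always detected.  But "10" could have come
# from "00" (first bit flipped) or "11" (second bit flipped),
# so the error cannot be corrected.
```

Running `has_error("10")` returns `True`, while `has_error("11")` returns `False`, matching the analysis above.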

After our study of coding theory, you should be able to solve problems such as:

• Given a code, how many errors can be detected?
• Given a code, how many errors can be corrected?
• Construct some small codes that allow detection and correction of small numbers of errors.
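To preview the “longer code” mentioned at the end of the example, here is a sketch (not from the text) of a three-bit repetition code, which can *correct* any single flipped bit by majority vote:

```python
# Three-bit repetition code: 0 -> 000, 1 -> 111.
def encode3(bit: str) -> str:
    """Encode a single bit by repeating it three times."""
    return bit * 3

def correct3(received: str) -> str:
    """Correct up to one flipped bit by majority vote among the three bits."""
    return "1" if received.count("1") >= 2 else "0"

# A single error leaves two of the three bits intact, so the
# majority still identifies the original bit.
```

For instance, `correct3("010")` decodes to `"0"`, since a single flip of `"000"` leaves two zeros in the majority.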

Exercise $$\PageIndex{1}$$

Can you come up with an interesting counting problem that you wouldn’t know how to solve?