
4.6: Man-in-the-Middle Attacks, Certificates, and Trust


    While public-key crypto can seem an unalloyed benefit to the networked world, close examination of the details of the last two sections shows a dangerous gap between a casual statement of the properties of these cryptographic tools and their reality. The distinction which at first goes unnoticed is between Bob, the person, and the bits arriving over the wire to Alice or Larry which claim to be from Bob. This has little effect if Eve is, as we have mostly been assuming her to be, a passive observer of the communications between Alice and Bob (and sometimes Larry). But if Eve has even more control over the network and can replace all communications with her own versions, new attacks are possible.

    Suppose Alice wants to send her secret message to Bob, again without ever having met him to exchange with him a key for a symmetric cryptosystem. She hopes that he has a public key, so she gets on the web and downloads his home page.

    Here is where Eve springs into action, intercepting Bob’s (web server’s) response to the web request. She keeps a copy of Bob’s public key \(k_e^B\) for herself, but substitutes into the web page data in its place an RSA encryption key of her own, \(k_e^E\) – for which she alone knows the corresponding decryption key \(k_d^E\) – and transmits the modified page on to Alice.

    Alice composes her cleartext \(m\), and transmits its corresponding ciphertext \(c_A=e_{k_e^E}(m)\) to Bob, or so she thinks. Instead, Eve intercepts that ciphertext, decrypts and stores \(m=d_{k_d^E}(c_A)=d_{k_d^E}(e_{k_e^E}(m))\). Then, in order to make Alice and Bob think everything is working normally (so they’ll keep up their revealing chatter), Eve transmits \(c_E=e_{k_e^B}(m)\) on to Bob.

    From Bob’s point of view, he gets an e-mail which seems to be from Alice and which, when decrypted with his private key, does make perfect sense. Furthermore, if he interacts with Alice off-line, she behaves as if she sent that message – in fact, she did, but not in the particular encrypted form that Bob received. Eve has completely violated the confidentiality of Alice and Bob’s communications, and she could violate the message integrity any time she liked, still making it appear to come legitimately from Alice.

    The above scenario is called a man-in-the-middle attack (pardon the non-gender-neutral terminology).

    Here is a graphical depiction of this attack:

    1. Bob generates his keys: public \(k^B_e\), private \(k^B_d\), and publishes \(k^B_e\).
    2. Eve intercepts the published \(k^B_e\) and generates her own keys: public \(k^E_e\), private \(k^E_d\).
    3. When Alice downloads what she believes is Bob’s key, Eve sends her \(k^E_e\) instead, spoofing its origin.
    4. Alice chooses her message \(m\in\mathcal{M}\), computes \(c_A=e_{k_e^E}(m)\), and transmits \(c_A\).
    5. Eve intercepts \(c_A\), extracts the cleartext \(m=d_{k_d^E}(c_A)\), changes it to \(m^\prime\) if desired, computes \(c_E=e_{k_e^B}(m^\prime)\), and sends \(c_E\) on to Bob, spoofing its origin.
    6. Bob receives \(c_E\) and reads the message \(m^\prime=d_{k_d^B}(c_E)\).
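    The attack above can be simulated with toy “textbook” RSA in Python. The primes, exponents, and function names here are invented for this sketch; real RSA uses enormous keys, randomized padding, and typically encrypts a symmetric session key rather than the message itself.

    ```python
    # Toy man-in-the-middle simulation with textbook RSA (tiny primes,
    # no padding -- for illustration only).

    def make_keys(p, q, e):
        """Return ((e, n), (d, n)) for primes p, q and public exponent e."""
        n, phi = p * q, (p - 1) * (q - 1)
        d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
        return (e, n), (d, n)

    def crypt(key, x):
        """Textbook RSA encryption/decryption: x^k mod n."""
        k, n = key
        return pow(x, k, n)

    # Bob and Eve each generate a key pair.
    bob_pub, bob_priv = make_keys(61, 53, 17)
    eve_pub, eve_priv = make_keys(101, 103, 7)

    m = 42                                # Alice's cleartext
    c_A = crypt(eve_pub, m)               # Alice unknowingly uses Eve's key
    m_stolen = crypt(eve_priv, c_A)       # Eve decrypts and reads m
    c_E = crypt(bob_pub, m_stolen)        # Eve re-encrypts for Bob
    m_bob = crypt(bob_priv, c_E)          # Bob decrypts; everything "works"

    assert m_stolen == m and m_bob == m   # Eve read m; Bob suspects nothing
    ```

    The final assertions are the whole point: both parties see a correctly working cryptosystem, while Eve holds the cleartext.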

    So it seems that symmetric cryptosystems actually have one nice thing built in: when the parties meet in that perfect, prelapsarian moment to exchange their symmetric key, they presumably can have confidence in the identity of the person they’re talking to – if not, they wouldn’t exchange the symmetric key until they had seen lots of official-looking identification materials. Asymmetric cryptosystems, in contrast, must overcome the fundamental difficulty of establishing a trustworthy connection between a real person’s identity and a public key on the Internet which purports to belong to that person.

    The technique of the last section, Section 4.5, can at least help transfer trust. Suppose Alice wants to engage in secret communication with Bob, but does not know whether she can trust the public key which appears on Bob’s web page. If that key came with an accompanying digital signature issued by a trusted third party [TTP] whose public key Alice already had, she could verify that the key was Bob’s – at least as far as the TTP knew.

    Here is a formal definition.

    Definition: Certificate

    Individuals or organizations who want to use asymmetric cryptography can go to a trusted third party called a certificate authority [CA] to request a [digital] certificate for their public keys. This certificate would be a digital signature on the public key itself, signed by the CA’s signing key. The CA’s verification key would be assumed widely disseminated across the Internet or perhaps built into the basic operating system software or computer hardware. Then anyone who wanted to use a public key could first check the validity of the associated certificate and have confidence that the intended party did own the key in question.

    The entire aggregate of certificates, CAs, pre-installed or widely distributed verification keys, etc., is called a public key infrastructure or PKI.
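    The signing-and-checking described in the definition can be sketched with toy textbook RSA. All parameters and names below are invented for illustration; a real X.509 certificate binds much more than the bare key (identity, validity dates, extensions), and uses full-size keys and standardized encodings.

    ```python
    # A certificate modeled as "the CA's RSA signature on a hash of the
    # public key" (toy parameters; illustration only).
    import hashlib

    def make_keys(p, q, e):
        n, phi = p * q, (p - 1) * (q - 1)
        return (e, n), (pow(e, -1, phi), n)

    def key_hash(pub, modulus):
        """Hash a public key down to an integer below the CA's modulus."""
        digest = hashlib.sha256(repr(pub).encode()).digest()
        return int.from_bytes(digest, "big") % modulus

    # The CA's verification key ca_pub is assumed widely distributed.
    ca_pub, ca_priv = make_keys(1009, 1013, 17)

    bob_pub = (17, 3233)                  # Bob's public key
    cert = pow(key_hash(bob_pub, ca_pub[1]), ca_priv[0], ca_priv[1])

    def verify(pub, cert, ca_pub):
        """Check a certificate using only the CA's public key."""
        e, n = ca_pub
        return pow(cert, e, n) == key_hash(pub, n)

    assert verify(bob_pub, cert, ca_pub)          # genuine key passes
    assert not verify((7, 10403), cert, ca_pub)   # substituted key fails
    ```

    The second assertion shows what the certificate buys Alice: if Eve substitutes her own key, the CA’s signature no longer checks out.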

    In actual practice, the CA often cannot do more than certify that a certain public key is owned by an individual who has access to a certain e-mail address, or the webmaster password to a certain site, or some similar purely Internet-based token. That much is usually quite easy – connecting the key with a real, external identity would probably require checking some form of government-issued ID, and is rarely done. It would be useful, though: perhaps the government should act as a CA, and government IDs should have a built-in RFID chip (recent US passports do!) which can transmit a certificate on the ID owner’s public key.

    There is one other approach to figuring out whether to have faith in a particular public key, favored by those who mistrust authority but are willing to trust some others they know personally. In this approach, individuals who know each other personally and have faith in each other’s good cryptologic habits can each sign each other’s public keys, adding their digital signatures to those which have already been collected. Then when you want to use someone’s public key, you can unwrap a chain of digital signatures, each signing the next, until you find one by a person whom you know personally, have met, and whose public key you have verified.
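    Unwrapping such a chain of signatures amounts to searching for a path in a graph whose edges are verified signatures. Here is a minimal sketch with an invented data model (real OpenPGP keyrings store far more per signature, and assign graded trust levels rather than simple edges):

    ```python
    # Web-of-trust chain search: each pair (X, Y) means "X has signed
    # Y's public key" and we have verified that signature.
    from collections import deque

    signatures = {
        ("carol", "dave"),
        ("dave", "bob"),
        ("alice", "carol"),   # Alice personally verified Carol's key
    }

    def trust_path(start, target, sigs):
        """Breadth-first search for a chain of signatures start -> target."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for signer, signee in sigs:
                if signer == path[-1] and signee not in seen:
                    seen.add(signee)
                    queue.append(path + [signee])
        return None           # no chain: the key remains untrusted

    print(trust_path("alice", "bob", signatures))
    # -> ['alice', 'carol', 'dave', 'bob']
    ```

    Alice trusts Bob’s key only because each link in the returned chain is a signature by someone the previous person has personally verified – exactly the unwrapping described above.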

    This approach has become known as the web of trust, and is strongly supported by GnuPG and other OpenPGP-compatible organizations.

    By the way, to be useful, a web of trust must include as many individuals and their keys as possible, and have as many connections (where individual X knows and signs individual Y’s public key) as possible. One way to get this going quickly is to have a key-signing party. If you are standing next to someone whom you know well who says “sure, this other person is someone I know well and trust,” then you might be willing to sign both of their keys right there. In practice, when you sign keys in person like this, it usually suffices to check the MD5 fingerprint of the key rather than the whole thing – the fingerprint is much shorter, and, standing there with someone you trust, you presumably do not think that anyone present has devoted large computational resources to finding a second pre-image of the fingerprint.
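    The fingerprint comparison at a key-signing party can be sketched as follows. The key bytes here are hypothetical placeholders; the text mentions MD5, though modern OpenPGP fingerprints actually use SHA-1 or SHA-256 – the comparison logic is the same either way.

    ```python
    # Comparing short fingerprints instead of whole keys (hypothetical
    # key data; MD5 chosen to match the text, not current practice).
    import hashlib

    def fingerprint(pubkey_bytes):
        """Short digest of a public key, easy to read aloud in person."""
        return hashlib.md5(pubkey_bytes).hexdigest()

    key_on_server = b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."  # hypothetical
    key_in_hand = b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."    # from the owner

    # Sign the key only if the short fingerprints match:
    assert fingerprint(key_on_server) == fingerprint(key_in_hand)
    ```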

    This page titled 4.6: Man-in-the-Middle Attacks, Certificates, and Trust is shared under a CC BY-SA license and was authored, remixed, and/or curated by Jonathan A. Poritz.