Let me say it up front: breaking up end-to-end-encrypted HTTPS connections is bad. No matter why you think that you need to inspect and/or modify the contents of an HTTPS connection, please consider not doing it. And if you still think that you absolutely need it, please sit down and consider again just not doing it.
Unfortunately, I know that way too often this advice won’t be followed. And I don’t mean tools like the Burp Suite which only break up end-to-end-encryption of HTTPS connections temporarily to aid developers or security researchers. No, it’s rather the antivirus applications which do it because they want to scan all your traffic for potential threats. Or companies which do it because they want to see everything happening on their network.
Usually this results in privacy and/or security issues of varying severity. A while ago I already discussed the shortcomings of Kaspersky’s approach. I later found a catastrophic issue with Bitdefender’s approach. And altogether I’ve seen a fair share of typical issues in this area which are really hard to avoid. Let me explain.
How breaking up HTTPS connections works
HTTPS connections are end-to-end-encrypted. This means that only the client and the legitimate server are supposed to see the contents of their communication but no party inserting themselves into the connection. Here is how it normally works:
1. The server presents an SSL certificate.
2. The client verifies that the SSL certificate is valid for this server and signed by a trusted authority.
3. Server and client do some magic that makes them agree on a common encryption key.
4. Connection contents are now encrypted with that key.
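The verification in step 2 is what a TLS client library performs by default. As a minimal sketch, here is how Python's standard `ssl` module configures exactly these checks (nothing here is specific to any product discussed in this article):

```python
import ssl

# A client-side TLS context, mirroring steps 1-2 above:
# the server's certificate must chain to a trusted authority (CERT_REQUIRED)
# and must be valid for the hostname we asked for (check_hostname).
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

# Connecting would then run the handshake (steps 3-4) and raise
# ssl.SSLCertVerificationError if either check does not pass:
#
#   with socket.create_connection(("example.org", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # traffic on `tls` is now encrypted with the negotiated key
```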
An essential part of step 3 is that the server needs to know the private key belonging to the certificate. Only then will both sides arrive at the same encryption key.
Under usual circumstances, this is sufficient to make the connection completely opaque to any third parties getting in between. Even if a malicious actor passes along the certificate of the legitimate server, they won’t know the corresponding private key and consequently won’t get the connection’s encryption key. So even if the data is technically passing through them, they won’t be able to decrypt communication contents.
As long as the private key of the legitimate server isn’t compromised, the attacker won’t be able to make use of its certificate. This means they need to create their own certificate, and that one should normally be rejected in step 2 because it isn’t signed by a trusted authority.
But we aren’t talking about your regular malicious actor. We are talking about a legitimate application on the user’s computer or about one’s employer. And these solve the issue by adding their own trusted authority to the user’s computer.
Now they can use that trusted authority to create their own valid SSL certificate for any connection. So rather than establishing an encrypted communication channel with the legitimate server, the client will establish one with an HTTPS proxy that is either running on their machine or on the employer’s network. And that HTTPS proxy can watch and modify the transmitted data before it is passed along.
While this approach works in principle, it is way more complicated to implement correctly than people usually realize. Some potential issues are largely obvious, others are not.
The trouble with that certificate authority
The very first and most obvious issue: there is now one more trusted authority. And that isn’t a small deal. Each and every certificate authority has, in case of a compromise, the potential of completely undermining end-to-end-encryption of HTTPS connections.
That’s why Mozilla, for example, maintains a long list of rules that certificate authorities have to follow in order to be considered trusted. These rules mandate how the private key of the certificate authority is to be kept secure, who may access it and under which conditions, how often external audits have to be performed and so on.
You’ve decided to add your own certificate authority to all computers of a network? Congratulations, now you have to worry about keeping the corresponding private key in a secured place, ideally on a hardware security module with restricted physical access. And you have to worry about making sure that the system can only be used to issue SSL certificates for the legitimate use case. Good luck with that.
Things are less gloomy with local HTTPS proxies like antivirus applications. These can keep the private key on the user’s machine and merely have to make sure that it isn’t accessible without administrator privileges.
Oh, and they also have to make sure that each application instance generates its own unique certificate authority. Sharing the private key between multiple or even all installations of the application is a huge no-go, the private key can no longer be considered a secret then.
TLS is a moving target
The communication protocol underlying HTTPS is Transport Layer Security (TLS). And it isn’t something set in stone but rather under continuous development. The current version is TLS 1.3.
For anybody implementing an HTTPS proxy this presents two issues. On the one hand, your proxy needs to properly support the most recent TLS version. Otherwise users won’t get the performance and privacy improvements of that version.
On the other hand, browsers disable outdated TLS versions regularly. Chances are that an HTTPS proxy still supports TLS 1.0 and 1.1 which browsers already disabled. And that’s a bad idea. It could open users to attacks that are only viable against these outdated protocol versions.
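With Python's standard `ssl` module, for instance, a proxy could refuse the outdated protocol versions explicitly. This is a generic sketch, not taken from any particular product:

```python
import ssl

context = ssl.create_default_context()

# Refuse the outdated protocol versions that browsers have disabled.
# ssl.TLSVersion is available since Python 3.7; older code used the
# deprecated OP_NO_TLSv1 / OP_NO_TLSv1_1 option flags instead.
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```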
But protocol versions are only part of the story. Even without any protocol changes, it is a bad idea to take a particular OpenSSL library version and keep using it for years. TLS is complicated, and implementation issues are found and fixed continuously.
To address this among other things, Mozilla and Google release new major versions of their browsers every six weeks. Sometimes there are unscheduled additional releases to address urgent security issues. Modern browsers also have very efficient update distribution mechanisms to help bring these updates to the users ASAP.
Is that HTTPS proxy vendor similarly dedicated to staying on top of these TLS implementation issues? They rarely are…
Handling errors is tricky
With an HTTPS proxy in the middle, the client no longer has a connection to the server. The connection to the server is handled entirely by the proxy. So if anything goes wrong with that connection, it’s up to the proxy to recognize it and to inform the client.
As a first consequence, the proxy is now responsible for recognizing invalid server certificates. It will hopefully reject certificates issued for the wrong server name. What about outdated certificates? Or certificates signed by an authority that’s not trusted?
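Even the simplest of these checks, the certificate's validity window, takes some care. A sketch in Python, using the certificate dict format that `ssl.SSLSocket.getpeercert()` returns (the certificate values below are made up):

```python
import ssl
import time

def certificate_expired(cert, now=None):
    """Check the notBefore/notAfter validity window of a certificate,
    given in the dict format that ssl.SSLSocket.getpeercert() returns."""
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not (not_before <= now <= not_after)

# A long-expired certificate (hypothetical values):
old_cert = {
    "notBefore": "Jan  1 00:00:00 2015 GMT",
    "notAfter": "Jan  1 00:00:00 2016 GMT",
}
assert certificate_expired(old_cert)
```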
How does it even determine which authorities are trusted? There is no fixed list. Mozilla, Google, Apple, Microsoft all maintain their own lists. These also have the tendency to change over time. So if one of these lists is bundled with the application, does it receive regular updates?
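For illustration, Python's standard `ssl` module simply exposes whatever trust store the platform's OpenSSL build provides, which already differs from the browser vendors' lists:

```python
import ssl

# Load whatever trust store the platform / OpenSSL build provides.
context = ssl.create_default_context()

roots = context.get_ca_certs()
# The result differs between systems and changes over time as authorities
# are added and distrusted; there is no single canonical list.
print(len(roots), "trusted root certificates on this system")
```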
Ok, there was an error and the client needs to be made aware of it. Typically, HTTPS proxies will produce an error page as a fake server response. Meaning: as far as the browser is concerned, this error page is first-party content of the affected website, and other pages of that website can access it.
My Kaspersky and Bitdefender exploits involved a server that would first produce a valid response with a malicious page. Then this page triggered another request which would result in a certificate error. This allowed the malicious page to read the contents of the error page.
The security tokens within the error page included persistent user identifiers (Kaspersky), allowed overriding any SSL errors for any website (Kaspersky) or even facilitated executing arbitrary code on the user’s machine (Bitdefender). Error pages generally shouldn’t be served as first-party content but should instead appear to come from a separate, dedicated domain.
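One way to keep the error page off the website's own origin is to redirect the client to a dedicated error host instead of faking a server response. A hypothetical sketch; the `proxy-errors.invalid` domain and the response layout are made up for illustration:

```python
def certificate_error_response(error_host, error_code):
    """Instead of faking a first-party error page, redirect the client to a
    dedicated error origin, so that scripts on the affected website cannot
    read the error page's contents. `error_host` is a hypothetical domain
    reserved for the proxy's error pages."""
    location = "https://{}/cert-error?code={}".format(error_host, error_code)
    return (
        "HTTP/1.1 307 Temporary Redirect\r\n"
        "Location: {}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    ).format(location)

response = certificate_error_response("proxy-errors.invalid", "expired")
assert "307" in response
assert "proxy-errors.invalid" in response
```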
The wording of the error page is also crucial for the user to decide on the right course of action. Browser vendors had decades to perfect it. Vendors implementing HTTPS proxies instead tend to confront users with technicalities that they usually won’t understand.
Yes, browsers intentionally make it hard to override the error and access the page nevertheless. But that’s not the only reason why they require two clicks to add an exception for an invalid certificate. The other reason is clickjacking attacks.
That “I understand the risks” link in Kaspersky’s error page above? Malicious pages can trick users into clicking it by loading the error page in a hidden frame and moving it around so that it is always placed right under the mouse cursor. When the user clicks they will inadvertently override a certificate warning. This kind of attack won’t work if the user needs to click two different areas of the page.
All the other “tiny” things implemented by browsers
But you know what? Browsers sometimes won’t even offer you a choice to override a certificate error. That’s because of a security mechanism called HTTP Strict Transport Security (HSTS). If a website uses HSTS, an invalid certificate always means: “something is badly wrong, get out of here ASAP.”
And browsers respect that. HTTPS proxies? Usually not so much. Recognizing HSTS headers, keeping a list of HSTS sites around, handling expiration correctly: none of this is all that easy.
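A minimal sketch of that bookkeeping in Python, ignoring `includeSubDomains`, the preload list and persistence across restarts, all of which a real implementation would also need:

```python
import time

class HSTSStore:
    """Minimal sketch of the HSTS bookkeeping an HTTPS proxy would need:
    remember which hosts sent Strict-Transport-Security and until when."""

    def __init__(self):
        self._until = {}  # host -> expiry timestamp

    def note_header(self, host, header_value, now=None):
        now = time.time() if now is None else now
        directives = {}
        for part in header_value.split(";"):
            name, _, value = part.strip().partition("=")
            directives[name.lower()] = value.strip('"')
        max_age = int(directives.get("max-age", 0))
        if max_age > 0:
            self._until[host] = now + max_age
        else:
            # max-age=0 tells us to forget the host again.
            self._until.pop(host, None)

    def is_hsts(self, host, now=None):
        now = time.time() if now is None else now
        expiry = self._until.get(host)
        return expiry is not None and expiry > now

store = HSTSStore()
store.note_header("example.org", "max-age=31536000; includeSubDomains")
assert store.is_hsts("example.org")

store.note_header("example.org", "max-age=0")
assert not store.is_hsts("example.org")
```

For a host in this store, a certificate error must be a hard failure: no error-override link may be offered, exactly as browsers behave.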
Browsers might also implement other security mechanisms on top of TLS. For a while, browsers used to support HTTP Public Key Pinning until this mechanism was deemed too complicated and too dangerous. Other mechanisms might come to replace it in the future. Will HTTPS proxies implement them?
For browser vendors, providing the most secure HTTPS experience possible is a priority. So they invest significant resources into it, and that’s in fact necessary. Supporting HTTPS properly is far from a simple task, and continuous changes are required.
Vendors implementing HTTPS proxies often have neither the know-how nor the incentives to ensure the same quality in their implementations. While their solutions appear to work, they tend to degrade the security and privacy level and to undermine the work done by browser vendors.
I'll give you another reason... using CAs not part of the standard distributions can often be a pain in the neck.
So many software development tools these days involve reaching out to online repos and data sources of some kind, and making sure all of them recognise the MitM certificates is practically impossible. So every time I hit new certificate problems in a tool, I just log a job with the IT team to whitelist the site... and the speed with which those requests get approved suggests that they don't like it either.
Of course, the same is true of legitimate corporate CAs too... but tools needing to hit internal sites seem much rarer than those wanting internet access...