Why I won’t use Let’s Encrypt

Initially written on: 2022-01-31.

TLDR:

I am so tired of hearing all those exclamations about my websites, mostly related to various free software projects: either they do not have HTTPS, or they do not have valid/proper certificates! And the general recommendation people give me is "just use the gratis Let’s Encrypt".

First of all, the complaints about the lack of HTTPS are just plain wrong: try explicitly telling your computer that you wish to use the HTTPS protocol, by replacing http:// with https:// in the URL.

The next awful thing is that many people tend to confuse encryption with authentication of the endpoint (my websites in this case). With HTTPS you will definitely get working encryption. Period. HTTPS clients generally complain about the inability to authenticate the endpoint, but they will not forbid encryption. What do people want? Encryption? Then enable it by pointing to https://!

Why do I not forcefully redirect people from HTTP to HTTPS?

The first reason is that I just do not have geographically distributed servers fully under my control. Most of my websites are hosted both on my own home server and on a VPS in another city. I cannot set up TLS on the VPS, because its hosting company obviously has access to all of its internals, including the TLS private keys. Currently my websites have two IP addresses, with two independent HTTP servers, but with only a single HTTPS server and a single dumb TCP proxy located on the VPS. If my main server is down, then the whole HTTPS service is down too. And if I gave the TLS private keys to the hosting company, then what would be the point of using TLS and pretending that it authenticates the endpoint domain?

The second reason is that it is not my responsibility to impose a particular security protocol on the user. Possibly there is already an IPsec transport session transparently securing the link. Possibly we use some overlay network like Yggdrasil, where TLS is just pointless. My tarballs and Git tags are always signed with an OpenPGP key. And TLS will not even give any metadata privacy there, because the tarball/page sizes are known.

The next point is that there simply cannot be an objectively, absolutely valid and proper certificate in a global-scale PKI, because its validity fully depends on your exact point of view: specifically, on what trust anchors you use and what validation rules you apply.

X.509 PKI can be pretty secure in enterprise-level setups, where you rely on some company’s trust root certificates. But at a global scale people cannot be expected to hold a single point of view on any subject, as history has taught us. And of course it is impossible to have a single source of truth, or even a common trusted subset of them.

Can some third-party PKI CA be trusted? Probably yes, of course. But do users really trust those third parties? Those dozens of CAs automatically installed in their OS, browsers and other software? I doubt it! If you cannot authenticate the endpoint directly (for example by asking the bank for their X.509 certificate’s SubjectPublicKeyInfo hash), then you at least have to authenticate the third parties intended to be the trusted intermediaries between you and others. Hardly anyone does that. Neither they nor those CAs really care about security – it is just plain old business.

The current global-scale PKI system, integrated by default in most software, literally says that some dozens of CAs, and several hundreds of intermediate CAs, can authenticate entities (like Internet domains and so on). There is no reason for me to spend money paying one chosen CA, because any of the hundreds of CAs beside it can issue a "valid" certificate for MitM-ing my connections.

So paying for the domain’s certificate just gives the ability to show some green bars in browsers, but by design the whole system does not prevent MitM-ing by other authorities anyway. So what am I paying for? For the green bar? I thought we were talking about security. I can come to trust a few authorities, but not hundreds of them.

I believe only in either trusting endpoints directly, by pinning their certificates/public keys (as most people already do with SSH, for example), or in the Web-of-Trust approach, as some people have been doing for many years with OpenPGP. I understand that in many cases there are just no intermediaries at all, so TOFU (trust-on-first-use) with long-term public key pinning is the best we can do. At least the user will notice when the end entity’s key changes, which probably means a MitM attempt, so they will be warned. The decision about what to do next has to be fully up to them!

Various software vendors apply varying chain-validation rules. X.509 specifies certificate revocation lists, which have to be refreshed regularly for strict chain validation. Which browsers and OSes do that by default? None I have ever seen. Some browsers used OCSP, which literally leaks your intention to visit particular entities to third parties in real time. That is a huge privacy issue. Google decided that all CAs have to use the Certificate Transparency technology. Apple decided that a certificate’s validity cannot last more than ~400 days. From X.509’s point of view your certificate can be perfectly valid, but not from Google’s or Apple’s. The opposite also holds: hardly any modern browser does strict X.509 chain validation.

But there are free (as in price) CAs, like the community-driven CAcert! What if we still use TOFU/pinning and spread the Web-of-Trust, but also include at least some CA’s signature, because someone may actually trust it? It will not hurt, though it will probably bring no additional trust at all.

I used to use CAcert for exactly that reason. But here come politics and business again! CAcert is not included in most major operating systems. Who wants to lose their business when someone does it for free? There were other gratis CAs that also were not included in OSes. Why? US-based software vendor companies will give many reasons, but there is really only one uniting them all: none of these free CAs are based in the US, so they do not obey its jurisdiction.

Actually, for some time they were indeed included in many trust anchor bundles out of the box. But soon all of them were removed... and then suddenly there appeared Let’s Encrypt (LE), which was almost immediately praised by all major software and hardware vendors and included everywhere. Just a coincidence? Instead of certificates being spread among many CAs under various jurisdictions, that ingenious move with a gratis, ultimately trusted CA led to a world where the prevailing majority of all certificates are issued by a single CA, based in the US (at last).

Very short-lived certificates, together with the fact that most ACME clients create a new key pair on each renewal, heavily complicate any kind of pinning. I visit many sites once a month – which means that every time I get a new public key, making pinning useless. LE says this limits the damage from a possible key compromise. Yeah, sure. At least you are still allowed not to generate a new key pair.

LE is clearly a NOBUS project. But do you remember that any CA included in the OS can MitM my domains anyway (by definition)? Well, you can partly prevent that for some software by using CAA DNS records, where you explicitly tell which CAs are authorized to issue certificates for a given domain. Specifying LE in CAA means that I authorize no one to issue certificates for my domains, except for US-based forces. That is something I will never do, being a citizen of a completely independent jurisdiction. I am not a traitor.
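For illustration, a CAA policy is just a couple of DNS resource records in the zone (the domain, TTL and mailbox below are placeholders, not mine):

```
; Hypothetical zone fragment: only the named CA may issue for example.com
example.com.  86400  IN  CAA  0 issue "letsencrypt.org"
example.com.  86400  IN  CAA  0 iodef "mailto:security@example.com"
```

The "issue" tag names the authorized CA, and "iodef" tells compliant CAs where to report issuance requests that violate the policy.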

Authentication of CAA itself is another good question of trust. DNSSEC is centralized (decisions can be made solely by a quorum of people from NATO countries). DNSCurve is unfortunately not widely implemented.

Certificate Transparency sounds useful. But users were incapable of refreshing CRLs on their systems – and now we expect them to regularly synchronize CT’s Merkle trees from various independent sources? Or to delegate that task, again, to yet more third parties? Are you kidding? I have seen no one doing this, except for myself.

What if TLS allowed us to specify multiple certificate chains with multiple trust anchors involved? That would be better, but the TLS standards do not assume that possibility at all: only a single end-entity certificate is allowed, with optional additional ones supplementing the full chain of trust.

And there is another issue for me, probably the main one: taking into account that I am Russian, work for sanctioned companies, travel to sanctioned countries and regions and so on, I will definitely be banned from any US-related service. There have already been several occasions when ordinary programmers/users were banned from US-based services (like GitHub) just for visiting Iran or the Crimea region.

I stopped using CAcert because of their temporary technical issues, when for a time they were unable to renew my certificates. So I had no choice but to issue them manually. As a bonus, I moved to ECC instead of huge and slow RSA keys.

PS: Actually I have got my own CAs and several certificate chains: an ECDSA-based one, an EdDSA-based one, and a GOST R 34.10-2012-based one. My own godlighty web server serves a different certificate chain for TLS 1.3 connections, by looking at the offered SignatureSchemes extension.