IMO this is a policy change that can Break the Internet, as many archived/legacy sites on old-school certificates may not be able to afford the upfront tech or ongoing labor to transition from annual to effectively-monthly renewals, and will simply be shut down.
And, per other comments, this will make LE the only viable option to modernize, and thus much more of a central point of failure than before.
But Let's Encrypt is not responsible for this move, and did not vote on the ballot.
It will make ACME the only viable option. I believe there is a second free ACME CA and other CAs will likely adopt ACME if they want to stay relevant.
Ideally, this will take less ongoing labor than annual manual rotations, and I'd argue sites that can't handle this would have been likely to break at the next annual rotation anyways.
If they have certificates managed by hosters, the hosters will deal with it. If they don't, then someone was already paying for the renewal and handling the replacement on the server side, making it much more likely that it will be fixed.
I'm quite surprised the CA/Browser Forum went for this.
Nobody's paying for EV certificates now that browsers don't display the EV details. The only reason to pay for a certificate is if you're rotating certificates manually, and the 90-day expiry of Let's Encrypt certificates is a hassle.
If the CA/Browser Forum is forcing everyone to run ACME clients (or outsource to a managed provider like AWS or Cloudflare) doesn't that eliminate the last substantial reason to give money to a CA?
The CA/BF has a history of terrible decisions, for example 2020's "Baseline Requirements for the Issuance and Management of Publicly-Trusted Code Signing Certificates".
Microsoft voted for it, and now they are basically the only game in town for cloud signing that is affordable for individuals. The Forum needs voting representatives for software developers and end users or else the members will just keep enriching themselves at our expense.
In my case, I have to manage a portal for old TVs, and those don't accept the LE root certificate since it changed a couple of years ago. Unfortunately the vendor is unable to update the firmware with new certificates, so we're stuck.
Yeah that LE root certificate change broke our PROD for about 25% of traffic when it happened. Everyone acts like we control our client's cert chains. Clients don't look at the failure and think "our system is broken - we should upgrade". They look at the connection failure and think "this vendor is busted - might as well switch to someone who works". I switched away from LE to the other free ACME provider for our public-facing certs after that.
Chrome's root policy, and likely other root policies, are moving toward five-year rotation of roots and annual rotation of issuing CAs.
Cross-signing works fine for root rotation in most cases, unless you use IIS, then it becomes a fun problem.
> If roots rotate often then we build the muscle of making sure trust bundles can be updated
Five years is not enough incentive to push this change. A TV manufacturer can simply shrug and claim that the device is not under warranty anymore. We'll only end up with more bricked devices.
Free providers have rate limits, and this new lifetime restriction will also play into that, since there will be many more certificates to renew.
Large companies will keep using paid providers, partly for business continuity in case a free provider fails. I also don't know what kind of SLA you get with Let's Encrypt.
It is more complicated than "oh, it is free, let's move on".
Almost every modern "big company" I have worked for is leveraging Let's Encrypt in some capacity where appropriate; some definitely more than others. I don't think you're completely wrong, but I also think you're being a bit dismissive.
> And, per other comments, this will make LE the only viable option to modernize, and thus much more of a central point of failure than before.
Let's Encrypt isn't the only free ACME provider, you can take your pick from them, ZeroSSL, SSL.com, Google and Actalis, or several of them for redundancy. If you use Caddy that's even the default behavior - it tries ZeroSSL first and automatically falls back to Let's Encrypt if that fails for whatever reason.
> If you use Caddy that's even the default behavior - it tries ZeroSSL first and automatically falls back to Let's Encrypt if that fails for whatever reason.
Which makes sense, since ACME access to ZeroSSL must go through an account created by a manual registration step. Unless the landscape changed very recently, LE is still the only free ACME CA that does not require registration. Source: https://poshac.me/docs/v4/Guides/ACME-CA-Comparison/#acme-ca...
My bad, I misremembered the order. You're right that ZeroSSL requires credentials to get free certificates, but Caddy has special-case support for generating those credentials automatically provided you specify an email address in the config, so it's almost transparent to the user.
The ethical side is up to you, but in a strictly technical sense I don't think there's much that Google could do to intrude on your users' privacy as a result of them issuing your SSL certificate, even if they wanted to. AIUI the ACME protocol never lets the CA see the private key, only the public key, which is public by definition anyway.
A more realistic concern with using Google's public CA is that they may eventually get bored and shut it down, as they tend to do. It would be prudent to have a backup CA lined up.
It's more complicated than that. Apple (along with Google and Mozilla) basically held the CAs hostage. They started unilaterally reducing lifetimes. It was happening whether the CAB approved it or not.
The vote was more about whether the CAB would continue to be relevant. "Accept the reality, or browsers aren't even going to show up anymore".
Thanks for this history, I wasn't aware. It's an interesting point that if this is happening anyways by Apple's fiat, it's in the legacy CAs' interest to even further accelerate the mandatory timeline, so they can pivot to consulting services for their existing customers.
I do still feel that "that blog/publication that had immense cultural impact years ago, that was acquired/put on life support with annual certificate updates, will now be taken offline rather than migrated to a system that can support ACME automations, because the consultants charge more than the ad revenue" will be an unfortunate class of casualty. But that's progress, I suppose.
I think it's more broadly "browsers vs. CAs", I think the balance of power shifted sharply after the Symantec distrusting, and I think very few people on HN would prefer the status quo ante of that power shift if we laid out what it meant.
Today, people are complaining that automating certificate renewals is annoying (I'm sure it is). Before that, the complaint was that random US companies were simply buying and deploying their own root certificates and issuing certs for arbitrary strangers' domains, so their IT teams wouldn't have to update their desktop configurations.
It's interesting that this is pretty much identical to the WHATWG/W3C situation: there is theoretically a standards body, but in practice it's defunct; the browsers announce what they will ship, and the "standards body" can do nothing but meekly comply.
The difference being that there's at least a little bit of popular dissatisfaction with the status quo of browsers unilaterally dictating web standards, whereas no one came to the defense of CAs, since everybody hated them. A useful lesson that you need to do reputation management even if you're running a successful racket, since if people hate you enough they might not stick up for you even if someone comes for you "illegally".
Certificate rotation/renewal has been the biggest headache of my IT career. It’s always after the fact. It’s always a pain point. It’s always an argument with accounting over costs. It sucks. I’m glad ACME exists but man this whole thing is a cluster fuck.
Whole IT teams are just going to wash their hands of this and punt to a provider or their cloud IaaS.
Man, I agree. The whole thing sucks so much. We started building a centralized way to do this internally last year to get better visibility into renewals and expirations.
It's fine for fintechs and social accounts to require SSL, but do blogs really need certs? You know what blogs I'm reading from my DNS requests anyway. I doubt anyone is going to MITM my access to an art historian's personal website. There is zero need for security theater here.
All of these required, complex, constantly moving components mean we're beholden to larger tech companies and can't do things by ourselves anymore without increasing effort. It also makes it easier for central government to start revoking access once they put a thumb on cert issuance. Parties that the powers don't like can be pruned through cert revocation. In the future issuance may be limited to parties with state IDs.
And because mainstream tech is now incompatible with any other distribution method, you suddenly lose the ability to broadcast ideas if you fall out of compliance. The levers of 1984.
> I doubt anyone is going to MITM my access to an art historian's personal website.
But that is what ISPs did! Injecting (more) ads. Replacing ads with their own. Injecting JavaScript for all sorts of things. Like loading a more compressed version of a JPEG so you had to click an extra button to load the full thing. Removing the STARTTLS string from an SMTP connection. Early UMTS/3G Vodafone was especially horrendous.
I also remember "art" projects where you could change the DNS of a public/school PC and it would change news stories on spiegel.de and the likes.
I agree. Fortunately for blogs, we still have an option - make sure your website is accessible via HTTP / port 80. This has the extra advantage that your website will continue to work on older tech that doesn't support these SSL certs. It will even be accessible to retro hardware that couldn't attempt decoding SSL in the first place.
Of course I have modern laptops, but I still fire up my old Pismo PowerBook G3 occasionally to make sure my sites are still working on HTTP, accessible to old hardware, and rendering tolerably on old browsers.
It started being enforced only after major USA ISPs started injecting malware into every HTTP page. If it was a theoretical concern I might agree with you, but in reality, it actually happened which overrules any theoretical arguments. Also, PRISM.
PRISM works fine to recover HTTPS-protected communications. If anything NSA would be happier if every site they could use PRISM on used HTTPS, that's simply in keeping with NOBUS principles.
They collect it straight from the company after it's already been transmitted. It's not a wiretap, it's more akin to an automated subpoena enforcement.
> Next year, you’ll be able to opt-in to 45 day certificates for early adopters and testing via the tlsserver profile. In 2027, we’ll lower the default certificate lifetime to 64 days, and then to 45 in 2028.
The good news is that the CAs signed their own death warrant with this change. If switching to ACME is more or less mandatory, what purpose do paid certificates serve? Your options are to use LE, switch to non-CA-issued encryption, or drop encryption entirely.
Paid certs are valid for 1-year from the $$ CAs. LE certs are only good for ~3-4 months before they have to be reissued. If there's no easy way to do an automated ACME setup to handle the renewal, being able to defer that for a year is worth the $20 or $70 for a wildcard.
If paid certs drop in max validity period too, then yeah, there's zero reason to burn money on them.
Secure context is only required for features that are somehow privacy- or security-sensitive. Some notable features are on the list, but you can absolutely have a modern site that doesn't rely on any of these.
You assume it’s just the certs being purchased - and not support, SLAs, other related products, management platforms, private PKI and more. If all you do is public TLS, sure, that might be an issue.
Nginx and Apache are free, and both can be trivially automated with an ACME client such as certbot. Both can be used to set up a reverse proxy in front of legacy sites or applications.
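A rough sketch of what that can look like, assuming a Debian-ish box, a placeholder hostname legacy.example.com, and a legacy app listening on localhost:8080 (none of which come from the comment above):

# install nginx and certbot with its nginx integration
apt install nginx certbot python3-certbot-nginx

# plain-HTTP reverse proxy in front of the legacy app
cat > /etc/nginx/sites-enabled/legacy.conf <<'EOF'
server {
    listen 80;
    server_name legacy.example.com;
    location / { proxy_pass http://127.0.0.1:8080; }
}
EOF
nginx -s reload

# obtain a certificate, rewrite the vhost for HTTPS, and leave renewal to certbot's timer
certbot --nginx -d legacy.example.com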
This is not centralizing everything to Let's Encrypt; it's forcing everyone to use ACME, and many CAs support ACME (and those that don't probably will soon due to this change).
> IMO this is a policy change that can Break the Internet
Unfortunately, the people making these decisions simply do not care how they impact actual real world users. It happens over and over that browser makers dictate something that makes sense in a nice, pure theoretical context, but sucks ass for everyone stuck in the real world with all its complexities and shortcomings.
I have been saying since the beginning that we are centralizing all the power of the internet into one organization and that this is a bad thing, yet I get downvoted every time. One organization is going to have a say on whether or not you can have a website on the internet; how is this objectively a good thing?
Maybe you get downvoted because this isn't actually centralizing all the power of the internet in one organization, not because people don't have an issue with that.
The CA/Browser forum has massive power over the web whether you like it or not, because they make the browsers. And make no mistake, it's the browser representatives that are the most aggressive about tighter security and shorter certificate lives.
I've kind of had enough of unnecessary policy ratcheting. It's a problem in every industry: where a real solution is not possible or practical, the knob that can be tweaked is always the one that gets turned. Same issue with corporate compliance: I'm still rotating passwords, with 2FA, sometimes three or four factors for an environment, and no one can really justify it, except the fear that not doing more will create liability.
A bit off-topic, but I find this crazy. In basically every ecosystem now, you have to specifically go out of your way to turn on mandatory rotation.
It's been almost a decade since it's been explicitly advised against in every cybersec standard. Almost two since we've done the research to show how ill-advised mandatory rotations are.
PCI still recommends 90 day password changes. Luckily they've softened their stance to allow zero-trust to be used instead. They're not really equivalent controls, but clearly laid out as 'OR' in 8.3.9 regardless.
I think it's only a requirement if passwords are the sole factor, correct? Any other factor or zero-trust or risk-based authentication exempts you from the rotation. It's been awhile since I've looked at anything PCI.
But that would mean doing less, and that's by default bad. We must take action! Think of the children!
I tried at my workplace to get them to stop mandatory rotation when that research came out. My request was shot down without any attempt at justification. I don't know if it's fear of liability or if the cyber insurers are requiring it, but by gum we're going to rotate passwords until the sun burns out.
This was stated as a long-term goal long ago. The idea is that you should automate away certificate issuance and stop caring, and to eventually get lifetimes short enough that revocation is not necessary, because that's easier than trying to fix how broken revocation is.
The problem is when the automation fails, you're back to manual. And decreasing the period between updates means more chances for failure. I've been flamed by HN for admitting this, but I've never gotten automated L.E. certificate renewal to work reliably. Something always fails. Fortunately I just host a handful of hobby and club domains and personal E-mail, and don't rely on my domains for income. Now, I know it's been 90 days because one of my web sites fails or E-mail starts to complain about the certificate being bad, and I have to ssh into my VPS to muck around. This news seems to indicate that I get to babysit certbot even more frequently in the future.
Really? I've never had it fail. I simply ran the script provided by LE, it set everything up, and it renewed every time until I took the site down for unrelated (financial) reasons. Out of curiosity, when did you last use LE? Did you use the script they provided or a third-party package?
I set it up ages ago, maybe before they even had a script. My setup is dead simple: A crontab that runs monthly:
0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:
#!/bin/sh
certbot renew
service lighttpd restart
service exim4 restart
service dovecot restart
... and so on for all my services
That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
I don't know how old "letsencrypt-renew" is or what it does. But you run "modern" ACME clients daily; the actual renewal only kicks in with 30 days left, so if something doesn't work it retries at least 29 more times.
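For reference, a minimal sketch of that run-daily pattern with certbot, reusing the services from the script above (the hook path is made up; certbot only renews inside its window, and --deploy-hook only fires when a cert was actually replaced):

# crontab: try daily instead of monthly (distro certbot packages often ship a systemd timer that does this already)
0 2 * * * certbot renew --deploy-hook /usr/local/bin/reload-services

# /usr/local/bin/reload-services
#!/bin/sh
service lighttpd restart
service exim4 restart
service dovecot restart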
I haven't touched my OpenBSD (HTTP-01) acme-client in five years:
acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
Yes, same for me. Every few months some kind internet denizen points out to me that my certificate has lapsed, running it manually usually fixes it. LE software is pretty low quality, I've had multiple issues over the years some of which culminated in entire systems being overwritten by LE's broken python environment code.
I did manage to set it up and it has been working OK, but it has been a PITA. Also, for some reason they contact my server over HTTP, so I must open port 80 just to do the renewal.
Except this isn't really viable for any kind of internal certs, where random internal teams don't have access to modify the corporate DNS. TLS is already a horrible system to deal with for internal software, and browsers keep making it worse and worse.
Not to mention that the WEBPKI has made it completely unviable to deliver any kind of consumer software as an offline personal web server, since people are not going to be buying their own DNS domains just to get their browser to stop complaining that accessing local software is insecure. So, you either teach your users to ignore insecure browser warnings, or you tie the server to some kind of online subscription that you manage and generate fake certificates for your customer's private IPs just to get the browsers to shut up.
Enforcing an arbitrary mitigation to a problem the industry does not know how to solve doesn't make it a good solution. It's just a solution the corporate world prefers.
It’s a stupid policy. To solve the non-existent problem with certificates, we are pushing the problem to demonstrating that we have access to a DNS registrar’s service portal.
It also ignores the real world as the CA/Browser forum admits they don't understand how certificates are actually used in the real world. They're just breaking shit to make the world a worse place.
They are calibrated for organizations/users that have higher consequences for mis-issuance and revocation delay than someone’s holiday blog, but I don’t think they’re behaving selfishly or irrationally in this instance. There are meaningful security benefits to users if certificate lifetimes are short and revocation lists are short, and for the most part public PKI is only as strong as the weakest CA.
OCSP (with stapling) was an attempt to get these benefits with less disruption, but it failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.
> They're just breaking shit to make the world a worse place.
Well, it's the people who want to MITM that started it, a lot of effort has been spent on a red queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones you could avoid spending a lot on security.
That’s why the HTTP-01 challenge exists - it’s perfect for public single-server deployments. If you’re doing something substantial enough to need a load balancer, arranging the DNS updates (or centralizing HTTP-01 handling) is going to be the least of your worries.
Holding public PKI advancements hostage so that businesses can be lazy about their intranet services is a bad tradeoff for the vast majority of people that rely on public TLS.
and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
There are more things on the internet than web servers.
You might say "use DNS-01", but that's reductive: I'm letting any node control my entire domain (and many of my registrars don't even allow API access to records, let alone an API key that's limited to a single record; even cloud providers don't have that).
I don’t even think mail servers work well with the letsencrypt model unless its a single server for everything without redundancies.
I guess nobody runs those anymore though, and, I can see why.
I've operated things on the web that didn't use HTTP but used public PKI (most recently, WebTransport). But those services are ultimately guests in the house of public PKI, which is mostly attacked by people trying to skim financial information going over public HTTP. Nobody made IRC use public PKI for server verification, and I don't know why we'd expect what is now an effectively free CA service to hold itself back for any edge case that piggybacks on it.
> and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
The certificate you get for the domain can be used for whatever the client accepts it for - the HTTP part only matters for the ACME provider. So you could point port 80 to an ACME daemon and serve only the challenge from there. But this is not necessarily a great solution, depending on what your routing looks like, because you need to serve the same challenge response for any request to that port.
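A hedged sketch of that with certbot's standalone responder, using a placeholder hostname and reload command (port 80 only needs to reach this box while the challenge runs):

certbot certonly --standalone --preferred-challenges http -d irc.example.com \
  --deploy-hook "systemctl reload inspircd"   # or whatever reloads your ircd's TLS cert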
> You might say “use DNS-01”; but thats reductive- I’m letting any node control my entire domain (and many of my registrars don’t even allow API access to records- let alone an API key thats limited to a single record; even cloud providers dont have that).
The server using the certificate doesn't have to be the one going through the ACME flow, and once you have multiple nodes it's often better that it isn't. It's very rare for even highly sophisticated users of ACME to actually provision one certificate per server.
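As a sketch of that pattern (tool choice, provider, hostnames and paths are all just examples, not anything from the thread): one central box does DNS-01 for a wildcard with lego, then pushes the result to the serving nodes.

# on the issuance box; export whatever credentials your DNS provider's lego integration expects first
lego --email ops@example.com --dns cloudflare --domains '*.example.com' run

# copy key+cert (lego writes under ./.lego/certificates by default) to each node and reload
for node in irc1 irc2 mail1; do
  rsync -a .lego/certificates/ "$node":/etc/ssl/example/
  ssh "$node" systemctl reload inspircd
done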
Are we pretending browsers aren’t a universal app delivery platform, fueling internal corporate tools and hobby projects alike?
Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP over TLS; and TLS secures far more, from APIs and email to VPNs, IoT, and non-browser endpoints? Both are bunk; take your pick.
Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
> Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
It does none of these. Putting more elbow grease into your ACME setup with existing, open source tools solves this for basically any use case where you control the server. If you're operating something from a vendor you may be screwed, but if I had a vote I'd vote that we shouldn't ossify public PKI forever to support the business models of vendors that don't like to update things (and refuse to provide an API to set the server certificate programmatically, which also solves this problem).
> Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
Yes, but unironically. If rotating certs is a once a year process and the guy who knew how to do it has since quit, how quickly is your org going to rotate those certs in the event of a compromise? Most likely some random service everyone forgot about will still be using the compromised certificate until it expires.
> And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.
Yeah, the best/worst part of this is that nobody was stopping the 'enlightened' CA/Browser Forum from issuing shorter certificates for THEIR fleets, but no, we couldn't be allowed to make our own decisions about how we best saw to the security of the communications channel between ourselves and our users. We just weren't 'adult' enough.
The ignorance about browser lock-in too, is rad.
I guess we could always, as they say, create a whole browser, from scratch to obviate the issue, one with sane limitations on certificate lifetimes.
First, one of the purposes of shorter certificates is to make revocation easier in the case of misissuance. Just having certificates issued to you be shorter-lived doesn't address this, because the attacker can ask for a longer-lived certificate.
Second, creating a new browser wouldn't address the issue because sites need to have their certificates be acceptable to basically every browser, and so as long as a big fraction of the browser market (e.g., Chrome) insists on certificates being shorter-lived and will reject certificates with longer lifetimes, sites will need to get short-lived certificates, even if some other browser would accept longer lifetimes.
I always felt like #1 would have better been served by something like RPKI in the BGP world. I.e. rather than say "some people have a need to handle ${CASE} so that is the baseline security requirement for everyone" you say "here is a common infrastructure for specifying exactly how you want your internet resources to be able to be used". In the case of BGP that turned into things like "AS 42 can originate 1.0.0.0/22 with maxlength of /23" and now if you get hijacked/spoofed/your BGP peering password leaks/etc it can result in nothing bad happening because of your RPKI config.
The same in web certs that could have been something like "domain.xyz can request non-wildcard certs for up to 10 days validity". Where I think certs fell apart with it is they placed all the eggs in client side revocation lists and then that failure fell to the admins to deal with collectively while the issuers sat back.
For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
> "domain.xyz can request non-wildcard certs for up to 10 days validity"?
You could be proposing two things here:
(1) Something like CAA that told CAs how to behave.
(2) Some set of constraints that would be enforced at the client.
CAA does help some, but if you're concerned about misissuance you need to be concerned about compromise of the CA (this is also an issue for certificates issued by the CA the site actually uses, btw). The problem with constraints at the browser is that they need to be delivered to the browser in some trustworthy fashion, but the root of trust in this case is the CA. The situation with RPKI is different because it's a more centralized trust infrastructure.
> For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
The RPKI-alike is more akin to #1, but avoids the step of trying to bother trusting compromised CAs. I.e., if a CA is compromised you revoke and regenerate CA's root keys and that's what gets distributed rather than rely on individual revocation checks for each known questionable key or just sitting back for 45 days (or whatever period) to wait for anything bad to expire.
> I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
Same reasoning between us I think, just a difference in interpreting what it was saying. Kind of like sarcasm - a "yes, you can do it just as they say" which in reality highlights "no, you can't actually do _it_ though" type point. You read it as solely the former, I read it as highlighting the latter. Maybe GP meant something else entirely :).
That said, I'm not sure I 100% agree it's really related to the strictest major browser does alone though. E.g. if Firefox set the limit to 7 days then I'd bet people started using other browsers vs all sites began rotating certs every 7 days. If some browsers did and some didn't it'd depend who and how much share etc. That's one of the (many) reasons the browser makers are all involved - to make sure they don't get stuck as the odd one out about a policy change.
Thanks for Let's Encrypt btw. Irks about the renewal squeeze aside, I still think it was a net positive move for the web.
I don't feel the tradeoff for trying to to fix the idea of a rogue CA misissuing is addressed by the shorter life either though, the tradeoff isn't worth it.
The best assessment of the whole CA problem can be summed up the best by Moxie,
https://moxie.org/2011/04/11/ssl-and-the-future-of-authentic...
And, well, the create-a-browser thing was a joke; it's what I've seen suggested for those who don't like the new rules.
I just post the password semi-publicly on some scratchpad (like maybe a secret gist that's always open in browser or for 2fa a custom web page with generator built in) if any of those policies get too annoying. Bringing number of factors back to one and bypassing 'cant use previous 300000' passwords bs. Works every time.
These short certificate lifetimes make Let's Encrypt a central point of failure for much of the Internet. That's a concern. Failure may be technical or political, too.
There are other free ACME-based providers, so switching should be fairly painless if needed. (I guess if you've issued CAA records or similar, you may need some manual intervention.)
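With certbot, pointing at a different CA is mostly the --server flag plus, for CAs that require an account, the External Account Binding credentials they give you. A hedged example against ZeroSSL (the directory URL is theirs as documented at the time of writing; the EAB values come from their dashboard):

certbot certonly --standalone -d example.com \
  --server https://acme.zerossl.com/v2/DV90 \
  --eab-kid YOUR_EAB_KID --eab-hmac-key YOUR_EAB_HMAC_KEY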
You can have more than one CAA record, so it should be possible to configure backup certificate authorities. It's probably a good idea to do that for important sites.
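For example, a zone allowing two CAs plus an incident-report address might carry records like these (the issue values must be each CA's documented CAA identifier; letsencrypt.org and pki.goog are just examples):

example.com.  IN CAA 0 issue "letsencrypt.org"
example.com.  IN CAA 0 issue "pki.goog"
example.com.  IN CAA 0 iodef "mailto:security@example.com"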
I don't actually think Cloudflare runs an ACME Certificate Authority. They just partner with LetsEncrypt? Edit: Looks like they don't run any CA, they just delegate out to a bunch of others https://developers.cloudflare.com/ssl/reference/certificate-...
Doesn't matter. This is a push by the CA/Browser Forum. Google, Mozilla, and all the CAs got together and said, "hey, what if we just made certificates shorter because we're too stupid to figure out a revocation mechanism that actually works other than expiration." They've tried this shit before, but saner heads prevailed. This time they did not.
Can you explain how shorter certificate lifetimes make LE more of a single point of failure? I can squint and see an argument for CA diversity; I struggle to see how reducing certificate lifetimes increases CA centralization.
Shorter lifetimes means more renewal events, which means more individual occasions in which LE (or whatever other cert authority) simply must be available before sites start falling off the internet for lack of ability to renew in time.
We're not quite there yet, but the logical progression of shorter and shorter certificate lifetimes to obviate the problems related to revocation lists would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet", alongside AWS, Cloudflare, and friends. With cert lifetimes measured in years or months, the CA can have a bad day and as long as you didn't wait until the last possible minute to renew, you're unimpacted. With cert lifetimes trending towards days or less, now your CA really does need institutionally important levels of high availability.
It's less that LE becomes more of a single point of failure than it is that the concept of ACME CAs in general joins the list of critically available things required to keep a site online.
> would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet"
I think that particular ship sailed a decade ago!
> Its less that LE becomes more of a single point of failure than it is that the concept of ACME CAs in general join the list of critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs are critical infrastructure, and that there's latent risk whenever infrastructure becomes critical. I just think that risk is justified, and that LE in particular is no more or less of a SPOF with these policy changes.
Because when they eventually get their wet dream of 7-day renewals, everyone relies upon them once a week. LE being down for 48 hours could take out a big chunk of the Internet.
Certificates have historically been a "fire and forget" but constant re-issuance will make LE as important as DNS and web hosting.
The longer certificates were valid, the more often we'd have breakage due to admins forgetting the renewal, or forgetting how to install the new certificates. It was a daily occurrence, often with hours or days of downtime.
Today, it's so rare I don't even remember when I last encountered an expired certificate. And I'm pretty sure it's not because of better observability...
Oh for sure. This is stupid policy by an organization with no accountability to anyone, that represents the interests of parties with their own agendas.
I don't think it's that venal: the CABF holds CAs accountable, largely through the incentives of browsers (which in turn are the incentives of users, mediated by what Google, Microsoft, Apple, and Mozilla think is worth their time). That last mediation is perhaps not ideal, but it's also not a black hole of accountability.
I'm seeing this hot take a lot but it doesn't make sense. Are people worried that LE is going to have a 45-day outage or something? ACME is an open standard with other implementations, so I'm having trouble seeing the political central point of failure too.
It's okay for something to be a good thing and to celebrate it. We don't have to frown about everything.
Yeah, don't the ACME client defaults have it trying to renew the cert when it has like 30% of its lifetime left? Which means the CA would have to be down for days or weeks for it to impact production.
Oh, and you would definitely know about this outage, because you would hear about it in the news, and from the monitoring you already have set up to yell at you when your cert is about to expire (you already have that, right? Right?). And you can STILL trivially switch to another CA that supports ACME.
Agreed. We need a second source, preferably located in the EU. They could share operational code/protocols/etc. I.e. during peace time they could collaborate.
The EU started building the Galileo GNSS ("GPS") in 2008 as a backup in case the US turned hostile. And now look where we are in 2025 with the US president openly talking about taking Greenland. Wise move. It seemed like a gigantic waste back then. It was really, really expensive.
Then lots of European countries ordered F35s from Lockheed Martin. What an own goal. This includes Denmark/Greenland.
I am at the point of looking forward to it. The CA/B is so unhinged and so unaccountable and the appetite to fix it is so small, a broad scale collapse of the Internet caused by the CA/B's incompetence is looking like the only way to finally end their regime.
Given certificate issuance basically ended up being "do you control the DNS for this domain", I feel like all of it could've been so much simpler if it was designed like that from day one.
While I love Let's Encrypt it feels so silly to use a third party to verify I can generate a Cloudflare API key (even .well-known is effectively "can you run a webserver on said dns entry").
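It really is that literal in practice. With the certbot Cloudflare plugin, for instance, "proving control of the domain" boils down to holding an API token that can edit its DNS (paths and domain below are placeholders):

# /etc/letsencrypt/cloudflare.ini contains: dns_cloudflare_api_token = <token scoped to the zone>
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d example.com -d '*.example.com'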
What is the point of shortening the life time to 45 days?
Why not 60 seconds? That's way more secure. Everything is automated after all, so why risk compromised certificate for such a long period, like 45 days?
Or how about we issue a new certificate per request? That should remove most of the risks.
While I believe you're posting questions out of frustration rather than genuine curiosity, I think it's worth pointing out two things.
One: most of the reasoning is available for reading. Lots of discussion was had. If you're actually curious, I would suggest starting with the CA/B mailing group. Some conversation is in the MDSP (Mozilla's dev-security-policy) archives as well.
Two: it's good to remember that the competing interest in almost every security-related conversation is the balance between security and usability. Obviously, a 60-second lifetime is unusable. The goal is to get the overlap between secure and usable to be as big as possible.
Our internal CAs run 72 hour TTLs, because we figured "why not" 5-6 years ago, and now everyone is too stubborn to stop. You'd be surprised how much software is bad at handling certificates well.
It ranges from old systems like libpq, which to my knowledge just loads certs on connection creation, so it works, down to some JS or Java libraries that just read certs into main memory on startup and never deal with them again. Or software folding a feature request like "reload certs on SIGHUP" together with "oh, and transparently transfer listen sockets between listener threads on SIGHUP"; the latter is hard, so neither ever happens.
45 days is going to be a huge pain for legacy systems. Less than 2 weeks is a huge pain even with modern frameworks. Even Spring didn't do it right until a year or two ago and we had to keep in-house hacks around.
You know that the original idea was to drop it to 17 days?! And I think that is still on the books.
To be honest, the issue is not the time frame; you can literally have certs being made every day, and there are plenty of ways to automate this. The issue is that the CT log providers are going to go ape!
Right now, we are at 24B certificates going back to around 2017, when the logs really started to grow. There are about 4.5B unique domains; if we reduce this to the number of certs per domain, it's about 2.3B domain certs we currently need.
Now do 2.3B x 8 renewals ... that is "only" about 18.4B new certs in CT logs per year. Given how popular LE is, we can assume the actual growth is maybe 10B per year (those that still use 1-year or multi-year certs, plus the tons of LE-generated ones).
Remember, I said the total going back to 2017 is currently only 24B ... Yeah, we are going to almost double the amount of certs in CT logs every two years.
And that assumes LE does not move to 17 days, because then I am sure we are doubling the current amount each year.
Good luck as a CT log provider... FYI, a typical certificate takes about 4.5 KB to store, so we are talking 45 TB of space needed per year, and 100 TB+ if they really drop it down to 17 days. And we did not even talk about databases, traffic to the CT logs, etc.
It's broken, Jim ... Now imagine, for fun, a daily cert ... 1700 TB per year in CT log storage?
A new system will come from Google etc because its going to become unaffordable, even for those companies.
Have you heard the good news of Merkle Tree Certificates[1,2]? They include the transparency cryptography directly in the certificate issuance pipeline. This has multiple benefits, one of them being way smaller transparency logs.
Tons of information for research, hackers, you name it ... It shows a history of domains, you can find hidden subdomains, still active, revoked etc ...
Do not forget that we had insanely long certificates not that long ago.
The main issue is that currently you cannot easily revoke certs, so you're almost forced to keep a history of certs, and of when one has been revoked, in the CT logs.
In theory, if everybody is forced to change certs every 47 days, sure, you can invalidate them and permanently remove them. But it requires a ton of automation on the user side. There is still way too much software that relies on a single-year or multi-year certificate that is manually added to it. It's also why the phase-in down to 47 days is spread over a four-year period.
And it still does not change the massive increase in requests to check validation that hits CT log providers.
> Tons of information for research, hackers, you name it ... It shows a history of domains, you can find hidden subdomains, still active, revoked etc ...
You can store that kind of information in a lot less space. It doesn't need to be duplicated with each renewal.
> The main issue is that currently you can not easily revoke certs, so your almost forced to keep a history of certs, and when one has been revoked in the CT logs.
This is based on the number of active certificates, which has almost no connection with how long they last.
> There is still way too much software that relies on a single year or multi year certificated that is manually added to it.
Hopefully less and less going forward.
> And it still does not change the massive increased in requests to check validation, that hits CT logs providers.
I'm not really sure how that works but yeah someone needs to pay for that.
There was talk in the CA/Browser Forum regarding certificate expirations, and they looked at potentially 17 days. The 45 days was a compromise, but 17 days was never taken off the table and is still considered a future option.
I know this is a good thing, but I've struggled a lot on systems that don't have good/reliable NTP time updates.
Also, at some point in the lifetime graph, you start getting diminishing returns. There aren't many scenarios where you get your private keys stolen, but the bad guys couldn't maintain access for more than a couple of weeks.
In my humble opinion, if this is the direction the CA/B and other self-appointed leaders want to go, it is time to rethink the way PKI works. Maybe we should stop thinking of Let's Encrypt as a CA and instead let it (and similar services) function more as a real-time trust facilitator? If all they're checking for is server control, then maybe a near-real-time protocol to validate that, issue a cert, and have the webserver use it immediately is ideal? Lots of things need to change for this to work, of course, but it is practical.
Not so long ago, very short DNS TTLs were met with similar apprehension. Perhaps the cert expiry should be tied to the DNS TTL, with the server renewing much more frequently (e.g., if the TTL is 1 hour, the server renews every 15 minutes).
Point being, the current system of doing things might not be the best place to experiment with low expiry lifetimes, but new ways of doing things that can make this work could be engineered.
Not precise, but for example if it's been over a day since the last time update, i start getting errors on various sites, including virtually every site behind cloudflare. (assuming you're referring to the initial issue I mentioned).
One of the setups that gives me issues is machines that are resumed from a historical snapshot and start doing things immediately; if the NTP date hasn't been updated since the last snapshot you start getting issues (despite snapshots being updated after every daily run). Most sites won't break (especially with a 24h window, although anything longer always has issues), but enough sites change their certs so frequently now that it's a constant issue.
Even with a 10 year cert, if you access at the right time you'll have issues, the difference now is it isn't a once in a 10 year event, but once in every few days some times.
Perhaps if TLS clients requesting a time update to the OS was a standardized thing and if NTP client daemons supported that method it would be a lot less painful?
in my case, it's more of a case of "the system still thinks it's yesterday, until the ntp daemon updates the time a minute or five after resuming". Being behind by a day wasn't a huge deal before these really short cert life spans.
This isn't something I've seen; are you running systems w/o an onboard RTC, or with ntpdate doing periodic update, etc etc?
The closest I've gotten to this would be something like a Raspberry Pi, but even then NTP is pretty snappy as soon as there's network access, and until there's network access I'm not hitting any TLS certs.
Insane that they're dropping client certificates for authentication. Reading the linked post, it's because Google wants them to be separate PKIs and forced the change in their root program.
They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.
The certificates serve different purposes. It might feel like a symmetric arrangement, but it isn't. On the whole I think implementing this split is sensible.
Is that a temporary situation? Is it that big a deal to implement a separate set of roots for client certs? Or do you mean that the entire infrastructure is supposed to be duplicated?
It's a good change. I've seen at least one company that had misconfigured mTLS to accept any client certificate signed by a trusted CA, rather than just by the internal corporate CA.
We wanted TLS everywhere for privacy. What we ended up with is every site needs a constant blessing from some semi-centralized authority to remain accessible. Every site is “dead by default”.
This feels in many respects worse than what we had with plain HTTP, and we can’t even go back now.
If you mean that sites with expired certificates may technically be accessible if one jumps through enough hoops and ignores scary warnings - yes, of course you’re right.
Maybe this will just teach everyone to click through SSL warnings the same way they click through GDPR popups - for better or worse.
I'm halfway tempted to go back to HTTP. You don't do breaking changes like this without giving your 'customers' a chance to stick to their ways. I have more than enough on my plate already and don't need the likes of letsencrypt to give me more work.
>If you’re requesting certificates from our tlsserver or shortlived profiles, you’ll begin to see certificates which come from the Generation Y hierarchy this week. This switch will also mark the opt-in general availability of short-lived certificates from Let’s Encrypt, including support for IP Addresses on certificates.
Does that mean IP certificates will be generally available some time this week?
Now all servers can participate in Encrypted Client Hello for enhanced user privacy: if clients open TLS connections with ECH where the server IP is used in the ClientHelloOuter and the target SNI domain is in the encrypted ClientHelloInner, then eavesdroppers won't be able to read which domain the user is connecting to.
This vision still needs several more developments to land before it actually results in an increment in user privacy, but they are possible:
1. User agents can somehow know they can connect to a host with IP SNI and ECH (a DNS record?)
2. User agents are modified to actually do this
3. User agents use encrypted DNS to look up the domain
4. Server does not combine its IP cert with its other domain certs (SAN)
The decrease in lifetimes has had a fair bit of discussion, but I haven't seen a lot of discussion about the mTLS changes. Is anyone else running into issues there? We'll be hit by it, as we use mTLS as one of several methods for our customers to authenticate the webhooks we deliver them, but haven't determined what we'll be doing yet.
The certificate offered from server to client and the certificate the server expects from the client do not need to share a CA.
This only affects you if you have a server set up to verify mTLS clients against the Let's Encrypt root certificate(s), or maybe every trusted CA on the system. You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificates.
You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
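A minimal sketch of that with openssl (names and lifetimes are arbitrary): a private CA used only for webhook client auth, completely separate from whichever public CA the server's own certificate comes from.

# one-time: private client CA (keep ca.key offline)
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -subj "/CN=Example Webhook Client CA" -keyout ca.key -out ca.crt

# per client: key + CSR, signed by the private CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=customer-42" \
  -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out client.crt

The server then verifies clients against ca.crt only (in nginx, ssl_client_certificate plus ssl_verify_client on), so none of the Let's Encrypt changes touch it.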
> The certificate offered from server to client and the certificate the server expects from the client do not need to share a CA.
Sure, but it seems like all the CAs are stopping issuing certificates with the client EKU. At least Let's Encrypt and DigiCert, since by the Google requirement they can't issue those alongside normal server certs, and I guess there's not enough market to run a hierarchy just for that.
> You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificate
Sure, what's wrong with that?
> You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
That lets the client verify the host, but the server doesn't know where the connection is coming from. Generating mTLS pairs means pinning and coordinated rotation and all that. Currently servers can simply keep an up to date CA store (which is common and easy), and check the subject name, freeing the client to easily rotate their cert.
We use locally generated certs for mTLS, with different lifetimes. Relying on public CAs for chains of trust like that makes me nervous, especially if something gets revoked.
I just checked, and one of my web servers has an uptime of 845 days, last login May 2 of last year. Based on the shell history I don’t believe I’ve touched the letsencrypt config on it since I set it up in 2020 ish.
Please accept the new T&C's within 90 days or your account will be terminated...
2 factor Auth now compulsory.
Please validate your identity with our third party identity provider so we can confirm you are not on the sanctions list. If you do not, your account will be blocked.
Etc etc. Every third party service requires at least a little work and brainspace.
You CNAME the acme challenge DNS to us, we manage all your certificates for you. We expose an API and agents to push certificates everywhere you need them, and then do real-time monitoring that the correct certificate is running on the webserver. End-to-end auditability.
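The delegation itself is a single record in the customer's zone; the target below is a hypothetical provider-controlled name:

_acme-challenge.example.com.  IN CNAME  example.com.acme-delegation.provider.example.

Because CAs follow CNAMEs when validating DNS-01, the provider can answer challenges without ever holding credentials for the rest of the customer's DNS.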
After some failures with Let's Encrypt (almost certainly my fault w.r.t. auto renewal), I switched to free 15-year Cloudflare certs instead and I'm very happy not worrying about it any more.
That was a smart move but those days are over. Your existing 15 year certs will continue to be accepted until they expire but then you'll have to get a new cert and be in the same 45-day-churn boat the rest of us are.
The cloudflare 15 year cert is one they issue privately and that they only use to authenticate your origin. Cloudflare manages the certificates for connections coming from the web.
I wouldn't be surprised if eventually clients just start rejecting certificates that are too long. Imagine if someone bought a domain, but a previous owner is holding a certificate for it that lasts 15 years.
At least under the new scheme if you let the domain sit for 45 days you'll know only you hold valid certificates for it.
Wondering: Is there a good tool for centralized ACME cert management when one runs a large infrastructure, highly available, multi location where it makes little sense to run the ACME client directly on each instance or location?
The other CAs with a free tier that I'm aware of (zerossl, ssl.com, actalis, google trust, cloudflare) require you to have an account (which means you're at their mercy), and most of them limit the number of free certs you can get to a very small number and don't offer free wildcard certs at all.
Let's Encrypt could easily refuse to issue a certificate for a certain domain, even if you don't have a registered account. I don't see much difference.
AWS Certificate Manager manages this all for you via DNS validation.
Granted, you're locked into their ecosystem, can't export the private key, etc., so it's FAR from a perfect solution here, but I've actually been pretty impressed with the product from a "I need to run my personal website and don't want to have to care about certificates" perspective. Granted, you're paying for the cert, just not directly.
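For what it's worth, the whole flow is a couple of commands (the domain is a placeholder); once the validation CNAMEs are in place, ACM renews on its own, and the cert can only be attached to AWS-managed endpoints like an ALB or CloudFront:

aws acm request-certificate --domain-name example.com \
  --subject-alternative-names '*.example.com' --validation-method DNS
# prints a CertificateArn; describe-certificate shows the CNAME records to add for validation
aws acm describe-certificate --certificate-arn <arn-from-request-above>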
Not sure why they are kicking out TLS client certs. I understand kicking them off the default profile (they REALLY had no place there; not sure why they were there in the first place), but providing no way to get one at all is a bit silly.
Just curious… I’ve seen this X1/X2 convention before for CA roots. Does anyone know the origin or rationale for this?
Now we have a “Y” generation showing up, but it seems like whoever thought of “X” didn’t anticipate more than three generations, or they would have used A1/A2.
I am not sure how I feel about this solution. It is already painful to deal with certs on every single piece of IT equipment. Unless you create and manage your own CA, which is an extra burden, what is the point of this? This will only create more janky scripts and annoyances for very little benefit.
What's next? Enforcing email signing with SMIME or PGP?
I used to be knee deep in PKI stuff, now I hardly pay attention.
Two quick questions:
1 - Are there any TLS libraries that enable warnings when certs are nearing expiration?
2 - Are there any extensions in the works (or previous failed attempts) for TLS to have the client validate the next planned certificate and signal both ends when that fails?
The whole thing is very silly security wise anyway.
Okay, so your cert leaked. Will having it leaked for 1.5 months be substantially less dangerous than 90 days? Nope, you're fucked from day one, and it's still massively worse than "a browser asynchronously checks whether a site's cert has been revoked".
Let's Encrypt, you're not even a for-profit business; there's nobody you need to shield the blow from. Just say "we're reducing certificate lifetimes to comply with CA/Browser Forum rules". You don't need to do the cowardly "replace lower with change" in the headline thing.
That does not make any sense. Plenty of things on the internet are open source / non-profit, yet they affect us a lot. Of course it's good to give people relying on your stuff a heads-up, etc.
It's "let's not take your picture inside of the office because everyone hates the inside of offices. let's take your picture outside instead, near the office, but not featuring the office. oh, that tree over there is nice, but darn, the lighting underneath its branches isn't great. hey, that hedge over there reads great in light test and it works with what you're wearing, so, yeah, that'll do just fine."
This isn't LE's decision: a 47 day max was voted on by the CA/Browser Forum.
https://www.digicert.com/blog/tls-certificate-lifetimes-will...
https://cabforum.org/2025/04/11/ballot-sc081v3-introduce-sch...
https://groups.google.com/a/groups.cabforum.org/g/servercert... - public votes of all members, which were unanimously Yes or Abstain.
Seems to me CAs have intermediate certificates and can rotate those; there's not much upside to rotating the root certificates, and lots of downsides.
1. These might need to happen as emergencies if something bad happens
2. If roots rotate often then we build the muscle of making sure trust bundles can be updated
I think the weird cadence at which roots are rotated today is the real root cause of broken devices, and we need to stop the bleeding at some point.
If the vendor is really unable to update, then it's at best negligence when designing the product, and at worst -- planned obsolescence.
Nothing stays the same forever; software is never done. It's absurd to pretend otherwise.
The CA folks and the Browser folks may have had differences of opinion.
I expect they will introduce new, "more secure", proprietary methods, and ride the vendor lock-in until the paid certificate industry's death.
Let's Encrypt isn't the only free ACME provider, you can take your pick from them, ZeroSSL, SSL.com, Google and Actalis, or several of them for redundancy. If you use Caddy that's even the default behavior - it tries ZeroSSL first and automatically falls back to Let's Encrypt if that fails for whatever reason.
No, that's false. It's the other way around.
“If Caddy cannot get a certificate from Let's Encrypt, it will try with ZeroSSL”. Source: https://caddyserver.com/docs/automatic-https#issuer-fallback
Which makes sense, since the ACME access to ZeroSSL must go through an account created by a manual registration step. Unless the landscape changed very recently, LE is still the only free ACME that does not require registration. Source: https://poshac.me/docs/v4/Guides/ACME-CA-Comparison/#acme-ca...
https://caddy.community/t/using-zerossls-acme-endpoint/9406
Correction: the default behavior is to use Let's Encrypt alone, but if you provide an email then it's Let's Encrypt with fallback to ZeroSSL.
Oh, on LE the Rate Limit Adjustment Request forms (the contractual things, if that's what they are?) do not load: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...
ZeroSSL, SSL.com and Actalis offer paid services on top of their basic free certificates.
Google is Google.
So your "free" ssl certs are provided by surveillance capitalism, and paid for with your privacy (and probably your website user's privacy too)?
A more realistic concern with using Google's public CA is that they may eventually get bored and shut it down, as they tend to do. It would be prudent to have a backup CA lined up.
That's not really how ssl certs work - google isn't getting any information they wouldn't have otherwise had by issuing the ssl cert.
The vote was more about whether the CAB would continue to be relevant. "Accept the reality, or browsers aren't even going to show up anymore".
I wrote a bunch about this recently: https://www.certkit.io/blog/47-day-certificate-ultimatum
I do still feel that "that blog/publication that had immense cultural impact years ago, that was acquired/put on life support with annual certificate updates, will now be taken offline rather than migrated to a system that can support ACME automations, because the consultants charge more than the ad revenue" will be an unfortunate class of casualty. But that's progress, I suppose.
Today, people are complaining that automating certificate renewals is annoying (I'm sure it is). Before that, the complaint was that random US companies were simply buying and deploying their own root certificates, issuing certs for arbitrary strangers' domains, so their IT teams wouldn't have to update their desktop configurations.
Things are better now.
Which AI did you use for writing it? It's pretty good.
The difference being that there's at least a little bit of popular dissatisfaction with the status quo of browsers unilaterally dictating web standards, whereas no one came to the defense of CAs, since everybody hated them. A useful lesson that you need to do reputation management even if you're running a successful racket, since if people hate you enough they might not stick up for you even if someone comes for you "illegally".
Whole IT teams are just going to wash their hands of this and punt to a provider or their cloud IaaS.
We're doing a beta of it for some other groups now. https://www.certkit.io/
All of these required, complex, constantly moving components mean we're beholden to larger tech companies and can't do things by ourselves anymore without increasing effort. It also makes it easier for central government to start revoking access once they put a thumb on cert issuance. Parties that the powers don't like can be pruned through cert revocation. In the future issuance may be limited to parties with state IDs.
And because mainstream tech is now incompatible with any other distribution method, you suddenly lose the ability to broadcast ideas if you fall out of compliance. The levers of 1984.
But that is what ISPs did! Injecting (more) ads. Replacing ads with their own. Injecting Javascript for all sorts of things. Like loading a more compressed version of a JPEG, and you had to click an extra button to load the full thing. Removing the STARTTLS string from an SMTP connection. Early UMTS/3G Vodafone was especially horrendous.
I also remember "art" projects where you could change the DNS of a public/school PC and it would change news stories on spiegel.de and the likes.
Of course I have modern laptops, but I still fire up my old Pismo PowerBook G3 occasionally to make sure my sites are still working on HTTP, accessible to old hardware, and rendering tolerably on old browsers.
PRISM works fine to recover HTTPS-protected communications. If anything NSA would be happier if every site they could use PRISM on used HTTPS, that's simply in keeping with NOBUS principles.
Source:
They collect it straight from the company after it's already been transmitted. It's not a wiretap, it's more akin to an automated subpoena enforcement.
> Next year, you’ll be able to opt-in to 45 day certificates for early adopters and testing via the tlsserver profile. In 2027, we’ll lower the default certificate lifetime to 64 days, and then to 45 in 2028.
If paid certs drop in max validity period, then yeah, zero reason to burn money for no reason.
This is not centralizing everything to Let's Encrypt. It's forcing everyone to use ACME, and many CAs support ACME (and those that don't probably will soon due to this change).
Unfortunately, the people making these decisions simply do not care how they impact actual real world users. It happens over and over that browser makers dictate something that makes sense in a nice, pure theoretical context, but sucks ass for everyone stuck in the real world with all its complexities and shortcomings.
A bit off-topic, but I find this crazy. In basically every ecosystem now, you have to specifically go out of your way to turn on mandatory rotation.
It's been almost a decade since it's been explicitly advised against in every cybersec standard. Almost two since we've done the research to show how ill-advised mandatory rotations are.
In any case, all my homies hate PCI.
I tried at my workplace to get them to stop mandatory rotation when that research came out. My request was shot down without any attempt at justification. I don't know if it's fear of liability or if the cyber insurers are requiring it, but by gum we're going to rotate passwords until the sun burns out.
That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
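For what it's worth, the "one process never picked up the new certificates" failure mode is easy to catch mechanically: compare what the service is actually serving against what's on disk after each renewal. A rough Python sketch (the path, host, and port are placeholders for your own setup):

```python
# Rough sketch: detect a process that kept serving an old certificate after a
# renewal by comparing what it serves with what is on disk.
import socket
import ssl

CERT_ON_DISK = "/etc/ssl/mysite/fullchain.pem"   # hypothetical path
HOST, PORT = "example.com", 443                  # placeholder service

def served_cert_der(host: str, port: int) -> bytes:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # we only want the raw certificate bytes
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def disk_cert_der(path: str) -> bytes:
    with open(path) as f:
        pem = f.read()
    # Take the first (leaf) certificate from the chain file.
    begin = pem.index("-----BEGIN CERTIFICATE-----")
    end = pem.index("-----END CERTIFICATE-----")
    return ssl.PEM_cert_to_DER_cert(pem[begin:end] + "-----END CERTIFICATE-----")

if __name__ == "__main__":
    if served_cert_der(HOST, PORT) != disk_cert_der(CERT_ON_DISK):
        print("still serving the OLD certificate; reload the service")
    else:
        print("served certificate matches the one on disk")
```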
I haven't touched my OpenBSD (HTTP-01) acme-client in five years: acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
https://letsencrypt.org/docs/challenge-types/
Not to mention that the WEBPKI has made it completely unviable to deliver any kind of consumer software as an offline personal web server, since people are not going to be buying their own DNS domains just to get their browser to stop complaining that accessing local software is insecure. So, you either teach your users to ignore insecure browser warnings, or you tie the server to some kind of online subscription that you manage and generate fake certificates for your customer's private IPs just to get the browsers to shut up.
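If you do go the "generate certificates for your customers' private IPs" route, the minting itself is at least mechanically simple; the hard part is getting clients to trust the result. A rough sketch with the Python cryptography package (the name, IP, and 10-year lifetime are placeholder choices):

```python
# Rough sketch: mint a self-signed certificate for a private IP so a local
# device at least has something clients can be told to trust.
import datetime
import ipaddress
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "local-appliance")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(
        x509.SubjectAlternativeName([x509.IPAddress(ipaddress.ip_address("192.168.1.50"))]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("device-key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("device-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```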
OCSP (with stapling) was an attempt to get these benefits with less disruption, but it failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.
Well, it's the people who want to MITM that started it, a lot of effort has been spent on a red queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones you could avoid spending a lot on security.
Well, you could also give every random server you happen to configure an API key with the power to change any DNS record it wishes... what could go wrong?
#security
Holding public PKI advancements hostage so that businesses can be lazy about their intranet services is a bad tradeoff for the vast majority of people that rely on public TLS.
There are more things on the internet than web servers.
You might say “use DNS-01”; but that's reductive: I'm letting any node control my entire domain (and many of my registrars don't even allow API access to records, let alone an API key that's limited to a single record; even cloud providers don't have that).
I don't even think mail servers work well with the Let's Encrypt model unless it's a single server for everything without redundancies.
I guess nobody runs those anymore though, and, I can see why.
> and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
The certificate you get for the domain can be used for whatever the client accepts it for - the HTTP part only matters for the ACME provider. So you could point port 80 to an ACME daemon and serve only the challenge from there. But this is not necessarily a great solution, depending on what your routing looks like, because you need to serve the same challenge response for any request to that port.
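A rough sketch of that "ACME daemon on port 80" idea, assuming your ACME client drops challenge tokens into some shared directory (the path below is made up, and binding port 80 needs privileges or a redirect):

```python
# Rough sketch: a tiny daemon that answers only ACME HTTP-01 challenges from a
# shared directory and 404s everything else.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

CHALLENGE_DIR = "/var/lib/acme/challenges"   # hypothetical token directory
PREFIX = "/.well-known/acme-challenge/"

class ChallengeOnly(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith(PREFIX):
            self.send_error(404)
            return
        token = os.path.basename(self.path[len(PREFIX):])   # avoid path traversal
        try:
            with open(os.path.join(CHALLENGE_DIR, token), "rb") as f:
                body = f.read()
        except OSError:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 80), ChallengeOnly).serve_forever()
```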
> You might say “use DNS-01”; but thats reductive- I’m letting any node control my entire domain (and many of my registrars don’t even allow API access to records- let alone an API key thats limited to a single record; even cloud providers dont have that).
The server using the certificate doesn't have to be the one going through the ACME flow, and once you have multiple nodes it's often better that it isn't. It's very rare for even highly sophisticated users of ACME to actually provision one certificate per server.
https://hsm.tunnel53.net/article/dns-for-acme-challenges/
Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP over TLS; and TLS secures far more, from APIs and email to VPNs, IoT, and non-browser endpoints? Both are bunk; take your pick.
Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Inspired.
It does none of these. Putting more elbow grease into your ACME setup with existing, open source tools solves this for basically any use case where you control the server. If you're operating something from a vendor you may be screwed, but if I had a vote I'd vote that we shouldn't ossify public PKI forever to support the business models of vendors that don't like to update things (and refuse to provide an API to set the server certificate programmatically, which also solves this problem).
> Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
Yes, but unironically. If rotating certs is a once a year process and the guy who knew how to do it has since quit, how quickly is your org going to rotate those certs in the event of a compromise? Most likely some random service everyone forgot about will still be using the compromised certificate until it expires.
> And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.
First, one of the purposes of shorter certificates is to make revocation easier in the case of misissuance. Just having certificates issued to you be shorter-lived doesn't address this, because the attacker can ask for a longer-lived certificate.
Second, creating a new browser wouldn't address the issue because sites need to have their certificates be acceptable to basically every browser, and so as long as a big fraction of the browser market (e.g., Chrome) insists on certificates being shorter-lived and will reject certificates with longer lifetimes, sites will need to get short-lived certificates, even if some other browser would accept longer lifetimes.
The equivalent for web certs could have been something like "domain.xyz can request non-wildcard certs with up to 10 days validity". Where I think certs fell apart is that they placed all the eggs in client-side revocation lists, and then that failure fell to the admins to deal with collectively while the issuers sat back.
For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
You could be proposing two things here:
(1) Something like CAA that told CAs how to behave. (2) Some set of constraints that would be enforced at the client.
CAA does help some, but if you're concerned about misissuance you need to be concerned about compromise of the CA (this is also an issue for certificates issued by the CA the site actually uses, btw). The problem with constraints at the browser is that they need to be delivered to the browser in some trustworthy fashion, but the root of trust in this case is the CA. The situation with RPKI is different because it's a more centralized trust infrastructure.
> For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
> I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
Same reasoning between us I think, just a difference in interpreting what it was saying. Kind of like sarcasm - a "yes, you can do it just as they say" which in reality highlights "no, you can't actually do _it_ though" type point. You read it as solely the former, I read it as highlighting the latter. Maybe GP meant something else entirely :).
That said, I'm not sure I 100% agree it's really about what the strictest major browser does alone. E.g. if Firefox set the limit to 7 days, I'd bet people would start using other browsers rather than all sites rotating certs every 7 days. If some browsers did and some didn't, it'd depend on who and how much share, etc. That's one of the (many) reasons the browser makers are all involved: to make sure they don't get stuck as the odd one out on a policy change.
Thanks for Let's Encrypt btw. Irks about the renewal squeeze aside, I still think it was a net positive move for the web.
And, well, the create-a-browser thing was a joke; it's what I've seen suggested for those who don't like the new rules.
Google https://pki.goog/
SSL.com https://www.ssl.com/blogs/sslcom-supports-acme-protocol-ssl-...
ZeroSSL https://zerossl.com/documentation/acme/
I don't actually think Cloudflare runs an ACME Certificate Authority. They just partner with LetsEncrypt? Edit: Looks like they don't run any CA, they just delegate out to a bunch of others https://developers.cloudflare.com/ssl/reference/certificate-...
https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
That's been true for a while, regardless of cert length.
Everyone leans on them and unlike CF and other choke points of the internet...Let's Encrypt is a non-profit
yes
We're not quite there yet, but the logical progression of shorter and shorter certificate lifetimes to obviate the problems related to revocation lists would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet", alongside AWS, Cloudflare, and friends. With cert lifetimes measured in years or months, the CA can have a bad day and as long as you didn't wait until the last possible minute to renew, you're unimpacted. With cert lifetimes trending towards days or less, now your CA really does need institutionally important levels of high availability.
It's less that LE becomes more of a single point of failure than it is that the concept of ACME CAs in general joins the list of critically available things required to keep a site online.
I think that particular ship sailed a decade ago!
> Its less that LE becomes more of a single point of failure than it is that the concept of ACME CAs in general join the list of critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs are critical infrastructure, and that there's latent risk whenever infrastructure becomes critical. I just think that risk is justified, and that LE in particular is no more or less of a SPOF with these policy changes.
Hell, you can still set it to renew when the cert still has a month left.
I'm more worried that the clowns at the helm will push it to something stupid like a week or 3 days, "coz it improves security in some theoretical case".
Certificates have historically been a "fire and forget" but constant re-issuance will make LE as important as DNS and web hosting.
The longer certificates were valid, the more often we'd have breakage due to admins forgetting renewal, or forgetting how to install the new certificates. It was a daily occurrence, often with hours or days of downtime.
Today, it's so rare I don't even remember when I last encountered an expired certificate. And I'm pretty sure it's not because of better observability...
It's okay for something to be a good thing and to celebrate it. We don't have to frown about everything.
Oh, and you would definitely know about this outage, because you would hear about it in your news and from the monitoring you already have set up to yell at you when your cert is about to retire (you already have that, right? Right?). And you can STILL trivially switch to another CA that supports ACME.
There are other CAs with ACME support.
Including paid CAs, if you really want to pay: Sectigo.
The EU started building the Galileo GNSS ("GPS") in 2008 as a backup in case the US turned hostile. And now look where we are in 2025 with the US president openly talking about taking Greenland. Wise move. It seemed like a gigantic waste back then. It was really, really expensive.
Then lots of European countries ordered F35s from Lockheed Martin. What an own goal. This includes Denmark/Greenland.
But i digress...
While I love Let's Encrypt it feels so silly to use a third party to verify I can generate a Cloudflare API key (even .well-known is effectively "can you run a webserver on said dns entry").
Edit: TIL about https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
Why not 60 seconds? That's way more secure. Everything is automated after all, so why risk compromised certificate for such a long period, like 45 days?
Or how about we issue a new certificate per request? That should remove most of the risks.
One: most of the reasoning is available for reading. Lots of discussion was had. If you're actually curious, I would suggest starting with the CA/B mailing group. Some conversation is in the MDSP (Mozilla's dev-security-policy) archives as well.
Two: it's good to remember that the tension in almost every security-related conversation is the balance between security and usability. Obviously, a 60-second lifetime is unusable. The goal is to get the overlap between secure and usable to be as big as possible.
It ranges from old systems like libpq, which (to my knowledge) just loads certs on connection creation, so it works, down to some JS or Java libraries that read certs into main memory on startup and never deal with them again. Or other software folding a feature request like "reload certs on SIGHUP" into "oh, transparently do listen socket transfer between listener threads on SIGHUP", where the latter is hard and thus neither ever happens.
45 days is going to be a huge pain for legacy systems. Less than 2 weeks is a huge pain even with modern frameworks. Even Spring didn't do it right until a year or two ago and we had to keep in-house hacks around.
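For software you control, the "pick up renewed certs without a restart" part doesn't have to be a big lift. One illustrative pattern in Python (not what any particular framework does) is to rebuild the SSLContext when the file on disk changes and swap it in via the SNI callback; the paths here are placeholders:

```python
# Rough sketch: serve whatever certificate is currently on disk, without a restart,
# by swapping in a fresh SSLContext from the SNI callback.
import os
import ssl

CERT, KEY = "/etc/ssl/mysite/fullchain.pem", "/etc/ssl/mysite/privkey.pem"
_cache = {"mtime": None, "ctx": None}

def current_context() -> ssl.SSLContext:
    mtime = os.stat(CERT).st_mtime
    if _cache["mtime"] != mtime:                  # cert file was replaced on disk
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(CERT, KEY)
        _cache.update(mtime=mtime, ctx=ctx)
    return _cache["ctx"]

def sni_callback(ssl_sock, server_name, original_ctx):
    # Swap in a context built from whatever is on disk right now.
    ssl_sock.context = current_context()

listener_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
listener_ctx.load_cert_chain(CERT, KEY)           # fallback for clients without SNI
listener_ctx.sni_callback = sni_callback
# listener_ctx can then be used with wrap_socket() or asyncio as usual.
```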
To be honest, the issue is not the time frame; you can literally have certs being made every day, and there are plenty of ways to automate this. The issue is that the CT log providers are going to go ape!
Right now, we are at 24B certificates going back to around 2017, when they really started to grow. There are about 4.5B unique domains; if we reduce this to the number of certs per domain, it's about 2.3B domain certs we currently need.
Now do 2.3B x 8 renewals ... that is about 18.4B new certs in CT logs per year. Given how popular LE is, we can assume that the actual growth is maybe 10B per year (those that still use 1-year or multi-year certs, plus the tons of LE-generated ones).
Remember, I said the total going back to 2017 is currently only 24B ... yeah, we are going to almost double the amount of certs in CT logs every two years.
And that assumes LE does not move to 17 days, because then I am sure we are doubling the current amount each year.
Good luck as a CT log provider... FYI, a typical certificate takes about 4.5 kB to store, so we are talking 45 TB of space needed per year, and 100 TB+ if they really drop it down to 17 days. And we did not talk about databases, traffic to the CT logs, etc...
It's broken, Jim ... Now imagine for fun a daily cert ... 1700 TB per year in CT log storage?
A new system will come from Google etc. because it's going to become unaffordable, even for those companies.
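For anyone who wants to sanity-check the arithmetic, here is the same back-of-envelope in a few lines of Python; every constant is the comment's own assumption, not a measured figure:

```python
# Back-of-envelope rerun of the estimates above, using the comment's assumptions.
ACTIVE_CERTS = 2.3e9         # certs that actually need periodic renewal
RENEWALS_PER_YEAR = 8        # ~47-day lifetimes
REAL_GROWTH_PER_YEAR = 10e9  # the comment's guess at actual CT log growth
ENTRY_KB = 4.5               # rough storage per logged certificate

upper_bound = ACTIVE_CERTS * RENEWALS_PER_YEAR       # ~1.84e10 new certs/year
storage_tb = REAL_GROWTH_PER_YEAR * ENTRY_KB / 1e9   # kB -> TB, ~45 TB/year
print(f"~{upper_bound / 1e9:.1f}B new certs/yr, ~{storage_tb:.0f} TB of new CT entries/yr")
```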
1: https://www.youtube.com/watch?v=uSP9uT_wBDw A great explainer of how they work and why they're better.
2: https://davidben.github.io/merkle-tree-certs/draft-davidben-... The current working draft
We can solve the storage requirements, it’s fine.
Do not forget that we had insanely long certificates not that long ago.
The main issue is that currently you cannot easily revoke certs, so you're almost forced to keep a history of certs, and of when one has been revoked, in the CT logs.
In theory, if everybody is forced to change certs every 47 days, sure, you can invalidate them and permanently remove them. But it requires a ton of automation on the user side. There is still way too much software that relies on a single-year or multi-year certificate that is manually added to it. It's also why the phase-down to 47 days is spread over a 4-year time period.
And it still does not change the massive increase in requests to check validation that hits CT log providers.
You can store that kind of information in a lot less space. It doesn't need to be duplicated with each renewal.
> The main issue is that currently you can not easily revoke certs, so your almost forced to keep a history of certs, and when one has been revoked in the CT logs.
This is based on the number of active certificates, which has almost no connection with how long they last.
> There is still way too much software that relies on a single year or multi year certificated that is manually added to it.
Hopefully less and less going forward.
> And it still does not change the massive increased in requests to check validation, that hits CT logs providers.
I'm not really sure how that works but yeah someone needs to pay for that.
Also, at some point in the lifetime graph, you start getting diminishing returns. There aren't many scenarios where you get your private keys stolen, but the bad guys couldn't maintain access for more than a couple of weeks.
In my humble opinion, if this is the direction the CA/B and other self-appointed leaders want to go, it is time to rethink the way PKI works. We should maybe stop thinking of Let's Encrypt as a CA; it (and similar services) could function as more of a real-time trust facilitator. If all they're checking for is server control, then maybe a near-real-time protocol to validate that, issue a cert, and have the webserver use it immediately is ideal? Lots of things need to change for this to work, of course, but it is practical.
Not so long ago, very short DNS TTL's were met with similar apprehension. Perhaps the "cert expiry" should be tied to the DNS TTL. With the server renewing much more frequently (e.g.: If the TTL is 1 hour, the server will renew every 15 minutes).
Point being, the current system of doing things might not be the best place to experiment with low expiry lifetimes, but new ways of doing things that can make this work could be engineered.
One of the setups that gives me issues is machines that are resumed from a historical snapshot and start doing things immediately; if the NTP date hasn't been updated since the last snapshot, you start getting issues (despite snapshots being updated after every daily run). Most sites won't break (especially with a 24h window, although longer gaps always cause issues), but enough sites change their certs so frequently now that it's a constant issue.
Even with a 10-year cert, if you access it at just the right time you'll have issues; the difference now is that it isn't a once-in-10-years event, but sometimes once every few days.
Perhaps if TLS clients requesting a time update to the OS was a standardized thing and if NTP client daemons supported that method it would be a lot less painful?
The closest I've gotten to this would be something like a Raspberry Pi, but even then NTP is pretty snappy as soon as there's network access, and until there's network access I'm not hitting any TLS certs.
They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.
I might add I've changed my mind a bit on this.
But in this case, the upsides are definitely greater than in the usual case.
This feels in many respects worse than what we had with plain HTTP, and we can’t even go back now.
Do you have any examples of sites that have been blocked by the free ACME providers?
Maybe this will just teach everyone to click through SSL warnings the same way they click through GDPR popups - for better or worse.
Does that mean IP certificates will be generally available some time this week?
This vision still needs several more developments to land before it actually results in an increment in user privacy, but they are possible:
It's not final yet, but it's an interesting development.
This only affects you if you have a server set up to verify mTLS clients against the Let's Encrypt root certificate(s), or maybe every trusted CA on the system. You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificates.
You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
Sure, but it seems like all the CAs are stopping issuing certificates with the client EKU. At least Let's Encrypt and DigiCert, since under the Google requirement they can't issue both those and normal certs, and I guess there's not enough market to have a CA just for that.
> You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificate
Sure, what's wrong with that?
> You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
That lets the client verify the host, but the server doesn't know where the connection is coming from. Generating mTLS pairs means pinning and coordinated rotation and all that. Currently servers can simply keep an up to date CA store (which is common and easy), and check the subject name, freeing the client to easily rotate their cert.
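To make the contrast concrete, here is a rough Python sketch of the server side of that pattern: require a client certificate that chains to the public CA store, then check the presented name. The identity, port, and file names are placeholders, and whether public CAs keep issuing certs with the clientAuth EKU is exactly the problem discussed above:

```python
# Rough sketch: demand a client certificate that chains to the public CA store,
# then authorize on the presented name.
import socket
import ssl

ALLOWED_CLIENT = "client.example.org"   # hypothetical expected client identity

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server-fullchain.pem", "server-key.pem")   # our own cert
ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH)   # trust the system CA store
ctx.verify_mode = ssl.CERT_REQUIRED               # require a client certificate

with socket.create_server(("0.0.0.0", 8443)) as srv:
    conn, _ = srv.accept()
    with ctx.wrap_socket(conn, server_side=True) as tls:
        peer = tls.getpeercert()   # chain already verified by verify_mode above
        names = {v for rdn in peer.get("subject", ()) for k, v in rdn if k == "commonName"}
        names |= {v for k, v in peer.get("subjectAltName", ()) if k == "DNS"}
        if ALLOWED_CLIENT not in names:
            raise PermissionError(f"unexpected client identity: {names}")
        print("authenticated client:", ALLOWED_CLIENT)
```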
These days it seems like even the tiniest of projects have random sysadmin work like a compulsory change to https certs with little notice.
It's frustrating and I think has contributed to the death of the noncommercial corners of the internet.
2 factor Auth now compulsory.
Please validate your identity with our third party identity provider so we can confirm you are not on the sanctions list. If you do not, your account will be blocked.
Etc etc. Every third party service requires at least a little work and brainspace.
https://www.certkit.io/certificate-management
You CNAME the acme challenge DNS to us, we manage all your certificates for you. We expose an API and agents to push certificates everywhere you need them, and then do real-time monitoring that the correct certificate is running on the webserver. End-to-end auditability.
At least under the new scheme if you let the domain sit for 45 days you'll know only you hold valid certificates for it.
There really is no alternative to LE.
Let's Encrypt could easily refuse to issue a certificate for a certain domain, even if you don't have a registered account. I don't see much difference.
I agree with your statement completely though.
Decreasing Certificate Lifetimes to 45 Days
https://news.ycombinator.com/item?id=46117126
I'm not sure why, but every corporate picture I've seen of someone, in this context, is standing in front of a hedge. Seems to be a California thing?
(Where I live, we only have leaves on hedges 6 months of the year)