A directory over SSH can be your git server. If your CI isn't too complex, a post-receive hook looping into Docker can be enough. I wrote about self-hosting git and builds a few weeks ago [1].
There are heavier solutions, but even setting something like this up as a backstop might be useful. If your blog is being hammered by ChatGPT traffic, spare a thought for Github. I can only imagine their traffic has ballooned phenomenally.
1: https://duggan.ie/posts/self-hosting-git-and-builds-without-...
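To make the hook idea concrete, here is a minimal sketch of a post-receive hook that redeploys via Docker. The paths, repo, and branch name are hypothetical, and note that a hook can be any executable, not just shell:

    #!/usr/bin/env python3
    # hooks/post-receive inside the bare repo; must be executable.
    import subprocess, sys

    GIT_DIR = "/home/git/myapp.git"   # the bare repo this hook lives in
    WORK_TREE = "/srv/myapp"          # deploy checkout (hypothetical)

    def run(*cmd):
        print("+", " ".join(cmd), file=sys.stderr)
        subprocess.run(cmd, check=True)

    # Force-update the deploy checkout to the newly pushed main branch.
    run("git", f"--git-dir={GIT_DIR}", f"--work-tree={WORK_TREE}",
        "checkout", "-f", "main")
    # Rebuild and restart whatever the checkout's compose file defines.
    run("docker", "compose", "--project-directory", WORK_TREE,
        "up", "-d", "--build")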
I use https://pipe.pico.sh for this use case. It’s a pubsub over ssh. It’s multicast so you can have multiple listeners on the same topic, and you can have it block or not block the event.
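From memory of the pico.sh docs, usage is roughly the following; treat the exact subcommands and blocking flags as things to verify against https://pipe.pico.sh:

    # terminal A (and B, and C...): subscribe to a topic
    ssh pipe.pico.sh sub deployments

    # elsewhere: publish an event to every subscriber
    echo "build 123 finished" | ssh pipe.pico.sh pub deployments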
Most builds take a long time, at least in C++ and Rust (the two languages I work in). And from what I have seen of people working in Python, the builds aren't fast there either (far faster of course, but still easily a minute or two).
Also, how would PRs and code review be handled?
Your suggestion really only makes sense for a small single developer hobby project in an interpreted language. Which, if that is what you intended, fair enough. But there really wasn't enough context to ascertain that.
I did give additional context in the blog post I linked, but yes, to be clear, this is something that will really work best for small projects with reasonably fast build cycles.
If you're already at the point where you're fielding pull requests, lots of long running tests, etc., you'll probably already know you need more than git over ssh.
In moments like this, it's useful to have a "break glass" mode in your CI tooling: a way to run a production CI pipeline from scratch, when your production CI infrastructure is down. Otherwise, if your CI downtime coincides with other production downtime, you might find yourself with a "bricked" platform. I've seen it happen and it is not fun.
It can be a pain to set up a break-glass, especially if you have a lot of legacy CI cruft to deal with. But it pays off in spades during outages.
I'm biased because we (dagger.io) provide tooling that makes this break-glass setup easier, by decoupling the CI logic from CI infrastructure. But it doesn't matter what tools you use: just make sure you can run a bootstrap CI pipeline from your local machine. You'll thank me later.
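To make "bootstrap from a laptop" concrete, a minimal sketch of a vendor-neutral entry point; the stage commands, image name, and deploy script are hypothetical stand-ins:

    #!/usr/bin/env python3
    # ci.py -- the whole pipeline behind one entry point.
    # CI calls `python ci.py all`; during an outage, so can a laptop.
    import subprocess, sys

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    STAGES = {
        "build": lambda: sh("docker build -t myapp:dev ."),
        "test": lambda: sh("docker run --rm myapp:dev pytest"),
        "deploy": lambda: sh("./deploy.sh staging"),
    }

    for stage in (sys.argv[1:] or ["build", "test"]):
        for fn in (STAGES.values() if stage == "all" else [STAGES[stage]]):
            fn()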
At times like this, I'm so happy that I don't work on deploying to a production environment; instead, we release software that (after extensive qualification) customers can install in their environment on their airgapped networks, using a USB stick to cross the air gap. If we miss a release by a day or three, there is enough slack in the process before it goes to the customer that no one will be any the wiser.
Crazy in 2026, but installable software still has some pros, for both the developer and the customer. And I would personally love it if I could do things that way for more things.
This is a must when your systems deal with critical workloads. At Fastly, we process a good chunk of the internet's traffic and can't afford to be "down" while waiting for the CI system to recover in the event of a production outage.
We built a CI platform using dagger.io on top of GH Actions, and the "break glass" pattern was not an afterthought; it was a requirement (and one of the main reasons we chose dagger as the underlying foundation of the platform in the first place).
I generally recommend that the break-glass solution always be pair programmed, even if I get the idea of an automation before there's a runbook for it.
100%. We used to design the pipeline in a way that is easily reproducible locally, e.g. one that doesn't rely on plugins of the CI runtime. Think a build.sh shell script, normally invoked by the CI runner but just as easy to run locally.
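A minimal sketch of that pattern on GitHub Actions (names are placeholders): the YAML is a thin trigger and every real step lives in build.sh, so the same script runs identically on a laptop:

    # .github/workflows/ci.yml
    name: ci
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./build.sh   # all logic lives in the script, not the YAML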
A while back I think I heard you on a podcast describing these pain points. Experienced them myself; sounded like a compelling solution. I remember Dagger docs being all about AI a year or two ago, and frankly it put me off, but that seems to have gone again. Is your focus back to CI?
Yes, we are re-focused on CI. We heard loud and clear that we should pick a lane: either a runtime for AI agents, or deterministic CI. We pick CI.
Ironically, this makes Dagger even more relevant in the age of coding agents: the bottleneck increasingly is not the ability to generate code, but to reliably test it end-to-end. So the more we all rely on coding agents to produce code, the more we will need a deterministic testing layer we can trust. That's what Dagger aspires to be.
For reference, a few other HN threads where we discussed this:
- https://news.ycombinator.com/item?id=46734553
- https://news.ycombinator.com/item?id=46268265
Yes, I agree with your assessment. AI means a higher rate of code changes, so you need more robust and fast CI.
Insert the standard comment about how git doesn't even need a hub. The whole point of it is that it's distributed and doesn't need to be "hosted" anywhere. You can push or pull from any repo on anyone's machine. Shouldn't everyone just treat GitHub as an online backup? Zero reason it being down should block development.
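Concretely, a second remote on any box you control works as a live fallback; the remote name and host here are hypothetical:

    # one-time: mirror the repo to a machine you control
    # (after `git init --bare myapp.git` on that box)
    git remote add fallback git@backup.example.com:myapp.git
    git push --mirror fallback

    # during a GitHub outage, keep collaborating through it
    git push fallback main
    git pull fallback main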
The problem is that any kind of automatic code change process like CI, PRs, code review, deployments, etc etc are based on having a central git server. Even security may be based on SSO roles synced to GH allowing access to certain repos.
A self-hosted git server is trivial. Making sure everything built on top of that is able to fall back to it is not, especially when GH has so many integrations out of the box.
What’s interesting about outages like this is how many things depend on GitHub now beyond just git hosting.
CI pipelines, package registries, release automation, deployment triggers, webhooks — a lot of infrastructure quietly assumes GitHub is always available.
When GitHub degrades, the blast radius is surprisingly large because it breaks entire build and release chains, not just repo browsing.
Part of it is probably historical momentum.
GitHub started as “just git hosting,” so a lot of tooling gradually grew around it over the years — Actions, package registries, webhooks, release automation, etc. Once teams start wiring all those pieces together, replacing or decoupling them becomes surprisingly hard, even if everyone knows it’s a single point of failure.
I've found that a bare repo over SSH is the simplest way to keep control and reduce attack surface, especially when you don't need fancy PR workflows. I ran many projects with git init --bare on a Debian VPS, controlled access with authorized_keys and git-shell, and wrote a post-receive hook that runs docker-compose pull and systemctl restart so pushes actually deploy. The tradeoff is you lose built-in PRs, issue tracking, and easy third party CI, so either add gitolite or Gitea for access and a simple web UI, or accept writing hooks, backups, receive.denyNonFastForwards, and scheduled git gc to avoid surprises at 2AM.
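For reference, the access-control portion of that setup is only a few commands. The user and repo names are examples; restrict is a standard OpenSSH authorized_keys option that disables forwarding and PTY allocation:

    # create a locked-down git user and a bare repo (Debian-style)
    sudo adduser --disabled-password --shell /usr/bin/git-shell git
    sudo -u git git init --bare /home/git/myapp.git
    # you may need /usr/bin/git-shell listed in /etc/shells

    # /home/git/.ssh/authorized_keys -- one line per developer
    restrict ssh-ed25519 AAAA...alices-public-key... alice@laptop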
I'm going to trust the constant stream of updates from the company itself which shows exactly what went down and came back up rather than a random anecdote.
Recent years have shown this to be the wrong prediction strategy. The reason seems to be an incentive imbalance where there are quite a few reasons for companies to lie (including their own CLAs) and not a lot of repercussions for doing so (everybody competes on lock-in, not on product). Of course, the word-of-mouth approach is also exploitable by dishonest actors, but thus far there doesn’t look to be a lot of exploitation going on, likely because there’s little reason to bother (once again, lock-in is king).
This seems intelligent, after all companies are incapable of making errors in reporting and also have absolutely no incentive to lie about stuff like that. Those 500 errors others have reported as experiencing must have just been the wind.
I rarely successfully get Codeberg URLs to load. Which is sad because I actually would very much like to recommend it but I find it unreliable as a source.
That being said, GitHub is Microsoft now, known for that Microsoft 360 uptime.
I mean... It's right in the name! It's up for 360 days a year.
I have never had this issue. IIRC Codeberg has a Matrix community; they are a non-profit and they would absolutely love to hear your feedback. I hope that you can find their Matrix community, join it, and talk with them.
Actually, here you go, I have pasted the Matrix link to their community, hope it helps: https://matrix.to/#/#codeberg-space:matrix.org
I swear this is my fault. I can go weeks without doing infra work. Github does fine, I don't see any hiccups, status page is all green.
But the day comes that I need to tweak a deploy flow, or update our testing infra, and about halfway through the task I take the whole thing down. It's gotten to the point that when there's an outage, I'm the first person people ask what I'm doing... and it's pretty dang consistent.
And they are gonna give a pizza party if I get them a day off. I am gonna share a slice with ya too.
Doing a github worldwide outage by magical quantum entanglement for a slice of pizza? I think I would take that deal! xD.
Related: In FreeBSD we used to talk often about the Wemm Field. Peter Wemm was one of the early FreeBSD developers and responsible for most of the early project server cluster, and hardware had a phenomenal habit of breaking in his vicinity. One notable story I heard involved transporting servers between data centers and hitting a Christmas tree in the middle of a highway... in March.
I think GitHub shipping Copilot while suffering availability issues is a rational choice because they get more measurable business upside from a flashy AI product than from another uptime graph. In my experience the only things that force engineering orgs to prioritize uptime are public SLOs with enforced error budgets that can halt rollouts, plus solid observability like Prometheus and OpenTelemetry tracing, canary rollouts behind feature flags, multi-region active-active deployments, and regular chaos experiments to surface regressions. If you want them to change, push for public SLOs or pay for an enterprise SLA, otherwise accept that meaningful uptime improvements cost money and will slow down the flashy stuff.
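For a sense of scale on those error budgets, the back-of-the-envelope arithmetic for a 99.9% monthly SLO:

    30 days x 24 h x 60 min = 43,200 minutes per month
    0.1% of 43,200 minutes ~= 43 minutes of allowed downtime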
> This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
does anyone know where these "detailed root cause analysis" reports are shared? is there maybe an archive?
There are also monthly availability reports: https://github.blog/tag/github-availability-report/
I really wish Graphite had just gone down the path of better Git hosting and reviewing, instead of trying to charge me $40 a month for an AI reviewer. It would be nice to have a real first class alternative to Github
I've taken to hosting everything critical like this myself, on a single system with Docker Compose, with regular off-premises backups and a restore process that I know works because I test it every 6 months. I can swap from local hosting to a VPS in 30 minutes if I need to. It seems like the majority of large services like GitHub have had increasingly annoying downtime while I try to get work done. If you know what you're doing, it's a false premise that you'll just have more issues with self-hosting; if you don't know what you're doing, it's becoming an increasingly good time to learn. I've had 4 years of continuous uptime on my services at this point. I still push to third parties like GitHub as yet another backup, see the occasional 500, and my workflow keeps chugging along. I've gotten old and grumpy and would rather just do it myself.
Maybe we should turn these weekly posts into an actionable item we can use to move organizations away from this critical infrastructure that is failing in real time.
How reliable is githubstatus.com? I know that status pages are generally not updated until Leadership and/or PR has a chance to approve the changes; is that the case here?
Our health check queries githubstatus.com to verify "why" there may be a GHA failure, and reports it, e.g.
Cannot run: repo clone failed — GitHub is reporting issues (Partial System Outage: 'Incident with Copilot and Actions'). No cached manifests available.
But, if it's not updated, we get more generic responses. Are there better ways that you all employ (other than to not use GHA, you silly haters :-))
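One alternative, assuming it stays reachable during incidents: githubstatus.com is an Atlassian Statuspage instance, so it exposes a machine-readable API you can poll instead of scraping the page. A minimal sketch:

    #!/usr/bin/env python3
    # Poll the GitHub Statuspage API for overall status and open incidents.
    import json, urllib.request

    URL = "https://www.githubstatus.com/api/v2/summary.json"
    with urllib.request.urlopen(URL, timeout=10) as resp:
        summary = json.load(resp)

    print("overall:", summary["status"]["description"])
    for inc in summary["incidents"]:   # empty list when all is well
        print("-", inc["impact"], inc["name"])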
I am getting really tired of github. Outages happen, that's a given, but on so much stuff they don't even care or try. Github is becoming the bottleneck in my agentic coding workflows: unless I make Claude do it intelligently, I hit rate limits checking on CI jobs (5,000 API requests in an hour). Depot makes their CI so much better, but it is still tied to github in a couple of annoying places.
PRs are a de facto communication and coordination bus between different code review tools; it's all a mess.
LLMs make it worse because I'm pushing more code to github than ever before, and it just isn't set up to deal with this type of workload when it is working well.
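For anyone debugging the same ceiling, the quota is queryable through GitHub's documented REST endpoint; the token is yours to supply:

    # shows your limit, remaining quota, and reset time as JSON
    curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
         https://api.github.com/rate_limit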
In many companies I worked for, there were a bunch of infrastructure astronauts who made everything very complicated in the name of zero downtime and sold it to management as "downtime would kill our credibility and our business", and then you have billion-dollar companies everyone relies on (GitHub, Cloudflare) who have repeated downtime yet it doesn't seem to affect their business in any way.
It's a multitude of factors, but basically they can act like that because they are dominant in the market.
The classic "nobody ever gets fired for buying IBM".
If you pick something else, and there's an issue, people will complain about your choice being wrong: you should have gone with the biggest player.
Even if you provide metrics showing your solution's downtime being 1% of the big player's.
Something like Cloudflare is so big and ubiquitous, that, when there's a downtime, even your grandma is aware of it because they talk about it in the news.
So nobody will put the blame on the person choosing Cloudflare.
Even if people decide to go back (I had a few customers asking us to migrate to other solutions or to build some kind of failover after the last Cloudflare incidents), it costs so much to find solutions that can replace it at the same service level and to do the migration that, in the end, they prefer to eat the cost of the downtimes.
Meanwhile, if you're a regular player in a very competitive market, yes, every downtime will result in lost income, customers leaving... which can hurt quite a lot when you don't have hundreds of thousands of customers.
GitHub is a distributed version control storage hub with additional add-on features. If peeps can't work around a git server/hub being down, don't know to have independent reproducible builds or integrations, and aren't using project software wildly better than GitHub's, there are issues. And for how much money? A few hundred per dev per year? Forget total revenue, the billions; the entire thing is a pile of "suck it up, buttercup" with ToS to match.
In contrast, I’ve been working for a private company selling patient-touching healthcare solutions and we all would have committed seppuku with outages like this. Yeah, zero downtime or as close to it as possible even if it means fixing MS bugs before they do. Fines, deaths, and public embarrassment were potential results of downtime.
All investments become smart or dumb depending on context. If management agrees that downtime would be lethal my prejudice would be to believe them since they know the contracts and sales perspective. If ‘they crashed that one time’ stops all sales, the 0% revenue makes being 30% faster than those astronauts irrelevant.
To be fair - it SUPER does. Being down frequently makes your competition look better.
Of course, once you have the momentum it doesn't matter nearly as much, at least for a while. If it happens too much though, people will start looking for alternatives.
The key thing to remember is that momentum is hard to redirect, but with enough force (reasons), it will be.
Seems like the xkcd [1] for internet infrastructure that was posted earlier [2] should have github somewhere on it, even if just for how often it breaks. Maybe it falls under "whatever microsoft is doing".
[1]: https://www.reddit.com/r/ProgrammerHumor/comments/1p204nx/ac...
[2]: https://news.ycombinator.com/item?id=47230704
Lowendtalk providers offering $7-per-year deals can provide more reliability than Github at this moment, and I am not kidding.
If anyone is using Github professionally and pays for github actions or any github product, respectfully, why?
You can switch to a VPS provider and self-host Gitea/Forgejo in less time than you might think, and pay a fraction of a fraction of what you pay now.
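As a rough sketch of how small that footprint is (the image tag and port mappings are assumptions; check the Gitea or Forgejo docs for current values):

    # docker-compose.yml for a minimal self-hosted forge
    services:
      gitea:
        image: gitea/gitea:latest   # Forgejo publishes similar images
        restart: always
        ports:
          - "3000:3000"   # web UI
          - "2222:22"     # ssh endpoint for git pushes
        volumes:
          - ./data:/data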
The point becomes almost moot because github is used by developers, and devs are so, so much more likely to be able to spin up a VPS, run Forgejo, and use a terminal. I don't quite understand the point.
There are ways to run GitHub Actions in Forgejo as well, IIRC, even locally hosted, using https://github.com/nektos/act under the hood.
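act is also handy standalone for dry-running workflows locally before they hit any runner; basic usage, with Docker installed:

    act -l          # list jobs found under .github/workflows/
    act push        # run the workflow as if a push event fired
    act -j build    # run a single job by id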
People, the time when you spent hundreds of thousands of dollars and expected basic service with no outage issues is over.
What you are gonna get is service outages and lock-in. Also, your open source project is getting trained on by the parent company of said git provider.
PS: if you do end up using Gitea/Forgejo, please donate to Codeberg/Forgejo/Gitea (Gitea is a company, though, whereas Codeberg is a non-profit). I think that donating $1k to Codeberg would be infinitely better than paying $10k or $100k to Github.
I figured that it would be something like that. But it's been so frequent that I expect the leadership to act decisively towards a long-term reliability plan. Unfortunately they have near monopoly in this space, so I guess there's not enough incentive to fix the situation.
How frequent? I think the obsession with uptime is annoying. If GitHub is down, if there’s something so critical, then you need some more control of the system. Otherwise take a couple hours and get a coffee or an early lunch.
At this point I am thinking of creating a "0 days until github outage" website, similar to how we had the running joke of "0 days until a new JS framework drops".
That site could use a little more. Maybe a count of how many in the current month and year, tallies for each year, maybe even trends. Could be nice. :)
I'm getting cf-mitigated: challenge on openai API requests.
https://www.cloudflarestatus.com/ https://status.openai.com/
Which is really baffling when talking about a service that has at least weekly hiccups even when it's not a complete outage.
There are almost 20 outages listed on HN over the past two months (https://news.ycombinator.com/from?site=githubstatus.com). So much for "always available".
> Git Operations is experiencing degraded availability. We are continuing to investigate.
https://www.githubstatus.com/incidents/n07yy1bk6kc4
everyone builds off vibes and moves fast! like no, if you are a mature company you don't need to move fast, in fact you need to move slow
the only thing that can kill e.g. github is if they move fast and break things like they have been doing recently
More existential than going down a few times a week?
And the frequency they can tolerate is surprisingly high given that we're talking about the 20th or so outage of 2026 for github. (See: https://news.ycombinator.com/from?site=githubstatus.com)
https://mrshu.github.io/github-statuses/
Most individual services have two nines... but not all of them.
That's... gobsmacking. I knew it was memeably bad, but I had no idea it was going so badly.
Octocat (the OG GitHub mascot) has a family that he goes to the park with anytime he wants.
Luckily, his boss Microslop is busy destroying the Windows of its own house and banning people from its Discord server.
I'm on the lookout for an alternative; this really is not acceptable.
Should have self-hosted.
https://www.windowscentral.com/microsoft/using-ai-is-no-long...
https://thenewstack.io/github-will-prioritize-migrating-to-a...
https://mrshu.github.io/github-statuses
Born just in time to talk about this situation on hackernews xD (/jk)
> Too slow: https://github-incidents.pages.dev/
I am not even mad that I am slow honestly, this is really funny lol.