> this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or in a rack at their workplace, and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
EDIT: F-Droid received a $400,000 grant from a single source this year ( https://f-droid.org/2025/02/05/f-droid-awarded-otf-grant.htm... ) so now I’m even more confused about how they decided to hand this server to a single team member to host under undisclosed conditions instead of paying basic colocation expenses.
> 400K would go -fast- if they stuck to a traditional colo setup.
I don’t know where you’re pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I’ve used.
Colocation is not that expensive. I’m not understanding how you think $400K would disappear “fast” unless you think it’s thousands of dollars per month?
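To put rough numbers on the parent's point (the yield and the colo price below are my assumptions for illustration, not anyone's actual quotes):

```python
# Back-of-envelope: can interest on a $400K grant cover colocation fees
# indefinitely? All figures are assumptions for illustration.
grant = 400_000
annual_yield = 0.04        # assumed conservative return on the principal
colo_monthly = 150         # assumed retail price for 1U, power + bandwidth

interest_per_month = grant * annual_yield / 12
coverage = interest_per_month / colo_monthly

print(f"Interest per month: ${interest_per_month:,.0f}")  # ~$1,333
print(f"Coverage ratio: {coverage:.1f}x the colo bill")   # ~8.9x
```

Even if the real colo bill were several times the assumed $150/month, the interest alone would still cover it.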
Yup. But the same can happen with shared hosting, colo, or AWS just as easily if only one person controls the keys to the kingdom. I know of at least a handful of open source projects that had to essentially start over because the leader went AWOL or a big fight happened.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
> if only one person controls the keys to the kingdom
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
I wonder if anyone knows about Droid-ify. Is it a safe option, or is it better to stay away from it?
It showed up one day while I was searching for why F-Droid was always so extremely slow to update and download... then trying Droid-ify, that was never a problem any more; it clearly had much better connectivity (or simply fewer users?)
Ugh. This 100% shows how janky and unmaintained their setup is.
All the hand waving and excuses around global supply chains, quotes, etc... it took them pretty long to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
As someone that has run many volunteer open source communities and projects for more than 2 decades, I totally get how big "small" wins like this are.
The internet is run on binaries compiled on servers in random basements, and you should be thankful for those basements, because the corpos are never going to actually help fund any of it.
"I understand this is a volunteer effort, but it's not a good look."
I would agree that it is not a good look for this society to lament so much about the big evil corporations and invest so little in the free alternatives.
I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much!
The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private, in conditions they won’t name, under the control of a single member. Typically colo providers are used for this.
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, it could be in a professional server room of the company run by the mentioned contributor.
100%. Just as an example I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th amendment protections so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC.
Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC.
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Yikes. You don't need a "special arrangement" for those requirements. This is the bare minimum at many professionally run colocation data centers. There is not a security requirement that can't be met by a data center -- being secure to your requirements is their entire business.
I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind hardware is required for F-Droid, they didn't mention any specifics about CPU, Memory, Storage etc.
A "single server" covers a pretty large range of scale, its more about how F-droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either; I assumed it meant someone's home, AKA residential power and internet.
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution and all the software that powers the internet are maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
"F-Droid is not hosted in a data centre with proper procedures, access controls, and people whose jobs are on the line. Instead it's in some guy's bedroom."
It could just be a colo; there are still plenty of data centres around the globe that will sell you space in a shared rack with a certain power density per U. The list of people who can access that shared locked rack is likely a known quantity with most such organisations, and I know in the past we had some details of the people who were responsible for it.
In some respects, having your entire reputation on the line matters just as much. And sure, someone might have a server cage in their residence, or maybe they run their own small business and it's there. But the vagueness is troubling, I agree.
A picture of the "living conditions" for the server would go a long way.
The set of people who can maliciously modify it is the people who run f-droid, instead of the cloud provider and the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You actually do not have to trust the people who run f-droid for those apps whose maintainers enroll in reproducible builds and multi-party signing, which F-Droid supports, unlike the alternatives.
That looks cool, which might just be the point of your comment, but I don't think it actually changes the argument here.
You still have to trust the app store to some extent. On first use, you're trusting f-droid to give you the copy of the app with appropriate signatures. Running in someone else's data center still means you need to trust that data center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
F-Droid makes the most sense when shipped as the system app store, along with pinned CA keychains, as CalyxOS did. Ideally F-Droid is compiled from source and validated by the ROM devs.
The F-droid app itself can then verify signatures from both third party developers and first party builds on an f-droid machine.
For all its faults (of which there are many), it is still a leaps-and-bounds better trust story than, say, Google Play: developers can only publish code and optional signatures, not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app and you end up not having to trust anything but the f-droid app itself on your device.
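A rough sketch of what that end state could look like (the helper names and the threshold policy here are hypothetical, not F-Droid's actual verification code):

```python
import hashlib

def artifact_digest(apk_bytes: bytes) -> str:
    """SHA-256 of a built APK; a reproducible build makes this
    bit-for-bit identical across independent builders."""
    return hashlib.sha256(apk_bytes).hexdigest()

def accept_artifact(digests: list[str], threshold: int = 2) -> bool:
    """Trust the artifact only if every submitted digest matches and
    at least `threshold` independent builders produced it."""
    if len(digests) < threshold:
        return False
    return all(d == digests[0] for d in digests)

# Simulate two independent builders compiling the same source tag.
source_build = b"PK...placeholder APK contents..."
builder_a = artifact_digest(source_build)
builder_b = artifact_digest(source_build)

print(accept_artifact([builder_a, builder_b]))  # True: builds agree
print(accept_artifact([builder_a, "0" * 64]))   # False: a builder diverged
```

With a scheme like this, a compromised build server can at worst refuse to sign; it cannot unilaterally ship a tampered binary, because its digest would disagree with the other builders'.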
Modern machines reach really mental levels of performance when you think about it, and for a lot of small-scale things like F-Droid I doubt it takes much hardware to host it. A lot of it is going to be static files, so a basic web server could push through hundreds of thousands of requests and, even on a modest machine, saturate 10 Gbps, which I suspect is enough for what they do.
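For scale (the 1 MB average payload is my assumption; a package repo serves a mix of small index files and multi-megabyte APKs):

```python
# How much traffic does it take to saturate a 10 Gbps link with static
# files? With APK-sized payloads, bandwidth runs out long before a
# modern web server runs out of request-handling capacity.
link_bits_per_sec = 10e9
avg_payload_bytes = 1_000_000   # assumed average response size

bytes_per_sec = link_bits_per_sec / 8          # 1.25 GB/s
req_per_sec = bytes_per_sec / avg_payload_bytes

print(f"{req_per_sec:,.0f} responses/sec saturates the link")  # 1,250
```

For small index files (say 10 KB) the same link carries over 100,000 responses per second, so either way the network, not the CPU, is the limit.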
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone’s basement, but who really knows; they aren’t exactly handing out details.
Which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than an RPi 5.
Plus the fact that it's been running for 5 years. Does that mean they bought 7 year old hardware back then? Or is that just when it was last restarted?
It's frankly embarrassing how many of the comments on this thread are some version of looking at the XKCD "dependency" meme and deciding the best course of action is to throw spitballs at the maintainers of the critical project holding everything else up.
> Another important part of this story is where the server lives and how it is managed. F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol. if they're gonna use gitlab just use a proper setup - bigco is already in the critical path...
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
Also, even 12-year-old hardware is wicked fast.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
I agree that "behind someone's TV" would be a terrible idea.
Not reassuring.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
I don’t think a state actor would actually break into either in this case, but if they did, then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have a power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
> The previous server was 12 year old hardware
A Dell R620 is over 12 years old and WAY faster than an RPi 5, though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than an RPi.
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 cpu, $150 RAM, that leaves $100 for storage, still likely need power and case.)
https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...
https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...
For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding edge hardware.
Saying this on HN, of course.