Author must not have worked in enterprise software before.
That's a classic trick where the developer will push back on the bug author and say "I can't reproduce this, can you verify it with the latest version?" without actually doing anything. And if it doesn't get confirmed then they can close it as User Error or Not Reproducible.
Of course, the only way to counter this is by saying "Yes I verified it" without actually verifying it.
From experience with Microsoft (paid) support (after doing 5 tickets because it's never the right team and apparently moving tickets internally is for losers), they will ask for proof of the reproduction. And they will take every opportunity to shift the blame ("Oh I can see in the log you're running an antivirus, open a ticket with them. Closed").
My favourite variant of this merry-go-round is when they ask you to demonstrate the issue live in a Teams session, you do so, and there's this moment of silence followed by an "Oh... I see".
Then you assume, naively, that this means they've recognised that there really is a product problem and will go off and fix it. However, the support tech then needs to reproduce the issue for the development team.
They invariably fail to do so for any number of reasons, such as: This only happens in my region, not others. Or the support tech's lab environment doesn't actually allow them to spin up the high-spec thing that's broken. Or whatever.
Then the ticket gets rejected with "can't reproduce" after you've reproduced the issue, with a recorded video and everything as evidence.
If you then navigate that gauntlet, the ticket is most typically rejected with "It is broken like that by design, closed."
I once recompiled OpenSSL so that s_server -www would return the correct, static XML blob for a buggy .NET application, just to build them a reproducer that didn't rely on our product at all. It was self-contained on a very barren Windows VM they could play with to their heart's content, and it didn't even touch the network, because everything connected via loopback, so they couldn't blame that either.
Turns out there was a known bug in Microsoft schannel that had yet to be patched and they'd wasted weeks of our effort by not searching their own bug tracker properly.
It's kind of sad, how the market went. I suppose there are pluses too.
But back in the 80s and 90s, margins were significantly higher. If you look at hardware, I recall selling hardware with 30% margin, if not more... even 80% on some items.
Yet what came with that was support, support, support. And when you sell 5 computers a month, instead of 500, well... you need that margin to even have a store. Which you need, because there was no wide-scale internet.
On the software side, it was sort of the same. I remember paying $80 for some pieces of software, which would be like $200 today. You'd pay $1 on an app store for such software, but I'd also call the author if there was a bug. He'd send an update in the mail.
I guess my point is, in those days, it was fun to fix issues. The focus was more specific, there was time to ply the trade, to enjoy it, to have performant, elegant fixes.
Now, it's all "my boss is hassling me and another bug will somehow mean I have to work harder", which is... well, sad.
Yep. On the other side of the curtain this often isn't nefarious. It's a simple cost/benefit analysis of spending time on something that one user is complaining about versus a backlog of higher business priorities. I've seen this in my own work, and it makes me sad for the user, but it often does take real effort to shepherd these bug reports through.
I totally understand that from the perspective of individual employees: they have little incentive to do more than the bare minimum to close tickets. But this behavior is typically a symptom of broken corporate culture and failure to align internal metrics. For every customer who takes the trouble to submit a formal bug report there are likely many others who just live with it, and badmouth you to other customers. Doing deep investigations of even minor bug reports also tends to expose other, more serious latent bugs. And root cause analysis allows you to create closed-loop solutions to prevent similar future bugs.
Large monopolistic tech companies like Apple and Microsoft can afford to ignore this stuff for years because there are few realistic alternatives. But longer term eventually a disruptive competitor comes along who takes product quality and customer service more seriously.
There are also going to be mountains of bugs resulting from cosmic rays hitting the computer, defective RAM chips, or weird modifications to the system that the reporter hasn't mentioned.
You could sink an infinite amount of time investigating and find nothing. At some point you have to cut off the time investment when only one person has reported it and no devs have been able to reproduce it.
What if no devs even tried to reproduce it, and they have no reason to believe they've fixed the bug with any other changes?
That seems to be the case described in the article. In such a situation, I think it's dishonest to ask the reporter to expend even more effort when you've spent zero. Just close it if you don't want to do it, you don't have to be a jerk to your customers, too, by sending them off on a wild goose chase.
Otherwise, why not ask the reporter to reproduce the issue every single day until you choose to fix it in some unknown point in the future, and if they miss a day, it gets closed? That seems just as arbitrary.
It's a false dichotomy - something being "a simple cost/benefit analysis" doesn't remove the ethical dimension, and can absolutely be nefarious. A movie villain saying "it was just business" doesn't make their actions less villainous.
I’d argue that there should be no higher business priority than shipping a product you already sold. If you sold a product and your customer spends their time documenting exactly why and how you sold them something that’s broken, you should make that a high priority. As a natural progression, you’ll start shipping less buggy / better tested products and that’s how you unlock yourself from the obligation you made to your existing customers to do other work.
Not directed at you of course, just the proverbial “you” from the frustration of a purchaser of software.
Careful saying that too loudly, the “ship new features at all costs” gang will come for your head. They don’t approve of things like “quality software” and “making stuff that works past the demo and cursory inspection” or “actual user utility”.
As an open source maintainer, I feel that statement is really unfair. Yes, we do sometimes close bug reports without evidence they are fixed. But:
- We owe you nothing! And the fact that people still expect maintainers to work for them is really sad, IMHO.
- Unlike corporate workers, nobody is measuring our productivity therefore we have no incentive to close issues if we believe they are unfixed. That means that when we close the issue, we believe it has a high chance of being fixed, and also we weigh the cost of having many maybe-fixed open issues against maybe closing a standing issue, and (try to) choose what's best for the project.
Hi, bigcorp employee getting showered with tickets here.
I don't have enough time in the day to deal with the tickets where the reporter actually tries, let alone the tickets where they don't.
If I tell you to update your shit, it's because it's wildly out of date, to the point that your configuration is impossible for me to reproduce without fucking up my setup to the point that I can't repro 8 other tickets.
Nobody memorizes the contents of the git-log with every release. If you show me someone who does, I will show you a liar. You updating your thing once saves me from updating and downgrading dozens of times. Deal with it.
Please tell us where you work so we can avoid all of your company’s software. Unless it’s Microsoft, because we’ve already seen the results of that attitude there.
I don't see how it's an unreasonable request. If you demand that I work with some ancient version, I then have to install and uninstall said program every time I work on your ticket specifically. You will be prioritized last, because my effectiveness is measured by how many tickets I close.
Back when I worked at Apple I would just try it in whatever I had installed. If it didn't reproduce I'd write "Cannot reproduce in 10.x.x" and close it. Maybe a third were like that, duplicates of some other issue that was resolved long ago.
Anyone that attached a repro file to their issue got attention because it was easy enough to test. Sometimes crash traces got attention, I'd open the code and check out what it was. If it was like a top 15 crash trace then I'd spend a lot longer on it.
If the ticket was long and involved like "make an iMovie and tween it in just such and such a way" then probably I'd fiddle around for 10-15 minutes before downgrading its priority and hope a repro file would come about.
There were a bunch of bug reports for a deprecated codec that I closed and one guy angrily replied that I couldn't just close issues I didn't want to fix!
Guess what buddy, nobody's ever going to fix it.
The oldest bug like that I ever fixed was a QuickDraw bug from when I was 8 but it was just an easy bounds check one liner.
But the mistake OP is making is assuming this one thing that annoyed him somehow applies to the whole Apple org. Most issues were up to engineers and project managers to prioritize, every team had their own process when I was there.
I think that's entirely dependent on the workload the company is placing on their support staff. If Apple decides the techs should be handling 10 tickets at once, then the techs have a choice:
1. Tell everyone to update their shit, and close tickets if they don't.
2. Waste several hours per day uninstalling and reinstalling 10 versions of the same program.
One of these will allow you to close lots of tickets immediately, and handle the remaining ones as efficiently as possible. Yay! Good job, peon! You get a raise!
The other approach will result in a deep backlog, slow turnaround times, and lower apparent output from management's perspective. Boo! Bad job, peon! You're fired!
I got reeeeally good at producing repro gifs that I could plug straight inline into email replies to "can't repro"; it's forever clear that most developers either don't know how to test the product they are building, or simply can't be bothered to try.
I've worked with enterprise software. The result is that people will eventually just wait a few hours/days and lie, if they even care enough to do that. The perverse incentives destroy whatever utility a bug tracker could bring. In theory, transparency could help by changing the incentives, if third parties analyzed the metrics and called out the bullshit to an audience that matters.
> Of course, the only way to counter this is by saying "Yes I verified it" without actually verifying it.
I'm not going to lie. That's not who I am. If Apple really wants to close a bug report when the bug isn't fixed, that's on their conscience, if they have one.
If you are a veteran of software in a big company, we all know there will be weekly or bi-weekly meetings that some PM will set up. All the PM will do is go over the JIRA tickets and be like "is this still happening". Default answer is "no", as in "I didn't even try to reproduce it, do you think I have time to even do it?". Default answer by spineless QA person is also "didn't try it again yet". Then, the PM closes the ticket. It is much easier for QA person to say "Yes I verified it" if you are remote and developer cannot see the lies on your bad poker face.
Ooh this gives me an interesting passive-aggressive idea to counter pointless "is this still relevant" questions. "No, I haven't hit this in the last 2 days." "No, I haven't hit this since I gave up trying to do it with your tool." And so forth.
The less passive-aggressive version is to use this obviously-unhelpful answer of the obviously-unhelpful question, to actually have a conversation to get the PM to recognize that the default state of a ticket is in fact "no change." Ultimately that may turn into a stale bot if the PM realizes the policy they actually want is some sort of timeout, but at least it's not a time consuming meeting!
(Note, a cathartic thought experiment, but not really good manners to actually do!)
I also hate this pressure of it being on the user to come up with a minimal reproducing example. That means that any bug of any moderate complexity will never get fixed because you can't always reduce them to a few steps and they may be statistical.
A bug is a bug, no matter the developers' opinion or the complexity of the bug.
All kinds of open source projects do this too. It's really annoying. It's one thing if the authors actually try and fail to verify the bug, but these days it seems like most projects just close "stale" bugs as a matter of course. This is equivalent to assuming that any given bug is automatically fixed after X amount of time, which is pretty absurd.
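That assumption can be written out literally. Here is a hypothetical sketch of a stale bot's decision rule (the function name and the 90-day cutoff are made up for illustration; real bots just make the window configurable):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # arbitrary example cutoff

def should_close(last_activity: datetime, now: datetime) -> bool:
    # The bot's entire "verification" step: silence for N days is
    # treated as if the bug had been confirmed fixed.
    return now - last_activity > STALE_AFTER

# A bug reported mid-2019 with no further comments gets closed unexamined.
print(should_close(datetime(2019, 6, 1), datetime(2020, 1, 1)))  # True
```

Note that nothing in the rule consults the code, the bug, or the reporter: the only input is elapsed time.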
In Scotland, they close an issue by taking a vote of "OK", "Broken", or "Not Proven".
I believe they also have attorneys. Perhaps that's how Apple could make bug-tracking more effective -- hire a prosecuting attorney and a defending attorney for each bug.
> perhaps praying that the bug had magically disappeared on its own, with no effort from Apple.
I suspect that this is a common approach. It maybe even works, often enough, to make it standard practice.
For myself, I've stopped submitting bug reports.
It's not the being ignored that bothers me; it's that when they pay attention, they basically insist that I become an unpaid systems engineering QC person and go through enormous effort to prove the bug exists.
> they basically insist that I become an unpaid systems engineering QC person
Microsoft support is guilty of this, especially for Azure & 365 issues.
Like sorry, but you aren't paying me to debug your software. Here's a report, and here's proof of me reproducing the problem & some logs. That's all I'm going to provide. It's your software, you debug it.
Damn. I've put quite a lot of effort into open source tools w.r.t. debugging and bugfixing, but yeah putting that for a corporate product that doesn't even respect you must be draining.
I recognize that this is annoying from a user perspective, but I do understand it. Not all bugs are easily reproducible (and even if they are 100% reproducible for the user, it's not always so easy for the developers). Also sometimes you make a change to the code that you think might be in a related area, and so sometimes the most "efficient" thing is just to ask the user to re-test.
When I close an old bug that is not actionable, I do feel bad about it. But keeping the bug open when realistically I can't really do anything with it might be worse.
Back in another part of my career I worked a lot with putting Macs on ActiveDirectory. And there was a common refrain from Apple about bugs in that implementation: "works on 17!".
The joke is that Apple owns the 17.x.x.x class-A range on the Internet (they got in early; they also have a class-B and used to have a second one that they gave back), and what engineers were really saying is that they couldn't reproduce the problem on the AD systems Apple had set up (often because AD had been set up with a .local domain, a real no-no, but it was in Microsoft's training materials as an example at the time...).
I used to think that there is no harm in keeping the bug open. I think if you honestly feel that you have the time and resources to go back to the bug and fix it, then by all means keep it open.
But I find that sometimes I can tell from experience that the IR is not actionable and that it will never be fixed. Some examples:
* There's not enough info to reproduce the issue and the user either can't or won't be able to reproduce it themselves. Intermittent bugs generally fall into this category.
* The bug was filed against some version of the software that's no longer in production (think of the cloud context where the backend service has been upgraded to a newer version).
Sometimes the cost to investigate a bug is so high relative to the pain it causes that it just gets closed as a WONTFIX. These sometimes suck the most, because they are often legitimate bugs with possible fixes, but they will never be prioritized highly enough to get fixed.
Or sometimes the bug is only reproducible using some proprietary data that I don't have access to and so you sometimes have no choice but to ask the bug filer "can you still reproduce this?".
Computer systems are complicated. And real-world systems consisting of multiple computer systems are even more complicated.
I think asking someone if they can still reproduce an issue is valid. Especially if it was trivially reproducible for them, and now it isn't, that seems like a fine resolution, and the bug should be closed.
But in the other cases, closing the bug seems to me to be a way to perturb metrics. It might be true that you'll never fix a given bug, but shouldn't there be a record of the "known defects", or "errata" as some call them?
For your specific scenarios:
- lack of information on how to reproduce or resolve a bug doesn't mean it doesn't exist, just that it's not well understood.
- For the "new version" claim, I've seen literal complete rewrites contain the same defects as the previous version. IMHO the author of the new version needs to confirm that the bug is fixed (and how/why it was fixed)
- I agree there are high cost bugs that nobody has resources to fix, but again, that doesn't mean they don't exist (important for errata)
- Similarly with proprietary data, if you aren't allowed to access it, but it still triggers the bug, then the defect exists
In general my philosophy is to treat the existence of open bugs as the authoritative record of known issues. Yes, some of them will never be solved. But having them in the record is important in and of itself.
What is the use in keeping it open when no one will ever look at it again after it goes stale? It still exists in the system if you ever wanted to find it again or if someone reports the same issue again. But after a certain time without reconfirming the bug exists, there is no point investigating because you will never know if you just haven't found it yet or if it was fixed already.
See my reply to eminence32 - bug tracking serves as a list of known defects, not as a list of work the engineers are going to do this [day/month/year].
How is that worse? Leaving it open signals to anyone searching about it that it's still an issue of concern. It will show up in filters for active bugs, etc. Closing it without fixing it just obfuscates the situation. It costs nothing (except pride?) to leave "Issues (1)" if there is indeed an issue.
Apple did not say they couldn't reproduce it. Neither did they say that they thought they fixed it. They refused to say anything except "Verify with macOS 26.4 beta 4".
> and even if they are 100% reproducible for the user, it's not always so easy for the developers
It's not easy for the user! Like I said in the blog post, I don't usually run the betas, so it would have been an ordeal to install macOS 26.4 beta 4 just to test this one bug. If anything, it's easier for Apple to test when they're developing the beta.
> the most "efficient" thing is just to ask the user to re-test.
Efficient from Apple's perspective, but grossly inefficient from the bug reporter's perspective.
> realistically I can't really do anything with it
In this case, I provided Apple with a sample Xcode project and explicit steps to reproduce. So realistically, they could have tried that.
I suspect that your underlying assumption is incorrect: I don't think Apple did anything with my bug report. This is not the first time Apple has asked me to "verify" an unfixed bug in a beta version. This seems to be a perfunctory thing they do before certain significant OS releases to clear out older bug reports. Maybe they want to focus now on macOS 27 for WWDC and pretend that there are no outstanding issues remaining. I don't know exactly what's going through their corporate minds, but what spurred me to blog about it is that they keep doing this same shit.
I don't work at Apple, so I can't comment on that. But that doesn't always help. There's been plenty of times where I have a full HAR file from the user and I can clearly see that something went wrong, but that doesn't always mean I can reproduce the issue. (I recognize a HAR file doesn't represent the complete state of the world, but it's often one of the best things a backend developer can get)
That’s easy enough. The hard part is doing so without capturing a bunch of email, messages, and other private data that happens to be in memory at the time.
Ignorant question, if privacy didn’t matter and they had an atomically identical machine, would there still be plenty of edge cases where it was the printer or the Wi-Fi causing the issue?
In any case I would have said it sounds difficult on every front
I should be more precise. Capturing the system state isn’t too hard. Turning that into a reproducer may be quite hard, because of things like you say. There are certainly a lot of bugs that such a capture would make easier to figure out, but it wouldn’t be a panacea.
Story time. I used to work for Facebook (and Google) and lots of games were played around bugs.
At some point the leadership introduced an SLA for high- and then medium-priority bugs. Why? Because bugs would sit in queues for years. The result? Bugs would often get downgraded in priority at or close to the SLA. People even wrote automated rules to see if their bugs filed got downgraded to alert them.
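Those automated rules could be quite trivial. A hypothetical sketch (the real ones watched the internal tracker; the names here are made up, and the core check is just a priority comparison, where a larger number means less urgent):

```python
from dataclasses import dataclass

@dataclass
class BugEvent:
    bug_id: int
    old_priority: int  # 0 = most severe, 4 = least
    new_priority: int

def is_downgrade(event: BugEvent) -> bool:
    # The priority number going up means the bug was quietly demoted,
    # conveniently resetting or escaping its SLA clock.
    return event.new_priority > event.old_priority

events = [BugEvent(101, 2, 2), BugEvent(102, 2, 3), BugEvent(103, 0, 1)]
alerts = [e.bug_id for e in events if is_downgrade(e)]
print(alerts)  # [102, 103]
```

The filer would get pinged on any bug in `alerts` and could push back before the downgrade stuck.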
Another trick was to throw it back to the user, usually after months, ostensibly to request information, asking "is this still a problem?" or just adding "could not reproduce". Often you'd get no response. Sometimes the person was no longer on the team or with the company, or they just lost interest or didn't notice. Great, it's off your plate.
If you waited long enough, you could say it was "no longer relevant" because that version of the app or API had been deprecated. It's also a good reason to bounce it back with "is this still relevant?"
Probably the most Machiavellian trick I saw was to merge your bug with another one vaguely similar that you didn't own. Why? Because this was hard to unwind and not always obvious.
Anyone who runs a call center or customer line knows this: you want to throw it back at the customer because a certain percentage will give up. It's a bit like health insurance companies automatically sending a denial for a prior authorization: to make people give up.
I once submitted some clear bugs to a supermarket's app and I got a response asking me to call some 800 number and make a report. My bug report was a complete way to reproduce the issue. I knew what was going on. Somebody simply wanted to mark the issue as "resolved". I'm never going to do that.
I don't think you can trust engineering teams (or, worse, individuals) to "own" bugs. They're not going to want to do them. They need to be owned by a QA team or a program team that will collate similar bugs and verify something is actually fixed.
Google had their own versions of these things. IIRC bugs had both a priority and a severity for some reason (they were the same 99% of the time), each between 0 and 4. So a standard bug was p2/s2; p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".
I've basically given up on filing bug reports because I'm aware of all these games and getting someone to actually pay attention is incredibly difficult. So much of this comes down to stupid organizational-level metrics about bug resolution SLAs and policies.
> Google had their own versions of things. IIRC bugs had both a priority and a severity for some reason (they were the same 99% of the time) between 0 and 4. So a standard bug was p2/s2. p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".
Yeah, I've done that. I find it much more honest than automatically closing it as stale or asking the reporter to repeatedly verify it even if I'm not going to work on it. The record still exists that the bug is there. Maybe some day the world will change and I'll have time to work on it.
I'm sure the leadership who set SLAs on medium-priority bugs anticipated a lot of bugs would become low-priority. They forced triage; that's the point.
> People even wrote automated rules to see if their bugs filed got downgraded to alert them.
This part though is a sign people are using the "don't notify" box inappropriately, denying reporters/watchers the opportunity to speak up if they disagree about the downgrade.
I’ve been dealing with ElevenLabs pulling this same garbage.
I’ll fill out a bug report, wait a few days to a week for a response (often AI-generated), and then 48 hours later their bot marks it as stale, telling me to check if it’s still broken or they’ll assume it’s fixed lol
My only positive experience reporting bugs post-early-startup was with the Chromium team: I usually get assigned a dedicated person who verifies the report and is reachable to help recreate the issue within a few days. Twice I've seen bugs go from report to fix in Canary in under a week.
That's impressive. I've only reported one bug to Chromium, years ago. It was a bug in their CSS engine and I included an HTML file with a full repro. It took them a few years to actually fix it since the person who was initially assigned it never bothered, eventually left Google, and nobody picked it back up for a while. But they did eventually fix it, so that's something, I suppose.
Edit: this comment elsewhere in the thread is closer to my experience: https://news.ycombinator.com/item?id=47523107 Certainly in my own stint at Google I saw the same thing--bugs below a certain priority level would just never get looked at.
I was literally just coming in here to comment "in before someone says this is fine and there's no issue." and the first(!) comment is effectively "this is fine and there's no issue."
The sentiment feels like software folks are optimizing for the local optimum.
It's the programmer equivalent of "if it's important they'll call back." while completely ignoring the real world first and second-order effects of such a policy.
Feeling overwhelmed by an insurmountable mountain of bugs and issues is not the way either. We can argue that closing the tickets is not the best approach, but if realistically nobody will ever look at them, why not make the developers feel better?
Either you truly need to fix the bugs, in which case the feeling is good and maybe more effort should go that way (more resources assigned to it or whatever), or you're at a scale where tackling everything is impossible and you shouldn't feel overwhelmed by seeing the noise then.
But I think the modern industry pretends it's all fine, to convince itself that it's ok to chase the next feature instead.
Move them to a dedicated status: “Never triaged”, “Lost”, “Won’t do”, what have you.
That way, you’re at least not deluding yourself about your own capacity to triage and fix problems, and can hopefully search for and reopen issues that are resurfaced.
It’s really a question of whether a team believes bugs are defects that deserve to be fixed, or annoyances that get in the way of shipping features. And all too often, KPIs and promotions are tied to the features, not the bugs.
Plus, I’ve been in jobs where fixing bugs ends up being implicitly discouraged; if you fix a bug then it invites questions from above for why the bug existed, whether the fix could cause another bug, how another regression will be prevented and so on. But simply ignoring bug reports never triggered attention.
I have been on the other side, where I couldn't replicate/verify a bug and thought the user would tell me if it was fixed. After exhausting myself and finally contacting the user, I found out it had already been resolved.
If you are looking at it from a business perspective, there is little value to fixing a bug that is not impacting your revenue.
Of course, before closing it, the developers should determine whether the bug may have a greater impact that will, or already does, cause a problem affecting revenue - not doing that is negligent.
Considering Apple is one of the largest companies in the world, raking in money, what consequential effects are you talking about? It certainly doesn't seem to hurt their bottom line, which is the only thing they care about.
As a software developer, I don't have any problem with this. If a bug doesn't bother somebody enough for them to follow up, then spend time fixing bugs for people who will. Apple isn't obligated to fix anybody's bug.
It's not like they were nagging him about it - it's been years, and they had major releases in the meantime. Quite possibly it was fixed as a side effect of something else.
> It certainly doesn't seem to hurt their bottom line, which is the only thing they care about.
I want to draw out this comment because it's so antithetical to what Apple marketed that it stood for (if you remember the wonderful 1984 commercial Apple created, which was very much against the big behemoths of the day and the way they operated).
We're at the point where we've normalized crappy behavior and crappy software so long as the bottom line keeps moving up and to the right on the graph.
Not, "Let's build great software that people love.", but "How much profit can we squeeze out? Let's try to squeeze some more."
We've optimized for profit instead of happiness and customer satisfaction. That's why it feels like quality in general is getting worse, profit became the end goal, not the by-product of a customer-centric focus. We've numbed ourselves to the pain and discomfort we endure and cause every single day in the name of profit.
Basically every single old bug report I've ever seen is essentially a red herring that usually can't be reproduced anymore after N years and takes away time from focusing on newer, more solvable issues. I don't see the issue with removing that noise if it's no longer being reported, but to each their own, I suppose.
Sure. So try to reproduce it on a current build, and close with "No longer reproducible on ___". That would be good practice. Closing silently because no one can be bothered to evaluate it at all is horrendous, and it creates the user expectation that "no one looks at these, so I'm not going to keep reporting", which in turn "justifies" developers closing old bugs.
Every other month I get an email from a legacy pre-GH bug tracker that's either a "me too" or "bug fixed in latest release" a decade after I filed these one-offs you would be so quick to throw away. Bugs with no activity for years on end.
the right thing to do is to actually ping the original reporter if possible, or a developer that you might assign the bug to and try to drive it to a conclusion.
if the answer is 'everything in that part of the code has been rewritten' or 'yeah, that was a dup, we fixed that' or 'there isn't enough information here to try to reproduce it even if we wanted to' or 'this a feature request that we would never even consider' or some other similar thing, then sure delete it.
otherwise you're just throwing away useful information.
edit: I think this difference of opinion is due to a cultural difference between (a) the software should be as correct as reasonably possible and (b) if no one is complaining then there isn't a problem
Closing bugs because of a rewrite is probably the most harmful practice in the whole industry. The accumulated unresolved issues of your existing code base are a rich resource of test cases. Writing the new code base without checking to see if it fixes the old bugs is a mistake.
At work I literally just spent a half hour meeting with colleagues doing backlog management to clear out old bugs that were random one-offs and never came up again.
Observation: Long, long ago I submitted a bug to Microsoft. I was new at the time and didn't distill it down to the minimum, just gave a scenario that would 100% reproduce. I was contacted months later because someone looked at it and couldn't reproduce.
Yeah, I had found one manifestation of something else that they fixed by the time someone looked at it. The fix in the notes didn't look anything like my bug; only by observing that it now worked was I able to figure out that I had been the blind man trying to describe an elephant.
I love that when I search for an odd behaviour or bug in macOS or iOS, most of the time I will find a years-old bug report with some irrelevant or useless "workaround".
This is not too unusual. I've completely given up on bug reports, it's almost always a complete waste of my time.
I'm currently going around in circles with a serious performance issue with two different vendors. They want logs, process lists and now real time data. It's an issue multiple people have complained about in their forums and on reddit. The fact that this exact same thing is going on with TWO different companies ...
One being that the most recent version is on their CDN but not in their [npm package](https://www.npmjs.com/package/livephotoskit?activeTab=readme), which hasn't been updated in 7 years.
You know what they did with this issue? They've marked it as "Unable to diagnose".
Also I've mentioned something about their documentation not being up to date for a function definition. This issue has remained open for 4 years now.
Former Apple employee here. This is a deeper quirk of Apple culture than one would guess.
Each and every Radar (Apple's internal issue tracker is called Radar, and each issue is called a Radar) follows a state machine, going from the untriaged state to the done state. One hard-coded state in this is Verify. Each and every bug, once Fixed, cannot move to Closed without passing through the Verify state. It seems like a cool idea on the surface. It means that Apple assumes and demands that everything must be verified as fixed (or feature complete) by someone. Quite the corporate value to hold the line on, and it goes back decades.
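The lifecycle described here could be sketched roughly like this. Only Fixed, Verify, and Closed are actually named above, so the other states and the exact transition set are guesses, not Radar's real schema:

```python
from enum import Enum, auto

class RadarState(Enum):
    UNTRIAGED = auto()  # named above
    ANALYZE = auto()    # hypothetical intermediate state
    FIXED = auto()
    VERIFY = auto()
    CLOSED = auto()

# The hard-coded rule from the description: a Fixed Radar cannot
# reach Closed without passing through Verify.
TRANSITIONS = {
    RadarState.UNTRIAGED: {RadarState.ANALYZE, RadarState.CLOSED},
    RadarState.ANALYZE: {RadarState.FIXED, RadarState.CLOSED},
    RadarState.FIXED: {RadarState.VERIFY},  # no direct path to Closed
    RadarState.VERIFY: {RadarState.CLOSED, RadarState.ANALYZE},  # verification may fail
    RadarState.CLOSED: set(),
}

def advance(current: RadarState, target: RadarState) -> RadarState:
    """Move a Radar to `target`, enforcing the allowed transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

The point of the sketch is the missing edge: there is no `FIXED -> CLOSED` transition, so every fix is forced through Verify, which is exactly what produces the stranded-in-Verify backlog described below.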
I seriously hated the Verify state. It caused many pathologies. Imagine trying to run a burndown of your sprint when zero of the Radars are closed, because they have to be verified in production before being closed, meaning you cannot verify until after the release. Another pathology is that lots (thousands and thousands) of Radars end up stranded in Verify. Many, many engineers finish their fix, check it in, it gets released and then they move on. This led to a pathology that the writer of this post got caught up in: There is lots of "org health" reporting that goes out showing how many Radars are unverified and how long your Radars stay in the unverified state on average. A lot of teams simply close Radars that remain unverified for some amount of time because they are being "graded" on this.
> Imagine trying to run a burndown of your sprint when zero of the Radars are closed, because they have to be verified in production before being closed, meaning you cannot verify until after the release.
I think most teams use verify as a "closed" state to hide all that messiness. But sure, zero bugs is a project management fiction and produces perverse outcomes.
There is some bot that will match your issue to some other 3 vaguely related issue, then auto close in 3 days. The other vaguely related issues are auto closed for inactivity. Nothing is ever fixed, which is why they can't keep the thing from messing with your scroll position for years now.
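Matching aside, the behavior being described boils down to an inactivity timer. A caricature in Python (the issue schema, field names, and 3-day threshold are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=3)

def close_stale(issues, now=None):
    """Close any open issue with no activity for STALE_AFTER; return closed ids."""
    now = now or datetime.now(timezone.utc)
    closed = []
    for issue in issues:
        if issue["state"] == "open" and now - issue["last_activity"] > STALE_AFTER:
            issue["state"] = "closed"
            issue["resolution"] = "closed for inactivity"
            closed.append(issue["id"])
    return closed
```

Note that nothing in this loop ever inspects the bug itself, which is the complaint: "inactive" and "resolved" are treated as the same thing.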
> FB22057274 “Pinned tabs: slow-loading target="_blank" links appear in the wrong tab”
If you're not testing your code under extreme latency it will almost certainly fail in all kinds of hilarious ways.
I spend a lot of time with 4G as my only internet connection. It makes me feel that most software is quickly produced, poorly tested, and thrown out the door on a whim.
The replies here suggest that many of us have been on both sides and that Apple's behavior is a great way to trade bug-triaging time on the org side for a few frustrated reporters on the customer side.
The problem is it frustrates the most diligent bug reporters, who put time into filing high-quality issues, resulting in overall lower bug-submission quality.
A good compromise might be to select high-quality bugs or users with good rep and disable auto-closing for them. In the age of AI it shouldn't be too hard to correlate all those low-quality duplicates and figure out what's worth keeping alive, no?
Oh you sweet summer child. Everyone else does this.
Yes, I hate it too.
Put yourself in the position of the employee on the other side. They currently have 647 bugs in their backlog. And they also have actual work to do that's not even related to these bugs.
You come to work. Overnight there are 369 emails (after many filters have been applied) and 27 new bugs (14 of which are against a previous version). You triage. If you think 8h is enough to deal with 369 emails (67 of which are actionable. But which 67?) and actually close 27 bugs, then… well, then you'd be assigned another 82 bugs and get put on email lists for advisory committees.
Before you jump to "why don't they just…", you should stop yourself and acknowledge that this is an unsolved problem. Ignore them, let them pile up? That's not a solution. Close them? No! It's still a problem! Ask you to verify it (and implicitly confirm that you still care)? That's… a bit better, actually.
"Just hire more experts"… experts who are skilled enough, yet happy to work all day trying to reproduce these bugs? Sure, you can try. But it's extremely not a "why don't they just…".
Turns out there was a known bug in Microsoft schannel that had yet to be patched and they'd wasted weeks of our effort by not searching their own bug tracker properly.
But back in the 80s and 90s, margins were significantly higher. If you look at hardware, I recall selling hardware with 30% margin, if not more... even 80% on some items.
Yet what came with that was support, support, support. And when you sell 5 computers a month, instead of 500, well.. you need that margin to even have a store. Which you need, because no wide-scale internet.
On the software side, it was sort of the same. I remember paying $80 for some pieces of software, which would be like $200 today. You'd pay $1 on an app store for such software, but I'd also call the author if there was a bug. He'd send an update in the mail.
I guess my point is, in those days, it was fun to fix issues. The focus was more specific, there was time to ply the trade, to enjoy it, to have performant, elegant fixes.
Now, it's all "my boss is hassling me and another bug will somehow mean I have to work harder", which is .. well, sad.
Large monopolistic tech companies like Apple and Microsoft can afford to ignore this stuff for years because there are few realistic alternatives. But longer term eventually a disruptive competitor comes along who takes product quality and customer service more seriously.
You could sink an infinite amount of time investigating and find nothing. At some point you have to cut off the time investment when only one person has reported it and no devs have been able to reproduce it.
That seems to be the case described in the article. In such a situation, I think it's dishonest to ask the reporter to expend even more effort when you've spent zero. Just close it if you don't want to do it, you don't have to be a jerk to your customers, too, by sending them off on a wild goose chase.
Otherwise, why not ask the reporter to reproduce the issue every single day until you choose to fix it in some unknown point in the future, and if they miss a day, it gets closed? That seems just as arbitrary.
Not directed at you of course, just the proverbial “you” from the frustration of a purchaser of software.
Or with open source projects. Fucking stalebot.
- We owe you nothing! And the fact that people still expect maintainers to work for them is really sad, IMHO.
- Unlike corporate workers, nobody is measuring our productivity therefore we have no incentive to close issues if we believe they are unfixed. That means that when we close the issue, we believe it has a high chance of being fixed, and also we weigh the cost of having many maybe-fixed open issues against maybe closing a standing issue, and (try to) choose what's best for the project.
I don't think I've seen an issue of theirs that wasn't auto-closed.
I don't have enough time in the day to deal with the tickets where the reporter actually tries, let alone the tickets where they don't.
If I tell you to update your shit, it's because it's wildly out of date, to the point that your configuration is impossible for me to reproduce without fucking up my setup to the point that I can't repro 8 other tickets.
Nobody memorizes the contents of the git-log with every release. If you show me someone who does, I will show you a liar. You updating your thing once saves me from updating and downgrading dozens of times. Deal with it.
Anyone that attached a repro file to their issue got attention because it was easy enough to test. Sometimes crash traces got attention, I'd open the code and check out what it was. If it was like a top 15 crash trace then I'd spend a lot longer on it.
If the ticket was long and involved like "make an iMovie and tween it in just such and such a way" then probably I'd fiddle around for 10-15 minutes before downgrading its priority and hope a repro file would come about.
There were a bunch of bug reports for a deprecated codec that I closed and one guy angrily replied that I couldn't just close issues I didn't want to fix!
Guess what buddy, nobody's ever going to fix it.
The oldest bug like that I ever fixed was a QuickDraw bug from when I was 8, but it was just an easy bounds-check one-liner.
But the mistake OP is making is assuming this one thing that annoyed him somehow applies to the whole Apple org. Most issues were up to engineers and project managers to prioritize; every team had their own process when I was there.
1. Tell everyone to update their shit, and close tickets if they don't.
2. Waste several hours per day uninstalling and reinstalling 10 versions of the same program.
One of these will allow you to close lots of tickets immediately, and handle the remaining ones as efficiently as possible. Yay! Good job, peon! You get a raise!
The other approach will result in a deep backlog, slow turnaround times, and lower apparent output from management's perspective. Boo! Bad job, peon! You're fired!
I'm not going to lie. That's not who I am. If Apple really wants to close a bug report when the bug isn't fixed, that's on their conscience, if they have one.
The less passive-aggressive version is to use this obviously unhelpful answer to the obviously unhelpful question to actually have a conversation and get the PM to recognize that the default state of a ticket is, in fact, "no change." Ultimately that may turn into a stale bot if the PM realizes the policy they actually want is some sort of timeout, but at least it's not a time-consuming meeting!
(Note, a cathartic thought experiment, but not really good manners to actually do!)
A bug is a bug, no matter the developers' opinion or the complexity of the bug.
I believe they also have attorneys. Perhaps that's how Apple could make bug-tracking more effective -- hire a prosecuting attorney and a defending attorney for each bug.
I suspect that this is a common approach. It maybe even works, often enough, to make it standard practice.
For myself, I've stopped submitting bug reports.
It's not the being ignored that bothers me; it's that when they do pay attention, they basically insist that I become an unpaid systems-engineering QC person and go through enormous effort to prove the bug exists.
Microsoft support is guilty of this, especially for Azure & 365 issues.
Like sorry, but you aren't paying me to debug your software. Here's a report, and here's proof of me reproducing the problem & some logs. That's all I'm going to provide. It's your software, you debug it.
When I close an old bug that is not actionable, I do feel bad about it. But keeping the bug open when realistically I can't really do anything with it might be worse.
The joke is that Apple owns the 17.x.x.x class-A range on the Internet (they got in early; they also have a class-B and used to have a second class-B that they gave back), and what engineers were really saying is that they could not reproduce on the AD systems that Apple had set up (lots of times it was because AD had been set up with a .local domain, a real no-no, but it was in Microsoft's training materials as an example at the time...).
I've heard this from others before but I really don't understand the mindset.
What's the harm in keeping the bug open?
But I find that sometimes I can tell from experience that the IR is not actionable and that it will never be fixed. Some examples:
* There's not enough info to reproduce the issue and the user either can't or won't be able to reproduce it themselves. Intermittent bugs generally fall into this category.
* The bug was filed against some version of the software that's no longer in production (think of the cloud context where the backend service has been upgraded to a newer version).
* Sometimes the cost to investigate a bug is so high relative to the pain caused that it just gets closed as WONTFIX. These sometimes suck the most because they are often legitimate bugs with possible fixes, but they will never be prioritized high enough to get fixed.
* Or sometimes the bug is only reproducible using some proprietary data that I don't have access to, and so you sometimes have no choice but to ask the bug filer "can you still reproduce this?".
Computer systems are complicated. And real-world systems consisting of multiple computer systems are even more complicated.
But in the other cases, closing the bug seems to me to be a way to perturb metrics. It might be true that you'll never fix a given bug, but shouldn't there be a record of the "known defects", or "errata" as some call them?
For your specific scenarios:
- Lack of information on how to reproduce or resolve a bug doesn't mean it doesn't exist, just that it's not well understood.
- For the "new version" claim, I've seen literal complete rewrites contain the same defects as the previous version. IMHO the author of the new version needs to confirm that the bug is fixed (and how/why it was fixed)
- I agree there are high cost bugs that nobody has resources to fix, but again, that doesn't mean they don't exist (important for errata)
- Similarly with proprietary data, if you aren't allowed to access it, but it still triggers the bug, then the defect exists
In general my philosophy is to treat the existence of open bugs as the authoritative record of known issues. Yes, some of them will never be solved. But having them in the record is important in and of itself.
Apple did not say they couldn't reproduce it. Neither did they say that they thought they fixed it. They refused to say anything except "Verify with macOS 26.4 beta 4".
> and even if they are 100% reproducible for the user, it's not always so easy for the developers
It's not easy for the user! Like I said in the blog post, I don't usually run the betas, so it would have been an ordeal to install macOS 26.4 beta 4 just to test this one bug. If anything, it's easier for Apple to test when they're developing the beta.
> the most "efficient" thing is just to ask the user to re-test.
Efficient from Apple's perspective, but grossly inefficient from the bug reporter's perspective.
> realistically I can't really do anything with it
In this case, I provided Apple with a sample Xcode project and explicit steps to reproduce. So realistically, they could have tried that.
I suspect that your underlying assumption is incorrect: I don't think Apple did anything with my bug report. This is not the first time Apple has asked me to "verify" an unfixed bug in a beta version. This seems to be a perfunctory thing they do before certain significant OS releases to clear out some older bug reports. Maybe they want to focus now on macOS 27 for WWDC and pretend that there are no outstanding issues remaining. I don't know exactly what's going through their corporate minds, but what spurred me to blog about it is that they keep doing this same shit.
https://devblogs.microsoft.com/oldnewthing/20241108-00/?p=11...
In any case I would have said it sounds difficult on every front
At some point the leadership introduced an SLA for high then medium priority bugs. Why? because bugs would sit in queues for years. The result? Bugs would often get downgraded in priority at or close to the SLA. People even wrote automated rules to see if their bugs filed got downgraded to alert them.
Another trick was to throw it back to the user, usually after months, ostensibly to request information, asking "is this still a problem?" or just adding "could not reproduce". Often you'd get no response; sometimes the person was no longer on the team or with the company, or they just lost interest or didn't notice. Great, it's off your plate.
If you waited long enough, you could say it was "no longer relevant" because that version of the app or API had been deprecated. It's also a good reason to bounce it back with "is this still relevant?"
Probably the most Machiavellian trick I saw was to merge your bug with another one vaguely similar that you didn't own. Why? Because this was hard to unwind and not always obvious.
Anyone who runs a call center or customer line knows this: you want to throw it back at the customer because a certain percentage will give up. It's a bit like health insurance companies automatically sending a denial for a prior authorization: to make people give up.
I once submitted some clear bugs to a supermarket's app and I got a response asking me to call some 800 number and make a report. My bug report was a complete way to reproduce the issue. I knew what was going on. Somebody simply wanted to mark the issue as "resolved". I'm never going to do that.
I don't think you can trust engineering teams (or, worse, individuals) to "own" bugs. They're not going to want to do them. They need to be owned by a QA team or a program team that will collate similar bugs and verify something is actually fixed.
Google had their own versions of things. IIRC bugs had both a priority and a severity for some reason (they were the same 99% of the time), each between 0 and 4. So a standard bug was p2/s2; p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".
I've basically given up on filing bug reports because I'm aware of all these games and getting someone to actually pay attention is incredibly difficult. So much of this comes down to stupid organizational-level metrics about bug resolution SLAs and policies.
Yeah, I've done that. I find it much more honest than automatically closing it as stale or asking the reporter to repeatedly verify it even if I'm not going to work on it. The record still exists that the bug is there. Maybe some day the world will change and I'll have time to work on it.
I'm sure the leadership who set SLAs on medium-priority bugs anticipated a lot of bugs would become low-priority. They forced triage; that's the point.
> People even wrote automated rules to see if their bugs filed got downgraded to alert them.
This part though is a sign people are using the "don't notify" box inappropriately, denying reporters/watchers the opportunity to speak up if they disagree about the downgrade.
I’ll fill out a bug report, wait a few days to a week to get a response, which is often AI-generated, and then 48 hours afterward their bot marks it as stale, telling me to check if it’s still broken or they’ll assume it’s fixed lol
Edit: this comment elsewhere in the thread is closer to my experience: https://news.ycombinator.com/item?id=47523107 Certainly in my own stint at Google I saw the same thing--bugs below a certain priority level would just never get looked at.
The sentiment feels like software folks are optimizing for the local optimum.
It's the programmer equivalent of "if it's important they'll call back." while completely ignoring the real world first and second-order effects of such a policy.
You feeling accomplished by seeing an empty list is not the goal!
But I think modern industry pretends it's all fine to convince themselves that it's ok to chase the next feature instead.
That way, you’re at least not deluding yourself about your own capacity to triage and fix problems, and can hopefully search for and reopen issues that are resurfaced.
"I deficated this issue. Closed."
Plus, I’ve been in jobs where fixing bugs ends up being implicitly discouraged; if you fix a bug then it invites questions from above for why the bug existed, whether the fix could cause another bug, how another regression will be prevented and so on. But simply ignoring bug reports never triggered attention.
These auto-closing policies usually originate from somewhere else.
Of course, the developers should be determining whether the bug has a greater impact that will or does cause a problem affecting revenue before closing it; not doing that is negligent.
As a software developer, I don't have any problem with this. If a bug doesn't bother somebody enough for them to follow up, then spend time fixing bugs for people who will. Apple isn't obligated to fix anybody's bug.
It's not like they were nagging him about it; it's been years, and they had major releases in the meantime. Quite possible it was fixed as a side effect of something else.
I want to draw out this comment because it's so antithetical to what Apple marketed that it stood for (if you remember, the wonderful 1984 commercial Apple created; which was very much against the big behemoths of the day and the way they operated).
We're at the point where we've normalized crappy behavior and crappy software so long as the bottom line keeps moving up and to the right on the graph.
> Not, "Let's build great software that people love.", but "How much profit can we squeeze out? Let's try to squeeze some more."
> We've optimized for profit instead of happiness and customer satisfaction. That's why it feels like quality in general is getting worse, profit became the end goal, not the by-product of a customer-centric focus. We've numbed ourselves to the pain and discomfort we endure and cause every single day in the name of profit.
:)
Funny at first but I’m coming around to that perspective
…yet
Apple has done the best job of creating this expectation.
Apple Feedback = compliments (and ideas)
Public Web = complaints & bug reports
Apple Support = important bug reports (can create feedback first then call immediately)
—
Prev comment w/link (2mo ago): https://news.ycombinator.com/item?id=46591541
Good luck doing that when the bug report (like virtually all bug reports in nature) doesn't provide sufficient reproduction steps.
Closing bugs automatically after a cron job demanded that the user verify reproducibility for the 11th time: obviously bad.
Pretty standard process.
In this case the bug wasn't fixed.
> A lot of teams simply close Radars that remain unverified for some amount of time because they are being "graded" on this.
The simple solution here: you should also be graded on closing bugs that get re-opened.
1. Apple engineers actually attempted to fix the bug.
2. Feedback Assistant "Please verify with the latest beta" matches the Radar "Verify" state.
I don't believe either of those are true.
There is some bot that will match your issue to some other 3 vaguely related issue, then auto close in 3 days. The other vaguely related issues are auto closed for inactivity. Nothing is ever fixed, which is why they can't keep the thing from messing with your scroll position for years now.
Well, it was trained on StackOverflow.
It has been known for decades that Apple largely ignores bug reports.
Mozilla is famous for having 20-year-old bug reports that get fixed after all that time.