The challenges of soft delete

(atlas9.dev)

194 points | by buchanae 15 hours ago

30 comments

  • MaxGabriel 13 hours ago
    This might stem from the domain I work in (banking), but I have the opposite take. Soft delete pros to me:

    * It's obvious from the schema: If there's a `deleted_at` column, I know how to query the table correctly (vs thinking rows aren't DELETEd, or knowing where to look in another table)

    * One way to do things: Analytics queries, admin pages, it all can look at the same set of data, vs having separate handling for historical data.

    * DELETEs are likely fairly rare by volume for many use cases

    * I haven't found soft-deleted rows to be a big performance issue. Intuitively this should be true, since queries should be O(log N)

    * Undoing is really easy, because all the relationships stay in place, vs data already being moved elsewhere (In practice, I haven't found much need for this kind of undo).

    In most cases, I've really enjoyed going even further and making rows fully immutable, using a new row to handle updates. This makes it really easy to reference historical data.

    If I was doing the logging approach described in the article, I'd use database triggers that keep a copy of every INSERT/UPDATE/DELETEd row in a duplicate table. This way it all stays in the same database—easy to query and replicate elsewhere.
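
    As a sketch of what I mean (Postgres syntax; table and column names invented):

      CREATE TABLE accounts_log (LIKE accounts);
      ALTER TABLE accounts_log
          ADD COLUMN log_op text,
          ADD COLUMN log_at timestamptz;

      CREATE FUNCTION log_account_change() RETURNS trigger AS $$
      BEGIN
          IF TG_OP = 'DELETE' THEN
              INSERT INTO accounts_log SELECT OLD.*, TG_OP, now();
          ELSE
              INSERT INTO accounts_log SELECT NEW.*, TG_OP, now();
          END IF;
          RETURN NULL;  -- the return value is ignored for AFTER row triggers
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER accounts_log_trg
          AFTER INSERT OR UPDATE OR DELETE ON accounts
          FOR EACH ROW EXECUTE FUNCTION log_account_change();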

    • postexitus 2 hours ago
      Soft deletes in banking are just a Band-Aid for the much bigger problem of auditability. You may keep the original record by soft deleting it, but if you don't take care of amendments, you will still lose auditability. The correct way is to use Event Sourcing, with each change to an otherwise immutable state being recorded as an Event, including a Delete (both of an Event and of the Object). This is even more problematic performance-wise, but Syncs and Snapshots exist for that exact purpose - or you can back the main table with a separate events table, with periodic "reconstruct"s.
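
      A minimal sketch of that events-table variant (names invented):

        CREATE TABLE account_events (
            event_id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
            account_id  bigint NOT NULL,
            event_type  text   NOT NULL,  -- e.g. 'Opened', 'Amended', 'Deleted'
            payload     jsonb  NOT NULL,
            recorded_at timestamptz NOT NULL DEFAULT now()
        );

        -- the "reconstruct": current state is just the latest event per account
        SELECT DISTINCT ON (account_id) account_id, event_type, payload
        FROM account_events
        ORDER BY account_id, recorded_at DESC;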
    • nine_k 13 hours ago
      > DELETEs are likely fairly rare by volume for many use cases

      All your other points make sense, given this assumption.

      I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.

      > Undoing is really easy

      Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.

      In short, there are cases when soft-deletion works well, and is a good approach. In other cases it does not, and is not. Analysis is needed before adopting it.

      • whizzter 3 hours ago
        If 50-70% of your data being dead is enough to cause issues, then you probably have an underlying indexing issue anyhow (because scaling to 2x-3x the customers would cause the same issues by sheer magnitude).

        That said, we've had soft-deletes, and in discussions about keeping them one argument was that they're really only a half-assed measure (data lost due to updates rather than deletes isn't saved at all).

      • KronisLV 4 hours ago
        > I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.

        I think we largely need support for "soft deletes" to be baked into SQL or its dialects directly and treated as something transparent (selecting soft deleted rows = special case, regular selects skip those rows; support for changing regular DELETE statements into doing soft deletes under the hood).

        https://news.ycombinator.com/item?id=43781109

        https://news.ycombinator.com/item?id=41272903

        And then make dynamically sharding data by deleted/not deleted really easy to configure.

        You soft deleted a few rows? They get moved to another DB instance, an archive/bin of sorts. Normal queries wouldn't even consider it; it would only be reached when you explicitly try to select soft deleted rows.

        • indigo945 42 minutes ago
          Well, Microsoft SQL Server has built-in Temporal Tables [1], which even take this one step further: they track all data changes, such that you can easily query them as if you were viewing them in the past. You can not only query deleted rows, but also the old versions of rows that have been updated.

          (In my opinion, replicating this via a `validity tstzrange` column is also often a sane approach in PostgreSQL, although OP's blog post doesn't mention it.)

          [1]: https://learn.microsoft.com/en-us/sql/relational-databases/t...
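
          A rough sketch of that validity-range approach in PostgreSQL (schema invented):

            CREATE EXTENSION IF NOT EXISTS btree_gist;

            CREATE TABLE prices (
                item_id  bigint    NOT NULL,
                amount   numeric   NOT NULL,
                validity tstzrange NOT NULL DEFAULT tstzrange(now(), NULL),
                -- at most one valid row per item at any instant:
                EXCLUDE USING gist (item_id WITH =, validity WITH &&)
            );

            -- "as of" query: the row whose validity contains that moment
            SELECT amount FROM prices
            WHERE item_id = 42
              AND validity @> timestamptz '2024-01-01';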

      • scott_w 3 hours ago
        > I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.

        Depending on your use-case, having soft-deletes doesn't mean you can't clean out old deleted data anyway. You may want a process that grabs all data soft-deleted X years ago and just hard-deletes it.
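
        For instance, a scheduled job running something like this sketch (names assumed):

          DELETE FROM orders
          WHERE deleted_at IS NOT NULL
            AND deleted_at < now() - interval '2 years';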

        > Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.

        Yes but this is no more complex than the current situation, where you have to always create the audit records.

      • tharkun__ 12 hours ago
        Agreed. And if deletes are soft, you likely really just wanted a complete audit history of all updates too (at least that's been true for the cases I've been part of). And then performance _definitely_ would suffer if you don't have a separate audit/archive table for all of those.
        • pixl97 11 hours ago
          I mean, yes, growth forever doesn't tend to work.

          I've seen a number of apps that require audit histories work on a basis where records are archived at a particular time; that's when the deletes occur and the indexes are fully rebuilt. This is typically scheduled during the least busy time of the year, as it's rather IO intensive.

          • tharkun__ 7 hours ago
            The oldest system I've worked with was a project started in ~1991. I don't recall when they started keeping history or for how long, and they might have trimmed history after some shorter legal retention period, but I worked on it ~15 years after that. And that's, what, 15-20 years ago by now, and I doubt they changed that part of the system. You've all likely bought products that were administered through this system.

            FWIW, no "indexes fully rebuilt" upon "actual deletion" or anything like that. The regular tables were always just "current" tables. History was kept in archive tables that were kept up-to-date via triggers. Essentially, the current tables never suffered any performance issues, and history was available whenever needed. If history access was needed for extensive querying, read replicas could provide it without any cost to the main database; but if something required "up to the second" consistency, the historic tables were also available on the main database, again with good performance. (As you can tell from the timelines, this was pre-SSDs, so multi-path I/O over fibre was what they had at the time, with automatic hot-spare failover between database hosts - no clouds of any kind in sight.)

            Replication was done by replicating the actual SQL queries that modified the data on each replica (multiple read replicas across the world) rather than replicating the data itself. Much speedier, so the application was able to use read replicas around the globe without requiring multi-master for consistency. Weekends were used to "diff" the replicas to ensure there were no inconsistencies, since applying the modifying SQL queries on each replica does - at least theoretically - have the potential to let the data go out of sync.

            Gee, I'm old, lol!

      • da_chicken 11 hours ago
        > I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.

        At that point you should probably investigate partitioning or data warehousing.

        • nextaccountic 3 hours ago
          What would be the benefit of data warehousing in this case?
          • da_chicken 30 minutes ago
            The reason to soft delete is to preserve the deleted data for later use. If enough of the system's usage needs to exclude that data that 75% soft-deleted rows become a performance problem, then you either need to move the soft-deleted data out of the way inside the table (partitioning) or to another table entirely.

            The correct thing to do if your retention policy is causing a performance problem is to sit down and actually decide what the data is truly needed for, and if you can make some transformations/projections to combine only the actual data you really use to a different location so you can discard the rest. That's just data warehousing.

            Data warehouse doesn't only mean "cube tables". It also just means "a different location for data we rarely need, stored in a way that is convenient only for those old-data needs". It doesn't need to be a different RDBMS or even a different database.

      • paulddraper 7 hours ago
        50-70% as the worst case isn't even necessarily that bad.

        (Again, a lot of it is O(log n), right?)

    • arnsholt 53 minutes ago
      The core system at my previous employer (an insurance company) worked along the lines of the solution you outline at the end: each table is an append-only log of point-in-time information about some object. So the current state is in the row with the highest timestamp, and all previous states can be observed with appropriate filters. It's a really powerful approach.
      • arter45 18 minutes ago
        So basically something like this?

        (timestamp, accountNumber, value, state)

        And then you just

        SELECT state FROM Table WHERE accountNumber = ... ORDER BY timestamp DESC LIMIT 1

        right?

        • arnsholt 8 minutes ago
          Yeah, basically. The full system actually has more date stuff going on, to support some other more advanced stuff than just tracking objects themselves, but that's the overall idea. When you need to join stuff it can be annoying to get the SQL right in order to join the correct records from a different table onto your table of interest (thank Bob for JOIN LATERAL), but once you get the hang of it it's fairly straightforward. And it gives you the full history, which is great.
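
          For the curious, the lateral-join pattern looks roughly like this (table and column names invented):

            -- join the latest customer record onto each account record
            SELECT a.*, c.*
            FROM accounts a
            JOIN LATERAL (
                SELECT *
                FROM customers c2
                WHERE c2.customer_id = a.customer_id
                ORDER BY c2.recorded_at DESC
                LIMIT 1
            ) c ON true;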
          • arter45 2 minutes ago
            Sounds cool! Do you keep all data forever in the same table? I assume you need long retention, so do you keep everything in the same table for years or do you keep a master table for, let's say, the current year and then "rotate" (like logrotate) previous stuff to other tables?

            Even with indices, a table with, let's say, a billion rows can be annoying to traverse.

    • gleenn 11 hours ago
      If you're implementing immutable DB semantics, maybe you should consider Datomic or alternatives, because then you get that for free, for everything, and you also get time travel, which is an amazing feature on top. It lets you see the full, coherent state of the DB at any moment!
      • arter45 13 minutes ago
        My understanding is that Datomic uses something like Postgres as a storage backend. Am I right?

        Also, it doesn't support non-immutable use cases AFAIK, so if you need both you have to use two database technologies (interfaces?), which can add complexity.

    • ozim 9 hours ago
      > DELETEs are likely fairly rare by volume for many use cases

      I think one of our problems is getting users to delete stuff they don’t need anymore.

    • eddd-ddde 12 hours ago
      I never got to test this, but I always wanted to explore in postgres using table partitions to store soft deleted items in a different drive as a kind of archived storage.

      I'm pretty sure it is possible, and it might even yield some performance improvements.

      That way you wouldn't have to worry about deleted items impacting performance too much.
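
      A minimal sketch of what I have in mind (names invented), with the deleted partition placed on a tablespace backed by the slower drive:

        CREATE TABLESPACE archive LOCATION '/mnt/slow_disk/pg_archive';

        CREATE TABLE items (
            id      bigint  NOT NULL,
            payload text,
            deleted boolean NOT NULL DEFAULT false,
            PRIMARY KEY (id, deleted)  -- the partition key must be part of the PK
        ) PARTITION BY LIST (deleted);

        CREATE TABLE items_live PARTITION OF items FOR VALUES IN (false);
        CREATE TABLE items_dead PARTITION OF items FOR VALUES IN (true)
            TABLESPACE archive;

        -- UPDATE items SET deleted = true WHERE id = ...
        -- then moves the row into items_dead (delete+insert, Postgres 11+)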

      • gleenn 11 hours ago
        It's definitely an interesting approach, but the problem is now you have to change all your queries, and undeleting gets more complicated. There are strong trade-offs with almost all the approaches I've heard of.
        • snuxoll 11 hours ago
          With partitioning? No you don't. It gets a bit messy if you also want to partition a table by other values (like tenant id or something), since then you probably need to get into using table inheritance instead of the easier declarative partitioning - but either technique just gives you a single effective table to query.
          • edmundsauto 9 hours ago
            Pg moves the data between positions on update?
            • bandrami 8 hours ago
              If you are updating the parent table and the partition key is correctly defined, then an update that puts a row in a different partition is translated into a delete on the original child table and an insert on the new child table, since v11 IIRC. But this can lead to some weird results if you're using multiple inheritance so, well, don't.
              • tomnipotent 8 hours ago
                I believe they were just pointing out that Postgres doesn't do in-place updates, so every update (with or without partitions) is a write followed by marking the previous tuple deleted so it can get vacuumed.
                • snuxoll 3 hours ago
                  That’s not at all what the child comment to mine was saying, on even a generous reading.

                  But HOT updates are a thing, too.

      • paulddraper 7 hours ago
        IDK if the different drive is necessary, but yes partitioning on a deleted field would work.

        Memory >>>>> Disk in importance.

    • rawgabbit 11 hours ago
      I have worked with databases my entire career. I hate triggers with a passion. The issue is no one “owns” or has the authority to keep triggers clean. Eventually triggers become a dumping ground for all sorts of nasty slow code.

      I usually tell people to stop treating databases like firebase and wax on/wax off records and fields willy nilly. You need to treat the database as the store of your business process. And your business processes demand retention of all requests. You need to keep the request to soft delete a record. You need to keep a request to undelete a record.

      Too much crap in the database, you need to create a field saying this record will be archived off by this date. On that date, you move that record off into another table or file that is only accessible to admins. And yes, you need to keep a record of that archival as well. Too much gunk in your request logs? Well then you need to create an archive process for that as well.

      These principles are nothing new. They are in line with “Generally Accepted Record Keeping Principles” which are US oriented. Other countries have similar standards.

      • indigo945 39 minutes ago
        What you describe is basically event sourcing, which is definitely popular. However, for OLAP, you will still want a copy of your data that only has the actual dimensions of interest, and not their history - and the easiest way to create that copy and to keep it in sync with your events is via triggers.
  • MarginalGainz 39 minutes ago
    The hidden cost we battle in e-commerce isn't just DB storage/performance, it's Search Index Pollution. We treat 'availability' as a complex state machine (In Stock, Backorder, Discontinued-but-visible, Soft Deleted). Trying to map this logic directly into a Postgres query with WHERE deleted_at IS NULL works for CRUD, but it creates massive friction for discovery.

    We found that strict CQRS/Decoupling is the only way to scale this. Let the operational DB keep the soft-deletes for audit/integrity (as mentioned by others), but the Search Index must be a clean, ephemeral projection of only what is currently purchasable.

    Trying to filter soft-deletes at query time inside the search engine is a recipe for latency spikes.

  • patates 6 hours ago
    The trigger-based approach is the only one that really works, in my experience. Partition the archive table in a way that makes sense for your data and you're good to go.

    Some more rules to keep it under control:

    The partitioned archive table has to be append-only. Duh.

    Recovering from a delete needs to be done in the application layer. The archive is meant to be a historical record, not an operational data store. Also by the time you need to recover something, the world may have changed. The application can validate that restoring this data still makes sense.

    If you need to handle updates, treat them as soft deletes on the source table. The trigger captures the old state (before the update) and the update then proceeds normally. Your application can then reconstruct the timeline by ordering archive records by timestamp.

    Needless to say, make sure your trigger fires BEFORE the operation, not AFTER. You want to capture the row state before it's gone. And keep the trigger logic dead simple as any complexity there will bite you during high-traffic periods.

    For the partition strategy, I've found monthly partitions work well for most use cases. Yearly if your volume is low, daily if you're in write-heavy territory. The key is making sure your common queries (usually "show me history for entity X" or "what changed between dates Y and Z") align with your partition boundaries.
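
    For concreteness, the archive side of this might look like the following sketch (names invented; the trigger just INSERTs to_jsonb(OLD) plus TG_OP here):

      CREATE TABLE orders_archive (
          archived_at timestamptz NOT NULL DEFAULT now(),
          op          text  NOT NULL,   -- TG_OP: 'UPDATE' or 'DELETE'
          row_data    jsonb NOT NULL    -- to_jsonb(OLD)
      ) PARTITION BY RANGE (archived_at);

      -- append-only monthly partitions, created ahead of time:
      CREATE TABLE orders_archive_2024_06 PARTITION OF orders_archive
          FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');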

  • hnthrow0287345 11 hours ago
    Maybe I'm shooting for the moon, but I'd like soft delete to be some kind of built-in database feature. It would be nice to enable it on a table then choose some built-in strategies on how it's handled.

    Soft-delete is a common enough ask that it's probably worth putting the best CS/database minds to developing some OOTB feature.

    • Centigonal 9 hours ago
      Many data warehousing paradigms (e.g. Iceberg, Delta Lake, BigQuery) offer built-in "time travel," sometimes combined with scheduled table backups. That said, a lot of the teams I've worked with who want soft-delete also have other requirements that necessitate taking a custom approach (usually plain ol' SCD) instead of using the platform-native implementation.
      • refset 5 hours ago
        > other requirements

        In my experience, usually along the lines of "what was the state of the world?" (valid-time as-of query) instead of "what was the state of the database?" (system-time as-of query).

  • talesmm14 13 hours ago
    I've worked at companies where soft delete was implemented everywhere, even in irrelevant internal systems... I think it's a cultural thing! I still remember a college professor scolding me on an extension project because I hadn't implemented soft delete... in his words, "In the business world, data is never deleted!!"
    • salomonk_mur 12 hours ago
      But... It's true. Deleting data completely is an easy way to gimp and lobotomize your future analysis.

      Storage is cheap. Never delete data.

      • ziml77 11 hours ago
        I prefer audit tables. Soft deletes don't capture updates; audit tables do (you could make every update a delete and insert in a soft-delete table, but that adds a lot of bloat to the table).
      • yxhuvud 8 hours ago
        Deleting data is also a very easy way to not get GDPR compliance issues. Data is a cost and a risk, and should be minimised to what is actually relevant. Storage is the least part of the cost.
        • phito 7 hours ago
          Not an issue if you're not building SaaS
          • yxhuvud 1 hour ago
            Depends on your jurisdiction, I suppose. If you are in the EU, it's a question of whether you have PII or not - whether you are a SaaS or not is totally irrelevant.
      • sfn42 4 hours ago
        Depends on the data in question. Some data is worth keeping, other data isn't.
    • mrkeen 12 hours ago
      No comment from the professor on modifications though?
  • rorylaitila 13 hours ago
    Databases store facts. Creating a record = new fact. "Deleting" a record = new fact. But destroying rows from tables = disappeared fact. That is not great for most cases. In rare cases the volume of records may be a technical hurdle; in which case, move facts to another database. The number of times I've wanted to destroy a large volume of facts is approximately zero.
    • pixl97 10 hours ago
      When you start thinking of data as a potentially toxic asset with a maintenance cost to ensure it doesn't leak and cause an environmental disaster, it becomes more likely that you'd want to get rid of large volumes of facts.
    • dpark 10 hours ago
      Unless your database is immutable, every change to a record causes a “disappeared fact”.

      There are many legitimate reasons to delete data. The decision to retain data forever should not be taken lightly.

      • pmontra 4 hours ago
        Yes. Another way to look at databases is that they store the state at a given time. We can augment tables with valid_from and valid_to columns to retrieve the state at a particular time. In that case there is never a DELETE, only INSERTs and UPDATEs of the valid_to column. Maybe this is what you mean by an immutable database.

        The problems are mostly the same as with soft delete: valid_to is more or less the same as deleted_at, which we probably need anyway to mark a record as deleted rather than simply updated. Furthermore, there are far more records in the db. And what about the primary key? Maybe those extra records go to a history table, to keep the current table slim and with a unique primary key that is not augmented by some artificial extra key. There are a number of possible designs.

    • keithluu 5 hours ago
      Agreed. In fact I believe there should be two main operations in a data store: retrieve and insert. For this to actually work in practice, you probably need different types of data stores for different phases of data. Unfortunately, few people have a good understanding of the data life cycle.
  • jamilbk 12 hours ago
    At Firezone we started with soft-deletes thinking it might be useful for an audit / compliance log, and quickly ran into each of the problems described in this article. The real issue for us was migrations - having to maintain the structure of deleted data alongside live data just didn't make sense, and it undermined the point of an immutable audit trail.

    We've switched to CDC using Postgres which emits into another (non-replicated) write-optimized table. The replication connection maintains a 'subject' variable to provide audit context for each INSERT/UPDATE/DELETE. So far, CDC has worked very well for us in this manner (Elixir / Postgrex).

    I do think soft-deletes have their place in this world, maybe for user-facing "restore deleted" features. I don't think compliance or audit trails are the right place for them however.

    • d0100 5 hours ago
      In simple projects where the database is only changed via an API, we just audit the API instead. It's easier to display and easier to store than tracking each DB change a single transaction makes.
  • nottorp 3 hours ago
    Why deleted_at?

    We have soft_deleted as a boolean, which excludes data from all queries, and last_updated, which a particular query can use if it needs to.

    If over 50% of your data is soft deleted then it's more like historical data for archiving purposes and yes, you need to move it somewhere else. But then maybe you shouldn't use soft delete for it but a separate "archive" procedure?

    • hapidjus 3 hours ago
      Are you asking why we wouldn’t use 'last_updated' to store when the record was deleted?

      One reason is that you might want to know when it was last updated before it was deleted.

      • nottorp 2 hours ago
        No, more like why you'd use a more expensive filter to hide soft deleted data, instead of just a flag.
        • masklinn 2 hours ago
          Checking whether `deleted_at is null` should be extremely cheap, and it avoids the duplication and desynchronisation of having both “deleted” and “deleted_at”.
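
          And with a partial index (a sketch; names invented), lookups on live rows never even touch the soft-deleted ones:

            CREATE INDEX users_email_live_idx
                ON users (email)
                WHERE deleted_at IS NULL;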
          • nottorp 1 hour ago
            Yes, if your database has null. I know this is about postgres, but a lot of stuff is nosql now.
            • indigo945 31 minutes ago
              Even in MongoDB you can index `null` values, so I don't understand in what database system this would be a problem.
  • tracker1 12 hours ago
    I like having archive/history tables. I often do something similar with job queues when persisting to a database; this way the pending table can stay small, avoiding full scans and the need to skip over deleted records...

    Aside, another idea that I've kicked forward for event driven databases is to just use a database like sqlite and copy/wipe the whole thing as necessary after an event or the work that's related to that database. For example, all validation/chain of custody info for ballot signatures... there's not much point in having it all online or active, or even mixed in with other ballot initiatives and the schema can change with the app as needed for new events. Just copy that file, and you have that archive. Compress the file even and just have it hard archived and backed up if needed.

  • maxchehab 13 hours ago
    How do you handle schema drift?

    The data archive serializes the deleted object with whatever schema it had at that point in time.

    But fast-forward past some schema changes: now your system has to migrate the archived objects to the current schema?

    • buchanae 13 hours ago
      In my experience, archived objects are almost never accessed, and if they are, it's within a few hours or days of deletion, which leaves a fairly small chance that schema changes will have a significant impact on restoring any archived object. If you pair that with "best-effort" tooling that restores objects by calling standard "create" APIs, perhaps it's fairly safe to _not_ deal with schema changes.

      Of course, as always, it depends on the system and how the archive is used. That's just my experience. I can imagine that if there are more tools or features built around the archive, the situation might be different.

      I think maintaining schema changes and migrations on archived objects can be tricky in its own ways, even when they're kept in the live tables with an 'archived_at' column, especially when objects span multiple tables with relationships. I've worked on migrations where really old archived objects just didn't make sense anymore in the new data model, and figuring out a safe migration became a difficult, error-prone project.

  • stevefan1999 2 hours ago
    That's why adding a DELETE FROM ... RETENTION UNTIL <date> to SQL would be very nice, combining both hard and soft delete with an internal TTL to reduce the impact.
  • alkonaut 3 hours ago
    Can't most db systems just create a view over the data where archived_at is null, and this view is the table you use for 99% of your business needs (except auditing, undelete, ...)?
    • arethuza 2 hours ago
      I'd go for two views - one, as you describe, that gives you the "active" records and another that gives you the "inactive" records.
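
      Something like this sketch (assuming an archived_at column):

        CREATE VIEW items_active   AS SELECT * FROM items WHERE archived_at IS NULL;
        CREATE VIEW items_inactive AS SELECT * FROM items WHERE archived_at IS NOT NULL;

      Simple single-table views like these are automatically updatable in Postgres, so the application can even write through items_active.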
  • theLiminator 13 hours ago
    Privacy regulations make soft delete unviable in many of the cases where it's useful.
    • wavemode 12 hours ago
      Soft deletion and privacy deletion serve different purposes.

      If you leave a comment on a forum, and then delete it, it may be marked as soft-deleted so that it doesn't appear publicly in the thread anymore, but admins can still read what you wrote for moderation/auditing purposes.

      On the other hand, if you send a privacy deletion request to the forum, they would be required to actually fully delete or anonymize your data, so even admins can no longer tie comments that you wrote back to you.

      Most social media sites probably have to implement both of these processes/systems.

      • SchemaLoad 12 hours ago
        Imo there should be some retention period for moderation but then hard deletion after that. Why would a moderator need to look up a deleted post a year after it was deleted?
        • strken 12 hours ago
          "Hi SchemaLoad, I'm Officer John from the Department of Not Letting Children Be Abused. I'm following up on something one of your users posted three years ago. Can you tell me the IP address(es) associated with the following deleted posts: A B C D"
          • SchemaLoad 11 hours ago
            You'd be required to show what you have but you aren't required to store everything forever just in case someone years later asks for it. Would be like showing up to fingerprint the scene 3 years after and being surprised it's too late.
          • antonvs 11 hours ago
            “Hi Officer John, that data is deleted and is no longer possible to access.”

            Unless there’s a regulatory requirement (which there currently isn’t in any jurisdiction I’ve heard of), that’s a perfectly acceptable response.

          • direwolf20 2 hours ago
            Think of the children! We can't have privacy because children might be abused if we have privacy!
    • sedatk 13 hours ago
      The opposite is true in countries where there are data retention laws. Soft-delete is mandatory in those cases.
      • bux93 2 hours ago
        In practice when I discuss retention requirements in my country (EU), the issue is the _maximum_ retention limit - after which data must be deleted. A minimum retention limit (e.g. business records for tax purposes) is almost never an issue. Systems that need soft-delete, bi-temporal state, etc. typically already have it, whereas actually deleting stuff is an afterthought.

        I guess I'm saying the former is usually a functional requirement in the first place, and the latter is a non-functional (compliance) requirement.

  • clickety_clack 12 hours ago
    We have soft delete, with hard delete running on deletions over 45 days old. Sometimes people delete things by accident and this is the only way to practically recover that.
  • 3rodents 12 hours ago
    Soft deletes are an example of where engineers unintentionally lead product instead of product leading engineering. Soft delete isn’t language used by users so it should not be used by engineers when making product facing decisions.

    “Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product. A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.

    • dpark 11 hours ago
      > Soft delete isn’t language used by users so it should not be used by engineers when making product facing decisions.

      Users generally don’t even know what a database record is. There is no reason that engineers should limit their discussions of implementation details to terms a user might use.

      > “Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product.

      Users might say they want “delete”, but then also “undo”, and suddenly we’re talking about soft delete semantics.

      > A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.

      None of these are terms an end user would use.

      • pjerem 2 hours ago
        > Users might say they want “delete”, but then also “undo”, and suddenly we’re talking about soft delete semantics.

        I've worked for a company where some users managed very personal information on behalf of other users - like, sometimes, very intimate data - and I always fought product on soft deletion.

        Users are adults, and when part of their job is being careful with the data _they_ manage and _they_ are legally responsible for, I don't feel like the software owes them anything more than clear information about what is going to happen when they click on "CONFIRM DELETION".

        "Archive" is a good pattern for those use cases. It's what have been used for decades for OS "Recycle Bin". Why not call it Delete if you really want to but in this case, bring a user facing "Recycle Bin" interface and be clear than anything x days old will be permanently deleted.

        • indigo945 23 minutes ago
          Right, but I think that the Recycle Bin is exactly what is causing the issue here. Users have been taught for decades that if they delete something, it is not really gone, as they can always just go back to their Recycle Bin or Deleted Items folder and restore it. (I have worked with clients that used the Deleted Items folder in Outlook as an archive for certain conversations, and would regularly reference it.)

          So users have been taught that the term "delete" means "move somewhere out of my sight". If you design a UI and make "delete" mean something completely different from what everyone already understands it to mean, the problem is you, not the user.

    • monkpit 12 hours ago
      Why would implementation details be led by product? “Undo” is an action that the user may want, which would be led by product. Not the implementation in the db.
      • strken 12 hours ago
        I believe that was the point. Soft delete isn't a product requirement, it's an implementation detail, so product teams should talk about the user experience using language like "delete" or "archive" or "undo" or "customer support retrieves deleted data".
        • Terr_ 11 hours ago
          Yeah: You don't "delete" a bank account, you close it, and you don't "undo", you reopen it, etc. The processes have conditions, audit rules, attached information, side-effects, etc. In some cases the same entity can't be restored, and you have to instead create a successor.

          "Undo" may work as shorthand for "whatever the best reversing actions happen to be", but as any system grows it stops being simple.

        • dpark 11 hours ago
          Sure. Did someone say that the behavior should be described to customers as soft delete, though?

          I read a blog about a technical topic aimed at engineers, not customers.

    • nitwit005 6 hours ago
      I'd be careful of thinking of everything as product facing or not. Many features are for support, legal compliance, etc.

      It's fairly common in some industries to get support requests to recover lost data.

    • antonvs 11 hours ago
      It depends on the product. Google Cloud Storage has a soft delete feature in its product, for example: https://docs.cloud.google.com/storage/docs/soft-delete
  • JohnLeitch 8 hours ago
    My brother's now ex-wife learned the hard way about the challenges of soft delete. Too bad about the contents of that SQLite database, but his knowing was for the better.
  • cj 14 hours ago
    We deal with soft delete in a Mongo app with hundreds of millions of records by simply moving the objects to a separate collection (table) separate from the “not deleted” data.

    This works well especially in cases where you don’t want to waste CPU/memory scanning soft deleted records every time you do a lookup.

    And avoids situations where app/backend logic forgets to apply the “deleted: false” filter.

    • vjvjvjvjghv 14 hours ago
      I guess that works well with NoSQL. In a relational database it gets harder to move records out if they have relationships with other tables.
      • tempest_ 13 hours ago
        Eh you could implement this pretty simply with postgres table partitions
        • buchanae 13 hours ago
          Ah, that's an interesting idea! I had never considered using partitions. I might write a followup post with these new ideas.
          • tempest_ 13 hours ago
            There are a bunch of caveats around primary keys and uniqueness but I suspect it could be made to work depending on your data model.
  • iterateoften 12 hours ago
    I used to be pretty adamant about implementing soft delete for core business objects.

    However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.

    The amount of times I have “undeleted” something are few and far between.

    • lelanthran 7 hours ago
      > I used to be pretty adamant about implementing soft delete for core business objects.

      > However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.

      > The amount of times I have “undeleted” something are few and far between.

      Similar take from me. Soft deletes sorta make sense if you have a very simple schema, but the biggest problem I have is that a soft delete leads to broken-ness - some other table now has a reference to a record in the target table that is not supposed to be visible. IOW, DB referential integrity is out the window, because we can now have references to records that should not exist!

      My preferred way (for now, anyway) is to copy the record to a new audit table and nuke it in the target table in a single transaction. If the delete fails we can at least log the fact somewhere that some FK somewhere is preventing a deletion.

      With soft deletes, all sorts of logic rules and constraints are broken.

  • ntonozzi 13 hours ago
    I've given up on soft delete -- the nail in the coffin for me was my customers' legal requirement that data be fully deleted, not archived. It never worked that well anyway. I never had a successful restore from a large set of soft-deleted rows.
    • zahlman 13 hours ago
      > customers' legal requirements that data is fully deleted

      Strange. I've only ever heard of legal requirements preventing deletion of things you'd expect could be fully deleted (in case they're needed as evidence at trial or something).

      • jandrewrogers 12 hours ago
        While not common, regulations requiring a hard delete do exist in some fields, even in the US. The ones I'm familiar with are effectively "anti-retention" laws that mandate data must be removed from the system after some specified period of time, e.g. all data in the system is deleted no more than 90 days after insertion. This allows compliance to be automated.

        The data subject to the regulation had a high potential for abuse. Automated anti-retention limits the risk and potential damage.

      • SchemaLoad 12 hours ago
        I had an integration with a 3rd party where their legal contract required we hard delete any data from them after a year. Presumably so we couldn't build a competing product using their dataset with full history.
      • pessimizer 10 hours ago
        You're thinking of "legal requirements" as requirements that the law insists upon rather than requirements that your legal department insists upon. You often want to delete records unrecoverably as soon as legally possible; it's likely why you wrote your data retention policy.
      • ntonozzi 13 hours ago
        Many privacy regulations enforce full deletion of data, including GDPR: https://gdpr-info.eu/.
  • LorenPechtel 12 hours ago
    The % of records that are deleted is a huge factor.

    You keep 99% and soft delete 1%? Use some sort of deleted flag. While I have not tried it, whalesalad's suggestion of a view sounds excellent. You delete 99% and keep 1%? Move it!

    • da_chicken 10 hours ago
      A view only makes sense if your RDBMS supports indexed views or the query engine is otherwise smart enough to pierce the view definition. Not all of them can do those things.
  • cadamsdotcom 5 hours ago
    Why not use a trigger to prevent unarchiving?

    And perf problems are only speculative until you actually have them. Premature optimization and all that.

  • cess11 56 minutes ago
    I don't know, pruning based on age and restoring by writing a new row based on the soft deleted one seems less complex than the cascade handling in the trigger solution.
  • nemothekid 14 hours ago
    The trigger architecture is actually quite interesting, especially because cleanup is relatively cheap. As far as compliance goes, it's also simple to declare that "after 45 days, deletions are permanent" as a catch-all, and then you get to keep restores. For example, I think (IANAL) the CCPA gives you a 45 day buffer for right-to-erasure requests.

    Now instead of chasing down different systems and backups, you can simply ensure your archival process runs regularly and you should be good.

  • whalesalad 13 hours ago
    A good solution here can be to utilize a view. The underlying table has the soft-delete field, and the view hides rows that have been soft deleted. Then the application doesn't need to worry about this concern all over the place.
    • elyobo 13 hours ago
      Postgres with RLS to hide soft-deleted records means that most of the app code doesn't need to know or care about them: it still issues reads, writes, and deletes against the same source table, and as far as the app knows it's working.
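
      A sketch of such a policy (role and column names assumed):

        ALTER TABLE items ENABLE ROW LEVEL SECURITY;

        -- app_role sees (and can update/delete) only live rows; no query changes needed
        CREATE POLICY live_rows_only ON items
            FOR ALL TO app_role
            USING (deleted_at IS NULL);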
      • Ronsenshi 10 hours ago
        I would also say that most modern ORMs and frameworks either come with a soft-delete feature (with automatic filtering on all queries) as part of the package, or there are third-party libraries adding this functionality, without the hassle of dealing with views (maybe it's me, but I've never had a good experience with DB views).
  • IgorPartola 11 hours ago
    I have a love/hate relationship with soft deletes. There are cases where it's not really a delete but rather a historical fact. For example, let's say I have a table which stores an employee's current hourly rate. They are hired at, say, $15/hour, then go to $17/hour six months later, then to $20/hour three months after that. All three of these things are true, and I want to be able to query which rate the employee had on a specific date even after their rate has changed. When I have a starts_on and an ends_on date and the latter is nullable, with some data-consistency logic I can create a linear history of compensation and can query historical and current data the same exact way.
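
    A sketch of that as-of query (names invented):

      SELECT hourly_rate
      FROM compensation
      WHERE employee_id = 123
        AND starts_on <= DATE '2021-03-01'
        AND (ends_on IS NULL OR ends_on >= DATE '2021-03-01');  -- treating ends_on as inclusive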

    But this is such a huge PITA, because you constantly have to mind whether any given object has this setup or not, and what if related objects have different start/end dates? And something like a scheduled raise to $22/hour next year can get funny: if I then try to insert that just for July at $24/hour, it would take my single record for next year and split it into two, and then you've got to figure out which gets the original ID and which is the new row.

    Another alternative is a pattern where you store the current state and, separately, the mutations. So you have a compensation table and a compensation_mutations table which says how to evolve a specific row in the compensation table and when. The mutations for anything in the future can be deleted, but the past ones cannot, which lets you reconstruct who did what, when, and why. But this also has drawbacks. One of them is that you can’t query historical data the same way as current data. You also have to somehow apply these mutations (cron job? DB trigger?).

    And of course there are database extensions that allow soft deletes but I have never tried them for vague portability reasons (as if anyone ever moved off Postgres).

  • nerdponx 13 hours ago
    One thing that often gets forgotten in the discussions about whether to soft delete and how to do it is: what about analysis of your data? Even if you don't have a data science team, or even a dedicated business analyst, there's a good chance that somebody at some point will want to analyze something in the data. And there's a good chance that the analysis will either be explicitly "intertemporal" in that it looks at and compares data from various points in time, or implicitly in that the data spans a long time range and you need to know the states of various entities "as of" a particular time in history. If you didn't keep snapshots and you don't have soft edits/deletes you're kinda SoL. Don't forget the data people down the line... which might include you, trying to make a product decision or diagnose a slippery production bug.
  • pjs_ 12 hours ago
    Tried implementing this crap once. Never again
  • cyberax 12 hours ago
    Soft deletes + GC for the win!

    We have an offline-first infrastructure that replicates the state to possibly offline clients. Hard deletes were causing a lot of fun issues with conflicts, where a client could "resurrect" a deleted object. Or deletion might succeed locally but fail later because somebody added a dependent object. There are ways around that, of course, but why bother?

    Soft deletes can be handled just like any regular update. Then we just periodically run a garbage collector to hard-delete objects after some time.

  • Barathkanna 3 hours ago
    TLDR: Soft deletes look easy, but they spread complexity everywhere. Actually deleting data and archiving it separately often keeps databases simpler, faster, and easier to maintain.