The best way to store information depends on how you intend to use (query) it.
The query itself represents information. If you can anticipate 100% of the ways in which you intend to query the information (no surprises), I'd argue there might be an ideal way to store it.
There are, however, several objectively bad ways. In "Service Model" (a novel I recommend), a certain collection of fools decides to sort bits by whether each is a 1 or a 0, ending up with a long list of 0s followed by a long list of 1s.
I also like (old) .ini / TOML for small (bootstrap) config files / data exchange blobs a human might touch.
+
Re: PostgreSQL 'unfit' conversations.
I'd like some clearer examples of the desired transactions that don't fit well. After thinking about them in the background a bit, I've started to suspect it might be an algorithmic / approach issue, obscured by storage patterns that happen to be enabled by other platforms that work 'at scale' on the strength of hardware (up to a point).
As an example of a pattern that might not perform well under PostgreSQL, consider lock-heavy multiple updates that flush a transaction atomically, e.g. bank-transaction-clearance tasks. If every single double-entry booking requires its own atomic transaction, that clearly won't scale well in an ACID system. Rather, the smaller grains of sand should be combined into a sandstone block: a window of transactions that are processed together and applied during the same overall update. The most obvious approach is to switch from a no-intermediate-values 'apply deduction and increment atomically' action to a versioned view of the global data state PLUS a 'pending transactions to apply' log / table (either or both can be sharded). At a given moment the pending transactions are reconciled; for performance, a cache of 'dirty' accounts can store the non-contested value of the available balance.
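To make the windowed approach concrete, here's a minimal PostgreSQL sketch, assuming a single node and ignoring sharding. All names here (accounts, pending_transfers, the version column) are hypothetical, made up for illustration; the point is just that bookings become cheap appends, and each affected account row is locked once per window instead of once per booking.

  -- Versioned view of the global state.
  CREATE TABLE accounts (
    account_id bigint  PRIMARY KEY,
    balance    numeric NOT NULL,
    version    bigint  NOT NULL DEFAULT 0  -- bumped once per reconciliation window
  );

  -- The 'pending transactions to apply' log.
  CREATE TABLE pending_transfers (
    transfer_id bigserial PRIMARY KEY,
    from_id     bigint  NOT NULL REFERENCES accounts (account_id),
    to_id       bigint  NOT NULL REFERENCES accounts (account_id),
    amount      numeric NOT NULL CHECK (amount > 0),
    applied     boolean NOT NULL DEFAULT false
  );

  -- A double-entry booking is a single append; no account rows are locked.
  INSERT INTO pending_transfers (from_id, to_id, amount) VALUES (1, 2, 10.00);

  -- Reconciliation: one statement claims the whole window of unapplied
  -- bookings, nets them per account, and applies the sums in bulk.
  WITH claimed AS (
    UPDATE pending_transfers
       SET applied = true
     WHERE NOT applied
    RETURNING from_id, to_id, amount
  ), deltas AS (
    SELECT account_id, sum(delta) AS delta
      FROM (SELECT from_id AS account_id, -amount AS delta FROM claimed
            UNION ALL
            SELECT to_id, amount FROM claimed) AS d
     GROUP BY account_id
  )
  UPDATE accounts a
     SET balance = a.balance + deltas.delta,
         version = a.version + 1
    FROM deltas
   WHERE a.account_id = deltas.account_id;

The 'dirty' cache would sit in front of this: any account with unapplied rows in pending_transfers serves its last reconciled balance (optionally minus its pending outflows) as the non-contested available balance until the next window lands.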
Also, on the point that the best way to store information is relative to how you query it: let us not confuse "relative" with "not objective". My father is objectively my father, but he is objectively not your father.