Half the entropy is trying to figure out which pieces of this article's text are supposed to be the silly falsehoods being corrected, and which pieces are just the second or third paragraph of a preceding 'Fact'. Deadpool is easier to follow.
I saw a note from an earlier year's discussion saying the css has been changed over the years. Perhaps it was easier then to discern fact or myth, truth or fiction.
This is as good a place as any to ask (I didn't get an answer last time): has there ever been a serious Linux exploit based on manipulating or predicting a bad PRNG? Apart from the Debian SSH key generation fiasco from years ago, of course.
Having a good entropy source makes mathematical sense, and you want something a bit more "random" than a dice roll, but I wonder at which point it becomes security theatre.
Of all the possible avenues for exploiting a modern OS, I figure kernel PRNG prediction is very, very far down the list of things to try.
It’s both hard to attack and a heavily audited system with a lot of attention paid to it.
That being said, see [1] from 2012. The challenge with security is that structural weaknesses can take a long time to discover, but once they are found the result is catastrophic. Modern Linux finally switched to a proper CSPRNG construction and relies less on the numerology of entropy estimation it had been using (i.e. real security instead of theater). RDRAND has also been available for a long time on the x86 side, which is useful because even if it’s insecure it gets mixed with other entropy sources, like instruction execution time and scheduling jitter, to protect standalone servers and IoT devices.
Of course you hit the nail on the head in terms of the challenge of distinguishing security theater because you won’t know if the hardening is useful until there’s a problem, but there’s enough knowledgeable people on it that it’s less security theater than it might seem if you know what’s going on.
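The mixing idea mentioned above is easy to sketch. Here is a stdlib-only Python illustration (not the kernel's actual code): hash-combining a hardware value with timing jitter, so the result stays unpredictable as long as any single input is. `os.urandom` stands in for RDRAND here, purely as a portability assumption, and `jitter_sample`/`mix` are hypothetical helper names.

```python
import hashlib
import os
import time

def jitter_sample(n: int = 64) -> bytes:
    """Collect timing jitter: low bytes of deltas between clock reads."""
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        deltas.append((now - prev) & 0xFF)
        prev = now
    return bytes(deltas)

def mix(*sources: bytes) -> bytes:
    """Hash-combine independent sources; the output is unpredictable if
    ANY single source is (the others may be known or even adversarial)."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix to avoid ambiguity
        h.update(s)
    return h.digest()

# A "hardware" value (os.urandom standing in for RDRAND) mixed with jitter:
hw = os.urandom(32)
seed = mix(hw, jitter_sample(), os.urandom(32))
```

The length-prefixing matters: without it, `mix(b"ab", b"c")` and `mix(b"a", b"bc")` would hash the same byte stream.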
> Note from 2024: This article was published on March 16th, 2014. It is still correct in its discussion of entropy and randomness, but the Linux kernel random number generator has been reworked several times since then and does not look like this anymore. Good news: the separation between /dev/urandom and /dev/random is practically gone.
My understanding is that on a modern Linux system:
At early boot phases, /dev/random can still block, because not enough entropy has been seeded yet. /dev/urandom will not block, but the random data might be of poor quality and not suitable for crypto purposes. This happens very early in the boot, so probably it's not even possible to run user stuff at this time. At least on my laptop, the message "random: crng init done" gets logged almost instantly after boot and long before even initrd starts. Might be different for exotic platforms, I guess.
Once enough entropy has been seeded, /dev/random and /dev/urandom work identically: they don't block and they return high-quality random data. So for most userspace purposes the two files can be used interchangeably; one is not better than the other.
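For userspace code the practical upshot fits in a few lines of Python. `os.getrandom` with `GRND_NONBLOCK` (Linux-only) doubles as a readiness probe for the early-boot window described above; `crng_ready` is a hypothetical helper name, not a standard API.

```python
import os

def crng_ready() -> bool:
    """Probe whether the kernel CSPRNG has been initialized.
    getrandom(2) blocks only until the CRNG is seeded once; after that
    it never blocks. GRND_NONBLOCK turns that initial wait into an
    error, which makes it usable as a non-blocking readiness check.
    Linux-only: os.getrandom is not available on other platforms."""
    try:
        os.getrandom(1, os.GRND_NONBLOCK)
        return True
    except BlockingIOError:
        return False

# After initialization, all of these are equivalent in quality:
key = os.urandom(32)  # never blocks, same CSPRNG output post-init
```

On anything but the earliest boot phase (or a freshly cloned VM), `crng_ready()` should return True immediately.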
It started looking a whole lot like OpenBSD’s random number system: a private entropy pool, seeded from good system entropy, drives a ChaCha20 stream, with periodic reseeds for forward secrecy in case of compromise. I think Linux is even more paranoid in the early boot environment, where even in the presence of a seed file it prefers to get fresh system entropy mixed in before confidently saying it can do crypto.
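The forward-secrecy part of that construction can be sketched with nothing but the standard library. This toy uses SHA-256 as a stand-in for ChaCha20 (an assumption made to keep the example self-contained) and is emphatically not the kernel's real code, but it shows the ratchet idea: after each output the key is replaced with a one-way function of itself, so compromising the current state reveals nothing about past outputs.

```python
import hashlib
import os

class RatchetRNG:
    """Toy model of forward secrecy in a CSPRNG. SHA-256 here is a
    stdlib stand-in for ChaCha20, not the real kernel construction."""

    def __init__(self, seed: bytes):
        self._key = hashlib.sha256(b"init" + seed).digest()

    def generate(self, n: int) -> bytes:
        """Produce n bytes, then ratchet the key forward one-way."""
        out = b""
        counter = 0
        while len(out) < n:
            block = hashlib.sha256(self._key + counter.to_bytes(8, "big"))
            out += block.digest()
            counter += 1
        # Ratchet: the old key is irrecoverable from the new one.
        self._key = hashlib.sha256(b"ratchet" + self._key).digest()
        return out[:n]

    def reseed(self, entropy: bytes) -> None:
        """Mix in fresh entropy for recovery after a state compromise."""
        self._key = hashlib.sha256(self._key + entropy).digest()

rng = RatchetRNG(os.urandom(32))
```

An attacker who reads `_key` at some point learns nothing about bytes generated before the last ratchet, and a `reseed` with entropy they don't know locks them out going forward; that is the property the kernel's periodic reseeds provide.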
> Might be different for exotic platforms, I guess.
Short-lived isolated VMs (like might be used for CI) are one place where entropy can be a problem. The relevant definition of “platform” here is less about the CPU architecture and more about the environment.
Should, yes. Will, perhaps; better to be aware of the potential problem and check.
Just yesterday I encountered people complaining about a VM not connecting to a cloud service when they neglected to put their DNS server’s address in the config for the DHCP server used by that particular host. And a dysfunctional RNG is much more difficult to detect.
[1] https://www.usenix.org/system/files/conference/usenixsecurit...
I also believe there were some android ASLR issues based on the same weakness (i.e., low early boot-time entropy).
But this is all quite old, and there've been massive improvements. Basically, "don't use a very old linux kernel" is your mitigation for these issues.
* https://news.ycombinator.com/item?id=7359992
Also:
2020: https://news.ycombinator.com/item?id=22683627
2018: https://news.ycombinator.com/item?id=17779657
2017: https://news.ycombinator.com/item?id=13332741
2015: https://news.ycombinator.com/item?id=10149019
Edit: can't count.