From 4cc1cb8d7b34eb99d77fd35a53797e6929b348f7 Mon Sep 17 00:00:00 2001
From: Marshall Lochbaum
Date: Fri, 23 Jul 2021 18:47:32 -0400
Subject: Comment on reservoir sampling

---
 implementation/primitive/random.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/implementation/primitive/random.md b/implementation/primitive/random.md
index afab34c0..fcf83a93 100644
--- a/implementation/primitive/random.md
+++ b/implementation/primitive/random.md
@@ -17,3 +17,5 @@ A [simple random sample](https://en.wikipedia.org/wiki/Simple_random_sample) fro
 `Deal` uses a [Knuth shuffle](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle), stopping after the first `𝕨` elements have been shuffled, as the algorithm won't touch them again. Usually it creates `↕𝕩` explicitly for this purpose, but if `𝕨` is very small then initializing it is too slow. In this case we initialize `↕𝕨`, but use a "hash" table with an identity hash—the numbers are already random—for `𝕨↓↕𝕩`. The default is that every value in the table is equal to its key, so that only entries where a swap has happened need to be stored. The hash table is the same design as for self-search functions, with open addressing and linear probing.
 
 `Subset` uses [Floyd's method](https://math.stackexchange.com/questions/178690/whats-the-proof-of-correctness-for-robert-floyds-algorithm-for-selecting-a-sin), which is sort of a modification of shuffling where only the selected elements need to be stored, not what they were swapped with. This requires a lookup structure that can be updated efficiently and output all elements in sorted order. The choices are a bitset for large `𝕨` and another not-really-hash table for small `𝕨`. The table uses a right shift—that is, division by a power of two—as a hash so that hashing preserves the ordering, and inserts like an insertion sort: any larger entries are pushed forward. Really this is an online sorting algorithm that works because we know the input distribution is well-behaved (it degrades to quadratic performance only in very unlikely cases). When `𝕨>𝕩÷2`, we always use a bitset, but select `𝕩-𝕨` elements and invert the selection.
+
+I'm aware of [algorithms](https://richardstartin.github.io/posts/reservoir-sampling) like Vitter's Method D that generate a sorted sample in order, using the statistics of samples. There are a few reasons why I prefer Floyd's method. It's faster, because it uses one random generation per element while Vitter's requires several, and it does less branching. It's exact, in that if it's given uniformly random numbers then it makes a uniformly random choice of sample. Vitter's method depends on floating-point randomness and irrational functions, so it can't accomplish this with finite-precision random inputs. And the pattern of requests for Floyd's method is simple and deterministic. The advantage of reservoir algorithms like Vitter's is that they are able to generate samples one at a time using a small fixed amount of memory. `•MakeRand` only allows the user to request a sample all at once, so this advantage doesn't matter as much. The CBQN algorithms are tuned to use much more temporary memory than the size of the final result. It could be lowered, but there's usually plenty of temporary memory available.
-- 
cgit v1.2.3
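
The sparse `Deal` strategy in the patch context above (a partial Knuth shuffle of an implicit `↕𝕩`, storing only entries where a swap happened) can be sketched in Python. This is an illustrative model, not the CBQN code: a plain dict with identity default stands in for the open-addressing identity-hash table, and `deal` is a hypothetical name.

```python
import random

def deal(n, k, rng=random):
    """First k elements of a Knuth shuffle of range(n),
    without ever materializing the full array."""
    table = {}  # index -> value, but only where a swap happened;
                # a missing key i means the slot still holds i
    out = []
    for i in range(k):
        j = rng.randrange(i, n)          # swap partner in [i, n)
        out.append(table.get(j, j))      # out[i] = a[j]
        table[j] = table.get(i, i)       # a[j] = a[i]
    return out
```

The table holds O(k) entries no matter how large `n` is, which is the whole point: for tiny `𝕨` the cost stays independent of `𝕩`.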
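
Floyd's method itself is compact. A minimal Python sketch follows; again a model rather than the CBQN implementation, with a `set` plus a final sort standing in for the bitset or order-preserving table that lets CBQN emit the sample already sorted.

```python
import random

def floyd_sample(n, k, rng=random):
    """Choose k distinct values from range(n), each k-subset
    equally likely, using one random draw per selected element."""
    chosen = set()
    for j in range(n - k, n):
        t = rng.randrange(j + 1)  # t uniform in [0, j]
        # if t was already picked, take j itself instead; this is
        # exactly what makes every k-subset equally probable
        chosen.add(j if t in chosen else t)
    return sorted(chosen)
```

Each of the `k` iterations adds exactly one element, so only the selected elements are ever stored, never what they were swapped with.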