From c2afaad9f6951c2f635f2ac63842f190dd799573 Mon Sep 17 00:00:00 2001
From: Marshall Lochbaum
Date: Thu, 29 Jul 2021 21:59:25 -0400
Subject: Start updating sorting implementation notes for Fluxsort

---
 docs/implementation/primitive/sort.html | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

(limited to 'docs/implementation/primitive')

diff --git a/docs/implementation/primitive/sort.html b/docs/implementation/primitive/sort.html
index def9c975..86c36738 100644
--- a/docs/implementation/primitive/sort.html
+++ b/docs/implementation/primitive/sort.html
@@ -19,14 +19,15 @@

For Bins, use a branching binary search: see On binary search above. But there are also interesting (although, I expect, rare) cases where only one argument is compound. Elements of this argument should be reduced to fit the type of the other argument, then compared to multiple elements. For the right argument, this just means reducing before doing whatever binary search is appropriate to the left argument. If the left argument is compound, its elements should be used as partitions. Then switch back to binary search only when the partitions get very small—probably one element.
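As a concrete baseline, the branching search described above might look like the following C sketch. It assumes 32-bit integer data and the convention that Bins counts left-argument elements not exceeding the key; the name `bins_up` and the signature are illustrative, not taken from any actual implementation.

```c
#include <assert.h>
#include <stddef.h>

// Branching binary search: count of elements in sorted w (length n)
// that are <= x. Whether the comparison branch is taken depends on
// the data, which is what branchless variants avoid.
static size_t bins_up(const int *w, size_t n, int x) {
    size_t lo = 0, hi = n;              // invariant: w[0..lo) <= x < w[hi..n)
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (w[mid] <= x) lo = mid + 1;  // data-dependent branch
        else             hi = mid;
    }
    return lo;
}
```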

Simple data

The name of the game here is "branchless".

-

For sorting the best general algorithm I know of is a hybrid of pdqsort with counting sort, described below. I've found some improvements to pdqsort, but they all maintain its basic strategy and pattern-defeating nature, so I think the name still fits.

-

Both pdqsort and counting sort are not stable, so they can't be used for Grade. For small-range Grade, counting sort must be replaced with bucket sort, at a significant performance cost. I don't have any method I'm happy with for other data. Stabilizing pdqsort by sorting the indices at the end is possible but slow. Wolfsort is a hybrid radix/merge sort that might be better.

+

For sorting, the fastest algorithms for generic data and generic hardware are branchless quicksorts. Fluxsort is new and very exciting because it's a stable algorithm that's substantially faster than runner-up pdqsort on random arrays. However, it's immature and is missing a lot of the specialized strategies pdqsort has. I'm working on adapting these improvements to work for stable sorting and also on hybridizing with counting/bucket sort.

A branchless binary search is adequate for Bins but in many cases—very small or large 𝕨, and small range—there are better methods.
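A minimal sketch of the branchless variant in C: keep a base pointer and halve the remaining length, so the comparison feeds a conditional move rather than a hard-to-predict branch. The names and the `int` element type are illustrative; compilers typically emit a cmov for the ternary.

```c
#include <assert.h>
#include <stddef.h>

// Branchless binary search: same result as the branchy version
// (count of elements <= x in sorted w), but with no data-dependent
// branch inside the loop.
static size_t bins_branchless(const int *w, size_t n, int x) {
    const int *base = w;
    while (n > 1) {
        size_t half = n / 2;
        base = (base[half - 1] <= x) ? base + half : base; // cmov, not a branch
        n -= half;
    }
    // n == 1 (or 0): account for the final element
    return (size_t)(base - w) + (n && base[0] <= x);
}
```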

Counting and bucket sort

Both counting and bucket sort are small-range algorithms that begin by counting the number of each possible value. Bucket sort, as used here, means that the counts are then used to place values in the appropriate position in the result in another pass. Counting sort does not read from the initial values again and instead reconstructs them from the counts. It might be written (/≠¨)(-min) in BQN, with ¨ as a single efficient operation.
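A C sketch of the counting-sort side of this, assuming `int` data and a range of at most 256 to keep the count table on the stack; the function name and limits are illustrative. Note that the second pass never reads the input values again: the output is rebuilt from the counts alone.

```c
#include <assert.h>
#include <stddef.h>

// Counting sort over a known small range [min, min+range): one pass
// to count, one pass to rewrite x from the counts. Assumes
// range <= 256 for this sketch.
static void counting_sort(int *x, size_t n, int min, size_t range) {
    size_t count[256] = {0};
    for (size_t i = 0; i < n; i++) count[x[i] - min]++;
    size_t j = 0;
    for (size_t v = 0; v < range; v++)
        for (size_t c = count[v]; c--; ) x[j++] = min + (int)v;
}
```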

Bucket sort can be used for Grade or sort-by (), but counting sort only works for sorting itself. It's not-even-unstable: there's no connection between result values and the input values except that they are constructed to be equal. But with fast Indices, counting sort is vastly more powerful, and is effective with a range four to eight times the argument length. This is large enough that it might pose a memory usage problem, but the memory use can be made arbitrarily low by partitioning.
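For contrast, here is a C sketch of bucket sort used to produce a grade: counts become starting offsets via an exclusive prefix sum, then a second pass writes each input index into its slot. Equal values keep their input order, so unlike counting sort the result is stable. Names and the range-at-most-256 limit are illustrative.

```c
#include <assert.h>
#include <stddef.h>

// Bucket sort producing a stable grade g of x (both length n),
// for values in [min, min+range) with range <= 256 in this sketch.
static void bucket_grade(const int *x, size_t *g, size_t n,
                         int min, size_t range) {
    size_t off[256] = {0};
    for (size_t i = 0; i < n; i++) off[x[i] - min]++;
    for (size_t v = 0, sum = 0; v < range; v++) { // exclusive prefix sum
        size_t c = off[v]; off[v] = sum; sum += c;
    }
    for (size_t i = 0; i < n; i++) g[off[x[i] - min]++] = i;
}
```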

Quicksort

+

Fluxsort attains high performance with a branchless stable partition that places one half on top of existing data and the other half somewhere else. One half ends up in the appropriate place in the sorted array. The other is in swap memory, and will be shifted back by subsequent partitions and base-case sorting. Aside from the partitioning strategy, Fluxsort makes a number of other decisions differently from pdqsort, including a fairly complicated merge sort (Quadsort) as the base case. I haven't fully evaluated these.
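One way to sketch that stable branchless partition in C: every element is written unconditionally to both a front cursor in `x` and a cursor in swap memory, and only the matching cursor advances, so there is no branch on the comparison. This is an illustration of the idea, not Fluxsort's actual code; the names and the caller-supplied swap buffer of length `n` are assumptions.

```c
#include <assert.h>
#include <stddef.h>

// Stable branchless partition: elements below the pivot end up
// contiguous at the front of x; the rest stream into swap and are
// copied back after, as later partitions would shift them in Fluxsort.
static size_t flux_partition(int *x, int *swap, size_t n, int pivot) {
    size_t lo = 0, hi = 0;
    for (size_t i = 0; i < n; i++) {
        int v = x[i];
        int below = v < pivot;
        x[lo]    = v;       // harmless overwrite when !below: x[lo] was
        swap[hi] = v;       //   already consumed, since lo <= i
        lo += below;
        hi += 1 - below;
    }
    for (size_t i = 0; i < hi; i++) x[lo + i] = swap[i]; // shift back
    return lo;              // split point: x[0..lo) < pivot
}
```

Both halves preserve input order, which is what makes the scheme usable for a stable sort.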

This paper gives a good description of pdqsort. I'd start with the Rust version, which has some advantages but can still be improved further. The subsections below describe improved partitioning and an initial pass with several benefits. I also found that the pivot randomization methods currently used are less effective because they swap elements that won't become pivots soon; the pivot candidates and randomization targets need to be chosen to overlap. The optimistic insertion sort can also be improved: when a pair of elements is swapped, the smaller one should be inserted backward as usual, but the larger one can also be pushed forward at little cost, potentially saving many swaps and handling too-large elements as gracefully as too-small ones.
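One way the two-sided insertion-sort improvement might look in C. This is my own reading of the description, not pdqsort's code: after inserting the smaller element backward, the larger element (now the top of the prefix) is pushed forward past smaller successors, and the scan re-checks the displaced position, a detail the prose leaves implicit.

```c
#include <assert.h>
#include <stddef.h>

// Insertion sort handling an out-of-order pair from both sides:
// the smaller element is inserted backward, then the larger one is
// pushed forward past any smaller successors.
static void insertion_sort_fwd(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        if (a[i - 1] <= a[i]) continue;
        int small = a[i];
        size_t j = i;                   // insert the smaller backward
        while (j > 0 && a[j - 1] > small) { a[j] = a[j - 1]; j--; }
        a[j] = small;
        int big = a[i];                 // prefix max now sits at i
        size_t k = i;                   // push it forward
        while (k + 1 < n && a[k + 1] < big) { a[k] = a[k + 1]; k++; }
        a[k] = big;
        if (k > i) i--;                 // displaced elements need re-checking
    }
}
```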

+

While the stable partitioning for Fluxsort seems to be an overall better choice, pdqsort's unstable partitioning is what I've worked with in the past. The following sections are written from the perspective of pdqsort and will be rewritten for Fluxsort as the methods are adapted.

Partitioning

In-place quicksort relies on a partitioning algorithm that exchanges elements in order to split them into two contiguous groups. The Hoare partition scheme does this, and BlockQuicksort showed that it can be performed quickly with branchless index generation; this method was then adopted by pdqsort. But converting bit booleans to indices is faster and fits well with vectorized comparisons.
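A sketch of that booleans-to-indices building block in C: a word of comparison bits is turned into the positions of its set bits with count-trailing-zeros, with no per-element branch on the comparison result. `__builtin_ctzll` is a GCC/Clang builtin, an assumption of this sketch; the names are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

// Extract the positions of the set bits of m into idx, returning how
// many there were. The loop count depends only on the popcount, not
// on which comparisons succeeded.
static size_t mask_to_indices(uint64_t m, uint8_t *idx) {
    size_t n = 0;
    while (m) {
        idx[n++] = (uint8_t)__builtin_ctzll(m); // position of lowest 1
        m &= m - 1;                             // clear it
    }
    return n;
}
```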

It's simplest to define an operation P that partitions a list 𝕩 according to a boolean list 𝕨. Partitioning permutes 𝕩 so that all elements corresponding to 0 in 𝕨 come before those corresponding to 1. The quicksort partition step, with pivot t, is (t𝕩)P𝕩, and the comparison can be vectorized. Interleaving comparison and partitioning in chunks would save memory (a fraction of the size of 𝕩, which should have 32- or 64-bit elements because plain counting sort is best for smaller ones) but hardly speeds things up: only a few percent, and only for huge lists with hundreds of millions of elements. The single-step P is also good for Bins, where the boolean 𝕨 will have to be saved.
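A simple out-of-place rendering of P in C, to pin down the specification: given a boolean list `b` and values `x`, write the 0-flagged elements before the 1-flagged ones. The two-cursor form mirrors the vectorized versions the text has in mind, though this scalar sketch (names and `int` elements illustrative) makes no performance claims.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

// P: permute x (length n) into out so that elements with b[i] == 0
// come before those with b[i] == 1, preserving order within each group.
static void partition_p(const uint8_t *b, const int *x, int *out, size_t n) {
    size_t zeros = 0;
    for (size_t i = 0; i < n; i++) zeros += !b[i];
    size_t lo = 0, hi = zeros;          // cursors for the two groups
    for (size_t i = 0; i < n; i++) {
        out[b[i] ? hi : lo] = x[i];     // select destination, cmov-friendly
        lo += !b[i];
        hi += b[i];
    }
}
```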
