From eac3e1ef70e8fffaf337fed4cb81f88014830867 Mon Sep 17 00:00:00 2001
From: Marshall Lochbaum
Date: Thu, 21 Jul 2022 20:00:16 -0400
Subject: Link to published K benchmarks specifically

---
 docs/implementation/kclaims.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'docs/implementation/kclaims.html')

diff --git a/docs/implementation/kclaims.html b/docs/implementation/kclaims.html
index 0bc4c005..04612d3e 100644
--- a/docs/implementation/kclaims.html
+++ b/docs/implementation/kclaims.html
@@ -12,7 +12,7 @@

This page has now been discussed on Hacker News and Lobsters (not really the intention… just try not to be too harsh to the next person who says L1 okay?), and I have amended and added information based on comments there and elsewhere.

The fastest array language

When you ask what the fastest array language is, chances are someone is there to answer one of k, kdb, or q. I can't offer benchmarks that contradict this, but I will argue that there's little reason to take these people at their word.

-The reason I have no measurements is that every contract for a commercial K includes an anti-benchmark clause. For example, Shakti's license says users cannot "distribute or otherwise make available to any third party any report regarding the performance of the Software benchmarks or any information from such a report". This "DeWitt clause" is common in commercial databases (history; survey). It takes much more work to refute bad benchmarks than to produce them, so perhaps in the high-stakes DBMS world it's not as crazy as it seems—even though it's a terrible outcome. As I would be unable to share the results, I have not taken benchmarks of any commercial K. Or downloaded one for that matter. So I'm limited to a small number of approved benchmarks that have been published, which focus on performance as a database and not a programming language. I also run tests with ngn/k, which is developed with goals similar to Whitney's K; the author says it's slower than Shakti but not by too much.

+The reason I have no measurements is that every contract for a commercial K includes an anti-benchmark clause. For example, Shakti's license says users cannot "distribute or otherwise make available to any third party any report regarding the performance of the Software benchmarks or any information from such a report". This "DeWitt clause" is common in commercial databases (history; survey). It takes much more work to refute bad benchmarks than to produce them, so perhaps in the high-stakes DBMS world it's not as crazy as it seems—even though it's a terrible outcome. As I would be unable to share the results, I have not taken benchmarks of any commercial K. Or downloaded one for that matter. So I'm limited to a small number of approved benchmarks that have been published (STAC-M3, NYC Taxi, vs Python and my bumbling replication), which focus on performance as a database and not a programming language. I also run tests with ngn/k, which is developed with goals similar to Whitney's K; the author says it's slower than Shakti but not by too much.

The primary reason I don't give any credence to claims that K is the best is that they are always devoid of specifics. Most importantly, the same assertion is made across decades even though performance in J, Dyalog, and NumPy has improved by leaps and bounds in the meantime—I participated in advances of 26% and 10% in overall Dyalog benchmarks in the last two major versions. Has K4 (the engine behind kdb and Q) kept pace? Maybe it's fallen behind since Arthur left but Shakti K is better? Which other array languages has the poster used? Doesn't matter—they are all the same but K is better.

A related theme I find is equivocation between different kinds of performance. I suspect that for interpreting scalar code K is faster than APL and J but slower than JavaScript, and certainly slower than any compiled language. For operations on arrays, maybe it beats JavaScript and Java but loses to current Dyalog and tensor frameworks. For simple database queries, Shakti says it's faster than Spark and Postgres but is silent about newer in-memory databases. The most extreme K advocates sweep away all this complexity by comparing K to weaker contenders in each category. Just about any language can be "the best" with this approach.
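
To make the scalar-versus-array distinction concrete, here is a rough sketch (NumPy, with arbitrary sizes; it is not taken from any of the comparisons above) of the same reduction written as an element-at-a-time interpreted loop and as a single whole-array call:

    # Illustration only: the same reduction as scalar code and as array code.
    import time
    import numpy as np

    data = np.random.rand(1_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for x in data:              # scalar code: one interpreter step per element
        total += x
    t1 = time.perf_counter()

    vec_total = data.sum()      # array code: one call over the whole array
    t2 = time.perf_counter()

    print(f"loop {t1 - t0:.3f}s, array {t2 - t1:.3f}s")

A language can look fast in one of these measurements and slow in the other, which is what makes unqualified claims so slippery.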

Before getting into array-based versus scalar code, here's a simpler case. It's well known that K works on one list at a time, that is, if you have a matrix—in K, a list of lists—then applying an operation (say sum) to each row works on each one independently. If the rows are short then there's function overhead for each one. In APL, J, and BQN, the matrix is stored as one unit with a stride. The sum can use one metadata computation for all rows, and there's usually special code for many row-wise functions. I measured that Dyalog is 30 times faster than ngn/k to sum rows of a ten-million by three float (double) matrix, for one fairly representative example. It's fine to say—as many K-ers do—that these cases don't matter or can be avoided in practice; it's dishonest (or ignorant) to claim they don't exist.
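
As a rough analogue of that row-sum case, the sketch below uses NumPy rather than the Dyalog and ngn/k setup actually measured, and shrinks the matrix to one million rows so the per-row loop finishes quickly; it contrasts one strided whole-matrix reduction with a separate call for each short row.

    # Illustration only: one strided row-sum versus a separate call per row.
    import time
    import numpy as np

    mat = np.random.rand(1_000_000, 3)    # many short rows, stored with a stride
    rows = list(mat)                       # the rows as separate objects, list-of-lists style

    t0 = time.perf_counter()
    strided = mat.sum(axis=1)              # one call handles every row
    t1 = time.perf_counter()

    per_row = np.array([r.sum() for r in rows])  # per-row call overhead
    t2 = time.perf_counter()

    assert np.allclose(strided, per_row)
    print(f"strided {t1 - t0:.3f}s, per row {t2 - t1:.3f}s")

The per-row version pays interpreter and call overhead for every length-3 row, which is the kind of cost a list-of-rows representation takes on and a strided matrix avoids.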

-- cgit v1.2.3