Diffstat (limited to 'implementation')

 implementation/codfns.md | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/implementation/codfns.md b/implementation/codfns.md
index 62588217..fb41fd80 100644
--- a/implementation/codfns.md
+++ b/implementation/codfns.md
@@ -12,7 +12,7 @@ The shared goals of BQN and Co-dfns are to implement a compiler for an array lan
 
 Co-dfns development historically focused on the core compiler, and not parsing, code generation, or the runtime. The associated Ph.D. thesis and famous 17 lines figure refer to this section, which transforms the abstract syntax tree (AST) of a program to a lower-level form, and resolves lexical scoping by linking variables to their definitions. While all of Co-dfns is written in APL, other sections aren't necessarily data-parallel, and could perform differently. As of late 2021, tokenization and most of parsing also follow the style used by the core compiler, but looser about nesting in my estimation. This brings Co-dfns closer to BQN, which is all in a data-parallel style.
 
-The core Co-dfns compiler is based on manipulating the syntax tree, which is mostly stored as parent and sibling vectors—that is, lists of indices of other nodes in the tree. BQN is less methodical, but generally treats the source tokens as a simple list. This list is reordered in various ways, allowing operations that behave like tree traversals to be performed with scans under the right ordering. This strategy allows BQN to be much stricter in the kinds of operations it uses. Co-dfns regularly uses `⍣≡` (repeat until convergence) for iteration and creates nested arrays with `⌸` (Key, like [Group](../doc/group.md)), but BQN has only a single instance of iteration to resolve quotes and comments, plus one complex but parallelizable scan for numeric literal processing, and only uses Group to extract identifiers and strings. This means that most primitives, if we count simple reductions and scans as single primitives, are executed a fixed number of times during execution, making complexity analysis even easier than in Co-dfns.
+The core Co-dfns compiler is based on manipulating the syntax tree, which is mostly stored as parent and sibling vectors—that is, lists of indices of other nodes in the tree. BQN is less methodical, but generally treats the source tokens as a simple list. This list is reordered in various ways, allowing operations that behave like tree traversals to be performed with scans under the right ordering. This strategy allows BQN to be much stricter in the kinds of operations it uses. Co-dfns regularly uses `⍣≡` (repeat until convergence) for iteration and creates nested arrays with `⌸` (Key, like [Group](../doc/group.md)). BQN has only a single instance of iteration to resolve quotes and comments, plus one complex but parallelizable scan for numeric literal processing. And it only uses Group to extract identifiers and strings, and collect metadata for final output. This means that most primitives, if we count simple reductions and scans as single primitives, are executed a fixed number of times during execution, making complexity analysis even easier than in Co-dfns.
 
 ## Backends and optimization
 
@@ -50,27 +50,26 @@ The sort of static guarantee I want is not really a type system but an *axis* sy
 ### Performance
 
-In his Co-dfns paper Aaron compares to nanopass implementations of his compiler passes. Running on the CPU and using Chez Scheme (not Racket, which is also presented) for nanopass, he finds Co-dfns is up to **10 times faster** for large programs. The GPU is of course slower for small programs and faster for larger ones, breaking even above 100,000 AST nodes—quite a large program. I think comparing the self-hosted BQN compiler to the one in dzaima/BQN shows that this large improvement is caused as much by nanopass being slow as Co-dfns being fast.
+In his Co-dfns paper Aaron compares to nanopass implementations of his compiler passes. Running on the CPU and using Chez Scheme (not Racket, which is also presented) for nanopass, he finds Co-dfns is up to **10 times faster** for large programs. The GPU is of course slower for small programs and faster for larger ones, breaking even above 100,000 AST nodes—quite a large program.
 
-The self-hosted compiler running in CBQN reaches full performance at about 1KB of dense source code. Handling over 3MB/s, it's around **half as fast** as dzaima/BQN's compiler (but it's complicated—dbqn is usually slower on the first run but gets up to 3 times faster with some files, after hundreds of runs and with 3GB of memory use). This compiler was written in Java by dzaima in a much shorter time than the self-hosted compiler, and is equivalent for benchmarking purposes. While there are minor differences in syntax accepted and the exact bytecode output, I'm sure that either compiler could be modified to match the other with negligible changes in compilation time. The Java compiler is written with performance in mind, but dzaima has expended only a moderate amount of effort to optimize it.
+The self-hosted compiler running in CBQN reaches full performance at 3KB or so of dense source code, and slows down slightly at larger sizes as the type for indices gets larger. Handling at least 5MB/s, it's between the **same speed** and **half as fast** as dzaima/BQN's compiler when that's fully warmed up (which takes hundreds of runs and uses 3GB of RAM). That compiler was written in Java by dzaima in a much shorter time than the self-hosted compiler, and is very similar for benchmarking purposes—it's missing some functionality but this shouldn't change timings more than 10% at a very conservative estimate. The Java compiler is written with performance in mind, but dzaima has expended only a moderate amount of effort to optimize it.
 
-A few factors other than the speed of the nanopass compiler might partly cause the discrepancy, or otherwise be worth taking into account. I doubt that these can add up to a factor of 20, so I think that nanopass is simply not as fast as more typical imperative compiler methods.
+There are a few factors to consider regarding the difference in results:
 
-- The CBQN runtime is still suboptimal, missing SIMD implementations for some primitives used in the compiler. But improvements will be limited for operations like selection that don't vectorize as well. I estimate less than a factor of 2 improvement remains from improving speed to match Dyalog, and would find a factor of 3 unlikely.
-- On the other hand Java isn't the fastest language for a compiler and a C-based compiler would likely be faster. I don't have an estimate for the size of the difference.
+- CBQN is a different runtime from Dyalog. We've profiled the compiler and implemented lots of things in SIMD, and I'm confident CBQN runs the compiler at least as fast as Dyalog 17.0, the latest version for Aaron's thesis, would have.
 - Co-dfns and BQN use different compilation strategies. I think that my methods are at least as fast, and scale better to a full compiler.
-- The portion of the compiler implemented by Co-dfns could be better for arrays than other sections, which in fact seems likely to me—I would say parsing is the worst for array relative to scalar programming. I think the whole-compiler comparison is more informative, although if this effect is very strong (I don't think it is), then hybrid array-scalar compilers could make sense.
+- The portion of the compiler implemented by Co-dfns could be better for arrays than other sections, which in fact seems likely to me—I would say parsing is the worst for array relative to scalar programming.
 - The Co-dfns and nanopass implementations are pass-for-pass equivalent, while BQN and Java are only comparable as a whole. As the passes were chosen to be ideal for Co-dfns, there's some chance this could be slowing down nanopass in a way that doesn't translate to real-world performance.
 
-Overall, it seems safe to say that an ideal array-based compiler would be competitive with a scalar one on the CPU, not vastly better as Aaron's benchmarks suggest. This result is still remarkable! APL and BQN are high-level dynamically-typed languages, and wouldn't be expected to go near the performance of a compiled language like Java. However, it makes it much harder to recommend array-based compilation than the numbers from the Co-dfns paper would.
+I reckon that most of the effect is that the nanopass compiler just isn't very fast. I see our whole-compiler results as superseding Aaron's comparison, and it's worth noting as well that C could probably beat Java for a BQN compiler. So I conclude that, with current technology, array-based compilers are competitive with scalar compilers on the CPU and not vastly better. This result is still remarkable! APL and BQN are high-level dynamically-typed languages, and wouldn't be expected to go near the performance of a compiled language like Java. However, it makes it much harder to recommend array-based compilation than the numbers from the Co-dfns paper would.
 
-I stress here that I don't think there's anything wrong about the way Aaron has conducted or presented his research. The considerations described above are speculative even in (partial) hindsight. I think Aaron chose nanopass because of his familiarity with the functional programming compiler literature and because its multi-pass system is more similar to Co-dfns. And I know that he actively sought out other programmers willing to implement the compiler in other ways including imperative methods; apparently these efforts didn't pan out. Aaron even mentioned to me that his outstanding results were something of a problem for him, because reviewers found them unbelievable!
+I stress here that I don't think there's anything wrong about the way Aaron has conducted or presented his research, even if it now appears that nanopass was a poor baseline. I think he chose nanopass because of his familiarity with the functional programming compiler literature and because its multi-pass system is more similar to Co-dfns. And I know that he actively sought out other programmers willing to implement the compiler in other ways including imperative methods; apparently these efforts didn't pan out. Aaron even mentioned to me that his outstanding results were something of a problem for him, because reviewers found them unbelievable!
 
 #### What about the GPU?
 
 BQN's compiler could certainly be made to run on a GPU, and it's fascinating that this is possible merely because I stuck to an array-based style. In Co-dfns, Aaron found a maximum factor of 6 improvement by running on the GPU, and this time it's the GPU runtime that we should expect to be slower than Dyalog. So we could expect an array-based compiler to run faster on large source files in this case. The problem is this: who could benefit from this speed?
 
-Probably not BQN. Recall that the BQN compiler runs at 3MB/s. This is fast enough that it almost certainly takes much longer to run the program than to compile it. The exception would be when a lot of code is loaded but not used, which can be solved by splitting the code into smaller files and only using those which are needed.
+Probably not BQN. Recall that the BQN compiler runs at 5MB/s. This is fast enough that it almost certainly takes much longer to run the program than to compile it. The exception would be when a lot of code is loaded but not used, which can be solved by splitting the code into smaller files and only using those which are needed.
 
 The programmers who complain about compile times are using languages like C++, Rust, and Julia that compile to native code, often through LLVM. The things that these compilers spend time on aren't the things that the BQN compiler does! BQN has a rigid syntax, no metaprogramming, and compiles to bytecode. The slow compilers are churning to perform tasks like:
 
 - Type checking
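
The first hunk above contrasts Co-dfns' parent-and-sibling-vector trees with BQN's flat token list. As a minimal sketch of the parent-vector form, using a hypothetical five-node tree (my illustration in BQN for concreteness; it is not code from Co-dfns, which is written in APL):

```
# Hypothetical tree: node 0 is the root with children 1 and 2,
# and node 2 has children 3 and 4. The parent of node i is i⊑p,
# with the root pointing to itself.
p ← 0‿0‿0‿2‿2
/ p=2       # a node's children are the indices that point at it: ⟨ 3 4 ⟩
p ⊏˜ 3‿4    # selecting from p follows edges upward: parents ⟨ 2 2 ⟩
```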

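And a sketch of the token-list style the same paragraph describes: a single plus-scan gives each token's nesting depth, and a stable grade by depth reorders the list so sibling groups are contiguous, which is the sense in which scans can act like tree traversals. The toy source string, and the closing Group line echoing the "extract identifiers and strings" remark, are illustrative only, not the BQN compiler's actual passes:

```
s ← "f (g (h x)) y"         # toy source: parentheses give the nesting
d ← +` ('('=s) - » ')'=s    # depth per character; » delays each ')' by one
                            # so both parens sit at their group's depth
d                           # 0 0 1 1 1 2 2 2 2 2 1 0 0
s ⊏˜ ⍋d                     # stable reorder by depth: "f  y(g )(h x)"
d ⊔ s                       # Group by depth: ⟨ "f  y" "(g )" "(h x)" ⟩
```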