<head>
  <link href="../favicon.ico" rel="shortcut icon" type="image/x-icon"/>
  <link href="../style.css" rel="stylesheet"/>
  <title>Co-dfns versus BQN's implementation</title>
</head>
<div class="nav">(<a href="https://github.com/mlochbaum/BQN">github</a>) / <a href="../index.html">BQN</a> / <a href="index.html">implementation</a></div>
<h1 id="co-dfns-versus-bqns-implementation"><a class="header" href="#co-dfns-versus-bqns-implementation">Co-dfns versus BQN's implementation</a></h1>
<p><em>Co-dfns is under active development so this document might not reflect its current state. Last update 2022-07-18.</em></p>
<p>The BQN self-hosted compiler is directly inspired by the <a href="https://github.com/Co-dfns/Co-dfns">Co-dfns</a> project, a compiler for a subset of <a href="../doc/fromDyalog.html">Dyalog APL</a>. I'm very grateful to Aaron for showing that array-oriented compilation is even possible! In addition to the obvious difference of target language, BQN differs from Co-dfns both in goals and methods.</p>
<p>The shared goals of BQN and Co-dfns are to implement a compiler for an array language with whole-array operations. This provides the theoretical benefit of a short <em>critical path</em>, which in practice means that both compilers can make good use of a GPU or a CPU's vector instructions simply by providing an appropriate runtime (Co-dfns has good runtimes—an ArrayFire program on the GPU and Dyalog APL on the CPU—while CBQN isn't at the same level yet). The two implementations also share a preference for working &quot;close to the metal&quot; by passing around arrays of numbers rather than creating abstract types to work with data. Objects are right out. These choices lead to a compact source code implementation, and may have some benefits in terms of how easy it is to write and understand the compiler.</p>
<h2 id="compilation-strategy"><a class="header" href="#compilation-strategy">Compilation strategy</a></h2>
<p>Co-dfns development historically focused on the core compiler, and not parsing, code generation, or the runtime. The associated Ph.D. thesis and the famous 17-line figure refer to this section, which transforms the abstract syntax tree (AST) of a program to a lower-level form, and resolves lexical scoping by linking variables to their definitions. While all of Co-dfns is written in APL, other sections aren't necessarily data-parallel, and could perform differently. As of late 2021, tokenization and most of parsing also follow the style used by the core compiler, but are looser about nesting, in my estimation. This brings Co-dfns closer to BQN, which is all in a data-parallel style.</p>
<p>The core Co-dfns compiler is based on manipulating the syntax tree, which is mostly stored as parent and sibling vectors—that is, lists of indices of other nodes in the tree. BQN is less methodical, but generally treats the source tokens as a simple list. This list is reordered in various ways, allowing operations that behave like tree traversals to be performed with scans under the right ordering. This strategy allows BQN to be much stricter in the kinds of operations it uses. Co-dfns regularly uses <code>⍣≡</code> (repeat until convergence) for iteration and creates nested arrays with <code>⌸</code> (Key, like <a href="../doc/group.html">Group</a>), but BQN has only a single instance of iteration to resolve quotes and comments, plus one complex but parallelizable scan for numeric literal processing, and only uses Group to extract identifiers and strings. This means that most primitives, if we count simple reductions and scans as single primitives, are executed a fixed number of times during compilation, making complexity analysis even easier than in Co-dfns.</p>
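<p>As a minimal sketch of the ordering idea (illustrative only, with one-character tokens; this is not code from the compiler): a single scan over bracket masks recovers nesting depth, and grading by that depth reorders the token list so that tree-like operations become operations on contiguous segments.</p>
<pre>
src   ← "a×(b+(c-d))"              # one-character tokens for illustration
depth ← +` ('('=src) - »')'=src    # nesting depth of each token, from one scan
ord   ← ⍋depth                     # stable grade: tokens ordered by depth
ord ⊏ src                          # "a×(b+)(c-d)": each depth is now contiguous
</pre>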
<h2 id="backends-and-optimization"><a class="header" href="#backends-and-optimization">Backends and optimization</a></h2>
<p>Co-dfns was designed from the beginning to build GPU programs, and outputs code in ArrayFire (a C++ framework), which is then compiled. GPU programming is quite limiting, and as a result Co-dfns has strict limitations in functionality that are slowly being removed. It now has partial support for nested arrays and array ranks higher than 4. BQN is designed with performance in mind, but implementation effort focused on functionality first, so that arbitrary array structures as well as trains and lexical closures have been supported from the beginning. Rather than target a specific language, it outputs object code to be interpreted by a <a href="vm.html">virtual machine</a>. Another goal for BQN was to not only write the compiler in BQN but to use BQN for the runtime as much as possible. The BQN-based runtime uses a small number of basic array operations provided by the VM. The extra abstraction causes this runtime to be very slow, but this can be fixed by overwriting functions from the runtime with natively-implemented ones.</p>
<p>Neither BQN nor Co-dfns significantly optimize their output at the time of writing (it could be said that Co-dfns relies on the ArrayFire backend to optimize). BQN does have one optimization, which is to compute variable lifetimes in functions so that the last access to a variable can clear it. Further optimizations often require finding properties such as reachability in a graph of expressions that likely can't be done efficiently in a strict array style. For this and other reasons it would probably be best to structure compiler optimization as a set of additional modules that can be provided during a given compilation.</p>
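<p>As an illustration of that lifetime optimization (a hypothetical sketch, not the compiler's actual code): given the variable slots referenced by a function body in order of access, marking first occurrences under reversal marks the last access to each slot, which is the access that can clear it.</p>
<pre>
refs ← 0‿1‿0‿2‿1‿0   # hypothetical variable slots, in order of access
∊⌾⌽ refs             # 0‿0‿0‿1‿1‿1: the last access to each slot
</pre>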
<h2 id="error-reporting"><a class="header" href="#error-reporting">Error reporting</a></h2>
<p>Co-dfns initially didn't check for compilation errors, but has started to add some checks with messages. It's behind BQN, which has complete error checking and good error messages, and includes source positions in compiler errors as well as in the compiled code for use in runtime errors. Position tracking and error checking add up to a little more than 20% overhead for the compiler, both in runtime and lines of code. And improving the way errors are reported once found has no cost for working programs, because reporting code only needs to be run if there's a compiler error. The only thing that really takes advantage of this now is the reporting for bracket matching, which goes over all brackets with a stack-based (not array-oriented or parallel) method.</p>
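<p>For example, whether the expensive reporting pass is needed at all can be decided by a cheap array-style test; the following is a rough sketch of that idea for parentheses only, not the compiler's own code. Working programs pass the test and never touch the slow stack-based reporter.</p>
<pre>
Balanced ← {
  d ← +` ('('=𝕩) - ')'=𝕩    # running depth, counting each ')' at its own position
  (∧´ d≥0) ∧ 0 = ¯1 ⊑ d∾0   # never negative, and zero at the end
}
Balanced "a×(b+(c-d))"      # 1: no error, so no reporting pass is run
</pre>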
<h2 id="comments"><a class="header" href="#comments">Comments</a></h2>
<p>At the time I began work on BQN, Aaron advocated the almost complete separation of code from comments (thesis) in addition to his very terse style as a general programming methodology. I find that such a style makes it hard to connect the documentation to the code, and is very slow at providing the kind of summary or reminder of functionality that a comment can. I chose one comment per line as a better balance of compactness and quick access.</p>
<p>Subsequently Co-dfns undertook perhaps the greatest shift in comment-to-code ratio that's ever happened. Aaron began by adding a full-line comment for every group of two to three lines, not too far from BQN. Then he converted the whole thing to a literate program using the noweb framework and is working on writing the prose half, perhaps a half page per line of code. This could make development harder but it seems like a great way to make it easier to get into a difficult style of programming, so I'm interested to see how it goes.</p>
<h2 id="is-it-a-good-idea"><a class="header" href="#is-it-a-good-idea">Is it a good idea?</a></h2>
<p>In short: <strong>no</strong>, I don't think it makes sense to use an array style for a compiler without a good reason. BQN uses it so it can self-host while maintaining good performance; Co-dfns uses it to prove it's possible to implement the core of a compiler with low parallel asymptotic complexity. It could also make a fun hobby project, although it's very draining. If the goal is to produce a working compiler then my opinion is that using the array style will take longer and require more skill, and the resulting compiler will be slower than a traditional one in a low-level language for practical tasks. Improvements in methodology could turn this around, but I'm pessimistic.</p>
<p>It's important to note that my judgment here applies specifically to implementing compilers. Many people already apply APL successfully to problems in finance, or NumPy in science, TensorFlow in machine learning, and so on. Whatever their other qualities, these programs can be developed quickly and have competitive performance. Other problems, such as sparse graph algorithms, are unlikely to be approachable with array programming (and the <a href="https://en.wikipedia.org/wiki/P-complete">P-complete</a> problems are thought to not be efficiently parallelizable, which would rule out any array solution in the sense used here). Compiling seems to occupy an interesting spot near the fuzzy boundary of that useful-array region. Is it more inside, or outside?</p>
<h3 id="ease-of-development"><a class="header" href="#ease-of-development">Ease of development</a></h3>
<p>It needs to be said: Aaron is easily one of the most talented programmers I know, and I have something of a knack for arrays myself. At present, building an array compiler requires putting together array operations in new and complicated ways, often with nothing but intuition to hint at which ones to use. It is much harder than making a compiler the normal way. However, there is some reason for hope in the <a href="https://en.wikipedia.org/wiki/History_of_compiler_construction">history</a> of traditional compilers. It took a team led by John Backus two and a half years to produce the first FORTRAN compiler, and they gave him a Turing award for it! Between the 1950s and 1970s, developments like the LR parser brought compilers from being a challenge for the greatest minds to something accessible to a typical programmer with some effort. I don't believe the change will be so dramatic for array-based compilers, because many advantages in languages and tooling—keep in mind the FORTRAN implementers used assembly—are shared with ordinary programming. But Aaron and I have already discovered some concepts like tree manipulation and depth-based reordering that make it easier to think about compilation, and there are certainly more to be found.</p>
<p>I find that variable management is a major difficulty in working with the compiler. This is a problem that Aaron doesn't (or didn't) have, because his compiler is 17 lines long. What happens in a larger program is that various properties need to be computed in one place and used in another, making it hard to keep track of how these were computed and what they mean. In BQN, different sections of the compiler use different source orderings (one thing I've expended some effort on is to reduce the number of orderings used). A tree-based compiler would probably have similar problems, unless all the state is going to be transformed at each step, which would perform poorly. Using a variable with one ordering in the wrong place is a frequent source of errors, particularly if the ordering is something like expanding function bodies that has no effect in many small programs. Is there some way to protect against these errors?</p>
<h4 id="does-apl-need-a-type-system"><a class="header" href="#does-apl-need-a-type-system">Does APL Need a Type System?</a></h4>
<p><a href="https://www.youtube.com/watch?v=z8MVKianh54">Here's Aaron's take</a>. Honestly I can't really go along with this: I think he ignores a lot of real distinctions between array and typed functional programming because it's convenient for the point he wants to make. On the other hand, it's abundantly clear that C-style types would be useless for an array compiler, because nearly every variable is a list of integers.</p>
<p>The sort of static guarantee I want is not really a type system but an <em>axis</em> system. That is, if I take <code><span class='Value'>a</span><span class='Function'>+</span><span class='Value'>b</span></code> I want to know that the arithmetic mapping makes sense because the two variables use the same axis. And I want to know that if <code><span class='Value'>a</span></code> and <code><span class='Value'>b</span></code> are compatible, then so are <code><span class='Value'>i</span><span class='Function'>⊏</span><span class='Value'>a</span></code> and <code><span class='Value'>i</span><span class='Function'>⊏</span><span class='Value'>b</span></code>, but not <code><span class='Value'>a</span></code> and <code><span class='Value'>i</span><span class='Function'>⊏</span><span class='Value'>b</span></code>. I could use a form of <a href="https://en.wikipedia.org/wiki/Hungarian_notation">Hungarian notation</a> for this, and write <code><span class='Value'>ia</span><span class='Gets'>←</span><span class='Value'>i</span><span class='Function'>⊏</span><span class='Value'>a</span></code> and <code><span class='Value'>ib</span><span class='Gets'>←</span><span class='Value'>i</span><span class='Function'>⊏</span><span class='Value'>b</span></code>, but it's inconvenient to rewrite the axis every time the variable appears, and I'd much prefer a computer checking agreement rather than my own fallible self.</p>
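<p>A hypothetical sketch of what checked axis tags could look like (the names <code>MkAx</code>, <code>AxAdd</code>, and <code>AxSel</code> are invented for illustration, not existing functions): values carry an axis label, arithmetic demands that the labels match, and selection replaces the label with the one carried by the indices.</p>
<pre>
MkAx  ← {⟨𝕨, 𝕩⟩}                              # tag data 𝕩 with an axis label 𝕨
AxAdd ← {a‿x←𝕨 ⋄ b‿y←𝕩 ⋄ !a≡b ⋄ a MkAx x+y}   # arithmetic: axes must match
AxSel ← {n‿i←𝕨 ⋄ a‿x←𝕩 ⋄ n MkAx i⊏x}          # selection: result is on the index axis
a ← "tok" MkAx 3‿1‿4 ⋄ i ← "ord" MkAx 2‿0
(i AxSel a) AxAdd i AxSel a                   # fine: both sides are on the "ord" axis
a AxAdd i AxSel a                             # assertion error: "tok" mixed with "ord"
</pre>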
<h3 id="performance"><a class="header" href="#performance">Performance</a></h3>
<p>In his Co-dfns paper Aaron compares Co-dfns to nanopass implementations of his compiler passes. Running on the CPU and using Chez Scheme (not Racket, which is also presented) for nanopass, he finds Co-dfns is up to <strong>10 times faster</strong> for large programs. The GPU is of course slower for small programs and faster for larger ones, breaking even above 100,000 AST nodes—quite a large program. I think comparing the self-hosted BQN compiler to the one in dzaima/BQN shows that this large improvement is caused as much by nanopass being slow as by Co-dfns being fast.</p>
<p>The self-hosted compiler running in CBQN reaches full performance at about 1KB of dense source code. Handling over 3MB/s, it's around <strong>half as fast</strong> as dzaima/BQN's compiler (but it's complicated—dbqn is usually slower on the first run but gets up to 3 times faster with some files, after hundreds of runs and with 3GB of memory use). This compiler was written in Java by dzaima in a much shorter time than the self-hosted compiler, and is equivalent for benchmarking purposes. While there are minor differences in syntax accepted and the exact bytecode output, I'm sure that either compiler could be modified to match the other with negligible changes in compilation time. The Java compiler is written with performance in mind, but dzaima has expended only a moderate amount of effort to optimize it.</p>
<p>A few factors other than the speed of the nanopass compiler might partly cause the discrepancy, or otherwise be worth taking into account. I doubt that these can add up to a factor of 20 (roughly the 10 times from Aaron's benchmark combined with the factor of 2 between the self-hosted and Java compilers), so I think that nanopass is simply not as fast as more typical imperative compiler methods.</p>
<ul>
<li>The CBQN runtime is still suboptimal, missing SIMD implementations for some primitives used in the compiler. But improvements will be limited for operations like selection that don't vectorize as well. I estimate less than a factor of 2 improvement remains from improving speed to match Dyalog, and would find a factor of 3 unlikely.</li>
<li>On the other hand Java isn't the fastest language for a compiler and a C-based compiler would likely be faster. I don't have an estimate for the size of the difference.</li>
<li>Co-dfns and BQN use different compilation strategies. I think that my methods are at least as fast, and scale better to a full compiler.</li>
<li>The portion of the compiler implemented by Co-dfns could be better for arrays than other sections, which in fact seems likely to me—I would say parsing is the worst for array relative to scalar programming. I think the whole-compiler comparison is more informative, although if this effect is very strong (I don't think it is), then hybrid array-scalar compilers could make sense.</li>
<li>The Co-dfns and nanopass implementations are pass-for-pass equivalent, while BQN and Java are only comparable as a whole. As the passes were chosen to be ideal for Co-dfns, there's some chance this could be slowing down nanopass in a way that doesn't translate to real-world performance.</li>
</ul>
<p>Overall, it seems safe to say that an ideal array-based compiler would be competitive with a scalar one on the CPU, rather than vastly better as Aaron's benchmarks suggest. This result is still remarkable! APL and BQN are high-level dynamically-typed languages, and wouldn't be expected to go near the performance of a compiled language like Java. However, it makes array-based compilation much harder to recommend than the numbers from the Co-dfns paper would imply.</p>
<p>I stress here that I don't think there's anything wrong about the way Aaron has conducted or presented his research. The considerations described above are speculative even in (partial) hindsight. I think Aaron chose nanopass because of his familiarity with the functional programming compiler literature and because its multi-pass system is more similar to Co-dfns. And I know that he actively sought out other programmers willing to implement the compiler in other ways including imperative methods; apparently these efforts didn't pan out. Aaron even mentioned to me that his outstanding results were something of a problem for him, because reviewers found them unbelievable!</p>
<h4 id="what-about-the-gpu"><a class="header" href="#what-about-the-gpu">What about the GPU?</a></h4>
<p>BQN's compiler could certainly be made to run on a GPU, and it's fascinating that this is possible merely because I stuck to an array-based style. In Co-dfns, Aaron found a maximum factor of 6 improvement by running on the GPU, and this time it's the GPU runtime that we should expect to be slower than Dyalog. So we could expect an array-based compiler to run faster on large source files in this case. The problem is this: who could benefit from this speed?</p>
<p>Probably not BQN. Recall that the BQN compiler runs at 3MB/s. This is fast enough that it almost certainly takes much longer to run the program than to compile it. The exception would be when a lot of code is loaded but not used, which can be solved by splitting the code into smaller files and only using those which are needed.</p>
<p>The programmers who complain about compile times are using languages like C++, Rust, and Julia that compile to native code, often through LLVM. The things that these compilers spend time on aren't the things that the BQN compiler does! BQN has a rigid syntax, no metaprogramming, and compiles to bytecode. The slow compilers are churning to perform tasks like:</p>
<ul>
<li>Type checking</li>
<li>Templates, polymorphism, and other code generation</li>
<li>Reachability computations, for optimization</li>
<li>Automatic vectorization</li>
</ul>
<p>I don't know how to implement these in an array style. I suspect many are simply not possible. They tend to involve relationships between distant parts of the program, and are approached with graph-based methods for which efficient array-style implementations aren't known.</p>
<p>It might be possible to translate a compiler that does less optimization, such as Go's, to an array style, partially or completely. But Go compiles very fast, so speeding it up isn't as desirable. In slower compilers, attacking problems like the above strikes me as many levels beyond the work on Co-dfns or BQN.</p>
<h2 id="pareas"><a class="header" href="#pareas">Pareas</a></h2>
<p><a href="https://github.com/Snektron/pareas">Pareas</a> is a GPU-based compiler that takes a simple custom-designed imperative language as input and outputs RISC-V. It's implemented in <a href="https://github.com/diku-dk/futhark">Futhark</a>, an array-oriented ML-family language with GPU support. Pareas and the page you are reading were developed in… parallel, and I became aware of it after writing the pessimistic take on GPU compilers above. I'm very glad this work is being done, but it seems fully compatible with my conclusion that our current data-parallel methods aren't advanced enough to make useful improvements in slow compilers. Both the source and target language had to be kept simple and not many optimizations are performed. It does show that APL syntax isn't necessary to develop a GPU compiler. I think it's too different from Co-dfns and BQN to say whether one development style might be more effective, though.</p>
<p>Pareas seems to focus on applying general methods in a smart way before going special-purpose, even more so than Co-dfns. I think this is absolutely a good choice and the right direction for GPU research. But it does reduce the power available to the compiler writers in some sense, making Pareas appear less capable than it might otherwise—until you account for the flexibility to handle other languages. The main parser targets a class of languages called <em>LLP</em>(<em>q</em>,<em>k</em>), and for simplicity it's restricted further to <em>LLP</em>(1,1). This isn't expressive enough for any serious language, and even with their specialized input language the authors add a second pass using tree manipulation to refine the grammar. In particular, BQN isn't in this class because any part of an expression can be parenthesized an arbitrary number of times, requiring unbounded lookahead (or behind) to determine the role from the outside.</p>
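<p>To illustrate the parenthesization point, here are two expressions whose shared prefix of <code>(</code> characters can be made as long as you like, yet the outermost parentheses play different syntactic roles in each, so no bounded lookahead can classify them:</p>
<pre>
((((+´)))) 1‿2‿3   # the parenthesized unit is a function, applied to the list
((((1‿2‿3))))      # the parenthesized unit is a subject: just the list itself
</pre>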
<p>As it stands, the compiler's performance isn't great, but that's entirely due to slow register allocation. The rest of the compiler is capable of over 1GB/s on their high-end GPU. Sure, it takes megabytes of source to reach these speeds, but that leaves us at &quot;compile just about anything in a fraction of a second&quot;, which is very attractive. The register allocation is kind of worrying because they rejected graph coloring for performance reasons and went with a very basic allocator that spills whenever things get tough. Nonetheless I think challenges like more realistic syntax and a good allocator can probably be accommodated while still performing substantially faster than current CPU-based code. But I see that allocator as the tip of an iceberg: it's nowhere near the hardest or scalar-est task for an optimizing compiler, and a few intractable problems can sink the whole thing. Maybe there's room to pass such things off to the CPU. Maybe a linker is a better goal for GPU hosting? Even though I don't expect a GPU gcc on the horizon, I'm excited to see what comes out of this line of research!</p>