author    Marshall Lochbaum <mwlochbaum@gmail.com>  2022-04-11 08:17:28 -0400
committer Marshall Lochbaum <mwlochbaum@gmail.com>  2022-04-11 08:17:28 -0400
commit    846425fabe9b4c5c9bbe2be0c785fd1662a0daaa (patch)
tree      0cc4e935ef26811e4e7f8e6527606c7fe691f8c4 /docs/implementation
parent    e3bdf0aa984961023ef80414cd93ef225ec07117 (diff)
Typo (fixes #64)
Diffstat (limited to 'docs/implementation')
-rw-r--r--  docs/implementation/compile/intro.html  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/implementation/compile/intro.html b/docs/implementation/compile/intro.html
index 0431c827..957baf32 100644
--- a/docs/implementation/compile/intro.html
+++ b/docs/implementation/compile/intro.html
@@ -43,7 +43,7 @@
<p>Three major efforts to apply ahead-of-time compilation to APL are <a href="https://www.snakeisland.com/apexup.htm">APEX</a>, <a href="https://github.com/melsman/apltail">apltail</a>, and <a href="https://github.com/Co-dfns/Co-dfns">Co-dfns</a>. It's worth noting that none has had any apparent uptake in the APL world. I think this is because they have to leave out significant functionality to compile ahead of time, and because performance doesn't actually matter to APL programmers, as mentioned in the introduction. From my reading it appears that APEX is statically typed, since each value in the source code can have only one type, and declarations (written as APL comments) are required to disambiguate if there are multiple possibilities. apltail and Co-dfns are dynamically typed, but apltail compiles to Typed Array Intermediate Language (TAIL), which, as the name insists, is statically typed. As far as I know TAIL is used only as a target for apltail. Some effort has been expended on the <a href="https://github.com/henrikurms/tail2futhark">tail2futhark</a> project, but it's no longer maintained.</p>
<h3 id="compiling-with-dynamic-types"><a class="header" href="#compiling-with-dynamic-types">Compiling with dynamic types</a></h3>
<p>Moving to dynamically-typed languages, the actual compilation isn't going to change that much. What we are interested in is types. When and how are they determined for values that haven't been created yet?</p>
-<p>First I think it's worth discussing <a href="https://julialang.org/">Julia</a>, which I would describe as the most successful compiled dynamically-typed array language. Each array has a particular type, and it sticks with it: for example if you multiple two <code><span class='Function'>Int8</span></code> arrays then the results will wrap around rather than increasing the type. But functions can accept many different argument types. Julia does this by compiling a function again whenever it's called on types it hasn't seen before. The resulting function is fast, but the time spent compiling causes significant delays. The model of arrays with a fixed type chosen from many options is the same as NumPy, which follows a traditional interpreted model. But it's different from APL and BQN, which have only one number type and optimize using subsets. J and K sit somewhere in between, with a small number of logical types (such as separate integers and floats) and some possibility for optimization.</p>
+<p>First I think it's worth discussing <a href="https://julialang.org/">Julia</a>, which I would describe as the most successful compiled dynamically-typed array language. Each array has a particular type, and it sticks with it: for example if you multiply two <code><span class='Function'>Int8</span></code> arrays then the results will wrap around rather than increasing the type. But functions can accept many different argument types. Julia does this by compiling a function again whenever it's called on types it hasn't seen before. The resulting function is fast, but the time spent compiling causes significant delays. The model of arrays with a fixed type chosen from many options is the same as NumPy, which follows a traditional interpreted model. But it's different from APL and BQN, which have only one number type and optimize using subsets. J and K sit somewhere in between, with a small number of logical types (such as separate integers and floats) and some possibility for optimization.</p>
<p>The ahead-of-time compilers apltail and Co-dfns mentioned in the previous section take different approaches. apltail uses a powerful (but not dependent) type system with type inference to detect which types the program uses. Co-dfns compiles to ArrayFire code that is still somewhat dynamic, with switches on rank or types. It's possible the ArrayFire compiler can optimize some of them out. I think that while these impressive projects are definitely doing something worthwhile, ahead-of-time compilation on its own is ultimately not a good basis for an array language implementation (but it's just my opinion, and I may well be wrong! Don't let me stop you!). There's too much to gain by having access to the actual data at compilation time, and being able to fit it into a smaller type.</p>
<p>However, I would be very interested in compiling BQN's stack-based IR using these ahead-of-time methods. The ArrayFire code from Co-dfns would be easiest to generate and can probably be adapted to BQN primitives (Dyalog has indicated in a non-committal way that they're interested in integrating Co-dfns into Dyalog APL as well). Other backends could provide better performance at the cost of type analysis, which brings us to the question of how to approach types in running dynamic code.</p>
<p>A very early example of JIT compilation, <a href="https://aplwiki.com/wiki/APL%5C3000">APL\3000</a>, begins by compiling each function for the exact types it's called with on the first call and recompiles for more general types when its assumptions are broken. This keeps the program from spending all its time compiling, but also can't optimize a function well over multiple types; given how cheap memory is now I think it's better to compile multiple versions more readily.</p>