path: root/implementation/compile
author    Marshall Lochbaum <mwlochbaum@gmail.com>    2022-04-11 08:17:28 -0400
committer Marshall Lochbaum <mwlochbaum@gmail.com>    2022-04-11 08:17:28 -0400
commit    846425fabe9b4c5c9bbe2be0c785fd1662a0daaa (patch)
tree      0cc4e935ef26811e4e7f8e6527606c7fe691f8c4 /implementation/compile
parent    e3bdf0aa984961023ef80414cd93ef225ec07117 (diff)
Typo (fixes #64)
Diffstat (limited to 'implementation/compile')
-rw-r--r--  implementation/compile/intro.md | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/implementation/compile/intro.md b/implementation/compile/intro.md
index ad9e4e35..ca115d59 100644
--- a/implementation/compile/intro.md
+++ b/implementation/compile/intro.md
@@ -73,7 +73,7 @@ Three major efforts to apply ahead-of-time compilation to APL are [APEX](https:/
Moving to dynamically-typed languages, the actual compilation isn't going to change that much. What we are interested in is types. When and how are they determined for values that haven't been created yet?
-First I think it's worth discussing [Julia](https://julialang.org/), which I would describe as the most successful compiled dynamically-typed array language. Each array has a particular type, and it sticks with it: for example if you multiple two `Int8` arrays then the results will wrap around rather than increasing the type. But functions can accept many different argument types. Julia does this by compiling a function again whenever it's called on types it hasn't seen before. The resulting function is fast, but the time spent compiling causes significant delays. The model of arrays with a fixed type chosen from many options is the same as NumPy, which follows a traditional interpreted model. But it's different from APL and BQN, which have only one number type and optimize using subsets. J and K sit somewhere in between, with a small number of logical types (such as separate integers and floats) and some possibility for optimization.
+First I think it's worth discussing [Julia](https://julialang.org/), which I would describe as the most successful compiled dynamically-typed array language. Each array has a particular type, and it sticks with it: for example if you multiply two `Int8` arrays then the results will wrap around rather than increasing the type. But functions can accept many different argument types. Julia does this by compiling a function again whenever it's called on types it hasn't seen before. The resulting function is fast, but the time spent compiling causes significant delays. The model of arrays with a fixed type chosen from many options is the same as NumPy, which follows a traditional interpreted model. But it's different from APL and BQN, which have only one number type and optimize using subsets. J and K sit somewhere in between, with a small number of logical types (such as separate integers and floats) and some possibility for optimization.
The ahead-of-time compilers apltail and Co-dfns mentioned in the previous section take different approaches. apltail uses a powerful (but not dependent) type system with type inference to detect which types the program uses. Co-dfns compiles to ArrayFire code that is still somewhat dynamic, with switches on rank or types. It's possible the ArrayFire compiler can optimize some of them out. I think that while these impressive projects are definitely doing something worthwhile, ahead-of-time compilation on its own is ultimately not a good basis for an array language implementation (but it's just my opinion, and I may well be wrong! Don't let me stop you!). There's too much to gain by having access to the actual data at compilation time, and being able to fit it into a smaller type.
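The fixed-type wrap-around behavior the patched paragraph attributes to Julia (and notes is shared by NumPy's model of arrays with one dtype chosen from many) can be checked directly in NumPy; this is a minimal illustrative sketch, not part of the original diff:

```python
import numpy as np

# Two arrays with a fixed Int8-style element type.
a = np.array([100, 2], dtype=np.int8)
b = np.array([2, 3], dtype=np.int8)

# The product keeps the int8 dtype rather than widening,
# so 100 * 2 = 200 wraps around to 200 - 256 = -56.
c = a * b
print(c.dtype)   # int8
print(list(c))   # [-56, 6]
```

This is the behavior contrasted with APL and BQN in the text: there the single number type means no wrap-around, and narrow storage types are purely an internal optimization.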