Diffstat (limited to 'implementation')
-rw-r--r--  implementation/codfns.md  2
1 file changed, 2 insertions, 0 deletions
diff --git a/implementation/codfns.md b/implementation/codfns.md
index 42523c49..7af281ee 100644
--- a/implementation/codfns.md
+++ b/implementation/codfns.md
@@ -30,6 +30,8 @@ Aaron advocates the almost complete separation of code from comments (thesis) in
In short: **no**, I don't think it makes sense to use an array style for a compiler without a good reason. BQN uses it so it can self-host while maintaining good performance; Co-dfns uses it to prove it's possible to implement the core of a compiler with low parallel asymptotic complexity. It could also make a fun hobby project, although it's very draining. If the goal is to produce a working compiler then my opinion is that using the array style will take longer and require more skill, and the resulting compiler will be slower than a traditional one in a low-level language for practical tasks. Improvements in methodology could turn this around, but I'm pessimistic.
+It's important to note that my judgment here applies specifically to implementing compilers. Many people already apply APL successfully to problems in finance, NumPy to problems in science, TensorFlow to machine learning, and so on. Whatever their other qualities, these programs can be developed quickly and have competitive performance. Other problems, such as sparse graph algorithms, are unlikely to be approachable with array programming (and the [P-complete](https://en.wikipedia.org/wiki/P-complete) problems are thought not to be efficiently parallelizable, which would rule out any array solution in the sense used here). Compiling seems to occupy an interesting spot near the fuzzy boundary of that useful-array region. Is it more inside, or outside?
+
### Ease of development
It needs to be said: Aaron is easily one of the most talented programmers I know, and I have something of a knack for arrays myself. At present, building an array compiler requires putting together array operations in new and complicated ways, often with nothing but intuition to hint at which ones to use. It is much harder than making a compiler the normal way. However, there is some reason for hope in the [history](https://en.wikipedia.org/wiki/History_of_compiler_construction) of traditional compilers. It took a team led by John Backus two and a half years to produce the first FORTRAN compiler, and they gave him a Turing award for it! Between the 1950s and 1970s, developments like the LR parser brought compilers from being a challenge for the greatest minds to something accessible to a typical programmer with some effort. I don't believe the change will be so dramatic for array-based compilers, because many advantages in languages and tooling—keep in mind the FORTRAN implementers used assembly—are shared with ordinary programming. But Aaron and I have already discovered some concepts like tree manipulation and depth-based reordering that make it easier to think about compilation, and there are certainly more to be found.