| author | Marshall Lochbaum <mwlochbaum@gmail.com> | 2023-03-11 20:12:34 -0500 |
|---|---|---|
| committer | Marshall Lochbaum <mwlochbaum@gmail.com> | 2023-03-11 20:12:34 -0500 |
| commit | 3577976fda09e50a2efb1ea40afe9ea3a8c08f82 (patch) | |
| tree | b42824ed31a45a4b8e67666de875bf3b183587cb /docs/implementation/bootbench.html | |
| parent | c5a716dedd583bf758c26e30b1f88d68f7e02b55 (diff) | |
Typo
Diffstat (limited to 'docs/implementation/bootbench.html')
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | docs/implementation/bootbench.html | 2 |

1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/implementation/bootbench.html b/docs/implementation/bootbench.html
index 924e3463..b2790e94 100644
--- a/docs/implementation/bootbench.html
+++ b/docs/implementation/bootbench.html
@@ -54,4 +54,4 @@
 <p>I think it's a bit of both, and that a full C compiler would fall between the Java timing of 567μs and the 1133μs I calculated by scaling proportionately to BQN. Probably more like 700-800μs? That's a wild guess.</p>
 <p>Java faster than C, can that happen? If you give HotSpot a few seconds to examine and optimize the program, sure. Note that Java gets not only runtime information from compiling but from compiling on this specific source: it sees that there aren't any namespaces or 2-modifiers or strands and can try to make the checks for these as fast as possible. Java compiled ahead of time with GraalVM <a href="https://www.graalvm.org/native-image/">Native Image</a> is a good bit slower than the OpenJDK run. General compilation is also fairly allocation-heavy and Java's allocator and GC are very advanced. The C compiler uses BQN's allocator, which is a lot faster than malloc, and boot2 doesn't need that many allocations, but it still spends at least 20% of time in memory management, freeing memory particularly. Generational GC makes freeing very cheap most of the time.</p>
 <p>On the other side, why would an array compiler scale worse than a scalar one? By default, an array compiler pays for syntax even in regions where it isn't used. This can sometimes be mitigated by working on an extracted portion of the source, such as pulling out the contents of headers for header processing. The extraction is still there, so this reduces but doesn't eliminate the cost on non-header parts of the code. And the constant cost for small files is still there. In contrast, a scalar compiler that uses switch statements to decide what to do at a given point won't pay that much for added syntax. The cost of a switch statement is sub-linear in the number of cases: logarithmic for a decision tree and constant for a jump table.</p>
-<p>Array-based compiling does have its advantages still. Once all that syntax is supported the cost depends very little on the contents of the input file. This is particularly true for blocks, which need to have some metadata output for each one. A scalar compiler creates this metadata one block at a time, which leads to a lot of branching, while the BQN compiler creates it all at once then splits into blocks with Group (<code><span class='Function'>⊔</span></code>), a faster method overall. An informal test with <a href="https://github.com/mlochbaum/Singeli/blob/master/singeli.bqn">singeli.bqn</a>, which many blocks, showed Java's advantage at about 15%, lower than the 30% seen above.</p>
+<p>Array-based compiling does have its advantages still. Once all that syntax is supported the cost depends very little on the contents of the input file. This is particularly true for blocks, which need to have some metadata output for each one. A scalar compiler creates this metadata one block at a time, which leads to a lot of branching, while the BQN compiler creates it all at once then splits into blocks with Group (<code><span class='Function'>⊔</span></code>), a faster method overall. An informal test with <a href="https://github.com/mlochbaum/Singeli/blob/master/singeli.bqn">singeli.bqn</a>, which has many blocks, showed Java's advantage at about 15%, lower than the 30% seen above.</p>
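The corrected sentence sits in a paragraph about emitting per-block metadata all at once and splitting it with Group (⊔) rather than looping block by block. As a rough illustration only, not code from the actual compiler, here is a minimal BQN sketch of that pattern; the names blk and md are hypothetical:

```
# Hypothetical inputs: the block index of each metadata entry, and the entries themselves
blk ← 0‿0‿1‿1‿1‿2
md  ← "aabbbc"
blk ⊔ md   # one Group call splits all entries by block: ⟨ "aa" "bbb" "c" ⟩
```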
