From 8389e763344637c01d0d7161091e5f2cd9b14251 Mon Sep 17 00:00:00 2001
From: Marshall Lochbaum
Date: Mon, 27 Jun 2022 22:00:55 -0400
Subject: Yet still more editing

---
 docs/implementation/compile/dynamic.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'docs/implementation/compile/dynamic.html')

diff --git a/docs/implementation/compile/dynamic.html b/docs/implementation/compile/dynamic.html
index 62c8ec2b..30a84d38 100644
--- a/docs/implementation/compile/dynamic.html
+++ b/docs/implementation/compile/dynamic.html
@@ -51,7 +51,7 @@

A simple and widely used strategy to reduce slowdown due to dynamic compilation is to compile blocks in a separate thread from the one that runs them. The new code needs to be added in a thread-safe manner, which is not hard as the set of optimized implementations is a small lookup table of some sort with only one writer.
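
As a very rough sketch of that single-writer setup (OptEntry, opt_table, and the fixed table size are illustrative assumptions, not an actual implementation), the compiling thread builds an entry and publishes it with one release store, while interpreter threads only ever load:

    #include <stdatomic.h>
    #include <stddef.h>

    /* Hypothetical entry mapping a bytecode block to an optimized implementation. */
    typedef struct { void *block; void (*run)(void *state); } OptEntry;

    #define OPT_TABLE_SIZE 256
    /* All slots start NULL; only the single compiler thread ever writes them. */
    static _Atomic(OptEntry *) opt_table[OPT_TABLE_SIZE];

    /* Compiler thread: publish a fully built entry with one release store,
       so the entry's fields become visible before the pointer does. */
    static void publish_optimized(size_t slot, OptEntry *entry) {
      atomic_store_explicit(&opt_table[slot], entry, memory_order_release);
    }

    /* Interpreter threads: a cheap acquire load on each block entry; NULL means
       "keep interpreting", non-NULL means an optimized version is available. */
    static OptEntry *lookup_optimized(size_t slot) {
      return atomic_load_explicit(&opt_table[slot], memory_order_acquire);
    }

With only one writer, no lock is needed: a reader either sees NULL and stays on the interpreted path, or sees a fully constructed entry.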

If the implementation is able to make use of all available threads (possible when working with large arrays), then it's still important to minimize compilation time as that thread could be put to better use. If there are idle threads then the only costs of compilation overhead are minor: the optimized code can't be put to use as quickly, and there is more power draw and possible downclocking.

Anticipation

-The hot path strategy depends on targetting code for optimization based on history. Anticipation would identify in advance what code will take longer to run, and allocate a fraction of the time taken for optimizing that code. This is most useful for code that runs a small number of times on large arrays. An example where anticipation would be very important is for a programmer trying experimental one-off queries on a large dataset.

+The hot path strategy depends on targeting code for optimization based on history. Anticipation would identify in advance what code will take longer to run, and allocate a fraction of the time taken for optimizing that code. This is most useful for code that runs a small number of times on large arrays. An example where anticipation would be very important is for a programmer trying experimental one-off queries on a large dataset.
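
A hedged sketch of what identifying slow code in advance might look like, assuming size estimates for upcoming instructions are available from shape information (InstrInfo and the per-element cost constant are made up for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical pre-decoded instruction: opcode plus an estimated result
       size derived from argument shapes. */
    typedef struct { uint32_t op; size_t est_elements; } InstrInfo;

    /* Scan ahead over upcoming instructions and form a crude lower bound on
       the time they will take; 2 ns/element is purely illustrative. */
    static double estimate_upcoming_ns(const InstrInfo *code, size_t len) {
      double total = 0.0;
      for (size_t i = 0; i < len; i++)
        total += 2.0 * (double)code[i].est_elements;
      return total;
    }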

The end result seems similar to that obtained by thunks as discussed at Dyalog '18 (video, slides). A thunk runs as part of a primitive, detecting that computing the result will be slow and outputting a deferred computation instead. Anticipation is more powerful because it can scan ahead in the bytecode instead of deciding as primitives are called whether or not to expand the thunk.
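
For concreteness, a minimal sketch of such a deferred computation, with an assumed Value type and field layout rather than any actual implementation:

    /* A thunk in the sense of the Dyalog '18 talk: instead of computing a slow
       result immediately, the primitive returns a record of how to compute it,
       which is only forced when the value is actually needed. */
    typedef struct Value Value;

    typedef struct {
      Value *(*compute)(Value *w, Value *x);  /* deferred primitive */
      Value *w, *x;                           /* saved arguments */
      Value *result;                          /* NULL until forced */
    } Thunk;

    /* Forcing runs the deferred computation at most once. */
    static Value *thunk_force(Thunk *t) {
      if (!t->result) t->result = t->compute(t->w, t->x);
      return t->result;
    }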

Anticipation attempts to improve program speed while bounding the added overhead. For example, it might be constrained to add no more than 5% to the time to first program output, relative to base-level interpretation. The idea is to exit normal interpretation as soon as a large enough lower bound is established on this time, for example if an operation would create a large array. At this point it begins analysis, which will involve at least some shape propagation and probably increase the lower bound and optimization budget.
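
A small sketch of that budget rule, where the 5% figure is the example bound above and the function name is made up; the lower bound is whatever base-level interpretation has established so far, e.g. by noticing an operation that would create a large array:

    /* Time allowed for analysis and compilation is a fixed fraction of the
       current lower bound (in ns) on base-level interpretation time. As shape
       propagation raises the lower bound, the budget grows with it. */
    static const double OVERHEAD_FRACTION = 0.05;

    static double optimization_budget_ns(double base_time_lower_bound_ns) {
      return OVERHEAD_FRACTION * base_time_lower_bound_ns;
    }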

Optimization level can be gated based on the ratio of expected time to code length (which presumably controls cost of optimization). But optimization doesn't need to be performed all at once: upcoming code should be run as soon as it can be optimized at an appropriate level, in order to have more information available for later operations. Optimization might include primitive combinations or intermediate data formats, so it's important to check how the results will be used before running expensive code.
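
A possible shape for that gating, with entirely made-up thresholds that would need tuning against measured compilation cost per instruction:

    #include <stddef.h>

    typedef enum { OPT_NONE, OPT_LIGHT, OPT_FULL } OptLevel;

    /* Gate the optimization level on expected running time divided by code
       length, the latter standing in for the cost of optimizing the code. */
    static OptLevel choose_opt_level(double expected_ns, size_t code_len) {
      double ratio = expected_ns / (double)(code_len ? code_len : 1);
      if (ratio > 1.0e6) return OPT_FULL;   /* long-running relative to its size */
      if (ratio > 1.0e4) return OPT_LIGHT;
      return OPT_NONE;
    }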

-- cgit v1.2.3