From 3dcd003bea2356e15cafc267d775eaa188bf8f46 Mon Sep 17 00:00:00 2001
From: Marshall Lochbaum
Date: Wed, 16 Mar 2022 10:19:45 -0400
Subject: Include a perf measurement of the markdown generator

---
 docs/implementation/kclaims.html | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

(limited to 'docs/implementation')

diff --git a/docs/implementation/kclaims.html b/docs/implementation/kclaims.html
index 8627b228..837ac1c2 100644
--- a/docs/implementation/kclaims.html
+++ b/docs/implementation/kclaims.html
@@ -51,16 +51,26 @@        0.557255985 seconds time elapsed
-Here's the BQN call that builds CBQN's object code sources:
+Here are the BQN calls that build CBQN's object code sources, and this website:
 Performance counter stats for './genRuntime /home/marshall/BQN/':
 
-       241,224,322      cycles:u
-         5,452,372      icache_16b.ifdata_stall:u
-           829,146      cache-misses:u
-         6,954,143      L1-dcache-load-misses:u
-         1,291,804      L1-icache-load-misses:u
+       232,456,331      cycles:u
+         4,482,531      icache_16b.ifdata_stall:u
+           707,909      cache-misses:u
+         5,058,125      L1-dcache-load-misses:u
+         1,315,281      L1-icache-load-misses:u
 
-       0.098228740 seconds time elapsed
+       0.103811282 seconds time elapsed
+
+ Performance counter stats for './gendocs':
+
+     5,633,327,936      cycles:u
+       494,293,472      icache_16b.ifdata_stall:u
+         8,755,069      cache-misses:u
+        37,565,924      L1-dcache-load-misses:u
+       265,985,526      L1-icache-load-misses:u
+
+       2.138414849 seconds time elapsed
 

And the Python-based font tool I use to build font samples for this site:

 Performance counter stats for 'pyftsubset […more stuff]':
@@ -74,15 +84,17 @@
        0.215698059 seconds time elapsed
 
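For reference, listings like the ones above come from perf stat with an explicit event list. A sketch of the invocation — the event names are copied from the output above, the ./gendocs path is the one used in these measurements, and icache_16b.ifdata_stall is an Intel-specific event that may not exist on other CPUs:

```shell
# Sketch: collect the same counters for the website generator.
# Event names are taken from the listings above; whether
# icache_16b.ifdata_stall is available depends on the CPU model.
perf stat -e cycles:u,icache_16b.ifdata_stall:u,cache-misses:u,L1-dcache-load-misses:u,L1-icache-load-misses:u \
  ./gendocs
```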

Dividing the stall number by total cycles gives us the percentage of program time that can be attributed to L1 instruction misses.

-↗️
-    "J"‿"BQN"‿"Python" ≍˘ 100 × 56‿5.4‿25 ÷ 1_457‿241‿499
+↗️
+    l ← "J"‿"BQN"‿"BQN"‿"Python"
+    l ≍˘ 100 × 56‿4.5‿494‿25 ÷ 1_457‿232‿5_633‿499
 ┌─                            
 ╵ "J"      3.843514070006863  
-  "BQN"    2.240663900414938  
+  "BQN"    1.939655172413793  
+  "BQN"    8.76974968933073   
   "Python" 5.01002004008016   
                              ┘
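The same arithmetic can be checked outside BQN; a minimal Python sketch using the rounded counter values (in millions) quoted in the listings above:

```python
# Recompute the stall percentages shown in the table above.
# Counter values are in millions, rounded as in the BQN expression.
counters = [
    ("J",                56,  1_457),   # icache stalls, total cycles
    ("BQN (genRuntime)", 4.5,   232),
    ("BQN (gendocs)",    494, 5_633),
    ("Python",           25,    499),
]
for name, stalls, cycles in counters:
    print(f"{name:17} {100 * stalls / cycles:.2f}%")
```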
 
-So, roughly 4%, 2%, and 5%. The cache miss counts are also broadly in line with these numbers. Note that full cache misses are pretty rare, so that most misses just hit L2 or L3 and don't suffer a large penalty. Also note that instruction cache misses are mostly lower than data misses, as expected.
-
-Don't get me wrong, I'd love to improve performance even by 2%. But it's not exactly world domination, is it? And it doesn't matter how cache-friendly K is, that's the absolute limit.
+So, roughly 4%, 2 to 9%, and 5%. The cache miss counts are also broadly in line with these numbers. Note that full cache misses are pretty rare, so that most misses just hit L2 or L3 and don't suffer a large penalty. Also note that instruction cache misses are mostly lower than data misses, as expected.
+
+Don't get me wrong, I'd love to improve performance even by 2%. But it's not exactly world domination, is it? The perf results are an upper bound for how much these programs could be sped up with better treatment of the instruction cache. If K is faster by more than that, it's because of other optimizations.

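The upper-bound claim follows from Amdahl-style reasoning: if a fraction f of the cycles is stalled on instruction fetch, eliminating every stall still leaves 1 − f of the cycles, so the speedup cannot exceed 1/(1 − f). A small sketch using the stall fractions measured above:

```python
# Upper bound on speedup from eliminating all icache stalls:
# remaining work is (1 - f) of the cycles, so speedup <= 1 / (1 - f).
for name, f in [("J", 0.0384), ("BQN (genRuntime)", 0.0194),
                ("BQN (gendocs)", 0.0877), ("Python", 0.0501)]:
    print(f"{name:17} at most {1 / (1 - f):.3f}x faster")
```

So even the worst case here, gendocs, could gain at most about 10% from a perfectly warm instruction cache.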
For comparison, here's ngn/k (which does aim for a small executable) running one of its unit tests—test 19 in the a20/ folder, chosen because it's the longest-running of those tests.

 Performance counter stats for '../k 19.k':
 
@@ -94,4 +106,4 @@
 
        1.245378356 seconds time elapsed
 
-The stalls are less than 1% here, so maybe the smaller executable is paying off in some way. I can't be sure, because the programs being run are very different: 19.k is 10 lines while the others are hundreds of lines long. But I don't have a longer K program handy to test with (and you could always argue the result doesn't apply to Whitney's K anyway). Again, it doesn't matter much: the point is that the absolute most the other interpreters could gain from being more L1-friendly is about 5% on those fairly representative programs.
+The stalls are less than 1% here, so maybe the smaller executable is paying off in some way. I can't be sure, because the programs being run are very different: 19.k is 10 lines while the others are hundreds of lines long. But I don't have a longer K program handy to test with (and you could always argue the result doesn't apply to Whitney's K anyway). Again, it doesn't matter much: the point is that the absolute most the other interpreters could gain from being more L1-friendly is about 10% on those fairly representative programs.

-- cgit v1.2.3