# Charting Guide

How to generate tables and speedup charts from benchmark results.
poly-bench generates SVG charts directly from your `.bench` file. Charts are produced by calling directives inside a suite-level `after { }` block. This guide walks through every chart type with real examples and the most useful parameter combinations.
| Chart | Best for |
|---|---|
| `drawTable` | Comparing all languages side-by-side across multiple benchmarks |
| `drawSpeedupChart` | Showing how performance scales as input size grows |
| `drawSpeedupChart` | Highlighting how much faster one language is relative to a baseline |
| `drawTable` | Embedding a results summary in a README or report |
All four require `use std::charting` at the top of the file and must be called inside an `after { }` block.
## drawTable

The simplest possible table — just a title and output file.

```
use std::charting

suite hashBench {
    # ... setup and benchmarks ...

    after {
        charting.drawTable(
            title: "Hash Performance",
            output: "hash-bar.svg"
        )
    }
}
```

Sort by speedup descending and only show benchmarks where the fastest language is at least 1.5× faster than the slowest.
```
after {
    charting.drawTable(
        title: "Hash Performance — Notable Differences",
        output: "hash-bar-filtered.svg",
        sortBy: "speedup",
        sortOrder: "desc",
        minSpeedup: 1.5,
        limit: 10
    )
}
```

Show memory allocation data alongside timing. Requires `memory: true` on the suite or individual benchmarks.
```
suite allocBench {
    memory: true

    # ... benchmarks ...

    after {
        charting.drawTable(
            title: "Allocation Comparison",
            output: "alloc-bar.svg",
            showMemory: true,
            showTotalTime: true
        )
    }
}
```

Show 95% confidence interval error bars to communicate measurement uncertainty.
```
suite cryptoBench {
    count: 10

    after {
        charting.drawTable(
            title: "Keccak256 — 95% CI",
            output: "keccak-ci.svg",
            showErrorBars: true,
            ciLevel: 95,
            errorBarOpacity: 0.8,
            errorBarThickness: 1.5
        )
    }
}
```

Fit a complexity curve to the bars. Useful when benchmarks encode input sizes in their names (e.g. `n100`, `n200`).
```
after {
    charting.drawTable(
        title: "Sort Performance with Regression",
        output: "sort-regression.svg",
        xlabel: "Array Size",
        showRegression: true,
        regressionModel: "nlogn",
        showRSquared: true,
        showEquation: true,
        regressionStyle: "dashed"
    )
}
```

Available `regressionModel` values: `"auto"`, `"constant"`, `"log"`, `"linear"`, `"nlogn"`, `"quadratic"`, `"cubic"`. Use `"auto"` to let poly-bench pick the best fit.
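One plausible way an `"auto"` mode can pick a model is to fit each candidate and keep the one with the highest R². The sketch below is hypothetical (it is not poly-bench's actual implementation); only the model names come from the list above. It fits single-coefficient models y ≈ a·f(n) by least squares:

```python
import math

# Candidate complexity models, named after the regressionModel options.
MODELS = {
    "constant":  lambda n: 1.0,
    "log":       lambda n: math.log(n),
    "linear":    lambda n: float(n),
    "nlogn":     lambda n: n * math.log(n),
    "quadratic": lambda n: float(n) ** 2,
    "cubic":     lambda n: float(n) ** 3,
}

def best_fit(sizes, times):
    """Return (model_name, r_squared) for the best-fitting y = a*f(n)."""
    mean_y = sum(times) / len(times)
    ss_tot = sum((y - mean_y) ** 2 for y in times)
    best = None
    for name, f in MODELS.items():
        xs = [f(n) for n in sizes]
        # Least-squares coefficient for a one-parameter model through the origin.
        a = sum(x * y for x, y in zip(xs, times)) / sum(x * x for x in xs)
        ss_res = sum((y - a * x) ** 2 for x, y in zip(xs, times))
        r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
        if best is None or r2 > best[1]:
            best = (name, r2)
    return best

sizes = [100, 500, 1000, 5000]
times = [n * math.log(n) for n in sizes]  # perfectly n·log n shaped data
print(best_fit(sizes, times)[0])  # → nlogn
```

On real, noisy measurements several models can score similarly, which is why an explicit `regressionModel` is useful when you already know the expected complexity.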
When results span several orders of magnitude, a log scale keeps small values from becoming invisible.
```
after {
    charting.drawTable(
        title: "Mixed Complexity Benchmarks",
        output: "mixed-log.svg",
        yScale: "log",
        showGrid: true,
        gridOpacity: 0.15
    )
}
```

Compact mode reduces padding and panel sizes — useful for README embeds or narrow viewports.
```
after {
    charting.drawTable(
        title: "Quick Summary",
        output: "summary-compact.svg",
        compact: true,
        showConfig: false,
        showDistribution: false,
        width: 800,
        height: 400
    )
}
```

Generate a focused chart from a subset of the suite's benchmarks.
```
after {
    # Only show the two most interesting benchmarks
    charting.drawTable(
        title: "Key Results",
        output: "key-results.svg",
        includeBenchmarks: ["hashShort", "hashLong"]
    )

    # Show everything except the trivial warmup bench
    charting.drawTable(
        title: "Full Suite",
        output: "full-suite.svg",
        excludeBenchmarks: ["trivialAdd"]
    )
}
```

Only show benchmarks where a specific language wins.
```
after {
    charting.drawTable(
        title: "Benchmarks Go Wins",
        output: "go-wins.svg",
        filterWinner: "go"
    )

    charting.drawTable(
        title: "Benchmarks Rust Wins",
        output: "rust-wins.svg",
        filterWinner: "rust"
    )
}
```

## drawSpeedupChart

Speedup charts are designed for scaling benchmarks where benchmark names encode a numeric size. poly-bench extracts the number from the name (e.g. `n100` → 100, `size1024` → 1024) and uses it as the x-axis value.
```
use std::charting

suite sortN {
    mode: "auto"
    targetTime: 500ms

    # ... setup ...

    fixture s100 { hex: @file("fixtures/sort/sort_100.hex") }
    fixture s500 { hex: @file("fixtures/sort/sort_500.hex") }
    fixture s1000 { hex: @file("fixtures/sort/sort_1000.hex") }

    bench n100 { go: sortGo(s100) ts: sortTs(s100) rust: sort_rust(&s100) }
    bench n500 { go: sortGo(s500) ts: sortTs(s500) rust: sort_rust(&s500) }
    bench n1000 { go: sortGo(s1000) ts: sortTs(s1000) rust: sort_rust(&s1000) }

    after {
        charting.drawSpeedupChart(
            title: "Sort Performance — O(n log n)",
            output: "sort-line.svg",
            xlabel: "Array Size (n elements)"
        )
    }
}
```

Shade ±1 standard deviation around each line to show measurement spread. Requires `count: 3` or higher.
```
suite matmulN {
    count: 5

    # ... benchmarks ...

    after {
        charting.drawSpeedupChart(
            title: "Matrix Multiply — O(n³)",
            output: "matmul-line.svg",
            xlabel: "Matrix Size (n×n)"
        )
    }
}
```

Overlay a fitted complexity curve to confirm theoretical complexity.
```
after {
    charting.drawSpeedupChart(
        title: "Random Stats — O(n) Confirmed",
        output: "random-line.svg",
        xlabel: "Array Size",
        showRegression: true,
        regressionModel: "linear",
        showRSquared: true,
        showRegressionBand: true,
        regressionBandOpacity: 0.12
    )
}
```

Flip the y-axis to show operations per second instead of time per operation.
```
after {
    charting.drawSpeedupChart(
        title: "Throughput Scaling",
        output: "throughput-line.svg",
        xlabel: "Payload Size",
        chartMode: "throughput",
        timeUnit: "s"
    )
}
```

Use a specific benchmark as the 1× reference point, and sort by speedup.
```
after {
    charting.drawSpeedupChart(
        title: "Speedup vs hashShort Baseline",
        output: "speedup-sorted.svg",
        baselineBenchmark: "hashShort",
        sortBy: "speedup",
        sortOrder: "desc",
        showGrid: true,
        gridOpacity: 0.15,
        legendPosition: "top-right"
    )
}
```

```
after {
    charting.drawSpeedupChart(
        title: "Significant Speedups Only (>2×)",
        output: "speedup-significant.svg",
        minSpeedup: 2.0,
        sortBy: "speedup",
        sortOrder: "desc"
    )
}
```

## drawTable

Generates an SVG table — useful for embedding in GitHub READMEs, reports, or CI artifacts.
```
after {
    charting.drawTable(
        title: "Benchmark Results",
        output: "results-table.svg"
    )
}
```

```
after {
    charting.drawTable(
        title: "Keccak256 Suite — Full Results",
        output: "keccak-table.svg",
        showStats: true,
        showConfig: true,
        showWinCounts: true,
        showGeoMean: true,
        sortBy: "speedup",
        sortOrder: "desc",
        timeUnit: "auto",
        precision: 2
    )
}
```

```
after {
    charting.drawTable(
        title: "Results",
        output: "ci-table.svg",
        compact: true,
        showConfig: false,
        showStats: false,
        width: 700
    )
}
```

You can call any number of chart directives in a single `after` block. Each produces a separate SVG file.
```
use std::charting

suite sortN {
    description: "O(n log n) sort — stdlib sort on int32 array"
    warmup: 50
    compare: true
    baseline: "go"
    mode: "auto"
    targetTime: 500ms
    count: 3
    memory: true

    # ... setup, fixtures, benchmarks ...

    after {
        charting.drawSpeedupChart(
            title: "Sort Performance — O(n log n)",
            description: "Scaling behavior across Go, TypeScript, and Rust",
            output: "sort-line.svg",
            xlabel: "Array Size (n elements)",
            showRegression: true,
            regressionModel: "nlogn",
            showRSquared: true
        )

        charting.drawTable(
            title: "Sort Comparison",
            description: "Grouped bars — all sizes",
            output: "sort-bar.svg",
            xlabel: "Array Size",
            sortBy: "speedup",
            sortOrder: "desc",
            showMemory: true,
            showErrorBars: true,
            ciLevel: 95
        )

        charting.drawSpeedupChart(
            title: "Speedup vs Go",
            output: "sort-speedup.svg",
            sortBy: "speedup",
            sortOrder: "desc"
        )

        charting.drawTable(
            title: "Sort Results Summary",
            output: "sort-table.svg",
            showStats: true,
            showGeoMean: true,
            compact: false
        )
    }
}
```

## The --output Flag

Chart files are written to the directory specified by `--output` on the CLI. Without it, charts are not generated even if `after { }` directives are present.
```
# Charts saved to results/
poly-bench run benchmarks/sort.bench --output results/

# Charts saved alongside the bench file
poly-bench run benchmarks/sort.bench --output benchmarks/out/

# Run without saving charts (console output only)
poly-bench run benchmarks/sort.bench
```

The `output` parameter inside a chart directive sets the filename within that directory:
```
after {
    charting.drawTable(
        title: "Results",
        output: "sort-bar.svg"    # → results/sort-bar.svg
    )
    charting.drawSpeedupChart(
        title: "Scaling",
        output: "sort-line.svg"   # → results/sort-line.svg
    )
}
```

If `output` is omitted from a directive, poly-bench generates a filename from the title (lowercased, spaces replaced with hyphens).
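The fallback naming rule stated above can be sketched as a simple slug transform. This is an illustration of the documented rule, not poly-bench's code; how it handles punctuation beyond spaces is an assumption left out here:

```python
def default_output(title):
    """Derive a chart's SVG filename from its title:
    lowercase the title and replace spaces with hyphens."""
    return title.lower().replace(" ", "-") + ".svg"

print(default_output("Sort Results Summary"))  # → sort-results-summary.svg
```

Titles containing characters that are awkward in filenames (e.g. "O(n log n)") are a good reason to set `output` explicitly.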
See also:

- `after { }` syntax
- `--output` and `--report` flags