# Faster CPython Benchmark Infrastructure

🔒 ▶️ START A BENCHMARK RUN

## Results

Here are some recent and important revisions. 👉 Complete list of results.

Most recent pystats on main (22a4421)

### linux x86_64 (linux)

| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
| --- | --- | --- | --- | --- | --- |
| 2025-01-23 | python/732670d93b9b0c0ff8ad | 732670d | 1.068x ↑ 📄📈 | 1.030x ↑ 📄📈 | |
| 2025-01-23 | python/732670d93b9b0c0ff8ad | 732670d (NOGIL) | 1.075x ↓ 📄📈 | 1.108x ↓ 📄📈 | 1.127x ↓ 📄📈🧠 |
| 2025-01-23 | python/ec91e1c2762412f1408b | ec91e1c | 1.064x ↑ 📄📈 | 1.026x ↑ 📄📈 | |
| 2025-01-23 | python/ec91e1c2762412f1408b | ec91e1c (NOGIL) | 1.084x ↓ 📄📈 | 1.119x ↓ 📄📈 | 1.136x ↓ 📄📈🧠 |
| 2025-01-23 | python/327a257e6ae4ad0e3b6e | 327a257 | 1.060x ↑ 📄📈 | 1.018x ↑ 📄📈 | |
| 2025-01-23 | python/327a257e6ae4ad0e3b6e | 327a257 (NOGIL) | 1.088x ↓ 📄📈 | 1.122x ↓ 📄📈 | 1.134x ↓ 📄📈🧠 |
| 2025-01-22 | python/86c1a60d5a28cfb51f88 | 86c1a60 (NOGIL) | 1.087x ↓ 📄📈 | 1.117x ↓ 📄📈 | 1.120x ↓ 📄📈🧠 |
| 2025-01-22 | python/86c1a60d5a28cfb51f88 | 86c1a60 | 1.049x ↑ 📄📈 | 1.010x ↑ 📄📈 | |
| 2025-01-21 | python/01bcf13a1c5bfca5124c | 01bcf13 (NOGIL) | 1.097x ↓ 📄📈 | 1.132x ↓ 📄📈 | 1.128x ↓ 📄📈🧠 |
| 2025-01-21 | python/01bcf13a1c5bfca5124c | 01bcf13 | 1.048x ↑ 📄📈 | 1.005x ↑ 📄📈 | |
| 2025-01-20 | python/e65a1eb93ae35f9fbab1 | e65a1eb | 1.068x ↑ 📄📈 | 1.025x ↑ 📄📈 | |
| 2025-01-20 | python/e65a1eb93ae35f9fbab1 | e65a1eb (NOGIL) | 1.077x ↓ 📄📈 | 1.109x ↓ 📄📈 | 1.128x ↓ 📄📈🧠 |

### linux x86_64 (vultr)

| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
| --- | --- | --- | --- | --- | --- |
| 2025-01-23 | python/c05a851ac59e6fb7bd43 | c05a851 (NOGIL) | 1.058x ↓ 📄📈 | 1.088x ↓ 📄📈 | |
| 2025-01-23 | python/732670d93b9b0c0ff8ad | 732670d | 1.095x ↑ 📄📈 | 1.056x ↑ 📄📈 | |
| 2025-01-23 | python/732670d93b9b0c0ff8ad | 732670d (NOGIL) | 1.064x ↓ 📄📈 | 1.094x ↓ 📄📈 | 1.146x ↓ 📄📈🧠 |
| 2025-01-23 | colesbury/gh_129236_gc_stackpo | 1ec055b (NOGIL) | 1.048x ↓ 📄📈 | 1.078x ↓ 📄📈 | 1.010x ↑ 📄📈🧠 |
| 2025-01-23 | python/ec91e1c2762412f1408b | ec91e1c | 1.092x ↑ 📄📈 | 1.054x ↑ 📄📈 | |
| 2025-01-23 | python/ec91e1c2762412f1408b | ec91e1c (NOGIL) | 1.057x ↓ 📄📈 | 1.088x ↓ 📄📈 | 1.137x ↓ 📄📈🧠 |
| 2025-01-23 | faster-cpython/remove_most_conditio | 584015a (NOGIL) | 1.057x ↓ 📄📈 | 1.087x ↓ 📄📈 | 1.006x ↑ 📄📈🧠 |
| 2025-01-23 | python/a10f99375e7912df863c | a10f993 (NOGIL) | 1.063x ↓ 📄📈 | 1.093x ↓ 📄📈 | |
| 2025-01-23 | python/327a257e6ae4ad0e3b6e | 327a257 | 1.085x ↑ 📄📈 | 1.046x ↑ 📄📈 | |
| 2025-01-23 | python/327a257e6ae4ad0e3b6e | 327a257 (NOGIL) | 1.087x ↓ 📄📈 | 1.117x ↓ 📄📈 | 1.159x ↓ 📄📈🧠 |
| 2025-01-22 | nascheme/gh_129201_gc_mark_pr | 1b4e8c3 (NOGIL) | 1.071x ↓ 📄📈 | 1.102x ↓ 📄📈 | 1.009x ↓ 📄📈🧠 |
| 2025-01-22 | python/2ed5ee9a50454b3fce87 | 2ed5ee9 | 1.084x ↑ 📄📈 | 1.047x ↑ 📄📈 | |
| 2025-01-22 | python/2ed5ee9a50454b3fce87 | 2ed5ee9 (NOGIL) | 1.079x ↓ 📄📈 | 1.109x ↓ 📄📈 | 1.151x ↓ 📄📈🧠 |
| 2025-01-22 | mpage/ft_aa_test_1 | dc449a1 (NOGIL) | 1.079x ↓ 📄📈 | 1.109x ↓ 📄📈 | 1.002x ↓ 📄📈🧠 |
| 2025-01-22 | mpage/ft_aa_test_0 | fcbf62d (NOGIL) | 1.081x ↓ 📄📈 | 1.111x ↓ 📄📈 | 1.001x ↓ 📄📈🧠 |
| 2025-01-22 | python/24c84d816f2f2ecb76b8 | 24c84d8 (NOGIL) | 1.076x ↓ 📄📈 | 1.106x ↓ 📄📈 | |
| 2025-01-22 | colesbury/revert_gh_128914 | 68ce740 (NOGIL) | 1.056x ↓ 📄📈 | 1.086x ↓ 📄📈 | 1.025x ↑ 📄📈🧠 |
| 2025-01-22 | colesbury/revert_gh_128914 | 68ce740 | 1.089x ↑ 📄📈 | 1.051x ↑ 📄📈 | 1.005x ↑ 📄📈🧠 |
| 2025-01-22 | python/86c1a60d5a28cfb51f88 | 86c1a60 (NOGIL) | 1.084x ↓ 📄📈 | 1.113x ↓ 📄📈 | |
| 2025-01-22 | python/86c1a60d5a28cfb51f88 | 86c1a60 | 1.089x ↑ 📄📈 | 1.051x ↑ 📄📈 | 1.001x ↑ 📄📈🧠 |
| 2025-01-22 | python/767cf708449fbf13826d | 767cf70 | 1.088x ↑ 📄📈 | 1.050x ↑ 📄📈 | 1.004x ↓ 📄📈🧠 |
| 2025-01-21 | python/01bcf13a1c5bfca5124c | 01bcf13 (NOGIL) | 1.075x ↓ 📄📈 | 1.105x ↓ 📄📈 | |
| 2025-01-21 | mpage/aa_test_6 | 01bcf13 | 1.087x ↑ 📄📈 | 1.049x ↑ 📄📈 | 1.006x ↓ 📄📈🧠 |
| 2025-01-21 | python/01bcf13a1c5bfca5124c | 01bcf13 | 1.093x ↑ 📄📈 | 1.054x ↑ 📄📈 | 1.002x ↓ 📄📈🧠 |
| 2025-01-21 | mpage/aa_test_5 | 2ea0525 | 1.089x ↑ 📄📈 | 1.051x ↑ 📄📈 | 1.001x ↑ 📄📈🧠 |
| 2025-01-21 | python/29caec62ee0650493c83 | 29caec6 | 1.094x ↑ 📄📈 | 1.056x ↑ 📄📈 | |
| 2025-01-20 | python/e65a1eb93ae35f9fbab1 | e65a1eb | 1.096x ↑ 📄📈 | 1.057x ↑ 📄📈 | |
| 2025-01-20 | python/e65a1eb93ae35f9fbab1 | e65a1eb (NOGIL) | 1.081x ↓ 📄📈 | 1.110x ↓ 📄📈 | 1.004x ↑ 📄📈🧠 |
| 2025-01-20 | python/e54ac3b69edacf414998 | e54ac3b (NOGIL) | 1.083x ↓ 📄📈 | 1.113x ↓ 📄📈 | |
| 2025-01-20 | python/ab61d3f4303d14a413bc | ab61d3f (NOGIL) | 1.083x ↓ 📄📈 | 1.112x ↓ 📄📈 | 1.031x ↓ 📄📈🧠 |
| 2025-01-20 | python/0a6412f9cc9e694e7629 | 0a6412f (NOGIL) | 1.052x ↓ 📄📈 | 1.083x ↓ 📄📈 | |
| 2025-01-17 | python/3829104ab412a47bf3f3 | 3829104 (NOGIL) | 1.060x ↓ 📄📈 | 1.090x ↓ 📄📈 | |
| 2025-01-21 | Yhg1s/bb495b05f9c1a3d5224b | bb495b0 (NOGIL) | 1.166x ↓ 📄📈 | 1.193x ↓ 📄📈 | 1.001x ↓ 📄📈🧠 |

\* indicates that the exact same version of pyperformance was not used.

## Longitudinal speed improvement

Improvement of the geometric mean of key merged benchmarks, computed with `pyperf compare`. The results have a resolution of 0.01 (1%).
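The headline numbers in the tables above (e.g. 1.068x ↑) are this kind of aggregate. A minimal sketch of the computation, using invented benchmark names and timings (the real values come from pyperformance result files):

```python
from statistics import geometric_mean

# Hypothetical mean timings (seconds) for three benchmarks, baseline vs. candidate build.
baseline = {"nbody": 0.100, "richards": 0.050, "pickle": 0.020}
candidate = {"nbody": 0.090, "richards": 0.048, "pickle": 0.019}

# Per-benchmark speedup: how many times faster the candidate is.
speedups = [baseline[b] / candidate[b] for b in baseline]

# The summary figure is the geometric mean, reported at a 0.01 resolution.
overall = geometric_mean(speedups)
print(f"{round(overall, 2)}x faster")
```

The geometric mean is used rather than the arithmetic mean because speedups are ratios; it keeps a 2x gain on one benchmark from drowning out regressions elsewhere.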

Configuration speed improvement

## Documentation

### Running benchmarks from the GitHub web UI

Visit the 🔒 benchmark action and click the "Run Workflow" button.

The available parameters are:

- `fork`: The fork of CPython to benchmark. If benchmarking a pull request, this would normally be your GitHub username.
- `ref`: The branch, tag, or commit SHA to benchmark. If a SHA, it must be the full SHA, since finding it by a prefix is not supported.
- `machine`: The machine to run on. One of `linux-amd64` (default), `windows-amd64`, `darwin-arm64`, or `all`.
- `benchmark_base`: If checked, the base of the selected branch will also be benchmarked. The base is determined by running `git merge-base upstream/main $ref`.
- `pystats`: If checked, collect the pystats from running the benchmarks.
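The `git merge-base` step can be demonstrated in a throwaway repository. In this sketch the branch layout and commit messages are invented for illustration; in the real workflow the equivalent command is `git merge-base upstream/main $ref`:

```shell
# Build a tiny history: a shared commit, a feature branch, and main moving on.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email ci@example.com
git config user.name ci
git commit -q --allow-empty -m "shared history"   # the eventual merge base
base=$(git rev-parse HEAD)
git switch -q -c feature
git commit -q --allow-empty -m "feature work"     # the ref being benchmarked
git switch -q main
git commit -q --allow-empty -m "main moves on"    # main advances independently
# The base that also gets benchmarked is the newest commit shared by both:
test "$(git merge-base main feature)" = "$base" && echo "merge base found"
```

Benchmarking this base commit alongside the branch is what makes the "vs. base" column meaningful: it isolates the branch's effect from unrelated changes on main.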

To watch the progress of the benchmark, select it from the 🔒 benchmark action page. It may be canceled from there as well. To show only your benchmark workflows, select your GitHub ID from the "Actor" dropdown.

When the benchmarking is complete, the results are published to this repository and will appear in the complete table. Each set of benchmarks will have:

- The raw `.json` results from pyperformance.
- Comparisons against important reference releases, as well as the merge base of the branch if `benchmark_base` was selected. These include:
  - A markdown table produced by `pyperf compare_to`.
  - A set of "violin" plots showing the distribution of results for each benchmark.
The most convenient way to get results locally is to clone this repo and `git pull` from it.

### Running benchmarks from the GitHub CLI

To automate benchmarking runs, it may be more convenient to use the GitHub CLI. Once you have `gh` installed and configured, clone this repository and run, from inside it:

```shell
gh workflow run benchmark.yml -f fork=me -f ref=my_branch
```

Any of the parameters described above can be passed on the command line using the `-f key=value` syntax.
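For scripting many runs, the invocation can also be built programmatically and handed to `subprocess`. A minimal sketch; the fork and branch names are placeholders, and actually launching the workflow requires an authenticated `gh`:

```python
import shlex

def benchmark_cmd(fork, ref, machine="linux-amd64", benchmark_base=False):
    """Build the `gh workflow run` argument list for one benchmark run."""
    args = ["gh", "workflow", "run", "benchmark.yml",
            "-f", f"fork={fork}", "-f", f"ref={ref}", "-f", f"machine={machine}"]
    if benchmark_base:
        args += ["-f", "benchmark_base=true"]
    return args

# To actually launch: subprocess.run(benchmark_cmd("me", "my_branch"), check=True)
print(shlex.join(benchmark_cmd("me", "my_branch")))
# prints: gh workflow run benchmark.yml -f fork=me -f ref=my_branch -f machine=linux-amd64
```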

### Collecting Linux perf profiling data

To collect Linux `perf` sampling-profile data for a benchmarking run, run the `_benchmark` action and check the `perf` checkbox. Follow this with a run of the `_generate` action to regenerate the plots.

## License

This repo is licensed under the BSD 3-Clause License, as found in the LICENSE file.