API¶
Basics¶
- class true_north.Group(name: str | None = None)¶
Collection of benchmarks.
If name is not specified, the file name and line number of the call site will be used instead.
- add(func: Func | None = None, *, name: str | None = None, loops: int | None = None, repeats: int = 5, min_time: float = 0.2, timer: Timer = time.perf_counter) → Callable[[Func], Check]¶
Register a new benchmark function in the group.
The first registered benchmark will be used as the baseline for all others.
- Parameters:
name – if not specified, the function name will be used.
loops – how many times to run the benchmark in each repeat. If not specified, will be automatically detected to make each repeat last at least min_time seconds.
repeats – how many times to repeat the benchmark (all loops). Only the best repeat is reported, to reduce the effect of external factors on the results.
min_time – the minimum run time to target if loops is not specified.
timer – function used to get the current time.
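When loops is not given, auto-detection of this kind is typically done by increasing the loop count until one repeat lasts at least min_time. A minimal sketch of that idea (hypothetical code, not the library's actual implementation):

```python
import time


def calibrate_loops(func, min_time=0.2, timer=time.perf_counter):
    """Double the loop count until one repeat lasts at least min_time."""
    loops = 1
    while True:
        start = timer()
        for _ in range(loops):
            func()
        if timer() - start >= min_time:
            return loops
        loops *= 2  # too fast to measure reliably, try twice as many loops


# A cheap function needs many loops to fill even 10 ms.
loops = calibrate_loops(lambda: sum(range(100)), min_time=0.01)
```

Doubling keeps the calibration cost bounded: the wasted warm-up runs sum to at most one extra full repeat.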
- print(config: Config = Config(stream=sys.stdout, opcodes=False, allocations=False, histogram_lines=None)) → None¶
Run all benchmarks in the group and print their results.
- Parameters:
stream – the stream where to write all output. Default is stdout.
opcodes – count opcodes. Slow but reproducible.
allocations – track memory allocations. Slow but interesting.
- class true_north.types.Check(name: str, func: Func, loops: int | None, repeats: int, min_time: float, timer: Timer)¶
A single benchmark.
Don’t instantiate it directly; use the Group.add decorator instead.
- check_mallocs(lines: int, loops: int = 1) → MallocResult¶
Run the benchmark and trace memory allocations.
- check_opcodes(loops: int = 1, best: float = 0) → OpcodesResult¶
Run the benchmark and count executed opcodes.
- check_timing() → TimingResult¶
Run the benchmark and measure its execution time.
Results¶
- class true_north.types.BaseResult¶
- format_text() → str¶
Represent the result as a human-friendly text.
- class true_north.types.TimingResult(total_timings: list[float], each_timings: list[float])¶
The result of benchmarking a code execution time.
- property best: float¶
The best of all total timings (repeats).
- format_histogram(limit: int = 64, lines: int = 2) → str¶
Histogram of timings (repeats).
- format_text() → str¶
Represent the timing result as a human-friendly text.
- property loop_timings: list[float]¶
Execution time of each loop in a single repeat (benchmark function call).
- property stdev: float¶
Standard deviation of loops in a single repeat.
If there is only one loop in each repeat, use all repeats instead.
- property total_timings: list[float]¶
Average time per loop for each repeat (benchmark function call).
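How the properties above relate can be illustrated with plain statistics over hypothetical timing values (made-up numbers, not the library's code):

```python
import statistics

# Hypothetical average time per loop, for each of 5 repeats.
total_timings = [0.0021, 0.0020, 0.0024, 0.0022, 0.0023]
# Hypothetical per-loop times within a single repeat.
loop_timings = [0.0019, 0.0020, 0.0021, 0.0020]

best = statistics.fmean([min(total_timings)])  # best: the fastest repeat wins
best = min(total_timings)
stdev = statistics.stdev(loop_timings)  # spread of loops in a single repeat
```

Reporting the minimum rather than the mean of the repeats filters out slowdowns caused by other processes, since noise can only make a repeat slower, never faster.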
- class true_north.types.OpcodesResult(opcodes: int, lines: int, timings: list[float], best: float)¶
The result of benchmarking opcodes executed by a code.
- property durations: list[float]¶
How long it took to execute each opcode.
- format_text() → str¶
Generate a human-friendly representation of opcodes.
- property lines: int¶
Number of lines of code executed.
See lnotab_notes.txt in CPython to learn more about what is considered a line.
- property opcodes_count: int¶
Number of opcodes executed.
- property timings: list[float]¶
The time when each opcode was executed.
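Since timings holds the timestamp at which each opcode ran, per-opcode durations are presumably the differences between consecutive timestamps. A small illustration with made-up timestamps (an assumption about the semantics, not the library's code):

```python
# Made-up timestamps (seconds) at which 4 consecutive opcodes started.
timings = [0.000, 0.003, 0.004, 0.010]

# Duration of each opcode: the gap until the next timestamp.
durations = [b - a for a, b in zip(timings, timings[1:])]
# roughly [0.003, 0.001, 0.006]; compare approximately, it's floating point
```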
- class true_north.types.MallocResult(totals: list[int], allocs: list[Counter[str]])¶
The result of benchmarking memory allocations of a code.
- property allocs: list[Counter[str]]¶
Memory allocations in each file for each sample.
Each item of the list is a Counter for a single sample. The Counter holds the number of allocations in each file.
- format_text() → str¶
Generate a human-friendly representation of memory allocations.
- property total_allocs: int¶
Total memory allocations during the code execution.
- property totals: list[int]¶
Total memory used by the code on each sample.
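Aggregating such per-sample counters can be sketched with collections.Counter over made-up samples (illustrative data only, not the library's internals):

```python
from collections import Counter

# Made-up samples: allocation counts per file, for 3 tracing samples.
allocs = [
    Counter({'module.py': 5, 'helper.py': 2}),
    Counter({'module.py': 3}),
    Counter({'helper.py': 1}),
]
totals = [120, 96, 64]  # made-up total memory in use at each sample

per_file = sum(allocs, Counter())  # allocations per file over all samples
peak = max(totals)                 # peak memory across samples
```

Counter addition merges the per-sample dictionaries by summing counts, which makes per-file totals a one-liner.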