NAME
App::BencherUtils - Utilities related to bencher
VERSION
This document describes version 0.245 of App::BencherUtils (from Perl distribution App-BencherUtils), released on 2022-08-24.
SYNOPSIS
DESCRIPTION
This distribution includes several utilities related to Bencher; each is described in the FUNCTIONS section below.
FUNCTIONS
bencher_code
Usage:
bencher_code(%args) -> [$status_code, $reason, $payload, \%result_meta]
Accept a list of code snippets and benchmark them.
% bencher-code 'code1' 'code2'
is basically a shortcut for creating a scenario like this:
    {
        participants => [
            {code_template=>'code1'},
            {code_template=>'code2'},
        ],
    }
and running that scenario with bencher.
This function is not exported.
Arguments ('*' denotes required arguments):
codes* => array[str]
precision => float
startup => bool (default: 0)
Use code_startup mode instead of normal benchmark.
with_process_size => bool
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
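As an illustration, here is a minimal sketch of calling this function directly from Perl. The code snippets and the precision value are placeholders chosen for the example, and the fully-qualified call reflects the note above that the function is not exported:

    use App::BencherUtils;

    # benchmark two hypothetical code snippets against each other
    my $res = App::BencherUtils::bencher_code(
        codes     => ['my @a = (1..100)', 'my @a = map { $_ } 1..100'],
        precision => 0.01,    # optional
    );

    # $res is an enveloped result: [$status_code, $reason, $payload, \%result_meta]
    die "benchmark failed: $res->[1]\n" unless $res->[0] == 200;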
bencher_for
Usage:
bencher_for(%args) -> [$status_code, $reason, $payload, \%result_meta]
List distributions that benchmark specified modules.
This utility consults lcpan (local indexed CPAN mirror) to check whether there are distributions that benchmark a specified module. This is done by checking for the presence of a dependency with the relationship x_benchmarks.
This function is not exported.
Arguments ('*' denotes required arguments):
modules* => array[perl::modname]
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
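A rough sketch of a direct call from Perl follows; the module names are placeholders, a configured lcpan index is assumed to be available, and the shape of the payload is an assumption for the example:

    use App::BencherUtils;

    # ask lcpan which distributions declare an x_benchmarks dependency on these modules
    my $res = App::BencherUtils::bencher_for(
        modules => ['Text::Table::Tiny', 'JSON::XS'],
    );

    # payload assumed to be a list of matching distributions
    if ($res->[0] == 200) {
        print "$_\n" for @{ $res->[2] || [] };
    }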
bencher_module_startup_overhead
Usage:
bencher_module_startup_overhead(%args) -> [$status_code, $reason, $payload, \%result_meta]
Accept a list of module names and perform a startup overhead benchmark.
% bencher-module-startup-overhead Mod1 Mod2 Mod3
is basically a shortcut for creating a scenario like this:
    {
        module_startup => 1,
        participants => [
            {module=>"Mod1"},
            {module=>"Mod2"},
            {module=>"Mod3"},
        ],
    }
and running that scenario with bencher.
To specify import arguments, you can use:
% bencher-module-startup-overhead Mod1 Mod2=arg1,arg2
which will translate to this Bencher scenario:
    {
        module_startup => 1,
        participants => [
            {module=>"Mod1"},
            {module=>"Mod2", import_args=>'arg1,arg2'},
        ],
    }
This function is not exported.
Arguments ('*' denotes required arguments):
modules* => array[perl::modargs]
with_process_size => bool
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
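For completeness, a minimal sketch of the equivalent direct call from Perl; the module names and import arguments are the same placeholders used in the CLI examples above:

    use App::BencherUtils;

    my $res = App::BencherUtils::bencher_module_startup_overhead(
        # perl::modargs strings, written the same way as on the command line
        modules           => ['Mod1', 'Mod2=arg1,arg2'],
        with_process_size => 1,    # optional
    );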
chart_bencher_result
Usage:
chart_bencher_result(%args) -> [$status_code, $reason, $payload, \%result_meta]
Generate a chart from a bencher result and display it.
This function is not exported.
Arguments ('*' denotes required arguments):
json* => str
JSON data.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
cleanup_old_bencher_results
Usage:
cleanup_old_bencher_results(%args) -> [$status_code, $reason, $payload, \%result_meta]
Delete old results.
By default it will keep only the latest result for each scenario, for the same CPU and the same module versions.
You can use --dry-run first to see which files would be deleted without actually deleting them.
This function is not exported.
This function supports dry-run operation.
Arguments ('*' denotes required arguments):
detail => bool
num_keep => int (default: 0)
Number of old results to keep.
query => array[str]
result_dir* => str
Directory to store result files in.
Special arguments:
-dry_run => bool
Pass -dry_run=>1 to enable simulation mode.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
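A dry-run call from Perl might look like the following sketch; the results directory path is hypothetical:

    use App::BencherUtils;

    my $res = App::BencherUtils::cleanup_old_bencher_results(
        result_dir => "$ENV{HOME}/bencher-results",   # hypothetical path
        num_keep   => 1,                              # keep 1 old result besides the latest
        -dry_run   => 1,                              # simulation mode: report, don't delete
    );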
format_bencher_result
Usage:
format_bencher_result(%args) -> [$status_code, $reason, $payload, \%result_meta]
Format a raw (JSON) bencher result.
This function is not exported.
Arguments ('*' denotes required arguments):
as => str (default: "bencher_table")
json* => str
JSON data.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
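A minimal sketch, assuming you already have a bencher result saved as JSON; the file name is hypothetical, File::Slurper is just one convenient way to read it, and the payload is assumed to be the formatted text:

    use App::BencherUtils;
    use File::Slurper qw(read_text);

    my $json = read_text('result.json');     # previously saved bencher JSON result (hypothetical file)
    my $res  = App::BencherUtils::format_bencher_result(
        json => $json,
        as   => 'bencher_table',             # the default
    );
    print $res->[2] if $res->[0] == 200;     # payload assumed to be the formatted table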
list_bencher_results
Usage:
list_bencher_results(%args) -> [$status_code, $reason, $payload, \%result_meta]
List results in the results directory.
This function is not exported.
Arguments ('*' denotes required arguments):
detail => bool
exclude_scenarios => array[str]
fmt => bool
Display each result with bencher-fmt.
include_scenarios => array[str]
latest => bool
module_startup => bool
query => array[str]
result_dir* => str
Directory to store result files in.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
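A sketch of filtering results from Perl, based on the arguments above; the directory path and query string are placeholders, and the exact matching semantics of query are an assumption:

    use App::BencherUtils;

    my $res = App::BencherUtils::list_bencher_results(
        result_dir     => "$ENV{HOME}/bencher-results",  # hypothetical path
        query          => ['Text::Table'],               # filter terms (matching semantics assumed)
        latest         => 1,
        module_startup => 0,
        detail         => 1,
    );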
list_bencher_scenario_modules
Usage:
list_bencher_scenario_modules(%args) -> [$status_code, $reason, $payload, \%result_meta]
List Bencher scenario modules.
This function is not exported.
Arguments ('*' denotes required arguments):
detail => bool
query => str
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
HOMEPAGE
Please visit the project's homepage at https://metacpan.org/release/App-BencherUtils.
SOURCE
Source repository is at https://github.com/perlancar/perl-App-BencherUtils.
AUTHOR
perlancar <perlancar@cpan.org>
CONTRIBUTING
To contribute, you can send patches by email/via RT, or send pull requests on GitHub.
Most of the time, you don't need to build the distribution yourself. You can simply modify the code, then test via:
% prove -l
If you want to build the distribution (e.g. to try to install it locally on your system), you can install Dist::Zilla, Dist::Zilla::PluginBundle::Author::PERLANCAR, Pod::Weaver::PluginBundle::Author::PERLANCAR, and sometimes one or two other Dist::Zilla- and/or Pod::Weaver plugins. Any additional steps required beyond that are considered a bug and can be reported to me.
COPYRIGHT AND LICENSE
This software is copyright (c) 2022, 2021, 2019, 2018, 2017, 2016 by perlancar <perlancar@cpan.org>.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
BUGS
Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=App-BencherUtils.
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.