NAME
Math::NLopt - Perl interface to the NLopt optimization library
VERSION
version 0.06
SYNOPSIS
use Math::NLopt ':algorithms';
my $opt = Math::NLopt->new( NLOPT_LD_MMA, 2 );
$opt->set_lower_bounds( [ -HUGE_VAL(), 0 ] );
$opt->set_min_objective( sub ( $x, $grad, $data ) { ... } );
$opt->set_xtol_rel( ... );
\@optimized_pars = $opt->optimize( \@initial_pars );
DESCRIPTION
NLopt is a library for nonlinear local and global optimization, for functions with and without gradient information. It is designed as a simple, unified interface and packaging of several free/open-source nonlinear optimization libraries.
Math::NLopt is a Perl binding to NLopt. It uses the Alien::NLopt module to find or install a local instance of the NLopt library.
This module provides an interface using native Perl arrays.
The main documentation for NLopt may be found at https://nlopt.readthedocs.io/; this document focuses on the Perl-specific implementation, which is more Perlish than the C API (and is very similar to the Python one).
API
The Perl API uses an object, constructed by the "new" class method, to maintain state. The optimization process is controlled by invoking methods on the object.
In general, results are returned directly from the methods; method parameters are used primarily as input data for the methods (the objective and constraint callbacks more closely follow the C API).
The Perl methods are named similarly to the C functions, e.g.
nlopt_<method>( opt, ... );
becomes
$opt->method( ... );
where $opt is provided by the "new" class method.
As an example, the C API for starting the optimization process is
nlopt_result nlopt_optimize(nlopt_opt opt, double *x, double *opt_f);
where x is used for both passing in the initial model parameters as well as retrieving their final values. The final value of the optimization function is stored in opt_f. A code specifying the success or failure of the process is returned.
The Perl interface (similar to the Python and C++ versions) is
\@final = $opt->optimize( \@initial_pars );
$opt_f = $opt->last_optimum_value;
$result_code = $opt->last_optimize_result;
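Putting these pieces together, here is a minimal sketch of a complete run; the quadratic objective, starting point, and tolerance are illustrative, not part of the API:
use Math::NLopt ':algorithms';
# minimize f(x,y) = (x-1)**2 + (y-2)**2 with a gradient-based algorithm
my $opt = Math::NLopt->new( NLOPT_LD_MMA, 2 );
$opt->set_min_objective(
    sub {
        my ( $x, $grad, $data ) = @_;
        if ( defined $grad ) {    # fill in the gradient when requested
            $grad->[0] = 2 * ( $x->[0] - 1 );
            $grad->[1] = 2 * ( $x->[1] - 2 );
        }
        return ( $x->[0] - 1 )**2 + ( $x->[1] - 2 )**2;
    },
);
$opt->set_xtol_rel( 1e-6 );
my $final       = $opt->optimize( [ 0, 0 ] );    # initial guess
my $opt_f       = $opt->last_optimum_value;
my $result_code = $opt->last_optimize_result;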
The Perl API throws exceptions on failures, similar to the behavior of the C++ and Python APIs. Where the C API returns error codes, Math::NLopt throws objects in similarly named exception classes:
Math::NLopt::Exception::Failure
Math::NLopt::Exception::OutOfMemory
Math::NLopt::Exception::InvalidArgs
Math::NLopt::Exception::RoundoffLimited
Math::NLopt::Exception::ForcedStop
These all extend the Math::NLopt::Exception class; see it for more information on retrieving messages from the objects.
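For example, a roundoff-limited failure can be caught and inspected; this sketch assumes the exception object stringifies to a message (see Math::NLopt::Exception for the actual accessors):
my $solution = eval { $opt->optimize( \@initial_pars ) };
if ( my $err = $@ ) {
    if ( ref $err && $err->isa( 'Math::NLopt::Exception::RoundoffLimited' ) ) {
        # roundoff limited; the last iterate may still be usable
        warn "optimization was roundoff limited: $err";
    }
    else {
        die $err;    # re-throw anything we don't handle
    }
}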
Constants
Math::NLopt defines constants for the optimization algorithms, result codes, and utilities.
The algorithm constants have the same names as the NLopt constants, and may be imported individually by name or en masse with the ':algorithms' tag:
use Math::NLopt 'NLOPT_LD_MMA';
use Math::NLopt ':algorithms';
Importing result codes is similar:
use Math::NLopt 'NLOPT_FORCED_STOP';
use Math::NLopt ':results';
As are the utility subroutines:
use Math::NLopt 'algorithm_from_string';
use Math::NLopt ':utils';
Callbacks
NLopt handles the optimization of the objective function, relying upon user-provided subroutines to calculate the objective function and non-linear constraints (see below for the required calling signatures).
The callback subroutines are called with a user-provided structure which can be used to pass additional information to the callback (or the subroutines can use closures).
Objective Functions
Objective function callbacks are registered via either of
$opt->set_min_objective( \&func, ?$data );
$opt->set_max_objective( \&func, ?$data );
where $data is an optional structure passed to the callback which can be used for any purpose.
The objective function has the signature
$value = sub ( \@params, \@gradient, $data ) { ... }
It returns the value of the objective function for the passed set of parameters, @params.
If \@gradient is not undef, it must be filled in by the objective function.
$data is the structure registered with the callback. It will be undef if none was provided.
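As an illustration, here is a sketch of an objective which uses $data to carry a scale factor; the function itself is arbitrary:
my %config = ( scale => 2 );
$opt->set_min_objective(
    sub {
        my ( $x, $grad, $data ) = @_;
        # f(x) = scale * sum x_i**2; gradient component i is 2 * scale * x_i
        @$grad = map { 2 * $data->{scale} * $_ } @$x
            if defined $grad;
        my $sum = 0;
        $sum += $_**2 for @$x;
        return $data->{scale} * $sum;
    },
    \%config,
);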
Non-linear Constraints
Nonlinear constraint callbacks are registered via either of
$opt->add_equality_constraint( \&func, ?$data, ?$tol = 0 );
$opt->add_inequality_constraint( \&func, ?$data, ?$tol = 0 );
where $data is an optional structure passed to the callback which can be used for any purpose, and $tol is a tolerance. Pass undef for $data if a tolerance is required but $data is not.
The callbacks have the same signature as the objective callbacks.
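For instance, a sketch of the inequality constraint x0 + x1 <= 1, written in the c(x) <= 0 form NLopt expects:
$opt->add_inequality_constraint(
    sub {
        my ( $x, $grad, $data ) = @_;
        if ( defined $grad ) {
            $grad->[0] = 1;    # d c / d x0
            $grad->[1] = 1;    # d c / d x1
        }
        return $x->[0] + $x->[1] - 1;    # c(x) = x0 + x1 - 1
    },
    undef,    # no user data, but a tolerance follows
    1e-8,     # tolerance
);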
Vector-valued Constraints
Vector-valued constraint callbacks are registered via either of
$opt->add_equality_mconstraint( \&func, $m, ?$data, ?\@tol );
$opt->add_inequality_mconstraint( \&func, $m, ?$data, ?\@tol );
where $m is the length of the vector, $data is an optional structure passed on to the callback function, and @tol is an optional array of length $m containing the tolerance for each component of the vector.
Vector-valued constraint callbacks have the signature
sub ( \@result, \@params, \@gradient, $data ) { ... }
The $m length vector of constraints should be stored in \@result. If \@gradient is not undef, it is an array of length $n x $m which should be filled in by the callback.
$data is the optional structure passed to the callback.
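A sketch with $m = 2 components follows; it assumes the gradient uses the same row-major layout as the C API, with entry $i * $n + $j holding the derivative of component $i with respect to parameter $j:
my $n = $opt->get_dimension;
$opt->add_inequality_mconstraint(
    sub {
        my ( $result, $x, $grad, $data ) = @_;
        # c_0(x) = x0 - 1 <= 0 and c_1(x) = x1 - 2 <= 0
        $result->[0] = $x->[0] - 1;
        $result->[1] = $x->[1] - 2;
        if ( defined $grad ) {
            @$grad = (0) x ( 2 * $n );
            $grad->[ 0 * $n + 0 ] = 1;    # d c_0 / d x0
            $grad->[ 1 * $n + 1 ] = 1;    # d c_1 / d x1
        }
    },
    2,                 # m, the number of components
    undef,             # no user data
    [ 1e-8, 1e-8 ],    # per-component tolerances
);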
Preconditioned Objectives
These are registered via one of
$opt->set_precond_min_objective( \&func, \&precond, ?$data);
$opt->set_precond_max_objective( \&func, \&precond, ?$data);
\&func has the same signature as before (see "Objective Functions"), and $data is as before.
The \&precond callback has this signature:
sub (\@x, \@v, \@vpre, $data) {...}
\@x, \@v, and \@vpre are arrays of length $n. \@x and \@v are input, and \@vpre should be filled in by the routine.
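As a sketch, for the objective f(x) = sum x_i**2 the Hessian is the constant 2 * I, so the preconditioner returns \@vpre = 2 * \@v:
my $objective = sub {
    my ( $x, $grad, $data ) = @_;
    @$grad = map { 2 * $_ } @$x if defined $grad;
    my $sum = 0;
    $sum += $_**2 for @$x;
    return $sum;
};
my $precond = sub {
    my ( $x, $v, $vpre, $data ) = @_;
    # vpre = H(x) * v; here H = 2 * I everywhere
    @$vpre = map { 2 * $_ } @$v;
};
$opt->set_precond_min_objective( $objective, $precond );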
METHODS
Most methods have the same calling signature as their C versions, but not all!
add_equality_constraint
$opt->add_equality_constraint( \&func, ?$data, ?$tol = 0 );
add_equality_mconstraint
$opt->add_equality_mconstraint( \&func, $m, ?$data, ?\@tol );
add_inequality_constraint
$opt->add_inequality_constraint( \&func, ?$data, ?$tol = 0 );
add_inequality_mconstraint
$opt->add_inequality_mconstraint( \&func, $m, ?$data, ?\@tol );
force_stop
$opt->force_stop;
get_algorithm
$algorithm_int_id = $opt->get_algorithm;
get_dimension
$n = $opt->get_dimension;
get_errmsg
$string = $opt->get_errmsg;
get_force_stop
$stop = $opt->get_force_stop;
get_ftol_abs
$tol = $opt->get_ftol_abs;
get_ftol_rel
$tol = $opt->get_ftol_rel;
get_initial_step
\@steps = $opt->get_initial_step( \@init_x );
get_lower_bounds
\@lb = $opt->get_lower_bounds;
get_maxeval
$max_eval = $opt->get_maxeval;
get_maxtime
$max_time = $opt->get_maxtime;
get_numevals
$num_evals = $opt->get_numevals;
get_param
$val = $opt->get_param( $name, $defaultval);
Return the parameter value, or $defaultval if not set.
get_population
$pop = $opt->get_population;
get_stopval
$val = $opt->get_stopval;
get_upper_bounds
\@ub = $opt->get_upper_bounds;
get_vector_storage
$dim = $opt->get_vector_storage;
get_x_weights
\@weights = $opt->get_x_weights;
get_xtol_abs
\@tol = $opt->get_xtol_abs;
get_xtol_rel
$tol = $opt->get_xtol_rel;
has_param
$bool = $opt->has_param( $name );
True if the parameter with $name was set.
nth_param
$name = $opt->nth_param( $i );
Return the name of algorithm-specific parameter $i.
last_optimize_result
$result_code = $opt->last_optimize_result;
Return the result code after an optimization.
last_optimum_value
$min_f = $opt->last_optimum_value;
Return the objective value obtained after an optimization.
num_params
$n_algo_params = $opt->num_params;
Return the number of algorithm-specific parameters.
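As a sketch, the parameter methods can be combined to enumerate whatever algorithm-specific parameters have been set; the parameter name used here is illustrative:
$opt->set_param( 'inner_maxeval', 20 );    # illustrative parameter name
for my $i ( 0 .. $opt->num_params - 1 ) {
    my $name = $opt->nth_param( $i );
    printf "%s = %g\n", $name, $opt->get_param( $name, -1 );   # -1 if unset
}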
optimize
\@optimized_pars = $opt->optimize( \@input_pars );
Returns the parameter values determined by the optimization. The status of the optimization (e.g. NLopt's result code) can be retrieved via the "last_optimize_result" method. The final value of the objective function is available via the "last_optimum_value" method.
remove_equality_constraints
$opt->remove_equality_constraints;
remove_inequality_constraints
$opt->remove_inequality_constraints;
set_force_stop
$opt->set_force_stop( $val );
set_ftol_abs
$opt->set_ftol_abs( $tol );
set_ftol_rel
$opt->set_ftol_rel( $tol );
set_initial_step
$opt->set_initial_step(\@dx);
@dx has length $n.
set_initial_step1
$opt->set_initial_step1( $dx );
set_local_optimizer
$opt->set_local_optimizer( $local_opt );
$local_opt is another Math::NLopt object specifying the subsidiary optimization algorithm.
set_lower_bound
$opt->set_lower_bound( $i, $lb );
Set the lower bound for parameter $i (zero-based) to $lb.
set_lower_bounds
$opt->set_lower_bounds( \@lb );
@lb has length $n.
set_lower_bounds1
$opt->set_lower_bounds1( $lb );
set_max_objective
$opt->set_max_objective( \&func, ?$data );
set_maxeval
$opt->set_maxeval( $max_iterations );
set_maxtime
$opt->set_maxtime( $time );
set_min_objective
$opt->set_min_objective( \&func, ?$data );
set_param
$opt->set_param( $name, $value );
set_population
$opt->set_population( $pop );
set_precond_max_objective
$opt->set_precond_max_objective( \&func, \&precond, ?$data);
See "Preconditioned Objectives"
set_precond_min_objective
$opt->set_precond_min_objective( \&func, \&precond, ?$data);
See "Preconditioned Objectives"
set_stopval
$opt->set_stopval( $stopval);
set_upper_bound
$opt->set_upper_bound( $i, $ub );
Set the upper bound for parameter $i (zero-based) to $ub.
set_upper_bounds
$opt->set_upper_bounds( \@ub );
@ub has length $n.
set_upper_bounds1
$opt->set_upper_bounds1( $ub );
set_vector_storage
$opt->set_vector_storage( $dim );
set_x_weights
$opt->set_x_weights( \@weights );
@weights has length $n.
set_x_weights1
$opt->set_x_weights1( $weight );
set_xtol_abs
$opt->set_xtol_abs( \@tol );
@tol has length $n.
set_xtol_abs1
$opt->set_xtol_abs1( $tol );
set_xtol_rel
$opt->set_xtol_rel( $tol );
CONSTRUCTORS
new
my $opt = Math::NLopt->new( $algorithm, $n );
Create an optimization object for the given algorithm and number of parameters. $algorithm is one of the algorithm constants, e.g.
use Math::NLopt 'NLOPT_LD_MMA';
my $opt = Math::NLopt->new( NLOPT_LD_MMA, 3 );
UTILITY SUBROUTINES
These are exportable individually, or en masse via the :utils tag, but beware that srand has the same name as the Perl srand builtin, and version is rather generic.
algorithm_from_string
$algorithm_int_id = algorithm_from_string( $algorithm_string_id );
Return an integer id (e.g. NLOPT_LD_MMA) from a string id (e.g. 'LD_MMA').
algorithm_name
$algorithm_name = algorithm_name( $algorithm_int_id );
Return a descriptive name from an integer id.
algorithm_to_string
$algorithm_string_id = algorithm_to_string( $algorithm_int_id );
Return a string id (e.g. 'LD_MMA') from an integer id (e.g. NLOPT_LD_MMA).
result_from_string
$result_int_id = result_from_string( $result_string_id );
Return an integer id (e.g. NLOPT_SUCCESS) from a string id (e.g. 'SUCCESS').
result_to_string
$result_string_id = result_to_string( $result_int_id );
Return a string id (e.g. 'SUCCESS') from an integer id (e.g. NLOPT_SUCCESS).
srand
srand( $seed );
Seed the random number generator used by NLopt's stochastic algorithms.
srand_time
srand_time();
Reseed NLopt's random number generator using the system time.
version
( $major, $minor, $bugfix ) = Math::NLopt::version();
Return the version of the underlying NLopt library.
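A sketch exercising the utility subroutines, importing them by name to avoid the srand clash noted above:
use Math::NLopt qw( algorithm_from_string algorithm_to_string algorithm_name );
my $int_id    = algorithm_from_string( 'LD_MMA' );    # NLOPT_LD_MMA
my $string_id = algorithm_to_string( $int_id );       # 'LD_MMA'
my $desc      = algorithm_name( $int_id );            # descriptive name
my ( $major, $minor, $bugfix ) = Math::NLopt::version();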
SUPPORT
Bugs
Please report any bugs or feature requests to bug-math-nlopt@rt.cpan.org or through the web interface at: https://rt.cpan.org/Public/Dist/Display.html?Name=Math-NLopt
Source
Source is available at
https://gitlab.com/djerius/math-nlopt
and may be cloned from
https://gitlab.com/djerius/math-nlopt.git
SEE ALSO
Please see those modules/websites for more information related to this module.
AUTHOR
Diab Jerius <djerius@cpan.org>
COPYRIGHT AND LICENSE
This software is Copyright (c) 2024 by Smithsonian Astrophysical Observatory.
This is free software, licensed under:
The GNU General Public License, Version 3, June 2007