NAME
AI::ANN - an artificial neural network simulator
VERSION
version 0.008
SYNOPSIS
AI::ANN is an artificial neural network simulator. It differs from existing solutions in that it fully exposes the internal variables and allows - and forces - the user to fully customize the topology and specifics of the produced neural network. If you want a simple solution, you do not want this module.

This module was written specifically to support a simulation of evolution in neural networks, not training. The traditional 'backprop' and similar training methods are not the focus, although a basic backprop() method is provided (see METHODS). Rather, we make it easy for a user to specify the precise layout of their network (both topology and weights, as well as many parameters), and then to retrieve those details. The purpose of this is to allow an additional module to tweak those values by a means that models evolution by natural selection. The canonical way to do this is the included AI::ANN::Evolver, which allows the addition of random mutations to individual networks and the crossing of two networks. Depending on your application, you will also need a fitness function of some sort in order to determine which networks to allow to propagate. Here is an example of that system:
    use AI::ANN;
    my $network = AI::ANN->new( input_count => $inputcount, data => \@neuron_definition );
    my $outputs = $network->execute( \@inputs ); # Basic network use

    use AI::ANN::Evolver;
    my $handofgod = AI::ANN::Evolver->new(); # See that module for calling details
    my $network2 = $handofgod->mutate($network); # Random mutations

    # Test an entire 'generation' of networks, and let $network and $network2 be
    # among those with the highest fitness function in the generation.
    my $network3 = $handofgod->crossover($network, $network2);
    # Perhaps mutate() each network either before or after the crossover to
    # introduce variety.
We elected to do this with a new module rather than by extending an existing module because of the extensive differences in the internal structure and the interface that were necessary to accomplish these goals.
METHODS
new
AI::ANN->new(input_count => $inputcount, data => [{ iamanoutput => 0, inputs => {$inputid => $weight, ...}, neurons => {$neuronid => $weight, ...}}, ...])
input_count is the number of inputs to the network. data is an arrayref of neuron definitions. The first neuron with iamanoutput => 1 is output 0. The second is output 1. I hope you're seeing the pattern...

The constructor also accepts the following parameters:

minvalue is the minimum value a neuron can pass. Default 0.

maxvalue is the maximum value a neuron can pass. Default 1.

afunc is a reference to the activation function. It should be simple and fast. The activation function is applied /after/ minvalue and maxvalue are enforced.

dafunc is the derivative of the activation function.

We strongly advise that you memoize your afunc and dafunc if they are at all complicated. We will do our best to behave.
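As a sketch of what a data arrayref might look like (the topology and weights here are arbitrary, chosen purely for illustration), a two-input network with one hidden neuron feeding one output neuron could be constructed as follows:

    use AI::ANN;

    my $network = AI::ANN->new(
        input_count => 2,
        data => [
            { iamanoutput => 0,                    # hidden neuron, id 0
              inputs  => { 0 => 0.5, 1 => 0.5 },   # reads both network inputs
              neurons => {} },
            { iamanoutput => 1,                    # first output neuron = output 0
              inputs  => {},
              neurons => { 0 => 1.0 } },           # reads the value of neuron 0
        ],
    );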
execute
$network->execute( [$input0, $input1, ...] )
Runs the network for as many iterations as necessary to reach a stable state, then returns the outputs as a single arrayref. We store the current state of the network in two places - once in the object, for persistence, and once in $neurons, for simplicity. This might be wrong, but I couldn't think of a better way.
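For example, running the sketch network from above on one set of inputs (the input values are arbitrary):

    my $outputs = $network->execute( [0.2, 0.9] );
    print "output 0: $outputs->[0]\n";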
get_state
$network->get_state()
Returns three arrayrefs, [$input0, ...], [$neuron0, ...], [$output0, ...], corresponding to the data from the last call to execute(). Intended primarily to assist with debugging.
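A short sketch of how this might be used when debugging, assuming the network from the earlier example:

    my ($inputs, $neurons, $outputs) = $network->get_state();
    print "input  $_: $inputs->[$_]\n"  for 0 .. $#$inputs;
    print "neuron $_: $neurons->[$_]\n" for 0 .. $#$neurons;
    print "output $_: $outputs->[$_]\n" for 0 .. $#$outputs;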
get_internals
$network->get_internals()
Returns the weights in a format intended for machine consumption rather than human reading.
readable
$network->readable()
Returns a human-friendly and diffable description of the network.
backprop
$network->backprop(\@inputs, \@outputs)
Performs back-propagation learning on the neural network using the provided training data. Uses backprop_eta as the training rate and dafunc as the derivative of the activation function.
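A sketch of a minimal training loop, assuming the network was constructed with a suitable dafunc and backprop_eta, and using an arbitrary single training pair:

    # Present one training pair repeatedly; a real application would
    # loop over a full training set and test for convergence.
    for my $epoch (1 .. 100) {
        $network->backprop( [0.2, 0.9], [1] );
    }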
AUTHOR
Dan Collins <DCOLLINS@cpan.org>
COPYRIGHT AND LICENSE
This software is Copyright (c) 2011 by Dan Collins.
This is free software, licensed under:
The GNU General Public License, Version 3, June 2007