NAME

AI::NeuralNet::BackProp - A simple back-prop neural net that uses the Delta rule and Hebb's rule.

SYNOPSIS

use AI::NeuralNet::BackProp;

# Create a new neural net with 2 layers and 3 neurons per layer
my $net = new AI::NeuralNet::BackProp(2,3);

# Associate first pattern and print benchmark
print "Associating (1,2,3) with (4,5,6)...\n";
print $net->learn([1,2,3],[4,5,6]);

# Associate second pattern and print benchmark
print "Associating (4,5,6) with (1,2,3)...\n";
print $net->learn([4,5,6],[1,2,3]);

# Run a test pattern
print "\nFirst run output: (".join(',',@{$net->run([1,3,2])}).")\n\n";


# Declare patterns to learn
my @pattern = (	15, 3,  5  );
my @result  = ( 16, 10, 11 );

# Display patterns to associate using sub interpolation into a string.
print "Associating (@{[join(',',@pattern)]}) with (@{[join(',',@result)]})...\n";

# Run learning loop and print benchmarking info.
print $net->learn(\@pattern,\@result);

# Run final test
my @test 	  = ( 14, 9,  3  );
my $array_ref = $net->run(\@test);

# Display test output
print "\nSecond run output: (".join(',',@{$array_ref}).")\n";

UPDATES

This is version 0.42, the patch for the first public release, which was version 0.40. It includes several updates and fixes, including more 'intelligent' learning, a smaller memory footprint, actual loading and saving of networks, internal support for strings as network maps and targets, and integrated support for direct loading of PCX-format bitmap files.

DESCRIPTION

AI::NeuralNet::BackProp is the flagship package for this file. It implements a neural network similar to a feed-forward, back-propagation network, learning via a mix of a generalization of the Delta rule and a dissection of Hebb's rule. The actual neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.

You construct a new network via the new() constructor:

my $net = new AI::NeuralNet::BackProp(2,3,1);
	

The new() constructor accepts two required arguments and one optional argument: $layers, $size, and (optionally) $outputs. In this example, $layers is 2, $size is 3, and $outputs is 1.

$layers specifies the number of layers, including the input and the output layer, to use in each neural grouping. A new neural grouping is created for each pattern learned. Layers is typically set to 2. Each layer has $size neurons in it. Each neuron's output is connected to one input of every neuron in the layer below it.

This diagram illustrates a simple network, created with a call to "new AI::NeuralNet::BackProp(2,2)" (2 layers, 2 neurons/layer).

 input
 /  \
O    O
|\  /|
| \/ |
| /\ |
|/  \|
O    O
 \  /
mapper

In this diagram, each neuron is connected to one input of every neuron in the layer below it, but there are no connections between neurons in the same layer. The weights of a connection are controlled by the neuron it is connected to, not by the connecting neuron. (That is, the connecting neuron has no idea how much weight its output carries when it sends it; it just sends its output, and the weighting is taken care of by the receiving neuron.) This is the method used to connect cells in every network built by this package.

Input is fed into the network via a call like this:

use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,2);

my @map = (0,1);

my $result = $net->run(\@map);

Now, this call would probably not give you what you want, because the network hasn't "learned" any patterns yet. But this illustrates the call. run() expects an array reference, and run() gets mad if you don't give it what it wants. So be nice.

run() returns an array reference with $size elements. (Remember $size? $size is what you passed as the second argument to the network constructor.) This array contains the results of the mapping. If you ran the example exactly as shown above, $result would contain (1,1) as its elements.

To make the network learn a new pattern, you simply call the learn() method with a sample input and the desired result, both array references of $size length. Example:

use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,2);

my @map = (0,1);
my @res = (1,0);

$net->learn(\@map,\@res);

my $result = $net->run(\@map);

Now $result will contain (1,0), effectively flipping the input pattern around. Obviously, the larger $size is, the longer it will take to learn a pattern. learn() returns a string in the form of

Learning took X loops and X wallclock seconds (X.XXX usr + X.XXX sys = X.XXX CPU).

With the X's replaced by time or loop values for that learn() call. So, to view the learning stats for every learn() call, you can just:

print $net->learn(\@map,\@res);

If you call "$net->debug(4)", with $net being the reference returned by the new() constructor, you will get benchmarking information for the learn function, as well as plenty of other information output. See the notes on debug() in METHODS, below.

If you do call $net->debug(1), it is a good idea to redirect STDOUT of your script to a file, as a lot of information is output. I often use this command line:

$ perl some_script.pl > .out

Then I can simply go and use emacs or any other text editor and read the output at my leisure, rather than having to wait, or page through it with 'more', as it scrolls by on the screen.

This system was originally created to be a type of content-addressable-memory system. As such, it implements "groups" for storing patterns and maps. After the network has learned the patterns you want, then you can call run with a pattern it has never seen before, and it will decide which of the stored patterns best fit the new pattern, returning the results the same as the above examples (as an array ref from $net->run()).

METHODS

new AI::NeuralNet::BackProp($layers, $size [, $outputs])

Returns a newly created neural network as an AI::NeuralNet::BackProp object. Each group of this network will have $layers layers in it, and each layer will have $size neurons.

There is an optional third parameter, $outputs, which specifies the number of output neurons to provide. If $outputs is not specified, it defaults to $size. $outputs may not exceed $size; if it does, the new() constructor will return undef.
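A defensive construction, then, might check the return value for the undef case (a small sketch; the sizes chosen here are arbitrary):

```perl
use AI::NeuralNet::BackProp;

# 2 layers, 3 neurons per layer, 2 output neurons ($outputs <= $size).
my $net = new AI::NeuralNet::BackProp(2, 3, 2)
	or die "Could not create network (did \$outputs exceed \$size?)\n";
```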

Before you can really do anything useful with your new neural network object, you need to teach it some patterns. See the learn() method, below.

$net->learn($input_map_ref, $desired_result_ref [, options ]);

This will 'teach' a network to associate a new input map with a desired result. It will return a string containing benchmarking information. After learn() is complete, you can retrieve the pattern index under which the network stored the new input map with the pattern() method, below.

The first two arguments must be array refs, and must be of the same length.

Options should be passed in hash form. There are two options at present: inc=>$learning_gradient and max=>$maximum_iterations.

$learning_gradient is an optional value used to adjust the weights of the internal connections. If $learning_gradient is omitted, it defaults to 0.20.

$maximum_iterations is the maximum number of iterations the loop should do. It defaults to 1024. Set it to 0 if you never want the loop to quit before the pattern is learned perfectly.
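Putting the two options together, a learn() call with an explicit learning gradient and iteration cap might look like this (the values 0.3 and 500 are arbitrary choices for illustration):

```perl
use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,3);

my @map = ( 1, 2, 3 );
my @res = ( 4, 5, 6 );

# Use a larger learning gradient than the 0.20 default, and give up
# after 500 iterations rather than the default 1024.
print $net->learn(\@map, \@res, inc => 0.3, max => 500);
```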

$net->run($input_map_ref);

This method will apply the given array ref at the input layer of the neural network.

It will return undef on an error. An error is caused by one of two events.

The first is the possibility that the argument passed is not an array ref. If it is not an array ref, run() silently returns a value of undef.

The other condition that can cause an error is a map containing an element with an undefined or zero value. We don't allow this because it has been shown in testing that such a value can never be weighted. We all know that 12958710924 times 0 is still 0, right? The network can't handle that, though. It will still try to apply as much weight as it can to a 0 value, but the weighting will always come back 0, and will therefore never be able to match the desired result output, creating an infinite learn() loop. Soooo... to prevent the infinite looping, we simply don't allow 0 values to be run. You can always shift all values of your map up one number to account for this if need be, and then subtract one from every element of the output to shift it down again. Let me know if anyone comes up with a better way to do this.
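The shift-by-one workaround described above might be sketched like this (assuming your data may contain 0s):

```perl
use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,3);

my @map = ( 0, 4, 2 );   # contains a 0, which run() cannot weight
my @res = ( 3, 0, 1 );

# Shift every element up by one before learning and running...
my @map_shifted = map { $_ + 1 } @map;
my @res_shifted = map { $_ + 1 } @res;

$net->learn(\@map_shifted, \@res_shifted);
my $out = $net->run(\@map_shifted);

# ...and shift the output back down afterwards.
my @result = map { $_ - 1 } @$out;
```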

$net->benchmarked();

This returns a benchmark info string for the last learn() or the last run() call, whichever occurred later. It is easily printed as a string, as follows:

print $net->benchmarked() . "\n";

$net->debug($level)

Toggles debugging off if called with $level = 0 or no arguments. There are four levels of debugging.

Level 0 ($level = 0) : Default; no debugging information printed, except for the 'Cannot run 0 value.' error message. Other than that one message, all printing is left to the calling script.

Level 1 ($level = 1) : This causes ALL debugging information for the network to be dumped as the network runs. In this mode, it is a good idea to redirect STDOUT to a file, especially for large programs.

Level 2 ($level = 2) : A slightly-less verbose form of debugging, not as many internal data dumps.

Level 3 ($level = 3) : JUST prints weight mapping as weights change.

Level 4 ($level = 4) : JUST prints the benchmark info for EACH learn loop iteration, not just the learning as a whole. Also prints the percentage difference for each loop between current network results and desired results, as well as the learning gradient ('increment').

Level 4 is useful for seeing if you need to give a smaller learning increment to learn(). I used level 4 debugging quite often in creating the letters.pl and small_1.pl example scripts.

An additional parameter in the network that can be set is $net->{col_width}. This is useful for formatting the debugging output of Level 4 if you are learning simple bitmaps. It makes the debugger automatically insert a line break after that many elements when dumping the currently run map during a learn loop.
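For example, when learning a 5x5 bitmap flattened into a 25-element map, you might set (the width of 5 here is just an illustrative choice to match that bitmap):

```perl
# Break the Level 4 map dumps into rows of 5 elements,
# matching the width of the bitmap being learned.
$net->{col_width} = 5;
$net->debug(4);
```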


$net->save($filename);

This will save the complete state of the network to disk, including all weights and any words crunched with crunch() .

$net->load($filename);

This will load from disk any network saved by save() and completely restore the internal state to the point at which save() was called.
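A typical save/load round trip might look like this (the filename is an arbitrary choice):

```perl
use AI::NeuralNet::BackProp;

my $net = new AI::NeuralNet::BackProp(2,3);
$net->learn([1,2,3],[4,5,6]);

# Save the trained state, then restore it into a fresh network.
$net->save("flipper.net");

my $net2 = new AI::NeuralNet::BackProp(2,3);
$net2->load("flipper.net");

# $net2 should now map inputs the same way $net did.
print join(',', @{$net2->run([1,2,3])}), "\n";
```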

$net->join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);

This is more of a utility function than any real necessary function of the package. Instead of joining all the elements of the array together in one long string, like join(), it prints the elements of $array_ref to STDOUT, adding a newline (\n) after every $row_length_in_elements elements. Additionally, if you include a $high_state_character and a $low_state_character, it will print the $high_state_character (which can be more than one character) for every element that has a true value, and the $low_state_character for every element that has a false value. If you do not supply a $high_state_character, or the $high_state_character is a null, empty, or undefined string, join_cols() will just print the numerical value of each element separated by a null character (\0). join_cols() defaults to the latter behaviour.

$net->pdiff($array_ref_A, $array_ref_B);

This function is used VERY heavily internally to calculate the difference in percent between elements of the two array refs passed. It returns a %.02f (sprintf-format) percent string.

$net->intr($float);

Rounds a floating-point number to an integer using sprintf() and int(). This provides better rounding than just calling int() on the float. Also used very heavily internally.

$net->high($array_ref);

Returns the index of the element with the highest comparative value in the array ref passed.

$net->low($array_ref);

Returns the index of the element with the lowest comparative value in the array ref passed.
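For example, high() can pick out the strongest element of a run() result, treating the output as a rough classification (a sketch; $net and @map are assumed to be set up as in the earlier examples):

```perl
my $out   = $net->run(\@map);
my $index = $net->high($out);   # index of the largest output element

print "Strongest output is element $index (value $out->[$index])\n";
```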

$net->show();

This will dump a simple listing of all the weights of all the connections of every neuron in the network to STDOUT.

$net->crunch(qw( [...words...] ));

This converts a qw()ed array of words into an array ref containing unique indexes to the words. The words are stored in an internal array and preserved across load() and save() calls. This is designed to be used to generate unique maps suitable for passing to learn() and run() directly. It returns an array ref.

The words are not duplicated internally. For example:

$net->crunch(qw(How are you?));

Will probably return an array ref containing 1,2,3. A subsequent call of:

$net->crunch(qw(How is Jane?));

Will probably return an array ref containing 1,4,5. Notice, the first element stayed the same. That is because it already stored the word "How". So, each word is stored only once internally and the returned array ref reflects that.

$net->uncrunch($array_ref);

Uncrunches a map (array ref) into an array of words (not an array ref) and returns the array. This is meant to be used as a counterpart to the crunch() method, above, possibly to uncrunch() the output of a run() call. Consider the code below (also in ./examples/ex1.pl):

use AI::NeuralNet::BackProp;
my $net = AI::NeuralNet::BackProp->new(2,3);

for (0..3) {
	$net->learn($net->crunch(qw(I love chips.)),  $net->crunch(qw(That's Junk Food!)));
	$net->learn($net->crunch(qw(I love apples.)), $net->crunch(qw(Good, Healthy Food.)));
	$net->learn($net->crunch(qw(I love pop.)),    $net->crunch(qw(That's Junk Food!)));
	$net->learn($net->crunch(qw(I love oranges.)),$net->crunch(qw(Good, Healthy Food.)));
}

my $response = $net->run($net->crunch(qw(I love corn.)));

print join(' ',$net->uncrunch($response));

On my system, this responds with, "Good, Healthy Food." If you run the network on crunch(qw(I love pop.)), though, you will probably get "Food! apples. apples." (At least it returns that on my system.) As you can see, the associations are not yet perfect, but it can make for some interesting demos!

$net->crunched($word);

This will return undef if the word is not in the internal crunch list, or it will return the index of the word if it exists in the crunch list.
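crunched() makes it easy to check whether a word was ever seen by the network before trying to use it (a sketch, assuming $net has already crunch()ed some words as in the example above):

```perl
# Report which words the network already knows.
for my $word (qw(How are things?)) {
	my $index = $net->crunched($word);
	if (defined $index) {
		print "'$word' is known, stored at index $index\n";
	} else {
		print "'$word' has not been crunched yet\n";
	}
}
```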

$net->load_pcx($filename);

Oh, here's a treat... this routine will load a PCX-format file (yeah, I know... ancient format... but it is the only one I could find specs for to write a loader for in Perl. If anyone can get specs for any other formats, or could write a loader for them, I would be very grateful!). Anyway, it loads a PCX-format file that is exactly 320x200 with 8 bits per pixel, in pure Perl. It returns a blessed reference to an AI::NeuralNet::BackProp::PCX object, which supports the following routines/members. See the example files pcx.pl and pcx2.pl in the ./examples/ directory.

$pcx->{image}

This is an array reference to the entire image. The array contains exactly 64000 elements; each element contains a number corresponding to an index into the palette array, detailed below.

$pcx->{palette}

This is an array ref to an AoH (array of hashes). Each element has the following three keys:

$pcx->{palette}->[0]->{red};
$pcx->{palette}->[0]->{green};
$pcx->{palette}->[0]->{blue};

Each is in the range of 0..63, corresponding to their named color component.

$pcx->get_block($array_ref);

Returns a rectangular block defined by an array ref in the form of:

[$left,$top,$right,$bottom]

These must be in the range of 0..319 for $left and $right, and 0..199 for $top and $bottom. The block is returned as an array ref with horizontal lines in sequential order. I.e., to get the pixel at [2,5] in a block whose $left-$right span gives a width of 20, the element containing the contents of coordinates [2,5] would be found at [5*20+2] ($y*$width+$x).

print( (@{$pcx->get_block([0,0,20,50])})[5*20+2] );

This would print the contents of the element at block coords [2,5].

$pcx->get($x,$y);

Returns the value of pixel at image coordinates $x,$y. $x must be in the range of 0..319 and $y must be in the range of 0..199.

$pcx->rgb($index);

Returns a 3-element array (not an array ref), with the elements corresponding to the red, green, and blue color components, respectively.

$pcx->avg($index);

Returns the mean value of the red, green, and blue values at the palette index in $index.
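Combining get() and avg(), you can reduce a region of the image to a simple brightness map suitable for passing to learn() or run() (a sketch; the 5x5 block at the top-left corner is an arbitrary choice for illustration):

```perl
# Build a 25-element map of mean palette brightness (each 0..63)
# for the 5x5 block of pixels at the top-left of the image.
my @map;
for my $y (0 .. 4) {
	for my $x (0 .. 4) {
		push @map, $pcx->avg( $pcx->get($x, $y) );
	}
}
my $result = $net->run(\@map);
```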

OTHER INCLUDED PACKAGES

AI::NeuralNet::BackProp::File

AI::NeuralNet::BackProp::File implements a simple 'relational'-style database system. It is used internally by AI::NeuralNet::BackProp for storage and retrieval of network states. It can also be used independently of AI::NeuralNet::BackProp. PODs are not yet included for this package; I hope to include documentation for it in future releases.

AI::NeuralNet::BackProp::File depends on Storable, version 0.611, for low-level disk access. This dependency is noted in Makefile.PL, and should be handled automatically when you install AI::NeuralNet::BackProp.

AI::NeuralNet::BackProp::neuron

AI::NeuralNet::BackProp::neuron is the worker package for AI::NeuralNet::BackProp. It implements the actual neurons of the neural network. AI::NeuralNet::BackProp::neuron is not designed to be created directly, as it is used internally by AI::NeuralNet::BackProp.

AI::NeuralNet::BackProp::_run
AI::NeuralNet::BackProp::_map

These two packages, _run and _map, are used to insert data into and retrieve data from the network. The _run and _map packages are connected to the neurons so that the neurons think the IO packages are just other neurons, sending data on. But the IO packages are special packages designed with the same methods as neurons, just meant for specific IO purposes. You will never need to call any of the IO packages directly. Instead, they are called whenever you use the run() or learn() methods of your network.

BUGS

This is the beta release of AI::NeuralNet::BackProp, and, that being true, I am sure there are probably bugs in here which I just have not found yet. If you find bugs in this module, I would appreciate it greatly if you could report them to me at <jdb@wcoil.com>, or, even better, try to patch them yourself, figure out why the bug is being buggy, and send me the patched code, again at <jdb@wcoil.com>.

AUTHOR

Josiah Bryan <jdb@wcoil.com>

Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

The AI::NeuralNet::BackProp and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.

Many thanks to Tobias Bronx, <tobiasb@odin.funcom.com> for many tips, suggestions, and the occasional argument or two. Thanks Tobias!

DOWNLOAD

You can always download the latest copy of AI::NeuralNet::BackProp from http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl
