NAME
PkgForge::Job - Represents a build job for the LCFG Package Forge
VERSION
This documentation refers to PkgForge::Job version 1.4.8
SYNOPSIS
use PkgForge::Job;
use PkgForge::Source::SRPM;
my $job = PkgForge::Job->new( bucket => "lcfg",
archs => ["i386","x86_64"],
platforms => ["sl5"] );
my $package =
PkgForge::Source::SRPM->new( file => "foo-1.2.src.rpm");
$job->add_packages($package);
my $ok = eval { $job->validate };
if ( !$ok || $@ ) {
die "Invalid job $job: $@\n";
}
DESCRIPTION
This module provides a representation of a build job which is used by the LCFG Package Forge software suite.
It can be used to submit new jobs and also query and validate jobs which have already been submitted.
The object represents a set of source packages which should be built as a single build job. It also holds all the information covering the platforms and architectures on which the packages should be built, the repository into which the generated binary packages should be submitted, who submitted the job and when.
ATTRIBUTES
- platforms
-
This is the list of platforms for which the set of packages in the job should be built. By default this list contains just the string auto and the generated list of platforms is based on those which are active and listed as being available for adding automatically. If the list contains the string all then the build will be attempted on all available platforms.
It is possible to block some platforms to ensure they are not attempted by prefixing the platform name with ! (exclamation mark). If the list only contains negated platforms then builds will be attempted on all platforms except those negated. If the list contains a mixture of platforms and negated platforms then only those requested will be attempted. If we consider an example where three platforms are supported, e.g. el5, f12 and f13, here are possible values:
[ "all" ] gives [ "el5", "f12", "f13" ]
[ "all", "!el5" ] gives [ "f12", "f13" ]
[ "f12", "f13" ] gives [ "f12", "f13" ]
[ "el5", "!f13" ] gives [ "el5" ]
Any platform string which is not recognised is ignored.
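As an illustrative sketch (the platform names are just the examples above), a job which should build on every available platform except el5 could be created like this:

  use PkgForge::Job;

  # "all" selects every available platform, "!el5" then blocks el5.
  my $job = PkgForge::Job->new(
      bucket    => "lcfg",
      platforms => [ "all", "!el5" ],
  );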
- archs
-
This is the list of architectures for which the set of packages in the job should be built. If the list contains the string "all" then the build will be attempted on all available architectures (which is the default).
In the same way as for the platforms, it is possible to block some architectures to ensure they are not attempted by prefixing the architecture name with ! (exclamation mark). If the list only contains negated architectures then builds will be attempted on all architectures except those negated. If the list contains a mixture of architectures and negated architectures then only those requested will be attempted. If we consider an example where three architectures are supported, e.g. i386, x86_64 and ppc, here are possible values:
[ "all" ] gives [ "i386", "x86_64", "ppc" ]
[ "all", "!i386" ] gives [ "x86_64", "ppc" ]
[ "x86_64", "ppc" ] gives [ "x86_64", "ppc" ]
[ "i386", "!ppc" ] gives [ "i386" ]
Note that it is NOT possible to specify a single build job as being for different sets of architectures on each of the different specified platforms.
Any architecture string which is not recognised is ignored.
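The archs rules are written in exactly the same style as the platforms rules; for instance, a rough sketch of a job built for every architecture except i386:

  use PkgForge::Job;

  # "all" selects every available architecture, "!i386" then blocks i386.
  my $job = PkgForge::Job->new(
      bucket => "lcfg",
      archs  => [ "all", "!i386" ],
  );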
- bucket
-
This is the LCFG package bucket into which built packages will be submitted. This is normally something like "lcfg", "world", "uoe" or "inf". There is no default value and the bucket MUST be specified.
When building RPMs with mock this bucket is also used to control which mock configuration file is used. This controls which package repositories mock has access to for fulfilling build requirements. This is done to ensure that packages do not have auto-generated dependency lists which cannot be fulfilled from within that bucket or the base/updates package repositories.
- packages
-
This is a list of source packages for the build job which are to be built for the set of platforms and architectures. The list takes objects which implement the PkgForge::Source role. You can specify as many source packages as you like and mix the types within a single job. It is left to the individual build daemons to decide whether they are capable of building from particular types of source package.
When building RPMs, within a single job, once a package has been built it becomes available immediately for use as a build-requirement for the building of subsequent packages. This means that the order in which the packages are specified is significant. Note that no attempt is made to solve the build-dependencies for the source packages within a build job. This extension might be considered at some point in the future.
A build job is not valid if no packages have been specified.
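As an illustrative sketch (the package file names are hypothetical), two SRPMs could be added in build order so that the first is available as a build-requirement when the second is built:

  use PkgForge::Job;
  use PkgForge::Source::SRPM;

  my $job = PkgForge::Job->new( bucket => "lcfg" );

  # Order matters within a job: foo-libs is built first so that it can
  # satisfy build-requirements of foo-tools.
  $job->add_packages( PkgForge::Source::SRPM->new( file => "foo-libs-1.0.src.rpm" ) );
  $job->add_packages( PkgForge::Source::SRPM->new( file => "foo-tools-1.0.src.rpm" ) );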
- size
-
This is the total size of the source packages, measured in bytes.
- report
-
This is a list of email addresses to which a final report will be sent. By default no reports are sent.
- directory
-
This is the directory in which the packages and the configuration file for a build job are stored. It does not have to be specified, the default is assumed to be the current directory where necessary.
- yamlfile
-
This is the location of the build job configuration file. This is used for serialisation of the job object for later reuse. Note that not all attributes are stored when this is written and not all are read when it is reloaded. See store_in_yamlfile and new_from_yamlfile for details.
- id
-
This is the UUID for the build job. If none has been specified then a default value is generated using the Data::UUID module, in which case the UUID is also converted into base64 and made URL-safe. Any string which only contains characters matching the set A-Za-z0-9_- is acceptable, but beware that if you submit a job with a user-specified ID which has previously been used then the job will be rejected.
- subtime
-
This is the time of submission for a job; it only really has meaning from the point of view of the build system. Jobs are built in order of submission time, but setting this before submitting a job will not have any effect on the sequence in which jobs are built.
- submitter
-
This is the user name of the submitter. Currently it is taken from the ownership of the submitted job directory. If a move was made to digitally-signed build files then the submitter attribute could reflect that instead. It is not used for any authorization checks, the submitter is purely used for tracking jobs so that users can easily query the status of their own jobs.
- verbose
-
This is a boolean value which controls the verbosity of output to STDERR when class methods are called. By default the methods will not be verbose.
SUBROUTINES/METHODS
- clone
-
This will do a deep clone of a Job object using the dclone function provided by the Storable module.
- new()
-
This will create a new Job object. You must specify the package bucket.
- new_from_yamlfile( $file, $dir )
-
This will load an object from the data stored in the meta-file. You must also specify the directory name, which will then be used to set the directory attribute for the Job and the basedir attribute for the Source package objects. Any settings of the directory and yamlfile attributes in the meta-file are ignored and reset to the passed-in arguments. Only attributes which have the PkgForge::Serialise trait will be loaded.
- new_from_dir($dir)
-
This creates a new Job object based on the meta-file and packages stored within the specified directory. It uses new_from_yamlfile to load the meta-file and set the directory attributes appropriately. A combined usage example is sketched at the end of this section.
- new_from_qentry($qentry)
-
This creates a new Job object from the information stored in a PkgForge::Queue::Entry object. The new_from_dir method is used with the Queue::Entry path attribute. The submitter and subtime attributes are set for the Job based on the values in the Queue::Entry object.
- overdue($timeout)
-
This takes a timeout, in seconds, and returns a boolean value which signifies whether or not the Job is more than that many seconds old.
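For example, a minimal sketch which flags jobs that are more than an hour old (the timeout value and the use of the id accessor are only illustrative):

  # 3600 seconds; overdue() returns a boolean.
  if ( $job->overdue(3600) ) {
      warn 'Job ' . $job->id . " has been waiting for over an hour\n";
  }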
- process_build_targets(@platforms)
-
This takes a list of available, active platforms; each entry in the list is a reference to a hash which has values for name, arch and auto. The arch is the architecture, e.g. i386 or x86_64. The auto value is a boolean which shows whether the platform should be added automatically or only when explicitly requested.
The method returns a list of requested build targets. Each entry in the returned list is a pair of platform name and architecture. For example:
[ ['sl5','i386' ], ['sl5','x86_64'] ]
This is basically just a convenience method which does the work of process_platforms and process_archs in one step. See the documentation below for details of how the processing is done, and the sketch after the process_archs entry for an example call.
- process_platforms(\@all, \@auto)
-
This method takes references to two lists of platform names. The first is the complete set of platforms and the second is the set of platforms which should be added automatically. The two sets may well be identical. Platforms which are only in the 'auto' set will only be added if explicitly requested.
This method uses the platform lists to process the rules in the platforms attribute and then returns a list of requested platforms. See the documentation above on the platforms attribute for full details on how to write the rules.
For example, if the platforms attribute is set to:
[ "all", "!el5" ]
and the platforms list passed in as an argument is:
( "el5", "f12" )
then the returned list is:
( "f12" )
- process_archs(\@archs)
-
This takes a reference to a list of available archs and uses it to process the rules in the archs attribute, then returns a list of requested archs. See the documentation above on the archs attribute for full details on how to write the rules.
For example, if the archs attribute is set to:
[ "all", "!i386" ]
and the archs list passed in as an argument is:
( "i386", "x86_64" )
then the returned list is:
( "x86_64" )
- store_in_yamlfile([$file])
-
This will save an object to the meta-file. If the file name is not passed in as an argument then the yamlfile attribute will be examined. This method will fail if no file is specified through either route. The directory and yamlfile attributes for the Job and the basedir attribute for the packages are not stored into the meta-file. Only attributes which have the PkgForge::Serialise trait will be stored.
- transfer($target_dir)
-
This will take a Job stored in one directory and copy it all to a new target directory. The Job will be stored using the store_in_yamlfile method. Once the copy is complete it calls validate to ensure that the copied Job is correct. If anything fails then the target directory will be erased. If the transfer succeeds then a new Job object will be returned which represents the copy.
- validate
-
This method validates the state of the Job. It requires that there are Source packages, checks the SHA1 sum for each package and calls the validate method on each package. If anything fails then the method will die with an appropriate message. If the method succeeds then a boolean true value will be returned.
- scrub($options)
-
This method will erase the directory associated with this build job. Note that it also blows away the object since it no longer has any physical meaning once the directory is gone. Internally this uses the remove_tree subroutine provided by PkgForge::Utils. Optionally, a reference to a hash of options may be passed in to control how the remove_tree subroutine functions.
- update_job_size
-
This will recalculate the job size by summing the sizes of all the source packages. It is not normally necessary to do this manually as it will be updated automatically whenever the packages list is altered.
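To tie several of the methods above together, here is a rough end-to-end sketch: it loads a previously submitted job from a directory, validates it, copies it elsewhere with transfer and then scrubs the original. The directory paths are purely hypothetical and error handling is kept minimal:

  use PkgForge::Job;

  # Load the job meta-file and packages from an existing directory.
  my $job = PkgForge::Job->new_from_dir('/var/lib/pkgforge/incoming/abc123');

  # validate() checks the packages and their SHA1 sums and dies on failure.
  eval { $job->validate };
  die "Job is invalid: $@\n" if $@;

  # Copy the whole job to a new directory; a Job object representing the
  # copy is returned.
  my $copy = $job->transfer('/var/lib/pkgforge/accepted/abc123');

  # Erase the original directory (the original object has no physical
  # meaning once its directory is gone).
  $job->scrub;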
DEPENDENCIES
This module is powered by Moose and uses MooseX::Types. It also requires Data::UUID::Base64URLSafe for UUID generation, UNIVERSAL::require for loading source package modules, and YAML::Syck and Data::Structure::Util for reading the build files and converting them back into Job objects.
SEE ALSO
PkgForge, PkgForge::Source, PkgForge::Utils and PkgForge::Types
PLATFORMS
This is the list of platforms on which we have tested this software. We expect this software to work on any Unix-like platform which is supported by Perl.
ScientificLinux5, Fedora13
BUGS AND LIMITATIONS
Please report any bugs or problems (or praise!) to bugs@lcfg.org; feedback and patches are also always very welcome.
AUTHOR
Stephen Quinney <squinney@inf.ed.ac.uk>
LICENSE AND COPYRIGHT
Copyright (C) 2010 University of Edinburgh. All rights reserved.
This library is free software; you can redistribute it and/or modify it under the terms of the GPL, version 2 or later.