NAME

Object::Remote - Call methods on objects in other processes or on other hosts

SYNOPSIS

Creating a connection:

    use Object::Remote;

    my $conn = Object::Remote->connect('myserver'); # invokes ssh

Calling a subroutine:

    my $capture = IPC::System::Simple->can::on($conn, 'capture');

    warn $capture->('uptime');

Using an object:

    my $eval = Eval::WithLexicals->new::on($conn);

    $eval->eval(q{my $x = `uptime`});

    warn $eval->eval(q{$x});

Importantly: 'myserver' only requires perl 5.8+ - no non-core modules need to be installed on the far side; Object::Remote takes care of that for you!

DESCRIPTION

Object::Remote allows you to create an object in another process - usually one running on another machine you can connect to via ssh, although there are other connection mechanisms available.

The idea here is that in many cases one wants to be able to run a piece of code on another machine, or perhaps many other machines - but without having to install anything on the far side.

COMPONENTS

Object::Remote

The "main" API, which provides the "connect" method to create a connection to a remote process/host, "new::on" to create an object on a connection, and "can::on" to retrieve a subref over a connection.

Object::Remote::Connection

The object representing a connection, which provides the "remote_object" and "remote_sub" methods that "new::on" and "can::on" use to return proxies for objects and subroutines on the far side.
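
For example, a minimal sketch reusing the calls from the SYNOPSIS: a single connection object can back both kinds of proxy, with "new::on" and "can::on" building them via the connection's "remote_object" and "remote_sub" methods.

    my $conn = Object::Remote->connect('myserver');

    # object proxy on the far side
    my $eval = Eval::WithLexicals->new::on($conn);

    # subroutine proxy on the far side
    my $capture = IPC::System::Simple->can::on($conn, 'capture');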

Object::Remote::Future

Code for dealing with asynchronous operations, which provides the "start::method" syntax for calling a possibly asynchronous method without blocking, and the "await_future" and "await_all" functions to block until an asynchronous call completes or fails.
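
For example, a minimal sketch of the non-blocking style; it assumes "await_future" is available once Object::Remote::Future is loaded, and uses Eval::WithLexicals purely as an illustration.

    use Object::Remote;
    use Object::Remote::Future;

    my $eval = Eval::WithLexicals->new::on('myserver');

    # start:: returns a future immediately instead of blocking for the result
    my $future = $eval->start::eval(q{ `uptime` });

    # ... do other local work here ...

    # block until the asynchronous call completes or fails
    my ($uptime) = await_future($future);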

METHODS

connect

    my $conn = Object::Remote->connect('-'); # fork()ed connection

    my $conn = Object::Remote->connect('myserver'); # connection over ssh

    my $conn = Object::Remote->connect('user@myserver'); # connection over ssh

    my $conn = Object::Remote->connect('root@'); # connection over sudo

new::on

    my $eval = Eval::WithLexicals->new::on($conn);

    my $eval = Eval::WithLexicals->new::on('myserver'); # implicit connect

    my $obj = Some::Class->new::on($conn, %args); # with constructor arguments

can::on

    my $hostname = Sys::Hostname->can::on($conn, 'hostname');

    my $hostname = Sys::Hostname->can::on('myserver', 'hostname');

ENVIRONMENT

OBJECT_REMOTE_PERL_BIN

When starting a new Perl interpreter, the contents of this environment variable are used as the path to the executable. If the variable is not set, the path defaults to 'perl'.
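
For example, a hedged sketch; the interpreter path shown is hypothetical, and setting the variable from Perl via %ENV before connecting is assumed to be equivalent to exporting it in the shell.

    # use a specific perl binary when Object::Remote spawns an interpreter
    $ENV{OBJECT_REMOTE_PERL_BIN} = '/opt/perl5/bin/perl';

    my $conn = Object::Remote->connect('myserver');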

OBJECT_REMOTE_LOG_LEVEL

Setting this environment variable will enable logging and send all log messages at the specified level or higher to STDERR. Valid level names are: trace, debug, verbose, info, warn, error, fatal.

OBJECT_REMOTE_LOG_FORMAT

The format of the logging output is configurable. Setting this environment variable controls the format via printf-style position variables. See Object::Remote::Logging::Logger.

OBJECT_REMOTE_LOG_FORWARDING

Forwards log events from remote connections to the local Perl interpreter. Set to 1 to enable this feature, which is disabled by default. See Object::Remote::Logging.

OBJECT_REMOTE_LOG_SELECTIONS

Space-separated list of class names to display logs for if logging output is enabled. The default value is "Object::Remote::Logging", which selects all logs generated by Object::Remote. See Object::Remote::Logging.
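
For example, a hedged sketch that ties the logging variables together; because it is not stated here when these variables are read, the sketch sets them in a BEGIN block before Object::Remote is loaded, and the chosen values are purely illustrative.

    BEGIN {
      # debug-level logging to STDERR, with log events forwarded from remote
      # interpreters; keep the default selection of Object::Remote logs
      $ENV{OBJECT_REMOTE_LOG_LEVEL}      = 'debug';
      $ENV{OBJECT_REMOTE_LOG_FORWARDING} = 1;
      $ENV{OBJECT_REMOTE_LOG_SELECTIONS} = 'Object::Remote::Logging';
    }

    use Object::Remote;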

KNOWN ISSUES

Large data structures

Object::Remote communication is encapsulated with JSON, and values passed to remote objects will be serialized with it. When sending large data structures, or data structures with a lot of deep complexity (hashes in arrays in hashes in arrays), the processor time and memory requirements for serialization and deserialization can be either painful or unworkable. While serialization is in progress the local or remote node is blocked, which under worst-case conditions can cause all remote interpreters to block as well.

To help deal with this issue, it is possible to configure resource ulimits for a Perl interpreter that is executed by Object::Remote. See Object::Remote::Role::Connector::PerlInterpreter for details on the perl_command attribute.
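
For example, a hedged sketch: perl_command is documented in Object::Remote::Role::Connector::PerlInterpreter, but passing it as an argument to connect() in this way, and the specific ulimit wrapper shown, are assumptions to be checked against that documentation.

    # wrap the spawned perl in a shell that caps its virtual memory, so a
    # runaway serialization is killed rather than exhausting the machine
    my $conn = Object::Remote->connect(
      'myserver',
      perl_command => [ 'sh', '-c', 'ulimit -v 1048576 && exec perl -' ],
    );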

User can starve run loop of execution opportunities

The Object::Remote run loop is responsible for performing I/O and managing timers in a cooperative multitasking way, but it can only do these tasks when the user has given control to Object::Remote. There are times when Object::Remote must wait for the user to return control to the run loop, and during these times no I/O can be performed and no timers can be executed.

As an end user of Object::Remote, if you depend on connection timeouts, the watchdog, or timely results from remote objects, be sure to hand control back to Object::Remote as soon as you can.
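
For example, a hedged sketch of the pattern to watch for; Some::Worker, long_job, and long_local_computation are hypothetical names used only to illustrate where the run loop can and cannot run.

    use Object::Remote;
    use Object::Remote::Future;

    my $worker = Some::Worker->new::on('myserver');

    # the run loop only runs while Object::Remote has control; nothing is
    # serviced while the local computation below is executing
    my $future = $worker->start::long_job;

    long_local_computation();             # run loop is starved here

    my $result = await_future($future);   # control returns to the run loop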

Run loop favors certain filehandles/connections

High levels of load can starve timers of execution opportunities

These are issues that only become a problem at large scales. The end result of these two issues is quite similar: some remote objects may block while the local run loop is either busy servicing a different connection or is not executing because control has not yet been returned to it. For the same reasons, timers may not get an opportunity to execute in a timely way.

Internally, Object::Remote uses timers managed by the run loop for control tasks. Under high load the timers can be preempted by servicing I/O on the filehandles, and their execution can be severely delayed. This can lead to connection watchdogs not being updated or connection timeouts taking longer than configured.

Deadlocks

Deadlocks can happen quite easily because of flaws either in programs that use Object::Remote or in Object::Remote itself, so the Object::Remote::WatchDog is available. When it is used, the run loop will periodically update the watchdog object on the remote Perl interpreter. If the watchdog goes longer than the configured interval without being updated, it will terminate the Perl process. The watchdog will terminate the process even if a deadlock condition has occurred.
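
For example, a hedged sketch: the watchdog_timeout argument shown here as a way to enable the watchdog on a connection is an assumption, so check Object::Remote::WatchDog and the connector documentation for the exact interface.

    # ask the remote interpreter to terminate itself if its watchdog is not
    # updated within the given number of seconds
    my $conn = Object::Remote->connect(
      'myserver',
      watchdog_timeout => 120,
    );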

Log forwarding at scale can starve timers of execution opportunities

Currently log forwarding can be problematic at large scales. When there is a large volume of log events, the load produced by log forwarding can be high enough that it starves the timers; the remote watchdogs (if in use) then don't get updated in a timely way, causing them to erroneously terminate the Perl process. If the watchdog is not in use, connection timeouts can be delayed but will execute once load settles down.

Because of these load-related issues, Object::Remote disables log forwarding by default. See Object::Remote::Logging for information on log forwarding.

SUPPORT

IRC: #web-simple on irc.perl.org

AUTHOR

mst - Matt S. Trout (cpan:MSTROUT) <mst@shadowcat.co.uk>

CONTRIBUTORS

bfwg - Colin Newell (cpan:NEWELLC) <colin.newell@gmail.com>

phaylon - Robert Sedlacek (cpan:PHAYLON) <r.sedlacek@shadowcat.co.uk>

triddle - Tyler Riddle (cpan:TRIDDLE) <t.riddle@shadowcat.co.uk>

SPONSORS

Parts of this code were paid for by

Socialflow <http://www.socialflow.com>

Shadowcat Systems <http://www.shadow.cat>

COPYRIGHT

Copyright (c) 2012 the Object::Remote "AUTHOR", "CONTRIBUTORS" and "SPONSORS" as listed above.

LICENSE

This library is free software and may be distributed under the same terms as perl itself.