NAME

Locale::Maketext::Gettext - brings gettext and Maketext together

SYNOPSIS

In your localization class:

package MyPackage::L10N;
use base qw(Locale::Maketext::Gettext);
return 1;

In your application:

use MyPackage::L10N;
$LH = MyPackage::L10N->get_handle or die "What language?";
$LH->bindtextdomain("mypackage", "/home/user/locale");
$LH->textdomain("mypackage");
$LH->maketext("Hello, world!!");

DESCRIPTION

Locale::Maketext::Gettext brings GNU gettext and Maketext together. It is a subclass of Locale::Maketext(3) that follows the way GNU gettext works. It works seamlessly from the perspective of both GNU gettext and Maketext.

You start as in a usual GNU gettext localization project: work on the PO files with the help of translators, reviewers, and Emacs; turn them into MO files with msgfmt; and copy them into the appropriate locale directory, such as /usr/share/locale/de/LC_MESSAGES/myapp.mo.

Then, build your Maketext localization class, with your base class changed from Locale::Maketext(3) to Locale::Maketext::Gettext. That's all. ^_*'

METHODS

$LH->bindtextdomain(DOMAIN, LOCALEDIR)

Register a text domain with a locale directory. This is only a registration; nothing else happens here. No check is ever made as to whether LOCALEDIR exists, nor whether DOMAIN really sits in LOCALEDIR. Returns LOCALEDIR itself. If LOCALEDIR is omitted, the locale directory registered for DOMAIN is returned. If DOMAIN is not registered yet, returns undef. This method always succeeds.

Don't do $LH->bindtextdomain(DOMAIN, $LH->bindtextdomain) on an unregistered domain. This is an infinite loop, and I'm not planning to fix it, in order to conform to the GNU gettext behavior. You should always use

defined($_ = $LH->bindtextdomain(DOMAIN)) and $LH->bindtextdomain(DOMAIN, $_)

instead.

$LH->textdomain(DOMAIN)

Set the current text domain. It reads the corresponding MO file and replaces %Lexicon with this new lexicon. If anything goes wrong, for example, the MO file is not found or unreadable, or NFS is disconnected, it returns immediately and your lexicon becomes empty. Returns DOMAIN itself. If DOMAIN is omitted, the current text domain is returned. If the current text domain is not set yet, returns undef. This method always succeeds.

Don't do $LH->textdomain($LH->textdomain) before your text domain is set. This is an infinite loop, and I'm not planning to fix it, in order to conform to the GNU gettext behavior. You should always use

defined($_ = $LH->textdomain) and $LH->textdomain($_)

instead.
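The registration and domain-selection steps above can be sketched together as follows. This is a minimal illustration, assuming the hypothetical MyPackage::L10N subclass, domain name, and locale path from the SYNOPSIS:

```perl
use MyPackage::L10N;   # hypothetical subclass of Locale::Maketext::Gettext

my $LH = MyPackage::L10N->get_handle("de") or die "What language?";
# Register where the MO files live, then switch to that text domain.
$LH->bindtextdomain("mypackage", "/home/user/locale");
$LH->textdomain("mypackage");
# The safe re-set idiom, guarding against an unset text domain:
defined(my $d = $LH->textdomain) and $LH->textdomain($d);
print $LH->maketext("Hello, world!!"), "\n";
```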

$LH->language_tag

Retrieve the language tag. This is the same method as in Locale::Maketext(3). It is read-only.

$LH->encoding(ENCODING)

Set or retrieve the output encoding. The default is the same encoding as the gettext MO file. You should not override this method in your localization subclasses, contrary to the current practice of Locale::Maketext(3).

WARNING: You should always trust the encoding in the gettext MO file. GNU gettext msgfmt checks for illegal characters when you compile your MO file from your PO file. If you try to output in a wrong encoding, maketext will die on illegal characters in your text, for example, when turning Chinese text into US-ASCII. If you DO need to output in a different encoding, use the value of this method together with from_to from Encode(3) to do the job. I'm not planning to supply an option to suppress this die. So, change the output encoding at your own risk.
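The re-encoding step suggested above can be sketched with core Encode alone. This is a minimal illustration, assuming UTF-8 is what $LH->encoding would report; the byte string stands in for actual maketext output:

```perl
use Encode qw(from_to);

# maketext returns bytes in the handle's current encoding; re-encode
# them manually when a different output encoding is required.
my $bytes    = "caf\xc3\xa9";   # UTF-8 bytes, standing in for maketext output
my $from_enc = "UTF-8";         # what $LH->encoding would report
from_to($bytes, $from_enc, "ISO-8859-1");   # in-place conversion
# $bytes now holds the Latin-1 form "caf\xe9"
```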

If you need automatic Traditional Chinese/Simplified Chinese conversion, as GNU gettext smartly provides, do it yourself with Encode::HanExtra(3), too. There may be a solution for this in the future, but not now.

$text = $LH->maketext($key, @param...)

The same method as in Locale::Maketext(3), with a wrapper that returns the text string encoded according to the current encoding.

$LH->die_for_lookup_failures(SHOULD_I_DIE)

Maketext dies on lookup failures, but GNU gettext never fails. By default Locale::Maketext::Gettext follows the GNU gettext behavior. But if you are old-styled, or if you want better control over failures, set this to 1. Returns the current setting.
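Once dying is enabled, failures can be trapped in the usual Perl way. A minimal sketch, with handle setup elided and the message key purely illustrative:

```perl
$LH->die_for_lookup_failures(1);   # opt in to Maketext-style failures

my $text = eval { $LH->maketext("Some missing key") };
if ($@) {
    warn "translation lookup failed: $@";
    $text = "Some missing key";    # fall back to the key itself
}
```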

$LH->reload_text

Purge the MO text cache. It purges the MO text cache from the base class Locale::Maketext::Gettext. The next time maketext is called, the MO file will be read and parsed from disk again. Use this whenever your MO file is updated but you cannot shut down and restart the application, for example, when you are a co-hoster on a mod_perl-enabled Apache, or your mod_perl-enabled Apache is too vital to be restarted for every MO file update, or you are running a vital daemon, such as an X display server.
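For a long-running daemon, one common arrangement is to purge the cache on a signal, so updated MO files are picked up without a restart. The choice of SIGHUP is an assumption of this sketch, not part of the module:

```perl
# Re-read MO files on SIGHUP instead of restarting the daemon.
$SIG{HUP} = sub {
    $LH->reload_text;   # purge the class-wide MO text cache
    warn "MO text cache purged; lexicons re-read on next maketext\n";
};
```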

FUNCTIONS

($encoding, %Lexicon) = readmo($MOfile);

Read and parse the MO file. Returns the suggested default encoding and %Lexicon. The suggested encoding is the encoding of the MO file itself. This subroutine is called by the textdomain method to retrieve the current %Lexicon. The result is cached to reduce file I/O and parsing overhead. This is essential under mod_perl, where textdomain asks for %Lexicon on every request. This is the same way GNU gettext works. If you DO need to re-read a modified MO file, call the reload_text method above.

readmo() is exported by default.
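A sketch of calling readmo() directly, e.g. to inspect an MO file outside the maketext flow. The file path is hypothetical:

```perl
use Locale::Maketext::Gettext;   # exports readmo() by default

my ($encoding, %Lexicon) =
    readmo("/usr/share/locale/de/LC_MESSAGES/myapp.mo");
print "MO file encoding: $encoding\n";
print "number of messages: ", scalar(keys %Lexicon), "\n";
```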

NOTES

WARNING: Don't try to put any lexicon in your language subclass. When the textdomain method is called, the current lexicon will be replaced, but not appended. This is to accommodate the way textdomain works. Messages from the previous text domain should not stay in the current text domain.

An essential benefit of Locale::Maketext::Gettext over the original Locale::Maketext(3) is this: GNU gettext is multibyte safe, but Perl source code is not. GNU gettext is safe with Big5 characters like \xa5\x5c (Gong1). In Perl source you always have to escape bytes like \x5c, \x40, \x5b, etc., and your non-technical translators and reviewers will be presented with a mess, the so-called "Luan4Ma3". Sorry to say it, but it is in fact weird for a localization framework not to be multibyte-safe. But, well, here comes Locale::Maketext::Gettext to the rescue. With Locale::Maketext::Gettext, you can sit back and leave all this mess to the excellent GNU gettext from now on. ^_*'

The idea of Locale::Maketext::Gettext came from Locale::Maketext::Lexicon(3), a great work by autrijus. But it is simply not finished yet and not practically usable, so I decided to write a replacement.

The part that calls msgunfmt has been removed. The gettext MO file format is officially documented, so I decided to parse it myself. It is not hard, and it avoids the overhead of spawning a subshell. It also benefits from the fact that reading and parsing binary MO files is much faster than parsing PO text files, since no regular expressions are involved. Besides, msgunfmt is not portable to non-GNU systems.

Locale::Maketext::Gettext also solves Locale::Maketext(3)'s lack of encoding handling. I implemented this because it is what GNU gettext does. When %Lexicon is read from an MO file by readmo(), the encoding tagged in the MO file is used to decode the text into Perl's internal encoding. Then, when a string is extracted by maketext, it is encoded according to the current encoding value. The encoding can be changed at run time, so you can run a daemon and output in different encodings according to the language settings of individual users, without restarting the application. This is an improvement over Locale::Maketext(3), and is essential for daemons and mod_perl applications.
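The run-time encoding switch described above can be sketched as follows: one process serving users in different encodings. The class, domain, and encoding names are illustrative assumptions:

```perl
# One daemon process, per-user output encoding.
for my $user ({lang => "zh-tw", enc => "Big5"},
              {lang => "zh-cn", enc => "GB2312"}) {
    my $lh = MyPackage::L10N->get_handle($user->{lang}) or next;
    $lh->textdomain("mypackage");
    $lh->encoding($user->{enc});   # switch the output encoding at run time
    print $lh->maketext("Hello, world!!"), "\n";
}
```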

dgettext and dcgettext from GNU gettext are not implemented. It is not possible to temporarily switch the current text domain in the current design of Locale::Maketext::Gettext. Besides, it is meaningless: Locale::Maketext is object-oriented, so you can always create a new language handle for another text domain. This differs from the situation of GNU gettext. Also, the category is always LC_MESSAGES; it is meaningless to change it.

Avoid creating language handles with different text domains on the same localization subclass. This currently works, but it violates the basic design of Locale::Maketext(3). In Locale::Maketext(3), %Lexicon is saved as a class variable so that the lexicon inheritance system can work. So multiple language handles of the same localization subclass share the same lexicon space, and their lexicons clash. I tried to avoid this problem by saving a copy of the current lexicon as an instance variable and replacing the class lexicon with the current instance lexicon whenever it is changed by another language handle instance, but this involves large-scale memory copying, which hurts performance seriously. This is discouraged. You are advised to use a single text domain for a single localization class.

BUGS

All the problems I have noticed have been fixed. You are welcome to submit new ones. ^_*' Maybe long-winded documentation is a bug, too. :p

SEE ALSO

Locale::Maketext(3), Locale::Maketext::TPJ13(3), Locale::Maketext::Lexicon(3), Encode(3), bindtextdomain(3), textdomain(3). Also, please refer to the official GNU gettext manual at http://www.gnu.org/manual/gettext/.

AUTHOR

imacat <imacat@mail.imacat.idv.tw>

COPYRIGHT

Copyright (c) 2003 imacat. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.