NAME

pdf2xml - extract text from PDF files and wrap it in XML

SYNOPSIS

pdf2xml [OPTIONS] pdf-file > output.xml

OPTIONS

-c ............. split strings into character sequences before finding words
-h ............. skip de-hyphenation (keep hyphenated words)
-H size ........ max heap size for the Java VM
-J path ........ path to Java
-l lexicon ..... provide a list of words or a text in the target language
-L ............. skip lowercasing (which is switched on by default)
-m ............. skip merging character sequences (not recommended)
-r ............. skip 'pdftotext -raw'
-x ............. skip standard 'pdftotext'
-X ............. use pdfXtk to convert to XHTML (default)
-T ............. use Apache Tika for the basic conversion
-v ............. verbose output

DESCRIPTION

pdf2xml tries to combine the output of several conversion tools in order to improve the extraction of text from PDF documents. Currently it uses pdftotext, Apache Tika and pdfXtk. In the default mode it calls all three tools, and pdfXtk creates the basic XML file from which the final output is produced. Several post-processing heuristics then split and merge character sequences in order to clean up the text. Consider the example given below:

raw:    <p>PRESENTATION ET R A P P E L DES PRINCIPAUX RESULTATS 9</p>
clean:  <p>PRESENTATION ET RAPPEL DES PRINCIPAUX RESULTATS 9</p>

raw:    <p>2. Les c r i t è r e s de choix : la c o n s o m m a t i o n 
           de c o m b u s - t ib les et l e u r moda l i t é 
           d ' u t i l i s a t i on d 'une p a r t , 
           la concen t r a t ion d ' a u t r e p a r t 16</p>

clean:  <p>2. Les critères de choix : la consommation 
           de combustibles et leur modalité 
           d'utilisation d'une part, 
           la concentration d'autre part 16</p>
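The merge step shown above can be pictured roughly as follows: scan runs of single-character tokens and greedily join the longest run whose concatenation is a known word. This is only an illustrative sketch, not the tool's actual implementation; the function name and the tiny lexicon are invented for the example.

```python
def merge_chars(tokens, lexicon):
    """Greedily merge runs of single-character tokens into lexicon words."""
    out, i = [], 0
    while i < len(tokens):
        best = None
        if len(tokens[i]) == 1:
            # find the end of the run of single-character tokens
            j = i + 1
            while j < len(tokens) and len(tokens[j]) == 1:
                j += 1
            # try the longest candidate first, then shrink (min length 2)
            for k in range(j, i + 1, -1):
                cand = "".join(tokens[i:k])
                if cand.lower() in lexicon:
                    best = (cand, k)
                    break
        if best:
            out.append(best[0])
            i = best[1]
        else:
            out.append(tokens[i])
            i += 1
    return out

lexicon = {"rappel"}
print(merge_chars("PRESENTATION ET R A P P E L DES RESULTATS".split(), lexicon))
# ['PRESENTATION', 'ET', 'RAPPEL', 'DES', 'RESULTATS']
```

Runs that form no lexicon word are left untouched, which is why the `-l` lexicon matters: without it the merge step has nothing to validate candidates against.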

TODO

Conversion is quite slow, and loading Apache Tika anew for each conversion is inefficient. Using the server mode of Apache Tika would be a solution.

The character-merging heuristics are very simple. Greedily taking the longest string that forms a valid vocabulary word may, for some languages, produce many words that are wrong in context. The implementation of the merging procedure is probably also not the most efficient one.

The de-hyphenation heuristics could also be improved. The challenge is to keep them as language-independent as possible.
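One largely language-independent approach is to join a line-final hyphenated fragment with the next line's first token only when the joined form is attested in a lexicon, and to keep the hyphen otherwise. A rough sketch under that assumption, with invented names:

```python
def dehyphenate(lines, lexicon):
    """Join words split by a trailing hyphen when the joined form is known."""
    out = []
    carry = ""  # fragment carried over from a hyphenated line end
    for line in lines:
        words = line.split()
        if carry and words:
            joined = carry + words[0]
            if joined.lower() in lexicon:
                words[0] = joined                  # known word: drop the hyphen
            else:
                words[0] = carry + "-" + words[0]  # unknown: keep the hyphen
            carry = ""
        if words and words[-1].endswith("-"):
            carry = words.pop()[:-1]
        out.append(" ".join(words))
    if carry:  # flush a dangling fragment, if any
        out.append(carry + "-")
    return out

print(dehyphenate(["de combus-", "tibles et leur"], {"combustibles"}))
# ['de', 'combustibles et leur']
```

A lexicon built from the document itself (words that also occur unhyphenated) keeps this language-independent, at the cost of missing words that only ever appear hyphenated.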

SEE ALSO

Apache Tika: http://tika.apache.org

pdftotext (the Poppler developers): http://poppler.freedesktop.org

pdfXtk: http://sourceforge.net/projects/pdfxtk/

COPYRIGHT AND LICENSE

Copyright (C) 2013 by Joerg Tiedemann

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available.