NAME

WWW::Search::HotJobs - class for searching HotJobs

SYNOPSIS

require WWW::Search::Scraper;
$search = new WWW::Search::Scraper('HotJobs');

DESCRIPTION

This class is a HotJobs specialization of WWW::Search. It handles making and interpreting searches of HotJobs, http://www.HotJobs.com.

This class exports no public interface; all interaction should be done through WWW::Search objects.

OPTIONS

None at this time (2001.04.25)

search_url=URL

Specifies which server to query with the HotJobs protocol. The default is http://www.HotJobs.com/cgi-bin/job-search.

search_debug, search_parse_debug, search_ref are described in WWW::Search.

SEE ALSO

To make new back-ends, see WWW::Search, or the specialized HotJobs searches described in options.

HOW DOES IT WORK?

native_setup_search is called before we do anything. It initializes our private variables (which all begin with underscores) and sets up a URL to the first results page in {_next_url}.

native_retrieve_some is called (from WWW::Search::retrieve_some) whenever more hits are needed. It calls the LWP library to fetch the page specified by {_next_url}. It parses this page, appending any search hits it finds to {cache}. If it finds a ``next'' button in the text, it sets {_next_url} to point to the page for the next set of results, otherwise it sets it to undef to indicate we're done.
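The parse-and-advance cycle described above can be sketched in plain Perl. This is an illustration only, not the module's actual code: the HTML here is hypothetical (real HotJobs markup differs), and the variables stand in for the {cache} and {_next_url} slots mentioned above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hypothetical results page; the real HotJobs markup differs.
my $page = <<'HTML';
<table>
<tr><td><a href="/job/1">Perl Programmer</a></td></tr>
<tr><td><a href="/job/2">Web Developer</a></td></tr>
</table>
<a href="/search?page=2">NEXT &gt;</a>
HTML

my @cache;        # stands in for {cache}
my $next_url;     # stands in for {_next_url}

# Append each hit found on this page to the cache.
while ( $page =~ m{<a href="(/job/\d+)">([^<]+)</a>}g ) {
    push @cache, { url => $1, title => $2 };
}

# If there is a ``next'' link, remember it; otherwise it stays
# undef, which tells the caller we are done.
if ( $page =~ m{<a href="([^"]+)">NEXT &gt;</a>} ) {
    $next_url = $1;
}

print scalar(@cache), " hits; next page: ",
      defined $next_url ? $next_url : 'none', "\n";
```

In the real backend the page comes from LWP rather than a here-document, and the hits are WWW::SearchResult objects rather than plain hashes, but the control flow is the same.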

AUTHOR and CURRENT VERSION

WWW::Search::HotJobs is written and maintained by Glenn Wood, <glenwood@alumni.caltech.edu>.

The best place to obtain WWW::Search::HotJobs is from Martin Thurn's WWW::Search releases on CPAN. Because HotJobs sometimes changes its format in between his releases, sometimes more up-to-date versions can be found at http://alumni.caltech.edu/~glenwood/SOFTWARE/index.html.

COPYRIGHT

Copyright (c) 2001 Glenn Wood. All rights reserved.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

--------------------------

Search.pm and Search::AltaVista.pm (of which HotJobs.pm is a derivative) are Copyright (c) 1996-1998 University of Southern California. All rights reserved.

Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the University of Southern California, Information Sciences Institute. The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

XML Scaffolding

Look at the idea from the perspective of the XML "scaffold" I'm suggesting for parsing the response HTML.

(This is XML, but looks superficially like HTML)

    <HTML>
     <BODY>
      <TABLE NAME="name" or NUMBER="number">
       <TR TYPE="header"/>
       <TR TYPE="detail*">
        <TD BIND="title"/>
        <TD BIND="description"/>
        <TD BIND="location"/>
        <TD BIND="url" PARSE="anchor"/>
       </TR>
      </TABLE>
     </BODY>
    </HTML>

This scaffold describes the relevant skeleton of an HTML document; there are HTML and BODY elements, of course. Then the <TABLE> entry tells our parser to skip to the TABLE in the HTML named "name", or to skip "number" TABLE entries (default=0, to pick up the first TABLE element). Then the TABLE is described. The first <TR> is described as a "header" row; the parser throws that one away. The second <TR> is a "detail" row (the "*" means multiple detail rows, of course). The parser picks up each <TD> element, extracts its content, and places that in the hash entry corresponding to its BIND= attribute. Thus, the first TD goes into $result->_elem('title') (I needed to learn to use LWP::MemberMixin. Thanks, another lesson learned!), the second TD goes into $result->_elem('description'), and so on. (Of course, some of these are _elem_array, but those details will be resolved later.) The PARSE= in the url TD suggests a way for our parser to do special handling of a data element.

The generic scaffold parser would take this XML and convert it to a hash/array to be processed at run time; we wouldn't actually use XML at run time. A backend author would use that hash/array in his native_setup_search() code, calling the "scaffolder" scanner with that hash as a parameter.

As I said, this works great if the response is TABLE structured; so far I haven't seen any responses that aren't.

This converts to an array tree that looks like this:

   my $scaffold = [ 'HTML', 
                    [ [ 'BODY', 
                      [ [ 'TABLE', 'name' ,                  # or 'name' = undef; multiple <TABLE number=n> mean n 'TABLE's here ,
                        [ [ 'NEXT', 1, 'NEXT &gt;' ] ,       # meaning how to find the NEXT button.
                          [ 'TR', 1 ] ,                      # meaning "header".
                          [ 'TR', 2 ,                        # meaning "detail*"
                            [ [ 'TD', 1, 'title' ] ,         # meaning clear text binding to _elem('title').
                              [ 'TD', 1, 'description' ] ,
                              [ 'TD', 1, 'location' ] ,
                              [ 'TD', 2, 'url' ]             # meaning anchor parsed text binding to _elem('url').
                            ]
                        ] ]
                      ] ]
                    ] ]
                 ];
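Walking such an array tree takes only ordinary Perl recursion. As an illustration (not part of the module), here is a sketch that collects the BIND names from the TABLE subtree above; the collect_bindings name is hypothetical, and a real scaffold scanner would match nodes against the HTML as it descends rather than just listing them.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The TABLE subtree of the scaffold shown above.
my $scaffold = [ 'TABLE', 'name',
                 [ [ 'NEXT', 1, 'NEXT &gt;' ],
                   [ 'TR', 1 ],
                   [ 'TR', 2,
                     [ [ 'TD', 1, 'title' ],
                       [ 'TD', 1, 'description' ],
                       [ 'TD', 1, 'location' ],
                       [ 'TD', 2, 'url' ],
                     ]
                   ]
                 ]
               ];

# Recursively collect the BIND names from every 'TD' node, in order.
sub collect_bindings {
    my ($node) = @_;
    my ($tag, @rest) = @$node;
    return ( $rest[1] ) if $tag eq 'TD';   # third slot is the BIND name
    my @names;
    for my $part (@rest) {
        next unless ref $part eq 'ARRAY';  # skip scalars like 'name' or 1
        for my $child (@$part) {
            push @names, collect_bindings($child) if ref $child eq 'ARRAY';
        }
    }
    return @names;
}

my @bindings = collect_bindings($scaffold);
print "@bindings\n";   # title description location url
```

The same traversal pattern extends naturally to the 'NEXT' and 'TR' node types: the scanner would consume HTML tokens as it recurses, binding TD content via $result->_elem() instead of merely collecting names.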