I hope you can make this public. It is very interesting. I'd like to do
the same thing, except that I'd transport the Web docs through UUCP to
sites that only have dialup, batched access. I'd like to get daily diffs
between what the site has and the current state of the real Web.
I believe this is best done through a CERN httpd proxy cache, which stores
the fetched Web documents and files under a directory hierarchy.
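A minimal sketch of the diff step, assuming the proxy cache is mirrored
into a snapshot directory once a day. All paths and file names here are
invented for illustration; a real setup would point at the actual CERN
httpd cache tree and hand the result to uucp for the dial-up neighbor.

```shell
# Build two toy trees standing in for yesterday's snapshot ("old") and
# today's proxy cache ("new"). In practice these would be real cache dirs.
mkdir -p old/docs new/docs
echo "welcome v1" > old/docs/index.html
echo "welcome v2" > new/docs/index.html
echo "brand new page" > new/docs/news.html

# -r recurses, -N treats files present on only one side as additions or
# deletions, -u gives unified context. The output is the daily batch that
# would travel over UUCP. diff exits 1 when the trees differ, so || true.
diff -ruN old new > daily.diff || true

# On the receiving site: start from yesterday's mirror and apply the patch
# to reconstruct today's state, including the newly created page.
cp -pr old mirror
(cd mirror && patch -p1 < ../daily.diff)
```

The same patch file could be queued with something like
`uucp daily.diff remotesite!~/incoming/` and applied by a cron job on the
far end, so the dial-up site only ever transfers the changes.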
Such a thing would extend the audience of the Web to non-IP sites, such
as BBSes and non-profits that might not want to spend the time, equipment,
and connection fees on even a SLIP/PPP link. Of course, interactive forms,
searches, etc. would not work.
Re:
> We are currently developing such a thing. It will be able to
> download a whole bunch documents following all the links. Well, not
> *all* the links since this would provide you with the contents of
> the whole Web (or more likely run out of disk space before).
--
Miguel A. Paraz <cparaz@balut.admu.edu.ph>   -= Dream: Free Access for the Masses =-
Ateneo de Manila University, Philippines