I agree. Except that this isn't evolving away from LaTeX's language
orientation, but towards LaTeX's fundamental goal. Other than trivial
syntactic differences (\begin ... \end and \command{argument} instead
of <tag> ... </tag>), the ideal of LaTeX has always been to describe
the document's logical structure, and to leave formatting decisions to
the document style and to preamble commands. Admittedly, the need for
high-quality typesetting and its implementation with TeX macros have
meant that the current version of LaTeX has not achieved that ideal.
LaTeX3 will get closer, but will still be far from the goal.
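To make the syntactic point concrete, here is the same logical
structure written in the two notations (a sketch; the exact tags are
only illustrative):

    \begin{itemize}               <ul>
    \item A first point.          <li> A first point.
    \item A second point.         <li> A second point.
    \end{itemize}                 </ul>

In both notations the markup names the structure; what an itemized
list actually looks like on the page is decided somewhere else.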
Most of what I have seen of the HTML+ discussion has been couched in
terms of SGML, as if finding the correct SGML representation would
solve the problem.
SGML is not a solution. SGML is a statement of the problem.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LaTeX provides a first step towards the goal of producing good quality
output from a logical description of the document. I propose that the
best practical approach is to evolve from an imperfect, successful
system; not to start from scratch--or from the current HTML, which is
just about from scratch as far as serious scientific document
production is concerned. (One can view most of the current HTML as
the addition of simple hypertext links to a syntactic variant of an
infinitesimal subset of LaTeX.)
LaTeX succeeded because it did a reasonable job, and because it
appeared at the right time to take advantage of the platform provided
by TeX. There now seems to be a window of opportunity for a new system
to take advantage of the Web in a similar way. A new system must
provide the reasonably high quality output that people have come to
expect of typesetting programs, as well as the hypertext facilities
that people expect from the Web. And the system must be developed
now, before a myriad of competing systems appear. We don't have the
luxury of developing the perfect system. We have neither the time nor
the resources. We need to develop something adequate that works
today.
With a couple of weeks of TeX hacking and C programming, using
existing tools for creating gif files from dvi files, Stephan Merz is
in the process of creating a system that will take standard LaTeX
input and produce a hypertext document that can be displayed by
Mosaic. Because of the limitations of HTML, it will not be able to
make arbitrary regions active; only one or more complete paragraphs
can be made active. And performance problems with Mosaic makes it
impractical to have more than a handful of such regions on any one
page. Instead of the usual small active regions, there will be
highlighted "indexed terms", and an accompanying active index.
It would take fairly simple extensions to HTML, and simple
modifications to a dvi to Postscript converter, to allow real
hypertext, sprinkled with active links, to be produced in this way.
This is not a LaTeX2HTML hack in which the program tries to convert
LaTeX to HTML; TeX does all the typesetting, and the typeset pages are
displayed as gif files.
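For instance, link information could ride through the dvi file in
\special commands, which TeX already provides for passing
device-specific data to the driver. The syntax below is made up for
illustration:

    % Hypothetical commands; TeX ignores the \special text, and the
    % modified dvi converter would interpret it, recording the
    % bounding box of the enclosed text as an active region.
    \newcommand{\hrefbegin}[1]{\special{link begin #1}}
    \newcommand{\hrefend}{\special{link end}}

    ... see \hrefbegin{http://www.example.org/def.html}the
    definition\hrefend{} in Section 2 ...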
I'm not proposing this as the standard approach to be adopted. What
I'm pointing out is that this is the kind of result one can achieve
with a couple of man-weeks of effort by using existing programs.
Imagine what could be done with a man-year of similar effort. In
contrast, it took Knuth seven years to build TeX. It would take
ordinary mortals dozens of man-years to build the kind of ideal system
that we'd all like to see. We need a more modest goal that can be
achieved quickly. Otherwise, we'll spend the next couple of years
dreaming about an ideal system and let the opportunity to build a real
one slip away.
Leslie Lamport
P.S. I'm not sure if my cc will get this to the mailing list. Please
forward it if it doesn't.