Ah, I think I see... I was interpreting style sheets as analogous to
templates (i.e., we have a style sheet for each section of the online
version of our magazine, to try to give a coherent look to things). I
now realize you mean stylesheets in the sense of "this is how to render
this arbitrary DTD". That I am definitely more comfortable with, though
I think there is a long way to go in improving what we have before
attacking this problem (even though I take the point of your subsequent
parable about the Indian: attack problems at their roots rather than at
their symptoms).
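To make sure we're now talking about the same thing, here is a toy
sketch (in Python, purely illustrative - the element names and the rule
format are invented) of stylesheets in that second sense: the client
has no built-in knowledge of the tags, and the stylesheet alone tells
it how to paint each one.

    # Hypothetical stylesheet: element name -> (prefix, suffix) rule.
    # Nothing here is hardwired into the client.
    STYLESHEET = {
        "title": ("\n== ", " ==\n"),
        "para":  ("\n", "\n"),
        "emph":  ("*", "*"),
    }

    def render(element, children):
        """Render one node of a document tree from an arbitrary DTD."""
        prefix, suffix = STYLESHEET.get(element, ("", ""))
        body = "".join(c if isinstance(c, str) else render(*c)
                       for c in children)
        return prefix + body + suffix

    # A document whose DTD the client has never seen before:
    doc = ("para", ["Stylesheets let a ", ("emph", ["generic"]),
                    " client render this."])
    print(render(*doc))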
As to whether arbitrary-DTD SGML browsers could be made lightweight, I
defer to more knowledgeable people, as I have almost no experience with
SGML (though since it is an entire environment, I figured it would be a
rather large beast).
> Bo Frese Rasmussen <bfrasmus@eso.org> writes:
> >In order to truly overcome these problems I think we need to invent a
> >new protocol. Something that would allow us to have more detailed
> >interaction, like not having to repaint the entire screen each time
> >something needs to be represented to the user, etc. Something along the
> >lines of Mosaic<->HTTPD, where you have a generic client that will work
> >with whatever services are provided on the net, without the need to
> >download and install any additional programs. I guess you could view
> >this as a higher-level X protocol, or a lower-level HTTP protocol
> >(closer to the HTTP end).
If a lot of work could be put into making the X protocol as compact and
compressible as possible (I heard there were gains in X11R6, but I'm not
sure), I think this could be the way to go. Add in better security and
we're set. X Windows runs on just about every platform out there, and
there is a large body of usage history to draw from.
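As a rough illustration of why compressibility matters for a chatty
display protocol, here is a back-of-the-envelope sketch in Python (the
message layout below is made up, not real X traffic): repetitive
drawing requests squeeze down dramatically under a generic stream
compressor.

    import zlib

    # Fabricated traffic: a display protocol tends to repeat nearly
    # identical drawing requests, which is exactly what stream
    # compression is good at.
    requests = b"".join(
        b"DRAWLINE win=42 gc=7 x1=%d y1=%d x2=%d y2=%d\n"
        % (i, i, i + 10, i + 10)
        for i in range(1000)
    )
    compressed = zlib.compress(requests, 9)
    print(len(requests), "bytes raw ->", len(compressed), "compressed")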
> I agree that a more "heavyweight" protocol for online document
> delivery is desired. For a lot of commercial publishers, the
> connection time penalty will become a serious problem.
This is more of a technical issue, but I'll make an attempt at
commenting on it anyway. The system overhead of opening a socket for
every transaction isn't necessarily higher than that of keeping one
socket open for a long period. I do expect to see large commercial
publishers getting web hits on the order of NCSA's 100,000/day within
the next year, but with a combination of strong servers and round-robin
IP addressing this isn't necessarily a problem. The back-end databases
I envision most publishers setting up for interactive services will
probably dwarf the IP overhead anyway. Combine that with the proposals
for an MGET method in HTTP, and the connection-time penalty will
probably be negligible.
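For the curious, the per-transaction setup cost is easy to measure.
Here is a crude sketch in Python (the host name is a placeholder;
point it at any HTTP server): it times N bare TCP connects, which is
the overhead an MGET-style method, or any keep-the-socket-open scheme,
would pay once instead of N times.

    import socket, time

    HOST, PORT, N = "www.example.com", 80, 20   # placeholder host

    t0 = time.time()
    for _ in range(N):
        s = socket.create_connection((HOST, PORT))  # setup cost only
        s.close()
    elapsed = time.time() - t0
    print("avg connection setup: %.1f ms" % (1000 * elapsed / N))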
> However, I do
> not think that a lower-level protocol for screen display is needed.
> All that is needed is more intelligent renderers. Besides, there are
> half a dozen protocols for distributed graphics (NAPLPS, RFC???, etc.).
Hopefully these will come up in the forthcoming discussion on VRML :)
> With intelligent renderers, stylesheets, arbitrary DTDs, pay-per-use,
> and encryption, etc., the WWW will live a long life indeed. Without all
> of these, I feel it will never live up to its full potential.
No one disagrees; the question is simply where each of these belongs in
the spectrum of protocols and standards, and how much attention each
deserves.
> >The idea is to make a simple HTTP script that works as a dispatcher - that
> >is: On first connection it would assign a unique id to the user, and start
>
> What happens when each of these scripts invokes a multi-megabyte
> frontend? Every time a connection over HTTP comes in, we reexecute it.
> There *are* ways around this, but I think they feel kludgy.
Multi-megabyte programs on the server, you mean? Modular programs, of
course...
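To sketch what I mean by modular (the socket path and the backend's
state below are hypothetical; this is just one shape a "way around
this" could take): the multi-megabyte frontend runs once as a
long-lived backend process, and the only thing re-executed on each
HTTP hit is a trivial relay.

    import os, socket

    SOCK = "/tmp/frontend.sock"              # made-up rendezvous point

    def backend():
        """Runs once; pays the multi-megabyte startup cost one time."""
        heavy_state = {"banner": b"resident frontend ready\n"}  # stand-in
        srv = socket.socket(socket.AF_UNIX)
        if os.path.exists(SOCK):
            os.unlink(SOCK)
        srv.bind(SOCK)
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            conn.recv(4096)                  # read the query (ignored here)
            conn.sendall(heavy_state["banner"])
            conn.close()

    def dispatch(query):
        """The per-hit script: connect, relay, exit. Cheap to re-execute."""
        c = socket.socket(socket.AF_UNIX)
        c.connect(SOCK)
        c.sendall(query)
        reply = c.recv(65536)
        c.close()
        return reply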
> >hallam@dxal18.cern.ch (HALLAM-BAKER Phillip)
> >Whenever you get a copy of WiReD you know it is genuine because it has the
> >trademark stamped on the front cover. Misuse of a trademark is a criminal
> >act in many countries when the intent is counterfeiting.
Well, our trademark logo isn't necessarily a mark of authentication -
someone with a full-color photocopier could make a fair attempt at
copying it, and of course that would be illegal.
> Do you think this will stop people from copying it? I remember well
> the days when most software came with copy protection, and people
> spent *weeks* breaking it, and then distributing copies. On a network
> of 20,000,000 people, do you think trademarks will work? How about in
> countries where there are no laws, or where the government just
> doesn't care?
Trademarks will of course work as they always have. I'll avoid getting
into issues of copyright, trademark, and "the net being above the law",
as they don't really belong on www-talk - for an interesting discussion
of them, check out the MTV vs. Adam Curry debate on the newsgroups. But
I will say that these issues don't necessarily have to be part of
either the document language or HTTP - simply give us authentication
and encryption, and I think publishers will be happy.
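To illustrate just how little the protocol needs to provide, here is a
minimal sketch of the authentication half in Python. A shared-secret
MAC stands in for the public-key signature a real publisher would
want, and the key and document are of course made up.

    import hashlib, hmac

    KEY = b"publisher-secret"                 # made-up shared secret
    document = b"<title>Issue 2.04</title>..."

    # Publisher side: sign the document before serving it.
    signature = hmac.new(KEY, document, hashlib.sha256).hexdigest()

    # Reader side: verify the copy really came from the publisher.
    def verify(doc, sig):
        expected = hmac.new(KEY, doc, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)

    print(verify(document, signature))                 # True
    print(verify(document + b" tampered", signature))  # False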
> My hat *does* go off to all who have brought the net this far, and it
> will be ready for those who take it further.
Same here!
Brian