|>In message <Pine.3.89.9405271042.A4118-0100000@brazos.is.rice.edu>, Rick Troth
|>writes:
|>> I'm surprised and crushed by Dan's response.
|>
|>Gee... I didn't mean to crush anybody!
|>
|>>> HTTP is not Internet Mail.
|>>
|>> Right. And Internet Mail is broken. Let's not see HTTP
|>>break because someone misinterpreted the spec. We need to clarify this.
|>>I say that we should clarify it in the looser direction w/r/t plain
|>>text and trailing whitespace in particular. I see no reason to
|>>penalize clients and servers that have platform limitations ...
|>>unless it's just out of spite. What's the deal, Dan?
|>
|>I don't consider it spite. I just consider it clean design.
|>
|>We clearly disagree. I think both sides of the argument have
|>been presented. I don't plan on writing any code in this area
|>any time soon, so it's really up to somebody else to decide
|>what to deploy. I might try to influence the HTTP spec editor,
|>though :-)
I would like to state the case for being strict. Problems caused by
misinterpretation of the spec should be solved by making the spec clearer
and unambiguous, not by making the servers tolerant of ambiguous input.
|>>> HTTP is not for the human eye: it's for a piece of software that groks
|>>> TCP (or perhaps some other reliable transport eventually...).
|>>
|>> If by this statement you're pointing out a misimplication
|>>in my note, I accept the correction. I didn't mean to suggest
|>>that HTTP is for human consumption. What I *did* (still do)
|>>mean to suggest is that, to the greatest extent possible,
|>>HTTP be clearly defined as a PLAIN TEXT protocol.
|>
|>I disagree. Internet mail and USENET news serve a community
|>that is not tied together by reliable 8-bit protocols. HTTP
|>does. I see no reason to support multiple representations
|>of the same information in HTTP headers.
I take the middle line: HTTP at present is designed to support debugging
using telnet, and the developers want that. We should look towards the day
when we can move to a more compressed binary form for header information.
This should be a clean replacement of the RFC 822 idiom with something
else, but we don't have to, and don't need to, think about that at the moment.
|>For example, look at XDR (part of NFS, etc.). Some systems
|>are little-endian and some are big-endian, but they all write
|>the bytes on the wire in the same order.
Have to disagree with you here Dan. It is easy enough to make the protocol
byte-order neutral so that the machines automatically detect and adapt to the
byte order being sent. Adopting one order as a `standard' is unnecessary and a
bit pointless when the `standard' adopted by the Internet is different from the
standard in the marketplace, which is set by MSDOS rather than by the handful of
UNIX machines. All processors coming out today are bi-endian; protocols should
be as well.
The scheme I use in PROTOPLASM is to send a version-encoding number at the
start of the stream. This has, by definition, bit 31 clear and bit 7 set. It
is sent out in the sender's native byte order by default. Doing this means that
the same code can be written for all platforms without ugly #ifdefs, which
is a worthwhile saving in itself since kludges like configure aren't needed.
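
To make the detection concrete, here is a minimal sketch of how a receiver
could exploit that bit pattern. The actual PROTOPLASM value isn't quoted here,
so the constant and function names below are illustrative assumptions, not the
real implementation.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative version word: bit 31 clear, bit 7 set, as the scheme
     * requires.  (The real PROTOPLASM value is not given; this one is
     * made up for the example.) */
    #define PROTO_VERSION 0x00010080UL

    /* Byte-reverse a 32-bit word. */
    static uint32_t swap32(uint32_t x)
    {
        return ((x & 0x000000FFUL) << 24) | ((x & 0x0000FF00UL) <<  8) |
               ((x & 0x00FF0000UL) >>  8) | ((x & 0xFF000000UL) >> 24);
    }

    /* Sender: write the word in whatever order this machine uses. */
    static void put_version(unsigned char buf[4])
    {
        uint32_t v = PROTO_VERSION;
        memcpy(buf, &v, sizeof v);
    }

    /* Receiver: read the word natively.  If the peer has the opposite
     * byte order, the byte that carried bit 7 now sits in the top
     * position, so bit 31 appears set; a single test tells us whether
     * everything that follows must be byte-swapped.
     * Returns 1 if subsequent words need swapping, 0 otherwise. */
    static int get_version(const unsigned char buf[4], uint32_t *version)
    {
        uint32_t v;
        memcpy(&v, buf, sizeof v);
        if (v & 0x80000000UL) {
            *version = swap32(v);
            return 1;
        }
        *version = v;
        return 0;
    }

Because the version number rides in that same word, a receiver can also refuse
or adapt to an encoding it does not understand before parsing anything else,
which is the kind of up-front versioning argued for below.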
One thing we must insist on in any new scheme: it MUST have a version-number
system worked out in advance, so that we don't need the HTTP/1.0 kludges.
--
Phillip M. Hallam-Baker

Not speaking for anyone else.