|>troth@is.rice.edu (Rick Troth) wrote:
|>> It's non-trivial to indicate end-of-file without closing the
|>> TCP connection. This is probably why FTP uses a second TCP connection
|>> (but I'm not Postel; you'll have to ask him)
|>>
|>> MIME is a Good Thing. Multipart/mixed is a Good Thing.
|>> Given the lack of out-of-band EOF indicators aside from shutting down,
|>> multipart/mixed with just >one< object may help a bit. The problem
|>> then becomes keeping the multipart boundary unique or (better) out of band.
|>> This implies Base64 encoding for too many things.
|>
|>One way around this is to define a new MIME
|>Content-Transfer-Encoding: binary-packetized,
|>something like:
|>
|> The server sends a binary stream in packets,
|> each packet prefixed with a 2-byte packet
|> length in network byte order. Packets can be
|> any convenient size from 1-65K bytes. The data
|> stream is terminated by a zero-length packet.
|>
|>This would have very little overhead compared to
|>straight binary encoding, and much less overhead than
|>base64.
Hey, we are getting somewhere.
I like this idea, except that 2 bytes is not really enough. I would prefer to
have at least 32 and preferably 64 bits. 32 bits is only 4GB, and for intra-process
transfers 4GB is not much.
Alternatively, what we can do is:
Content-Transfer-Encoding: binary-packet; field=2
for a 2-byte length field. Clients would have to accept 1-, 2- and 4-byte fields.
Perhaps we should go for an 8-byte field as well... it would not be too difficult
even on a 32-bit machine.
This could be overlaid on any content type and is orthogonal, which is
rather nice. The only difficulty I see is that the packets should be the
outermost encapsulation, but at the moment encryption is the outermost
encapsulation, since you want to compress before you encrypt.
-- Phillip M. Hallam-Baker	Not speaking for anyone else.