Whatever you wanna call it.
> I thought a 2-byte segment length was reasonable because
> typically a server process is going to use a small (< 64K)
> buffer *anyway* as it's sending data over the network --
Perhaps not. Perhaps it looks at the actual size of the file it will be
sending, allocates a buffer that large, sucks it in, writes it out, and
away you go. I use programs that work this way all the time.
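Something like this, roughly (a sketch only: the function name and the
sock descriptor are made up for illustration, error handling is elided,
and a careful program would loop on short reads and writes):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: size the buffer from the file itself, suck the whole
       thing in, write the whole thing out.  Assumes blocking
       descriptors transfer the full count in one call. */
    void send_file(const char *path, int sock)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        char *buf;

        fstat(fd, &st);
        buf = malloc(st.st_size);
        read(fd, buf, st.st_size);     /* one read for the whole file */
        write(sock, buf, st.st_size);  /* one write onto the network  */
        free(buf);
        close(fd);
    }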
> a program that reads a 4 megabyte file into memory and writes
> it back out with only two system calls is going to have
> problems
Why? I do it all the time. Why do you think OSes are using the VM manager
to handle persistent-file I/O as well? Why not map the file into memory and
then squirt it back out a chunk at a time? If the file is bigger than 64K,
you're going to need multiple writes (one per segment) to do it this way,
even if it's otherwise the most efficient way of moving files onto the
network.
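Roughly like so (a sketch: SEGMAX just encodes the hypothetical 2-byte
length field from above, and error handling is elided again):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define SEGMAX 65536   /* 2-byte length field => 64K max segment */

    /* Sketch: let the VM manager do the buffering.  Map the file,
       then squirt it out one segment-sized chunk at a time. */
    void send_file_mmap(const char *path, int sock)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        char *p;
        off_t off;

        fstat(fd, &st);
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        for (off = 0; off < st.st_size; off += SEGMAX) {
            size_t n = st.st_size - off;
            if (n > SEGMAX)
                n = SEGMAX;
            write(sock, p + off, n);   /* one write per segment */
        }
        munmap(p, st.st_size);
        close(fd);
    }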
>-- and this also provides a reasonable maximum
> buffer size for clients to allocate.
You can still allocate whatever buffer size you want. If you allocate a
64K buffer when your segments are only 4K long, you're going to be
wasting space too. If you allocate a 64K buffer and get a 128K segment,
you'll need to do two reads. But then, that's OK, because you'd be "having
problems" anyway.
-- Darren