I don't understand this. The load averaged over time will not change just
because somebody is opening simultaneous connections. The same amount of
data is being transferred either way. And with the caching that is part of
1.0, the amount of data will be *drastically* reduced by using Netscape.

Do a simple chart with 4 users retrieving 4 things (a doc and 3 images),
where every transfer takes a single time unit and each user starts at
1-time-unit increments.

old way:
--------
                                   |    4    |
user 1: aaaaaaaaabbbbbbbbbccccccccc|ddddddddd|
user 2:          aaaaaaaaabbbbbbbbb|ccccccccc|ddddddddd
user 3:                   aaaaaaaaa|bbbbbbbbb|cccccccccddddddddd
user 4:                            |aaaaaaaaa|bbbbbbbbbcccccccccddddddddd

Now expand this scenario over infinite time, with a new user arriving every
time unit: the server settles into the state marked at time unit 4 above,
always processing 4 transfers at once.
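
Here is a quick sketch in Python to check that claim (the simulation is
mine and purely illustrative; the numbers -- 4 items per user, one new
user per time unit -- just mirror the chart above):

    # Illustrative sketch: concurrent transfers per time unit when each
    # user fetches its 4 items back-to-back ("old way") and a new user
    # arrives every time unit.
    def concurrency_serial(users, items):
        # user u is busy during time units u .. u+items-1
        busy = [range(u, u + items) for u in range(users)]
        horizon = users + items - 1
        return [sum(t in b for b in busy) for t in range(horizon)]

    print(concurrency_serial(10, 4))
    # -> [1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 3, 2, 1]
    # after the ramp-up, the server sits at 4 simultaneous transfers
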
new way:
--------
Each user opens all 4 connections in its own time slot and finishes within
it. The constant load over time is _also_ 4.
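
The same kind of sketch covering both cases (again illustrative only; the
parameters are assumptions, not measurements):

    # Illustrative sketch: average concurrent transfers, old way vs. new
    # way, with one new user arriving per time unit.
    def avg_load(parallel, users=1000, items=4):
        if parallel:
            # new way: `items` transfers at once, each lasting 1 time unit
            spans = [range(u, u + 1) for u in range(users) for _ in range(items)]
        else:
            # old way: one transfer at a time for `items` time units
            spans = [range(u, u + items) for u in range(users)]
        horizon = max(s.stop for s in spans)
        return sum(len(s) for s in spans) / horizon

    print(avg_load(parallel=False), avg_load(parallel=True))
    # -> ~3.99 and 4.0 -- the average load is the same either way
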
This is an overly simplistic look at Web serving, but I don't think it is
entirely out of line. Our Web server had 250,000 hits yesterday. A fair
look at the load would be average hits/sec, and that has nothing to do
with parallel versus linear loading. The only thing this *might* affect
is the maximum load at any given instant. However, with the natural
variance in when people hit a server, my intuition tells me that this
would average out over time as well...
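
For what it's worth, the average rate behind that 250,000 figure (my
arithmetic, not a measured peak):

    # 250,000 hits spread over a day is a modest average rate
    print(250000 / (24 * 60 * 60))  # ~2.89 hits/sec
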
BTW, I have set my # of connections up to 200 and pounded a BSD-based
server with no major problems...
-Jon