|>I'd also like to see some benchmarks for servers using user authentication.
|>I did some simple stuff the other day with NCSA httpd on our Indigo which
|>suggested that for sites that expect their heavy load to be around 3
|>accesses per second, a user database of about 10,000 is as high as one
|>should go, though I think that could be improved by a daemon that keeps
|>the user database in memory (an option only as long as the user database
|>is the same for the whole system, I suppose).
User authentication can really foul up the system. If you use RSA public key,
the time spent on key checking starts to dominate very quickly. This
is why EIT and I are trying to work out how to minimise the number of
RSA ops needed. We are doing a lot of symmetric-key and hashing work at
the moment.
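The shape of that idea can be sketched as: pay the expensive public-key cost once per session, then authenticate each individual request with a cheap keyed hash. This is only an illustration of the principle, not our actual protocol; the RSA exchange itself is elided, and the function names and the modern HMAC/SHA-256 primitives here are stand-ins I have chosen for the sketch.

```python
import hmac, hashlib, os

# One expensive public-key operation per session would deliver this
# key; here we just generate it locally to keep the sketch runnable.
session_key = os.urandom(16)

def authenticate_request(key: bytes, request: bytes) -> bytes:
    """Per-request authenticator: a keyed hash, not an RSA signature."""
    return hmac.new(key, request, hashlib.sha256).digest()

def verify_request(key: bytes, request: bytes, tag: bytes) -> bool:
    """Server-side check; constant-time compare to avoid timing leaks."""
    return hmac.compare_digest(authenticate_request(key, request), tag)

# Each request now costs one hash, not one RSA op.
tag = authenticate_request(session_key, b"GET /index.html")
assert verify_request(session_key, b"GET /index.html", tag)
```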
On strategies for making the code run faster, I think we need to consider
some radical departures. The essential starting point, however, is to improve
the code base.
On the to-do list are making the MIME parsers sensible and improving error
recovery and memory handling. Once that is done we should try to make the
interface to configuration files much cleaner. This would then allow us to
splice in a database engine. A database may be slow compared to raw UNIX for
100 users, but a good database takes more or less the same time to query
100 users, 10,000 users or 100,000. More on this stuff later...
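The point about database scaling can be made concrete with a toy example: an indexed (here, hashed) lookup costs roughly the same single probe whether the table holds 100 entries or 100,000, whereas scanning a flat password file grows linearly with the user count. The user records below are invented purely for illustration.

```python
# Build toy user tables of different sizes; a dict models an
# indexed database table (one hash probe per lookup).
def make_users(n):
    return {"user%06d" % i: "hashed-pw-%d" % i for i in range(n)}

small = make_users(100)
large = make_users(100000)

# Both lookups are a single probe, regardless of table size.
assert small["user000050"] == "hashed-pw-50"
assert large["user099999"] == "hashed-pw-99999"

# The flat-file equivalent is a linear scan over every record,
# which is what gets slow as the user database grows.
def flat_file_lookup(records, name):
    for user, pw in records:
        if user == name:
            return pw
    return None

assert flat_file_lookup(list(large.items()), "user099999") == "hashed-pw-99999"
```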
The encryption stuff I expect will eventually be solved through hardware.
Whatever the technical merits, we will never get the public to accept a
software-only system. There must be a physical action required to confirm,
something a little harder than clicking a button (press here to order
a lorry load of quick-drying cement / Ferrari F40 / Clipper chip). This
indicates to me that there will be a credit-card-type device and a cheap
RS-232 reader ($5 each in quantity, sent out to subscribers for free). So
I expect the RSA stuff to have hardware assist in any case.
-- Phillip M. Hallam-Baker, not speaking for anyone else.