From: "Kabra, Anant" <Anant.Ka...@usa.xerox.com>
Subject: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/19
Message-ID: <fa.hor0n9v.1uk839u@ifi.uio.no>#1/1
X-Deja-AN: 574471422
Original-Date: Tue, 18 Jan 2000 17:29:52 -0500
Sender: owner-linux-ker...@vger.rutgers.edu
Content-transfer-encoding: 7BIT
Original-Message-id: <DADB394AF196D211958700805F0DBA31032C41C8@USA0875MS1>
To: linux-ker...@vger.rutgers.edu
Content-return: allowed
Content-Type: text/plain; charset=iso-8859-1
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
I don't know if people have read this yet, but it looks like a good analysis
of Linux kernel threading by IBM:
http://www-4.ibm.com/software/developer/library/java2/index.html
Anant Kabra
anant.ka...@usa.xerox.com
(716)-423-3955 (telephone)
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: "Davide Libenzi" <davi...@maticad.it>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/19
Message-ID: <fa.h3gdorv.agmmrg@ifi.uio.no>#1/1
X-Deja-AN: 574868136
Original-Date: Wed, 19 Jan 2000 20:08:02 +0100
Sender: owner-linux-ker...@vger.rutgers.edu
Content-Transfer-Encoding: 7bit
Original-Message-ID: <022201bf62b0$81cee7d0$1f0104c0@maticad>
References: <fa.lnsnfqv.1c0o227@ifi.uio.no>
To: "David Lang" <dl...@diginsite.com>
Original-References: <Pine.LNX.4.21.0001191323530.1241-100...@dlang.diginsite.com>
X-Priority: 3
Content-Type: text/plain; charset="iso-8859-1"
X-Orcpt: rfc822;linux-kernel-outgoing-dig
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2314.1300
Organization: Internet mailing list
X-MSMail-Priority: Normal
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
On Wednesday, January 19, 2000, at 10:25 PM, David Lang <dl...@diginsite.com> wrote:
> This has probably been asked before, but how difficult would it be to have
> two different schedulers available as compile-time options? That way the
> system could be optimized for the expected load.
Hi David,
my patch has great performance (80% with 300 tasks) with a lot of tasks
and low overhead (1.5% with 2 tasks).
And my patch has no optimizations at all regarding CPU fetches and the like.
IMVHO 1-1.5% of overhead is a price we can afford given the performance
with many tasks.
My patch equals the current implementation with 8 tasks.
Cheers,
Davide.
--
Debian, the freedom in freedom.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Horst von Brand <vonbr...@pincoya.inf.utfsm.cl>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/20
Message-ID: <fa.j26bdiv.1p2aphb@ifi.uio.no>#1/1
X-Deja-AN: 575242713
Original-Date: Thu, 20 Jan 2000 09:51:01 -0300
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <200001201251.JAA02902@pincoya.inf.utfsm.cl>
References: <fa.h3gdorv.agmmrg@ifi.uio.no>
To: "Davide Libenzi" <davi...@maticad.it>
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
"Davide Libenzi" <davi...@maticad.it> said:
[...]
> my patch has great performance (80% with 300 tasks) with a lot of tasks
> and low overhead (1.5% with 2 tasks).
> And my patch has no optimizations at all regarding CPU fetches and the like.
> IMVHO 1-1.5% of overhead is a price we can afford given the performance
> with many tasks.
> My patch equals the current implementation with 8 tasks.
So it is a net loss. This machine here (a personal workstation) has
typically 1 to 3 running tasks.
Hundreds of tasks are just not a typical (perhaps not even a realistic)
workload.
--
Dr. Horst H. von Brand mailto:vonbr...@inf.utfsm.cl
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Phillip Ezolt <ez...@perf.zko.dec.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/20
Message-ID: <fa.m6rfi2v.115i9b5@ifi.uio.no>#1/1
X-Deja-AN: 575358105
Original-Date: Thu, 20 Jan 2000 08:35:45 -0500 (EST)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <Pine.OSF.3.96.1000120082459.25370G-100000@perf.zko.dec.com>
References: <fa.j26bdiv.1p2aphb@ifi.uio.no>
To: Horst von Brand <vonbr...@pincoya.inf.utfsm.cl>
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
On Thu, 20 Jan 2000, Horst von Brand wrote:
> "Davide Libenzi" <davi...@maticad.it> said:
>
> [...]
>
> > my patch has great performance (80% with 300 tasks) with a lot of tasks
> > and low overhead (1.5% with 2 tasks).
> > And my patch has no optimizations at all regarding CPU fetches and the like.
> > IMVHO 1-1.5% of overhead is a price we can afford given the performance
> > with many tasks.
> > My patch equals the current implementation with 8 tasks.
>
> So it is a net loss. This machine here (a personal workstation) has
> typically 1 to 3 running tasks.
>
> Hundreds of tasks are just not a typical (perhaps not even a realistic)
> workload.
Yes it is.
If you are running a webserver.
Or a highly threaded application.
Or a machine with a lot of users. (For example, a University unix server)
Or an ftp server. (Where is the Linux equivalent of FreeBSD's ftp.cdrom.com?)
It is really a question of "Where does Linux want to go?"
If it wants to be a high performance server, Linux needs a new scheduler.
If it wants to be the most efficient desktop machine, then it doesn't
need it NOW. However, the average number of programs people are
running on their machines is increasing, not decreasing.
Linux's real penetration has been in the server market. Why not make
it the best server it can be?
--Phil
Compaq: High Performance Server Division/Benchmark Performance Engineering
---------------- Alpha, The Fastest Processor on Earth --------------------
Phillip.Ez...@compaq.com |C|O|M|P|A|Q| ez...@perf.zko.dec.com
------------------- See the results at www.spec.org -----------------------
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Blu3Viper <da...@killerlabs.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.n1asvgv.8keni8@ifi.uio.no>#1/1
X-Deja-AN: 575505834
Original-Date: Thu, 20 Jan 2000 14:37:58 -0800 (PST)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <Pine.LNX.4.21.0001201436070.29486-100000@james.kalifornia.com>
References: <fa.j26bdiv.1p2aphb@ifi.uio.no>
To: Horst von Brand <vonbr...@pincoya.inf.utfsm.cl>
X-Sender: da...@james.kalifornia.com
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
On Thu, 20 Jan 2000, Horst von Brand wrote:
> > my patch has great performance (80% with 300 tasks) with a lot of tasks
> > and low overhead (1.5% with 2 tasks).
> > And my patch has no optimizations at all regarding CPU fetches and the like.
> > IMVHO 1-1.5% of overhead is a price we can afford given the performance
> > with many tasks.
> > My patch equals the current implementation with 8 tasks.
>
> So it is a net loss. This machine here (a personal workstation) has
> typically 1 to 3 running tasks.
>
> Hundreds of tasks are just not a typical (perhaps not even a realistic)
> workload.
So it is a net gain on any machine with 8 or more running processes. Pretty
much all of my machines fall in that range and most of them are personal
workstations.
-d
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: h...@transmeta.com (H. Peter Anvin)
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.inmfqjv.cl8628@ifi.uio.no>#1/1
X-Deja-AN: 575675885
Original-Date: 20 Jan 2000 16:29:39 -0800
Sender: owner-linux-ker...@vger.rutgers.edu
Content-Transfer-Encoding: 7BIT
Original-Message-ID: <86899j$ihh$1@cesium.transmeta.com>
References: <fa.n1asvgv.8keni8@ifi.uio.no>
To: linux-ker...@vger.rutgers.edu
Original-References: <200001201251.JAA02...@pincoya.inf.utfsm.cl> <Pine.LNX.4.21.0001201436070.29486-100...@james.kalifornia.com>
X-Authentication-Warning: palladium.transmeta.com: bin set sender to n...@transmeta.com using -f
Content-Type: text/plain; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Transmeta Corporation, Santa Clara CA
Copyright: Copyright 2000 H. Peter Anvin - All Rights Reserved
MIME-Version: 1.0
Reply-To: h...@transmeta.com (H. Peter Anvin)
Newsgroups: fa.linux.kernel
Disclaimer: Not speaking for Transmeta in any way, shape, or form.
X-Loop: majord...@vger.rutgers.edu
Followup to: <Pine.LNX.4.21.0001201436070.29486-100...@james.kalifornia.com>
By author: Blu3Viper <da...@killerlabs.com>
In newsgroup: linux.dev.kernel
>
> So it is a net gain on any machine with 8 or more running processes. Pretty
> much all of my machines fall in that range and most of them are personal
> workstations.
>
*RUNNING* processes? Most desktops don't have even one running
process most of the time.
-hpa
--
<h...@transmeta.com> at work, <h...@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Ingo Molnar <mi...@chiara.csoma.elte.hu>
Subject: Linux scheduler, overscheduling performance, threads
Date: 2000/01/21
Message-ID: <fa.n5ov0tv.54smgn@ifi.uio.no>#1/1
X-Deja-AN: 575735289
Original-Date: Fri, 21 Jan 2000 13:52:04 +0100 (CET)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <Pine.LNX.4.10.10001211337160.2252-100000@chiara.csoma.elte.hu>
References: <fa.inmfqjv.cl8628@ifi.uio.no>
To: "H. Peter Anvin" <h...@transmeta.com>
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
On 20 Jan 2000, H. Peter Anvin wrote:
> > So it is a net gain on any machine with 8 or more running processes.
> > Pretty much all of my machines fall in that range and most of them
> > are personal workstations.
> *RUNNING* processes? Most desktops don't have even one running
> process most of the time.
Yep, I believe many people are missing the point. Linux schedules just
fine with 20000+ threads present:
moon:~/l> ps aux | wc -l
20137
moon:~/l> ./lat_ctx -s 0 2
"size=0k ovr=2.82
2 2.08
(I.e., on a system with 20137 threads created, we schedule from one process
to another in 2.08 microseconds. This is exactly as fast as on a system
with only a few processes.)
The issue is how many threads are runnable at once. If that is much more than
the number of processors, then either 1) the system is hopelessly
overloaded and needs a hardware upgrade, or 2) the application (or kernel) is
for some reason marking too many threads runnable, and this creates
overscheduling situations. Such situations have to be avoided, but
debugging them is not simple. We cannot tell in
advance whether it's the application's or the kernel's fault. But the most
important thing is that it's definitely not the scheduler's fault. Don't
shoot the scheduler, it's just the messenger.
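(For reference, what lat_ctx does is roughly the following ping-pong between
two processes over a pair of pipes - this is a sketch of the idea, not the
lmbench source; each round trip costs two context switches plus pipe
overhead, which the real tool reports separately as the ovr= figure:)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

int main(void)
{
        int p1[2], p2[2], i, iters = 100000;
        char tok = 'x';
        struct timeval t0, t1;
        double usec;

        if (pipe(p1) || pipe(p2)) {
                perror("pipe");
                exit(1);
        }
        if (fork() == 0) {              /* child: bounce the token back */
                for (i = 0; i < iters; i++) {
                        read(p1[0], &tok, 1);
                        write(p2[1], &tok, 1);
                }
                _exit(0);
        }
        gettimeofday(&t0, NULL);
        for (i = 0; i < iters; i++) {   /* parent: send, wait for the echo */
                write(p1[1], &tok, 1);
                read(p2[0], &tok, 1);
        }
        gettimeofday(&t1, NULL);
        wait(NULL);
        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.2f usec per switch (pipe overhead included)\n",
               usec / (2.0 * iters));
        return 0;
}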
-- mingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Ian Soboroff <i...@cs.umbc.edu>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.j2vkcnv.l5sv0a@ifi.uio.no>#1/1
X-Deja-AN: 575808153
Original-Date: 21 Jan 2000 09:47:38 -0500
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <m3vh4n5vn9.fsf@danube.cs.umbc.edu>
To: linux-ker...@vger.rutgers.edu
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
User-Agent: Gnus/5.0803 (Gnus v5.8.3) Emacs/20.4
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Phillip Ezolt <ez...@perf.zko.dec.com> writes:
> > Hundreds of tasks are just not a typical (perhaps not even a realistic)
> > workload.
>
> Yes it is.
> [...]
> Or a highly threaded application.
Didn't Larry McVoy make a point a while back (which relates well to
the IBM paper) that if your application depends on huge numbers of
threads, you're always going to keep bumping up against the scheduler?
A lot of people throw lots of threads at a problem, and it can really
be bad design.
In the vast majority of cases, I suspect it's easier and probably
better to redesign the app than to redesign the scheduler. That said,
the improvements already made are quite good and needed.
ian
--
----
Ian Soboroff i...@cs.umbc.edu
University of MD Baltimore County http://www.cs.umbc.edu/~ian
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Larry McVoy <l...@bitmover.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.h4b4rnv.1c0ceaf@ifi.uio.no>#1/1
X-Deja-AN: 575761207
Original-Date: Thu, 20 Jan 2000 20:01:34 -0800
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <200001210401.UAA12354@work.bitmover.com>
References: <fa.e5035mv.pn8h8n@ifi.uio.no>
To: Peter Rival <fri...@zk3.dec.com>
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
: First of all, you're right.
Ahhh, I love it when people suck up to me :-)
Actually, smart move, it set my mind to actually reading the rest of your
message. Everybody? Please start out all mail to me with "You're right".
What? No? You don't think so? Ahh, well, I can hope :-)
: Larry, you mentioned that you had thoughts on what Linux has to do to work on big
: servers. I'm imagining, given your background and the content of this email, that
: they are also ideas that won't harm the performance on small systems (e.g.
: desktops). I must have missed them somehow - could you recap?
OK, but they are pretty radical. Linus and I have talked them over and he
has always been of the opinion "sounds good, sounds like it might be right,
where's the code?". And I'm sidetracked onto BitKeeper.
Whatever, can the people who are really interested in high performance
take a look at
http://www.bitmover.com/llnl/smp.{ps,pdf}
and then
http://www.bitmover.com/llnl/labs.{ps,pdf}
I'll briefly summarize here. No justification for these statements is
given here; there is some in the papers.
Premise 1: SMP scaling is a bad idea beyond a very small number of processors.
The reasoning for this is that when you start out threading a kernel,
it's just a few locks. That quickly evolves into more locks, and
for a short time, there is a 1:1 mapping between each sort of object
in the system (file, file system, device, process, etc) and a lock.
So there can be a lot of locks, but there is only one reader/writer
lock per object instance. This is a pretty nice place to be - it's
understandable, explainable, and maintainable.
Then people want more performance. So they thread some more and now
the locks aren't 1:1 to the objects. What a lock covers starts to
become fuzzy. Things break down quickly after this, because what
happens is that it becomes unclear whether you are covered or not and
it's too much work to figure it out, so each time a thing is added
to the kernel, it comes with a lock. Before long, your 10 or 20
locks become 3000 or more, like what Solaris has. This is really bad:
it hurts performance in far-reaching ways and it is impossible to
undo.
Premise 2: most/all locking follows a canonical form of "take a global
data structure, split it up into N, where N is a function of the
number of CPUs, and give each CPU or group of CPUs its own data
structure". Classic example: global/local run queues.
Premise 3: it is far easier to take a bunch of operating system images
and make them share the parts they need to share (i.e., the page
cache), than to take a single image and pry it apart so that it
runs well on N processors.
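(To make premise 2 concrete, a hand-waved user-space sketch - every name in
it is invented for illustration, it is not from any kernel: one queue and
one lock per CPU, and in the common case a CPU only ever takes its own lock.)

#include <pthread.h>

#define NCPU 4

struct task {
        struct task *next;
};

struct cpu_rq {
        pthread_mutex_t lock;        /* protects this one queue only      */
        struct task    *head;        /* runnable tasks "owned" by one CPU */
        unsigned long   nr_running;
};

static struct cpu_rq rq[NCPU] = {
        [0 ... NCPU - 1] = { PTHREAD_MUTEX_INITIALIZER, 0, 0 }
};

/* Common case: each CPU touches only its own queue, so CPUs never contend
 * with each other except when a balancer deliberately walks someone else's. */
static void enqueue_local(int cpu, struct task *t)
{
        pthread_mutex_lock(&rq[cpu].lock);
        t->next = rq[cpu].head;
        rq[cpu].head = t;
        rq[cpu].nr_running++;
        pthread_mutex_unlock(&rq[cpu].lock);
}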
All of this leads us to an interesting twist on the clustering idea,
one that I give credit for mostly to DEC (they have some of this
implemented already). Suppose you were to take a single big machine
and run another instance of the OS every N processors where N is chosen
so that it is well under the knee of the locking curve, i.e., around 4,
maybe 8, but certainly no more than 8.
Multiple OS's on a single box? Wacky, huh? But if you think about it,
you've just instantly taken *EVERY* data structure in the kernel and
multithreaded it. Cool, no? And it cost you nothing but some boot
code.
That's kinda cute but not very useful because what you really want is to
be able to have all processors working together on the same data with only
one copy of the data. In other words, I don't care if I have one OS or
1000, I want all processors to be able to mmap /space/damn_big_file and
poke at it. And I don't want any stinkin' DSM - I want real, hardware
based coherency. Well, bucky, I'm here to tell ya, praise the lord,
you can have it :-) You need to make an SMPFS which lets other OS's
put reference counts on your inodes. The operation is extremely
similar to what you have to do when you invalidate a page - you shoot
down the other processors' TLB entries. So we need something like
that, in reverse.
If you think I'm waving my hands wildly, I am. But this is definitely
doable, and as hard as it seems, it is easily an order of magnitude easier
than threading the kernel to get to even 32 processors. I've lived
through that twice; it's about a 7-year process (in hindsight - beforehand,
everyone said it would be maybe 18 months).
Read the papers. Think. Think again. Let's talk. I can set up a
perf@bitmover alias if this becomes too off-topic.
--lm
P.S. I call these SMP clusters, to distinguish them from HA or HPC clusters.
SMP clusters are for the enterprise - these are the clusters that will get
Linux on big iron running Oracle and kicking serious butt. Fast.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Horst von Brand <vonbr...@pincoya.inf.utfsm.cl>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.j66fbbv.1s2eqpa@ifi.uio.no>#1/1
X-Deja-AN: 575812163
Original-Date: Fri, 21 Jan 2000 11:34:48 -0300
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <200001211434.LAA10259@pincoya.inf.utfsm.cl>
References: <fa.m6rfi2v.115i9b5@ifi.uio.no>
To: Phillip Ezolt <ez...@perf.zko.dec.com>
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Davide Libenzi <davi...@maticad.it>, David Lang <dl...@diginsite.com>, said:
> On Thu, 20 Jan 2000, Horst von Brand wrote:
[...]
> > Hundreds of tasks are just not a typical (perhaps not even a realistic)
> > workload.
> Yes it is.
>
> If you are running a webserver.
Hundreds of CGIs running at the same time? Wow. But there I'd split load
among machines way before...
> Or a highly threaded application.
Highly stupid idea, typically.
> Or a machine with a lot of users. (For example, a University unix server)
I have such machines here (dozens of users, plus random services). Rarely
gets to 10.
> Or an ftp server. (Where is the Linux equivalent of FreeBSD's
> ftp.cdrom.com?)
Hundreds of people downloading at the same time is not the same as hundreds
of running tasks...
> It is really a question of "Where does Linux want to go?"
Benchmarkland, or real-world useful system?
> If it wants to be a high performance server, Linux needs a new
> scheduler.
Based on which hard facts?
> If it wants to be the most efficient desktop machine, then it doesn't
> need it NOW. However, the average number of programs people are
> running on their machines is increasing, not decreasing.
Yes. I expect load averages to be in the ones soon, not 0.1s anymore.
> Linux's real penetration has been in the server market. Why not make
> it the best server it can be?
Nobody is saying we shouldn't do it. But before screwing around, _measure_
where the real bottlenecks (for _real_ use, not benchmarks) are.
--
Dr. Horst H. von Brand mailto:vonbr...@inf.utfsm.cl
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Phillip Ezolt <ez...@perf.zko.dec.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/21
Message-ID: <fa.mab3giv.11lqar7@ifi.uio.no>#1/1
X-Deja-AN: 575790225
Original-Date: Fri, 21 Jan 2000 10:03:02 -0500 (EST)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <Pine.OSF.3.96.1000121094754.22133D-100000@perf.zko.dec.com>
References: <fa.j66fbbv.1s2eqpa@ifi.uio.no>
To: Horst von Brand <vonbr...@pincoya.inf.utfsm.cl>
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
The amount of time wasted on a bad scheduler is the following:
O(number of times schedule() is called * number of running processes)
When I run SPECweb96 tests here, I see both a large number of running
processes and a huge number of context switches.
Here's a sample of the vmstat data:
vmstat -n 1
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 2320 2058424 587520 1061464 0 0 0 0 5700 34 0 7 93
0 0 0 2320 2058424 587520 1061464 0 0 0 0 5706 26 0 7 93
18 0 0 2320 2073664 587776 1061464 0 0 0 1 19950 15612 2 69 28
20 0 0 2320 2073032 588032 1061464 0 0 0 0 11753 11261 3 95 2
22 0 0 2320 2072192 588224 1061464 0 0 0 1 8020 7516 3 96 1
23 0 0 2320 2071560 588480 1061464 0 0 0 0 8258 7419 3 96 1
22 0 0 2320 2071120 588480 1061464 0 0 0 0 10207 9682 3 96 1
24 0 0 2320 2070320 588864 1061464 0 0 0 1225 8267 7618 3 96 1
23 0 0 2320 2069448 589384 1061464 0 0 0 0 11220 9875 3 96 1
21 0 0 2320 2068680 589640 1061464 0 0 0 0 10280 9952 3 96 1
21 0 0 2320 2067936 589896 1061464 0 0 0 578 8317 7571 3 96 1
22 0 0 2320 2067336 590024 1061464 0 0 0 0 9641 7892 3 96 1
24 0 0 2320 2066936 590088 1061464 0 0 0 0 8680 7402 3 96 1
24 0 0 2320 2065752 590664 1061464 0 0 0 1095 11344 10920 3 95 1
23 0 0 2320 2065216 590920 1061464 0 0 0 0 8037 7108 3 95 1
...
Notice: 24 running processes and ~7000 context switches.
That is a lot of overhead. Every second, roughly 7000*24 goodness values are
calculated, not the (20*3) that a desktop system sees. This is a scalability issue.
A better scheduler means better scalability.
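To see where that multiplication comes from: every call to schedule() rescans
the whole run queue and scores each runnable task before picking one. The
following is a simplified paraphrase of that loop - not the kernel source,
and the goodness() stand-in below is invented, not the kernel's real formula:

/* Why schedule() is O(number of runnable tasks): each call walks the
 * entire run queue and scores every candidate before choosing one. */
struct task {
        struct task *run_next;  /* run-queue link             */
        int counter;            /* remaining timeslice ticks  */
        int last_cpu;           /* used for an affinity bonus */
};

/* stand-in scoring: prefer timeslice left, small bonus for cache warmth */
static int goodness(const struct task *p, int this_cpu)
{
        return p->counter + (p->last_cpu == this_cpu ? 1 : 0);
}

static struct task *pick_next(struct task *runqueue, int this_cpu)
{
        struct task *p, *best = 0;
        int w, best_w = -1000;

        for (p = runqueue; p; p = p->run_next) {   /* O(nr_running) per call */
                w = goodness(p, this_cpu);
                if (w > best_w) {
                        best_w = w;
                        best = p;
                }
        }
        /* ~7000 schedule() calls/sec x ~24 runnable = ~170,000 scores/sec */
        return best;
}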
Don't tell me benchmark data is useless. Unless you can give me data
from a real system showing where its faults are, benchmark data is all we
have.
SPECweb96 pushes Linux until it bleeds. I'm telling you where it
bleeds. You can fix it or bury your head in the sand. It might not
be what your system is seeing today, but it will be in the future.
Would you rather fix it now, or wait until someone else has thrown down
the performance gauntlet?
--Phil
Compaq: High Performance Server Division/Benchmark Performance Engineering
---------------- Alpha, The Fastest Processor on Earth --------------------
Phillip.Ez...@compaq.com |C|O|M|P|A|Q| ez...@perf.zko.dec.com
------------------- See the results at www.spec.org -----------------------
On Fri, 21 Jan 2000, Horst von Brand wrote:
> Davide Libenzi <davi...@maticad.it>, David Lang <dl...@diginsite.com>, said:
> > On Thu, 20 Jan 2000, Horst von Brand wrote:
>
> [...]
>
> > > Hundreds of tasks are just not a typical (perhaps not even a realistic)
> > > workload.
>
> > Yes it is.
> >
> > If you are running a webserver.
>
> Hundreds of CGIs running at the same time? Wow. But there I'd split load
> among machines way before...
>
> > Or a highly threaded application.
>
> Highly stupid idea, typically.
>
> > Or a machine with a lot of users. (For example, a University unix server)
>
> I have such machines here (dozens of users, plus random services). Rarely
> gets to 10.
>
> > Or an ftp server. (Where is the Linux equivalent of FreeBSD's
> > ftp.cdrom.com?)
>
> Hundreds of people downloading at the same time is not the same as hundreds
> of running tasks...
>
> > It is really a question of "Where does Linux want to go?"
>
> Benchmarkland, or real-world useful system?
>
> > If it wants to be a high performance server, Linux needs a new
> > scheduler.
>
> Based on which hard facts?
>
> > If it wants to be the most efficient desktop machine, then it doesn't
> > need it NOW. However, the average number of programs people are
> > running on their machines is increasing, not decreasing.
>
> Yes. I expect load averages to be in the ones soon, not 0.1s anymore.
>
> > Linux's real penetration has been in the server market. Why not make
> > it the best server it can be?
>
> Nobody is saying we shouldn't do it. But before screwing around, _measure_
> where the real bottlenecks (for _real_ use, not benchmarks) are.
> --
> Dr. Horst H. von Brand mailto:vonbr...@inf.utfsm.cl
> Departamento de Informatica Fono: +56 32 654431
> Universidad Tecnica Federico Santa Maria +56 32 654239
> Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Brian Hurt <bh...@talkware.net>
Subject: Re: Linux scheduler, overscheduling performance, threads
Date: 2000/01/21
Message-ID: <fa.mj94hiv.rkidr6@ifi.uio.no>#1/1
X-Deja-AN: 575896983
Original-Date: Thu, 20 Jan 2000 22:57:07 -0600 (CST)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <Pine.LNX.4.10.10001202229390.1899-100000@mars.talkware.net>
References: <fa.n5ov0tv.54smgn@ifi.uio.no>
To: Ingo Molnar <mi...@chiara.csoma.elte.hu>
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Having followed this thread off and on since its inception, and being a Java
programmer myself, can I offer some observations?
Thousands of threads in a program is not unreasonable. If you want to
take full advantage of a 128-CPU machine, for example, you need _at_
_least_ 128 threads. If your threads spend most of their time blocking,
you need even more threads - you need to overschedule - to make sure you
generally have enough non-blocking threads so that CPUs aren't going
to waste. Unfortunately, due to the vagaries of the system, you will have
points when most of the threads become runnable at once.
User-level threads are not a full solution - they help, and are a good
thing, but they are not a silver bullet. The basic problem is that there are
still ways for a process to block that can't be intercepted and "faked" by
the user-level threads - page faulting, for instance. And if a thread
blocks, all threads that share that process are also blocked. Plus, all
the problems and difficulties of scheduling are not removed; they're
simply shoved onto the threading library.
VolanoMark is a real application, and is really sold. People do really
write programs like this- except that they're generally for the Enterprise
market. The question here is whether Linux is just a desktop/small-server OS
or whether it's also going to be an enterprise OS. This isn't meant to be a snide
or insulting question- I'd actually _prefer_ Linux to simply be the best
desktop/small server OS out there. But if Linux is going to play in the
enterprise market- running the same programs and doing the same jobs
(albeit slower and cheaper) as that Enterprise 10000 server, it had better
be ready to deal with applications that spawn thousands of threads.
You're not going to be able to reeducate the hordes of computer pundits
and anonymous cowards trumpeting Linux as the one true OS (or disparaging
it in favor of this other one true OS)- but the kernel developers should
know the answer.
On Fri, 21 Jan 2000, Ingo Molnar wrote:
>
> On 20 Jan 2000, H. Peter Anvin wrote:
>
> > > So it is a net gain on any machine with 8 or more running processes.
> > > Pretty much all of my machines fall in that range and most of them
> > > are personal workstations.
>
> > *RUNNING* processes? Most desktops don't have even one running
> > process most of the time.
>
> yep, many people i believe are missing the point. Linux schedules just
> fine if there are 20000+ threads running:
>
> moon:~/l> ps aux | wc -l
> 20137
> moon:~/l> ./lat_ctx -s 0 2
> "size=0k ovr=2.82
> 2 2.08
>
> (ie. on a system with 20137 threads created we schedule from one process
> to another in 2.08 microseconds. This is exactly as fast as on a system
> with only a few processes.)
>
> the issue is, how many threads are running at once. If it's much more than
> the number of processors then the system is either 1) hopelessly
> overloaded and needs a hardware upgrade 2) the application (or kernel) for
> some reason is marking too many threads to run, and this creates
> overscheduling situations. Such situations have to be avoided, but
> debugging them is not simple. We cannot tell in
> advance whether it's the application's or the kernel's fault. But the most
> important thing is that it's definitely not the scheduler's fault. Don't
> shoot the scheduler, it's just the messenger.
>
> -- mingo
>
>
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.rutgers.edu
> Please read the FAQ at http://www.tux.org/lkml/
>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.gp552vv.1o58qhf@ifi.uio.no>#1/1
X-Deja-AN: 575973913
Original-Date: Fri, 21 Jan 2000 18:33:52 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12Bisc-0002Qf-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.j2vkcnv.l5sv0a@ifi.uio.no>
To: i...@cs.umbc.edu (Ian Soboroff)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> the IBM paper), that if your application depends on huge numbers of
> threads, you're always going to keep bumping up against the scheduler?
> a lot of people throw lots of threads at a problem and it can really
> be bad design.
That is the least of your worries. 1000 threads is 8MB of kernel stacks, and
enough switching of tasks that you might as well turn most of your
cache off. A computer is a state machine. Threads are for people who can't
program state machines.
There are plenty of cases where Linux is most definitely not helping the
situation, notably asynchronous block I/O.
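(For anyone who hasn't written one: the state-machine style meant here is the
classic single-process event loop - one task multiplexing every connection
instead of burning a thread per client. A bare-bones echo-server sketch, with
error handling stripped:)

#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        int lfd = socket(AF_INET, SOCK_STREAM, 0), maxfd, fd, one = 1;
        struct sockaddr_in a;
        fd_set all, rd;
        char buf[4096];

        memset(&a, 0, sizeof(a));
        a.sin_family = AF_INET;
        a.sin_port = htons(8080);                 /* s_addr 0 == INADDR_ANY */
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        bind(lfd, (struct sockaddr *)&a, sizeof(a));
        listen(lfd, 128);

        FD_ZERO(&all);
        FD_SET(lfd, &all);
        maxfd = lfd;

        for (;;) {
                rd = all;
                select(maxfd + 1, &rd, 0, 0, 0);  /* sleep until something is ready */
                for (fd = 0; fd <= maxfd; fd++) {
                        if (!FD_ISSET(fd, &rd))
                                continue;
                        if (fd == lfd) {          /* new client: just remember the fd */
                                int c = accept(lfd, 0, 0);
                                if (c >= 0) {
                                        FD_SET(c, &all);
                                        if (c > maxfd)
                                                maxfd = c;
                                }
                        } else {                  /* advance this client's "state" */
                                int n = read(fd, buf, sizeof(buf));
                                if (n <= 0) {
                                        close(fd);
                                        FD_CLR(fd, &all);
                                } else {
                                        write(fd, buf, n);
                                }
                        }
                }
        }
}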
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.h93cu7v.157gm9f@ifi.uio.no>#1/1
X-Deja-AN: 575992650
Original-Date: Fri, 21 Jan 2000 23:18:51 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12BnKP-0002lr-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.mab3giv.11lqar7@ifi.uio.no>
To: ez...@perf.zko.dec.com (Phillip Ezolt)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> The amount of time wasted on a bad scheduler is the following:
>
> O(Num of Times schedule is called * Number of running processes)
>
> When I run SPECWeb96 tests here, I see both a large number of running
> processes and a huge number of context switches.
SPECweb96 is about as relevant to real-world web performance as the colour
of the car you own. And in this case massively so. Run your test with thttpd.
Your run queue length will be _one_. Always one, never more, and under load
never less. It's an architectural issue in your web server.
With SPECweb99, which does at least pretend to be a credible benchmark, you would
see more running because of your CGIs, although in most real-world cases your
CGI will be short-lived and, if performance-critical, will be a module in your
web server, so no more threads.
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: torva...@www.transmeta.com (Linus Torvalds)
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.jchrvpv.197o6qb@ifi.uio.no>#1/1
X-Deja-AN: 576001948
Original-Date: 21 Jan 2000 11:43:39 -0800
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-ID: <86actb$1ct$1@penguin.transmeta.com>
References: <fa.m7c6ibv.1ag4m15@ifi.uio.no>
To: linux-ker...@vger.rutgers.edu
Original-References: <200001201920.LAA31...@work.bitmover.com> <Pine.LNX.4.21.0001201818410.148-100...@dlang.diginsite.com>
X-Authentication-Warning: palladium.transmeta.com: bin set sender to n...@transmeta.com using -f
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Transmeta Corporation
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
In article <Pine.LNX.4.21.0001201818410.148-100...@dlang.diginsite.com>,
David Lang <dl...@diginsite.com> wrote:
>Shortly before I went and purchased $10,000 encryption co-processors for
>my SSL web servers it was not unusual to see a 5 min loadave of 30-50 (and
>one time I saw it go up to 144!!). This was on AIX quad 233 2G ram RS/6000
>servers, after I got the encryption co-processors the loadave dropped to
>5-10
Ok, before this gets out of hand, let me just clarify:
- under loads like the above, scheduling speed MEANS ABSOLUTELY
NOTHING.
Nada. Zilch. Zero.
You aren't spending any time scheduling - you're spending all the time
COMPUTING. The high load is because you're compute-bound, and you
probably end up scheduling maybe a few hundred times a second. If that.
With a default timeslice in the 10-200 millisecond range (Linux defaults
to 200ms, and that's probably too dang long), compute-active processes
aren't really much of an issue.
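(For the record, the 200ms figure falls out of the 2.2-era defaults, roughly:

        HZ                = 100 ticks/sec  ->  10 ms per tick
        default timeslice = 20 ticks       ->  20 * 10 ms = 200 ms

so a purely compute-bound task gets preempted at most about five times a
second per CPU, which is why scheduling cost is noise for loads like that.)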
Basically, the situation where you have BOTH
- large number of runnable processes
AND
- lots of scheduling activity
are very rare indeed. They are rare even in threaded code, unless that
threaded code has a lot of synchronization points and a lot of
synchronous inter-thread communication.
The IBM numbers are very interesting. The cache-line optimization is an
obvious performance advantage, and has been incorporated into the recent
kernels. But I think some people think that this is a common problem,
and think that "load average" automatically equals scheduling. It isn't,
and it doesn't.
Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.h8vcofv.1530c1f@ifi.uio.no>#1/1
X-Deja-AN: 576035632
Original-Date: Fri, 21 Jan 2000 23:13:26 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12BnF9-0002l7-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.j66fbbv.1s2eqpa@ifi.uio.no>
To: vonbr...@pincoya.inf.utfsm.cl (Horst von Brand)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> > Or an ftp server. (Where is the Linux equivalent of FreeBSD's
> > ftp.cdrom.com?)
>
> Hundreds of people downloading at the same time is not the same as hundreds
> of running tasks...
A scalable anonymous ftpd needs a few threads in Linux, and for sendfile a lot,
but that's down to flaws in the aio facilities in Linux - and in Unix in
general. It's not an area the Unix of old really handled well.
That's the cause; that's what you fix.
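(Concretely, the sort of call that costs you those threads - a sketch only,
error handling dropped: sendfile() avoids the user-space copy, but on a plain
blocking socket it can stall for as long as the client is slow to drain it,
so every slow client pins a task for the whole transfer unless there is real
async I/O underneath.)

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Serve one file down one already-connected socket: no per-request buffer
 * and no copy through user space, but the caller blocks until it is done. */
static void send_whole_file(int sock, const char *path)
{
        struct stat st;
        off_t off = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
                return;
        fstat(fd, &st);
        while (off < st.st_size)
                if (sendfile(sock, fd, &off, st.st_size - off) <= 0)
                        break;          /* error, or nothing more we can push */
        close(fd);
}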
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Linux scheduler, overscheduling performance, threads
Date: 2000/01/22
Message-ID: <fa.h7kl3vv.17m0rhf@ifi.uio.no>#1/1
X-Deja-AN: 576074530
Original-Date: Fri, 21 Jan 2000 22:56:24 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12Bmyg-0002jX-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.mj94hiv.rkidr6@ifi.uio.no>
To: bh...@talkware.net (Brian Hurt)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> Thousands of threads in a program is not unreasonable. If you may want to
More than a couple of threads per CPU is highly unreasonable.
> take full advantage of a 128 CPU machine, for example, you need _at_
> _least_ 128 threads. If your threads spend most of their time blocking,
Linux doesn't run on any 128-CPU machines, so it would be bad to tune for that.
You want 256 threads on a 128-CPU box? OK, no argument.
> desktop/small server OS out there. But if Linux is going to play in the
> enterprise market- running the same programs and doing the same jobs
> (albeit slower and cheaper) as that Enterprise 10000 server, it had better
> be ready to deal with applications that spawn thousands of threads.
And what happens when someone comes along with a performance issue? Do you
try to cope with excessive threads, 20MB of kernel overhead and trashed
caches, or do you put your thinking hat on? In Java you have very poor AIO
facilities, which doesn't help (I believe the newest Java stuff fixes this?)
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: "David S. Miller" <da...@redhat.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.het298v.137a6ob@ifi.uio.no>#1/1
X-Deja-AN: 576090112
Original-Date: Fri, 21 Jan 2000 20:03:34 -0800
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <200001220403.UAA01639@pizda.ninka.net>
References: <fa.h93cu7v.157gm9f@ifi.uio.no>
To: a...@lxorguk.ukuu.org.uk
Original-References: <E12BnKP-0002lr...@the-village.bc.nu>
X-Authentication-Warning: pizda.ninka.net: davem set sender to da...@redhat.com using -f
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Date: Fri, 21 Jan 2000 23:18:51 +0000 (GMT)
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
> When I run SPECWeb96 tests here, I see both a large number of running
> processes and a huge number of context switches.
Running 2.2.x I imagine as well, right?
Specweb96 is about as relevant to real world web performance as the colour
of car you own. And in this case massively so. Run your test with thttpd.
Your run queue length will be _one_. Always one, never more and under load
never less. Its an architectural issue in your web server.
Only partly true, Alan: only 2.3.x has the "dumb wakeups" issue with
TCP accept fixed (2.2.x will cause ~3 wakeups for each new connection
when only 1 should be made). That is a big factor as well -
remember the exact same thread about all of this "run queue
scalability" bogosity we had nearly a year ago?
So, to the original SPECweb96 tester: divide your context switch number
by 3 - does it look more sane now? :-)
However Alan is right, web server architecture has a lot to do with
bad "benchmark" performance.
Later,
David S. Miller
da...@redhat.com
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Peter Rival <fri...@zk3.dec.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.e8f735v.r7ocr7@ifi.uio.no>#1/1
X-Deja-AN: 576196994
Original-Date: Sat, 22 Jan 2000 11:57:23 -0500
Sender: owner-linux-ker...@vger.rutgers.edu
Content-Transfer-Encoding: 7bit
Original-Message-ID: <3889E173.1888C659@zk3.dec.com>
References: <fa.het298v.137a6ob@ifi.uio.no>
To: "David S. Miller" <da...@redhat.com>
Original-References: <E12BnKP-0002lr...@the-village.bc.nu> <200001220403.UAA01...@pizda.ninka.net>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Stupid question, but it's Saturday and I'm on a Windows system (don't ask...).
What do people think about something like SPECWeb99 - dynamic content, CGIs,
etc.? I know the biggest complaint about 96 is that it's completely static
content, which a fully tweaked setup will keep entirely in memory, among
other "optimisations" I've heard of.
If not SPECweb, then is there another benchmark that you can all agree on?
Preferably something that can be used to show the scalability of an operating system
and hardware combination - not only in terms of CPUs and memory, but also the number
of users/requests/transactions/whatever. I am also interested (not that many
people are) in what happens when the system is pushed over the edge - again,
something like the House web server when the Lewinsky report was released. It's
important for people to know that their system will continue to run, and hopefully
run well, even when it's completely overrun. Ideas?
- Pete
"David S. Miller" wrote:
> Date: Fri, 21 Jan 2000 23:18:51 +0000 (GMT)
> From: Alan Cox <a...@lxorguk.ukuu.org.uk>
>
> > When I run SPECWeb96 tests here, I see both a large number of running
> > processes and a huge number of context switches.
>
> Running 2.2.x I imagine as well, right?
>
> Specweb96 is about as relevant to real world web performance as the colour
> of car you own. And in this case massively so. Run your test with thttpd.
> Your run queue length will be _one_. Always one, never more and under load
> never less. Its an architectural issue in your web server.
>
> Only partly true Alan, only 2.3.x has the "dumb wakeups" issue with
> TCP accept fixed (2.2.x will cause ~3 wakeups for each new connection
> when only 1 should be made). That plays a big factor as well,
> remember the same exact thread about all of this "run queue
> scalability" bogosity we had nearly a year ago?
>
> So to the original specweb96 tester, divide your context switch number
> by 3, does it look more sane now? :-)
>
> However Alan is right, web server architecture has a lot to do with
> bad "benchmark" performance.
>
> Later,
> David S. Miller
> da...@redhat.com
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.rutgers.edu
> Please read the FAQ at http://www.tux.org/lkml/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.gd2157v.94cp9f@ifi.uio.no>#1/1
X-Deja-AN: 576209371
Original-Date: Sat, 22 Jan 2000 17:39:43 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12C4Vl-0003nQ-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.e8f735v.r7ocr7@ifi.uio.no>
To: fri...@zk3.dec.com (Peter Rival)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> Stupid question, but it's Saturday and I'm on a Windows system (don't ask...).
> What do people think about something like SPECWeb99 - dynamic content, CGIs,
> etc? I know the biggest complaint about 96 is that it's completely static
> content which a completely tweaked setup will keep completely in memory, among
> other "optimisations" I've heard of.
99 is a lot better. It's worthless for real traffic studies, because real traffic
is lots of slow connections (which, bizarrely enough, are harder to handle
than a small number of fast ones). SPECweb96 is a bit comical - it's the
bogomips of web benches. 99 tells you some truths about your CGI performance
at least.
> something like the House web server when the Lewinsky paper was released. It's
> important for people to know that their system will continue to run and hopefully
> well even when it's completely overrun. Ideas?
It depends on your server. Apache is really easy to degrade badly; thttpd
performs materially better under extreme load. Do your tests with both Apache
and thttpd (www.acme.com).
I think the comparison is probably the interesting part for showing how
degradation is app-dependent.
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Peter Rival <fri...@zk3.dec.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.e2vt7ev.qn2fgr@ifi.uio.no>#1/1
X-Deja-AN: 576215445
Original-Date: Sat, 22 Jan 2000 13:15:09 -0500
Sender: owner-linux-ker...@vger.rutgers.edu
Content-Transfer-Encoding: 7bit
Original-Message-ID: <3889F3AC.90C2DEB3@zk3.dec.com>
References: <fa.gd2157v.94cp9f@ifi.uio.no>
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
Original-References: <E12C4Vl-0003nQ...@the-village.bc.nu>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Alan Cox wrote:
> > Stupid question, but it's Saturday and I'm on a Windows system (don't ask...).
> > What do people think about something like SPECWeb99 - dynamic content, CGIs,
> > etc? I know the biggest complaint about 96 is that it's completely static
> > content which a completely tweaked setup will keep completely in memory, among
> > other "optimisations" I've heard of.
>
> 99 is a lot better. It's worthless for real traffic studies, because real traffic
> is lots of slow connections (which, bizarrely enough, are harder to handle
> than a small number of fast ones). SPECweb96 is a bit comical - it's the
> bogomips of web benches. 99 tells you some truths about your CGI performance
> at least.
>
Good - it sounds like we're at least getting somewhere.
>
> > something like the House web server when the Lewinsky paper was released. It's
> > important for people to know that their system will continue to run and hopefully
> > well even when it's completely overrun. Ideas?
>
> It depends on your server. Apache is really easy to degrade badly, thttpd
> performs materially better under extreme load. Do your tests with both apache
> and thttpd (www.acme.com)
>
> I think the comparison is probably the interesting part for showing how
> degradation is app-dependent
>
That's easy enough I should think (he says, never having run 99). My question now is,
if we should uncover something that appears to be a problem in the kernel, will people
listen if the testcase that found it was SPECweb99, or will we wind up down the same
rathole we've been down recently? I'm hoping we can find something that people won't
say "eww - a benchmark, how stupid", point out its flaws, and then ignore whatever
merit it may have. In other words, I don't want to waste everyone's time.
- Pete
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.gfhp2fv.7l4q1f@ifi.uio.no>#1/1
X-Deja-AN: 576219183
Original-Date: Sat, 22 Jan 2000 18:45:40 +0000 (GMT)
Sender: owner-linux-ker...@vger.rutgers.edu
Original-Message-Id: <E12C5Xa-0003rK-00@the-village.bc.nu>
Content-Transfer-Encoding: 7bit
References: <fa.e2vt7ev.qn2fgr@ifi.uio.no>
To: fri...@zk3.dec.com (Peter Rival)
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
> > performs materially better under extreme load. Do your tests with both apache
> > and thttpd (www.acme.com)
> >
> > I think the comparison is probably the interesting part for showing how
> > degradation is app-dependent
> >
>
> if we should uncover something that appears to be a problem in the kernel, will people
> listen if the testcase that found it was SPECweb99, or will we wind up down the same
> rathole we've been down recently? I'm hoping we can find something that people won't
A benchmark shows how well we do something. Having got the numbers, the question
becomes "what does it show in real life?". It's true of all benchmarks. If it
shows stuff impacting real-world situations then it makes sense to look at it,
especially if fixing it doesn't harm common paths.
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
From: Peter Rival <fri...@zk3.dec.com>
Subject: Re: Interesting analysis of linux kernel threading by IBM
Date: 2000/01/22
Message-ID: <fa.e1gf5mv.r0kjrf@ifi.uio.no>#1/1
X-Deja-AN: 576231061
Original-Date: Sat, 22 Jan 2000 14:05:02 -0500
Sender: owner-linux-ker...@vger.rutgers.edu
Content-Transfer-Encoding: 7bit
Original-Message-ID: <3889FF5D.2EA0A532@zk3.dec.com>
References: <fa.gd2157v.94cp9f@ifi.uio.no>
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
Original-References: <E12C4Vl-0003nQ...@the-village.bc.nu>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
X-Orcpt: rfc822;linux-kernel-outgoing-dig
Organization: Internet mailing list
MIME-Version: 1.0
Newsgroups: fa.linux.kernel
X-Loop: majord...@vger.rutgers.edu
Alan Cox wrote:
> > Stupid question, but it's Saturday and I'm on a Windows system (don't ask...).
> > What do people think about something like SPECWeb99 - dynamic content, CGIs,
> > etc? I know the biggest complaint about 96 is that it's completely static
> > content which a completely tweaked setup will keep completely in memory, among
> > other "optimisations" I've heard of.
>
> 99 is a lot better. It's worthless for real traffic studies, because real traffic
> is lots of slow connections (which, bizarrely enough, are harder to handle
> than a small number of fast ones). SPECweb96 is a bit comical - it's the
> bogomips of web benches. 99 tells you some truths about your CGI performance
> at least.
>
I knew that I wanted to say something else about this... You are assuming that the only
important web server is one that is accessed over the Internet via a slow dialup
connection. While that may be so now, and certainly was more so earlier, it is only
becoming less true.
With things such as cable modem, DSL and high-speed wireless connections, as well as
the growing importance of intranet servers, the common case in the near future will be
lots of fast connections, not slow ones. Now all we need to do is be able to measure
performance in both scenarios, and I think we'll be doing much better.
- Pete
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/