Thanx to all who replied. The original question:
> We're running a Sparc 10, Model 30 with 64Mb RAM and 140Mb swap
> space as a research/crunching platform. Recently, a job
> was submitted which requested 119844kb data and stack segments
> (the SZ field of ps) and 50024kb real memory (the RSS field
> of ps). No problem, that's what the machine is there for...
>
> During the day, we wanted to get some work done at the console,
> but the response from X (X11R5/Motif -- _not_ OW3) dragged much
> more than expected. I renice'ed the job to 19, but the console
> response didn't improve, and the job continued to chew down 60+%
> of the CPU. My impression of renice 19 was that the process would
> swap out until the CPU was essentially idle -- there was plenty of
> swap space to farm the job out...
>
> Is there something I'm missing in the configuration, or is my
> understanding of scheduling and the renice command lacking?
First, a better explanation of the problem:
Thanx to: Andy Feldt <feldt@phyast.nhn.uoknor.edu>
Hal Stern <stern@sunne.East.Sun.COM>
Luis E. Mun~oz <lem@shaddam.usb.ve>
Dan Razzell <razzell@cs.ubc.ca>
Jay Lessert <lscpdx!jayl@nosun.West.Sun.COM>
One of the best explanations was from Andy Feldt:
> Your impression that "the process would swap out until the CPU was
> essentially idle" is correct - it is just that the machine's definition
> of idle is far different from yours and so the moment your X session
> is idle (a *very* short time by human standards), the background job
> swaps in all its pages. The next time you try to do something, *you*
> force that job to swap out enough pages for yours to be swapped in. This
> is the source of the slowness. The only cure is to have enough physical
> memory to encompass all of the needs of your mix of jobs. Note that some
> jobs can require a large virtual memory but will not exhibit this problem
> because they mostly access only a small fraction of their pages - your
> big job appears to keep running through a large enough fraction of its
> pages to cause you grief.
So, essentially, renicing the job doesn't force it to go "idle" -- it only
lowers its scheduling priority. Since the CPU is _much_ faster than I could
ever hope to type/move the mouse/plot, the machine sees the console as idle
almost immediately and pages the big job back in; the moment I touch the
mouse again, those pages get forced back out -- hence the observed slowdown.
Sounds vaguely akin to "thrashing", IMHO -- different cause though...
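If you want to confirm that it really is paging (and not CPU contention),
a quick sanity check is to watch vmstat while working at the console.
This is only a rough sketch -- the column names below assume SunOS 4.x
vmstat output:

    # report VM statistics every 5 seconds while the big job and X compete
    vmstat 5
    # sustained activity in the pi/po (page-in/page-out) and sr (scan rate)
    # columns while you type or move the mouse suggests the two jobs are
    # shoving each other's pages out, rather than fighting over the CPU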
The possible solutions:
Thanx to: Hal Stern <stern@sunne.East.Sun.COM>
Dan Stromberg <strombrg@uci.edu>
Dan Razzell <razzell@cs.ubc.ca>
Jay Lessert <lscpdx!jayl@nosun.West.Sun.COM>
1) trick: open an xclock with a second hand -- that should keep the process
at bay while running the console session.
2) send the process a SIGSTOP (took me a while to find this; try: kill -l)
   when windowing, and then a SIGCONT when leaving (a short example follows
   this list).
3) try "batch" -- this is good for jobs that will complete overnight, but
the real jobs are taking a day or so.
4) try "verynice" available via anonymous ftp (archie couldn't find this --
anybody know where to get a copy? Is it any good?)
5) Upgrade to Solaris 2.x -- the scheduler is configurable and may include
the flexibility that we need.
6) Buy more RAM.
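Since (2) is the route we'll probably take, here is roughly what it looks
like from the shell. The process name "crunch", the PID 12345, and the
file names are made up for illustration:

    # find the job's process id (BSD-style ps on SunOS 4.x)
    ps -aux | grep crunch

    # before sitting down at the console: suspend the job
    kill -STOP 12345

    # ...do the interactive work; X can reclaim the job's pages as it
    # needs them, and the stopped job won't keep pulling them back in...

    # when leaving for the night: let the job pick up where it left off
    kill -CONT 12345

    # or, for jobs that can wait for an off-peak run (solution 3),
    # hand the command line to batch on its standard input:
    echo "crunch < run.in > run.out" | batch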
The possible solutions with smileys attached:
1) Convince the programmer to rewrite the code to make it less of a VM-hog --
uh huh, right. The user is always "right", right?
2) Live with it [now that wasn't very helpful, was it? :-) ]
For comments/opinions and letting me know that I'm not the only one with this
problem, thanx go to:
Joe McPherson <jmcphers@bio.ri.ccf.org>
Richard E. Perlotto II <vlsiphx!paladin!perlotto@enuucp.eas.asu.edu>
Kevin O'Brien <kobrien@noaapmel.gov>
Since this doesn't happen very often, we'll probably get by with the
SIGSTOP and SIGCONT solution until we upgrade to Solaris 2.x. Hopefully
the configurable scheduler will give us the flexibility we need. Any
experience out there with tweaking the scheduler?
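For anyone who makes the jump to Solaris 2.x before we do: as I understand
it, the tools to poke at are priocntl(1) and dispadmin(1M). A rough starting
point (untested here, since we're still on 4.x):

    # list the scheduling classes configured into the kernel
    dispadmin -l

    # dump the current dispatch parameter table for the timesharing class
    dispadmin -g -c TS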
For those who sent the "we've had that problem too", please try some of
these solutions and let me know what works best for you (the job has since
finished). We're not planning on upgrading to Solaris 2.x for a while, so
I would be interested in any and all results.
Thanx again...
--Michael Zika
(zika@fatman.tamu.edu)