Re: slow performance from a SQL Server

From: Bob Proulx
Subject: Re: slow performance from a SQL Server
Date: Sun, 2 Nov 2008 10:11:59 -0700
User-agent: Mutt/1.5.18 (2008-05-17)

Turner, Don wrote:
> I am running a nightly compression job to compress the exported SQL
> Server file, and it takes an incredibly long time.  I expect a few
> hours, since it is compressing over 100 gig of data; however, it
> seems to be running much slower recently than I remember.  Are there
> parameters or versions that perform better than others?

I will make a couple of general suggestions about performance.  I
can't say anything specific about your particular case because there
aren't enough details.

If something that used to run faster now runs slower I can think of a
few possible things that may have changed.

The machine might now be swapping when before it was not.  If a
problem can fit entirely in memory then it runs much faster than when
it pages to disk.  If this particular file grew just large enough that
it no longer fit in memory then you would "fall off the wall" of
performance as it now pages when before it didn't.  Also, this is a
global problem across the entire machine.  If the combined memory use
of all tasks running at that time has increased, then even if this
task is the same size as before, the system might be paging because
overall more memory is in use at that time.

I like to run 'htop' (or the classic 'top', but I like 'htop',
personal preference) and watch how the system is performing during
heavy use.  Are there additional processes in the run queue?  Is the
machine using 100% of the cpu?  (100% cpu use is good.)  Or is it low
cpu use?  (Low cpu use but high disk would be an indicator of paging.)
On Linux kernels I like to see a good amount of memory being used for
filesystem buffer cache.  If the system is memory stressed then the
kernel will reclaim this and you won't see very much filesystem buffer
cache being used.
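
If you want to check for paging without a live monitor, the kernel
counters under /proc tell the same story.  This is a rough
Linux-only sketch; the counter and field names are the stock kernel
interface, not anything specific to your setup:

```shell
# Sketch: check for swap activity and buffer cache directly from /proc.
# Rising pswpin/pswpout counts between runs mean the box is paging.
grep -E '^(pswpin|pswpout) ' /proc/vmstat

# A memory-stressed kernel reclaims buffer cache first, so low
# Buffers/Cached values here corroborate what top/htop shows.
grep -E '^(MemFree|Buffers|Cached|SwapFree):' /proc/meminfo
```

Run it twice a few minutes apart during the nightly job; it is the
change in the swap counters, not their absolute value, that matters.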

Since you are talking about a nightly task, things like backups can
adversely affect the process.  Backup processes compete with other
tasks for resources.  Check whether a backup is running at the time
your task runs.  If the process uses the network then it might be
stalled waiting whenever the network is saturated.
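
As a quick check while the nightly job is active, something like this
shows what else is competing (a sketch; the ps options are standard
procps on Linux):

```shell
# Sketch: list the busiest processes while the compression job runs.
# A backup agent or database task near the top suggests contention.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10
```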

As far as parameters go, you can reduce the compression level in gzip
by using these options:

       -# --fast --best
              Regulate the speed of compression using the specified digit #,
              where -1 or --fast indicates the fastest compression method
              (less compression) and -9 or --best indicates the slowest
              compression method (best compression).  The default
              compression level is -6 (that is, biased towards high
              compression at expense of speed).
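
For example, you can measure the speed/size trade-off on a scratch
file before touching the real export (the file name here is made up;
substitute your actual dump):

```shell
# Sketch: compare gzip's fastest level against its best level.
seq 1 200000 > dump.sql                  # compressible sample data
time gzip -1 -c dump.sql > dump.fast.gz  # fastest, larger output
time gzip -9 -c dump.sql > dump.best.gz  # slowest, smallest output
ls -l dump.fast.gz dump.best.gz          # compare the resulting sizes
rm -f dump.sql dump.fast.gz dump.best.gz
```

On a multi-hundred-gigabyte dump, dropping from the default -6 down
toward -1 can cut the run time substantially at the cost of a somewhat
larger archive.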

As far as other programs are concerned, the rising star is 'lzma',
which is very likely to replace 'gzip' in the grand scheme of things.
It will take a few years before it becomes as prevalent as gzip
though, so I wouldn't throw out gzip just yet.  If you hadn't heard
about 'lzma' yet, it is a good one to note and to add to your
toolbox.  You will certainly be hearing more about it in the future.

Good luck!

