
RE: "No space left on device" error during cvs status/update


From: Chris Cameron
Subject: RE: "No space left on device" error during cvs status/update
Date: Thu, 2 Nov 2000 10:23:19 +1300

On Thursday, November 02, 2000 9:40 AM, Robert Bresner
[SMTP:address@hidden] wrote:
>
>
> address@hidden wrote:
> >
> > You write:
> >
> > >    Filesystem            kbytes    used   avail capacity  Mounted on
> > >    /dev/dsk/c0t0d0s0     962582  830683   35649    96%    /
> >
> > Assuming /tmp is part of this device, then if your project is larger
> > than 17.8 megs, you're probably running out of space on /tmp.
>
> Why? Does CVS copy an entire project to /tmp before performing the
> likes of an update or status on my NT client?
> If this were the case, then ALL of my areas should fail, but only
> two of seven are failing.
>
>
> > >    swap                  186640    5488  181152     3%    /tmp
> > Does this mean that the swap partition is mounted on /tmp???
>
> Sure looks that way. I don't know much about swap spaces and mounts
> or, for that matter, setup of unix boxes.
>
Can't solve your other problems, but this looks like a Solaris box.  On 
Solaris, /tmp is a tmpfs filesystem backed by the swap partition.  This df 
output shows the swap space mounted as /tmp.  Therefore, as more swap 
space is used, less space is available for /tmp, AND this can change 
dynamically while the system is running!
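
For what it's worth, one workaround is to point CVS's temporary files at a 
filesystem with more room.  As far as I recall, CVS honours the TMPDIR 
environment variable and also takes a global -T option; a rough sketch 
(the directory below is just an example path, use anything writable):

    # See how much space tmpfs (/tmp) has right now; it shrinks as swap fills.
    df -k /tmp

    # Point CVS at a temp directory on a roomier filesystem.
    TMPDIR=/export/home/cvstmp; export TMPDIR
    cvs update

    # Or per invocation, via the global -T option:
    cvs -T /export/home/cvstmp update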

***************************************************************
Chris Cameron                    Open Telecommunications NZ Ltd
Senior Solution Architect                 IN Product Management
address@hidden                           P.O.Box 10-388
      +64 4 495 8403 (DDI)                          The Terrace
fax:  +64 4 495 8419                                 Wellington
cell: +64 21 650 680                                New Zealand
Life, don't talk to me about life ....(Marvin - HHGTTG)




