Posted to general@gump.apache.org by Leo Simons <ma...@leosimons.com> on 2005/03/05 20:31:40 UTC

brutus disk full again

Hi gang,

we keep running out of disk space. I cleaned up some random bits (like 
/usr/local/gump/jars and /var/cache/apt/archive) but this is a 
structural problem. Someone needs to come up with a plan to avoid this 
kind of stuff.

cheers,

- LSD



Re: brutus disk full again

Posted by David Crossley <cr...@apache.org>.
Leo Simons wrote:
[snip]
> ... It also looks like there's a
> sizeable amount of stuff from the forrest people that might be able to
> shrink a little.

Not really. I had already trimmed it down as much as possible.

I gather from watching the build-up to the upcoming Infrathon
that the scheduled extra resources for brutus, and the other
IBM-donated machines, are not likely to materialize. Is there an
alternative plan to get more disk space?

--David



Re: brutus disk full again

Posted by Stefan Bodewig <bo...@apache.org>.
On Mon, 07 Mar 2005, Stefan Bodewig <bo...@apache.org> wrote:

> plenty of configurations that would be interesting (building on
> Maverik

s/Maverik/Mustang/

Stefan



Re: brutus disk full again

Posted by "Adam R. B. Jack" <aj...@apache.org>.
> >> We probably could make all our workspaces share the same "cvs"
> >> directory, i.e. the directory holding the clean working copies,
> >> this would give us a few GB additional disk space.

We could (for the least amount of coding effort) probably have Gump lock
each module's root directory as it attempts an update (cvs|svn), so that we
could share the download repository. Since we have this part multi-threaded
(and working in background threads, with the core build thread waiting as
needed) it shouldn't matter whether the lock was initiated by another thread
or another process. We'd replicate the effort of cvs|svn figuring out what
updates (often none) were needed, but that is no big deal. There are other
approaches, but this seems easiest/cheapest within the code.
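A minimal sketch of that per-module locking (illustrative only: the lock
filename, helper name, and paths are assumptions, not Gump's actual code).
flock blocks the second caller whether it arrives from another thread or
another process, which is exactly the property described above:

    # Per-module lock held for the duration of a cvs/svn update.
    import fcntl
    import os
    import subprocess

    def update_module(module_root, scm_command):
        # Concurrent callers queue here instead of interleaving
        # their cvs/svn operations on the same working copy.
        lock_path = os.path.join(module_root, '.update-lock')
        lock_file = open(lock_path, 'w')
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until free
        try:
            subprocess.check_call(scm_command, cwd=module_root)
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
            lock_file.close()

    # e.g. update_module('/usr/local/gump/cvs/ant', ['svn', 'up'])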

BTW: if they aren't already, could things like "stray locks" get filed into
JIRA, just in case I ever find myself with spare cycles? [That, and if JIRA
ever reminds me of my account information so I can go check.]

regards,

Adam




Re: brutus disk full again

Posted by Stefan Bodewig <bo...@apache.org>.
On Mon, 07 Mar 2005, Leo Simons <ma...@leosimons.com> wrote:
> On 07-03-2005 09:07, "Stefan Bodewig" <bo...@apache.org> wrote:

>> We probably could make all our workspaces share the same "cvs"
>> directory, i.e. the directory holding the clean working copies,
>> this would give us a few GB additional disk space.
> 
> Yep. Before we do that we need to change things so we are totally
> sure that no two gump runs can intermingle, ever.

There are two things to worry about:

(1) Two concurrent "cvs up" or "svn up" processes.  I don't think
    they'd manage to deadlock each other, would they?

(2) "cvs up" or "svn up" running at the same time as a sync process,
    so we'd end up with an inconsistent copy after sync.

We already lock Gump runs against each other via lock files that we
need to clean up from time to time; the same tactic could apply to
locking sync and update against each other.
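A minimal sketch of that lock-file tactic, including the periodic cleanup
of stray locks (the lock path and the staleness window are assumptions,
not Gump's actual values):

    import os
    import time

    LOCK = '/usr/local/gump/locks/workspace.lock'
    STALE_AFTER = 4 * 60 * 60  # treat locks older than 4h as stray

    def acquire_lock():
        while True:
            try:
                # O_EXCL makes creation atomic: exactly one caller wins.
                fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.write(fd, str(os.getpid()).encode())
                os.close(fd)
                return
            except OSError:
                try:
                    age = time.time() - os.path.getmtime(LOCK)
                except OSError:
                    continue  # lock vanished between checks; retry
                if age > STALE_AFTER:
                    os.unlink(LOCK)  # clean a stray lock automatically
                else:
                    time.sleep(30)

    def release_lock():
        os.unlink(LOCK)

Sync and update would each wrap their critical section in
acquire_lock()/release_lock(), so neither can ever see the other's
half-finished state.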

Stefan



Re: brutus disk full again

Posted by Leo Simons <ma...@leosimons.com>.
On 07-03-2005 09:07, "Stefan Bodewig" <bo...@apache.org> wrote:
> On Sat, 05 Mar 2005, Leo Simons <ma...@leosimons.com> wrote:
> 
>> we keep running out of disk space. I cleaned up some random bits
>> (like /usr/local/gump/jars and /var/cache/apt/archive) but this is a
>> structural problem.
> 
> Have we changed something?  When I saw you putting together yet
> another workspace (the free-java stuff) I was afraid you would enable
> it, since we really don't have the disk space for more than four
> workspaces.

That's not running on brutus. The kaffe/classpath people are setting up
their own machine to do that stuff :-D

>> Someone needs to come up with a plan to avoid this kind of stuff.
> 
> Yes, in particular if we want to add even more workspaces.  There are
> plenty of configurations that would be interesting (building on
> Maverik nightly builds, building on IKVM/Mono and so on).
> 
> We probably could make all our workspaces share the same "cvs"
> directory, i.e. the directory holding the clean working copies, this
> would give us a few GB additional disk space.

Yep. Before we do that, we need to change things so we are totally sure
that no two Gump runs can ever intermingle. It also looks like there's a
sizeable amount of stuff from the forrest people that might be able to
shrink a little.

Cheers,

- Leo





Re: brutus disk full again

Posted by Stefan Bodewig <bo...@apache.org>.
On Sat, 05 Mar 2005, Leo Simons <ma...@leosimons.com> wrote:

> we keep running out of disk space. I cleaned up some random bits
> (like /usr/local/gump/jars and /var/cache/apt/archive) but this is a
> structural problem.

Have we changed something?  When I saw you putting together yet
another workspace (the free-java stuff) I was afraid you would enable
it, since we really don't have the disk space for more than four
workspaces.

> Someone needs to come up with a plan to avoid this kind of stuff.

Yes, in particular if we want to add even more workspaces.  There are
plenty of configurations that would be interesting (building on
Maverik nightly builds, building on IKVM/Mono and so on).

We probably could make all our workspaces share the same "cvs"
directory, i.e. the directory holding the clean working copies, this
would give us a few GB additional disk space.
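For illustration, a minimal sketch of how each workspace could then
populate its build area from that shared directory (the paths and the
rsync-based sync are assumptions, not Gump's actual implementation):

    import subprocess

    SHARED_CVS = '/usr/local/gump/cvs'  # one clean copy, all workspaces

    def sync_module(module, workspace_basedir):
        src = '%s/%s/' % (SHARED_CVS, module)
        dest = '%s/%s/' % (workspace_basedir, module)
        # --delete keeps the build copy an exact mirror of the clean one
        subprocess.check_call(['rsync', '-a', '--delete', src, dest])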

Stefan
