Posted to dev@oodt.apache.org by Chris Mattmann <ma...@apache.org> on 2010/02/08 16:15:34 UTC

JIRA and Wiki

Hi All,

The OODT JIRA and Wiki are now up and running:

JIRA:

http://issues.apache.org/jira/browse/OODT

Wiki:

http://cwiki.apache.org/confluence/display/OODT

I've begun fleshing out components and release versions in JIRA and have
started to create issues to track the import of the code into SVN.

Justin, any news on the account requests?

Thanks!

Cheers,
Chris




Re: JIRA and Wiki

Posted by "Mattmann, Chris A (388J)" <ch...@jpl.nasa.gov>.
Hey Justin, cool, works for me. I appreciate the help. If I can muster the time tonight, I'll build up the dmp files, upload them to people.a.o, SHA them up, and linkify to JIRA.

Thanks!

Cheers,
Chris



On 2/9/10 2:06 PM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

On Tue, Feb 9, 2010 at 1:50 PM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> Hey Justin,
>
> No probs, we can do it. Is it OK to tar it up? Or do you just want the raw dmp? It would be nice to do one dmp (matching OODT-1) and another (matching OODT-2). Would that be OK? If not, one it is, but let me know if it's OK to tar it up.

It should be okay to tar and bzip2 (or 7zip if that floats your boat)
- as long as there is a FreeBSD archiver port, it'll be fine.  =)

As for the number of dumps, I think it'd be smoothest if we just have
one dump file.  That'll reduce some of the moving pieces when the dump
is loaded as there won't be any ordering issues or whatnot when
loading it up into the main repository.

Based upon the feedback on general@, when you are ready, I'd suggest
copying the dumps into your public_html homedir on people.a.o and
annotating the JIRA with the URL and the SHA1 checksum ("openssl sha1
<dumpfile>").

I haven't yet been able to talk to the Infra team about when we can
schedule the loads - they're doing some conversions of the repository
to an LDAP authz system, so it might take until next week to do the
loads as that is inevitably gonna blow up in everyone's face.  =P  --
justin



++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: Chris.Mattmann@jpl.nasa.gov
WWW:   http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Tue, Feb 9, 2010 at 1:50 PM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> Hey Justin,
>
> No probs, we can do it. Is it OK to tar it up? Or do you just want the raw dmp? It would be nice to do one dmp (matching OODT-1) and another (matching OODT-2). Would that be OK? If not, one it is, but let me know if it's OK to tar it up.

It should be okay to tar and bzip2 (or 7zip if that floats your boat)
- as long as there is a FreeBSD archiver port, it'll be fine.  =)
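
For instance, a minimal sketch of the packaging step (the dump
filename oodt.dmp is hypothetical):

  # Package the dump with tar + bzip2
  tar cjf oodt-dump.tar.bz2 oodt.dmp

  # Or, if 7zip is more your speed:
  7z a oodt-dump.7z oodt.dmp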

As for the number of dumps, I think it'd be smoothest if we just have
one dump file.  That'll reduce some of the moving pieces when the dump
is loaded as there won't be any ordering issues or whatnot when
loading it up into the main repository.

Based upon the feedback on general@, when you are ready, I'd suggest
copying the dumps into your public_html homedir on people.a.o and
annotating the JIRA with the URL and the SHA1 checksum ("openssl sha1
<dumpfile>").
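
As a rough sketch of that upload (the archive name is carried over
from the packaging sketch above, and the Apache username is a
placeholder):

  # Copy the archive into public_html on people.apache.org
  scp oodt-dump.tar.bz2 <apache-id>@people.apache.org:public_html/

  # Compute the SHA1 checksum to paste into the JIRA issue
  openssl sha1 oodt-dump.tar.bz2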

I haven't yet been able to talk to the Infra team about when we can
schedule the loads - they're doing some conversions of the repository
to an LDAP authz system, so it might take until next week to do the
loads as that is inevitably gonna blow up in everyone's face.  =P  --
justin

Re: JIRA and Wiki

Posted by "Mattmann, Chris A (388J)" <ch...@jpl.nasa.gov>.
Hey Justin,

No probs, we can do it. Is it OK to tar it up? Or do you just want the raw dmp? It would be nice to do one dmp (matching OODT-1) and another (matching OODT-2). Would that be OK? If not, one it is, but let me know if it's OK to tar it up.

Thanks!

Cheers,
Chris



On 2/9/10 11:03 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

On Tue, Feb 9, 2010 at 10:40 AM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> Woot, got it. We found the link when I asked Brian to try and dump the cas-filemgr this morning.
>
> The dmps don't look too huge, especially when tarballed. Thoughts?

Due to the logistics involved with doing loads, I think it's going to
be best to do all of the imports at one time rather than piecemeal
them over time.  So, how does creating one dump work for y'all?  --
justin



++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: Chris.Mattmann@jpl.nasa.gov
WWW:   http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Tue, Feb 9, 2010 at 10:40 AM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> Woot, got it. We found the link when I asked Brian to try and dump the cas-filemgr this morning.
>
> The dmps don't look too huge, especially when tarballed. Thoughts?

Due to the logistics involved with doing loads, I think it's going to
be best to do all of the imports at one time rather than piecemeal
them over time.  So, how does creating one dump work for y'all?  --
justin

Re: JIRA and Wiki

Posted by "Mattmann, Chris A (388J)" <ch...@jpl.nasa.gov>.
Woot, got it. We found the link when I asked Brian to try and dump the cas-filemgr this morning.

The dmps don't look too huge, especially when tarballed. Thoughts?

Cheers,
Chris



On 2/9/10 10:38 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

On Mon, Feb 8, 2010 at 10:21 PM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> +1. I'd imagine the dump being on
> the order of 100s of megabytes, but I
> could be off. At certain points the repo included large data files which
> were then removed. Is there a way to exclude those during the dmp process?

Take a look at svndumpfilter - from the O'Reilly SVN book:

http://svnbook.red-bean.com/en/1.5/svn.reposadmin.maint.html#svn.reposadmin.maint.filtering

That should do the trick...  -- justin



++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: Chris.Mattmann@jpl.nasa.gov
WWW:   http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Mon, Feb 8, 2010 at 10:21 PM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> +1. I'd imagine the dump being on
> the order of 100s of megabytes, but I
> could be off. At certain points the repo included large data files which
> were then removed. Is there a way to exclude those during the dmp process?

Take a look at svndumpfilter - from the O'Reilly SVN book:

http://svnbook.red-bean.com/en/1.5/svn.reposadmin.maint.html#svn.reposadmin.maint.filtering
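
For instance, a minimal sketch of filtering out a large-data path
(the repo path and the trunk/data prefix are hypothetical):

  # Pipe the dump through svndumpfilter to drop the big files
  svnadmin dump /path/to/oodt-repo | \
    svndumpfilter exclude trunk/data > oodt-filtered.dmp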

That should do the trick...  -- justin

Re: JIRA and Wiki

Posted by "Mattmann, Chris A (388J)" <ch...@jpl.nasa.gov>.
Hey Justin,

> I'll start a thread on general@ about how to proceed with a dump.

+1, thanks.

> 
> I've seen some mention of uploading a dump to JIRA, but that
> seems...bizarre.  I think just uploading it to your account on
> people.apache.org and posting the SHA checksum to the list should be
> sufficient.

+1, agreed, and I've mentioned my non-binding vote on general@

> 
> To set expectations, I'd expect it to take about 48 hours from when
> we hand over the dump until it's fully available.  In general, the dump
> will be loaded into a test repository (to ensure no problems in the
> loading), we'll verify the load is "correct", and then, if all looks
> good, load it into the main repository.  (If we coordinate with the
> infra team, this can be reduced to a shorter window if necessary.)  Do
> you have an idea how big the dump will be?

+1. I'd imagine the dump being on the order of 100s of megabytes, but I
could be off. At certain points the repo included large data files which
were then removed. Is there a way to exclude those during the dmp process?
 
> 
> If it's rather big, then it may be best to wait until some pending HW
> upgrades to the main SVN server are deployed.  Depending upon how we
> do it, loading a dump may also knock out the EU mirror until it
> receives the dump - so imports are often done when EU is asleep.  (On
> the master, we're installing ZFS L2ARCs on SSDs - see
> http://blogs.sun.com/brendan/entry/test ; but we're waiting on a SAS
> card that is supported by FreeBSD, so...soon...yah...soon.)

Okey dokey. I'll wait till I have the dmp file (which I can probably
generate tomorrow), or files, and then I'll rely on your judgement for the
best way to load it.

Cheers,
Chris

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: Chris.Mattmann@jpl.nasa.gov
WWW:   http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Mon, Feb 8, 2010 at 8:35 PM, Chris Mattmann <ma...@apache.org> wrote:
> Let's do an SVN dump!

I'll start a thread on general@ about how to proceed with a dump.

I've seen some mention of uploading a dump to JIRA, but that
seems...bizarre.  I think just uploading it to your account on
people.apache.org and posting the SHA checksum to the list should be
sufficient.

To set expectations, I'd expect it to take about 48 hours from when
we hand over the dump until it's fully available.  In general, the dump
will be loaded into a test repository (to ensure no problems in the
loading), we'll verify the load is "correct", and then, if all looks
good, load it into the main repository.  (If we coordinate with the
infra team, this can be reduced to a shorter window if necessary.)  Do
you have an idea how big the dump will be?
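
As a rough sketch of that flow (paths are illustrative, not the
actual infra layout):

  # Load the dump into a scratch repository first to catch problems
  svnadmin create /tmp/oodt-test
  svnadmin load /tmp/oodt-test < oodt.dmp

  # Sanity-check the result before the real load
  svnadmin verify /tmp/oodt-test
  svn log -r HEAD file:///tmp/oodt-test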

If it's rather big, then it may be best to wait until some pending HW
upgrades to the main SVN server are deployed.  Depending upon how we
do it, loading a dump may also knock out the EU mirror until it
receives the dump - so imports are often done when EU is asleep.  (On
the master, we're installing ZFS L2ARCs on SSDs - see
http://blogs.sun.com/brendan/entry/test ; but we're waiting on a SAS
card that is supported by FreeBSD, so...soon...yah...soon.)

> I'm happy to start the process. I've filed OODT-1 and OODT-2 to track the
> progress on this...

Cool.  -- justin

Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Sun, Feb 14, 2010 at 3:20 PM, Chris Mattmann <ma...@apache.org> wrote:
> Hi Justin,
>
> I went ahead and uploaded the *.dmp file to a public JPL server. See OODT-1
> and OODT-2 for the location:
>
> http://issues.apache.org/jira/browse/OODT-1
> http://issues.apache.org/jira/browse/OODT-2
>
> My +1 for @joesuf to import, at his convenience and get the OODT Apache ball
> rolling...
>
> *kick*

INFRA-2503 filed.  Joe says he expects to get to it over the weekend.

https://issues.apache.org/jira/browse/INFRA-2503

Cheers.  -- justin

Re: JIRA and Wiki

Posted by Chris Mattmann <ma...@apache.org>.
Hi Justin,

I went ahead and uploaded the *.dmp file to a public JPL server. See OODT-1
and OODT-2 for the location:

http://issues.apache.org/jira/browse/OODT-1
http://issues.apache.org/jira/browse/OODT-2

My +1 for @joesuf to import, at his convenience and get the OODT Apache ball
rolling...

*kick*

:)

Cheers,
Chris



On 2/10/10 2:32 PM, "Chris Mattmann" <ma...@apache.org> wrote:

> Over the weekend is fine by me, gives me more time to generate the *.dmp and
> upload. Will let you guys know when it's ready, and attached to the JIRA
> issue...thanks Justin!
> 
> Cheers,
> Chris
> 
> 
> 
> On 2/10/10 8:53 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:
> 
>> On Wed, Feb 10, 2010 at 8:46 AM, Mattmann, Chris A (388J)
>> <ch...@jpl.nasa.gov> wrote:
>>>> We're working on sizing the dumps right now, but a good estimate is ~7K
>>>> revisions, with probably a tarred-up total <= 100 MB?
>> 
>> Ack.  Joe says we can import about 400 revs/hr in our current setup -
>> so given the number of revisions, it's probably going to be best to do
>> the import over a weekend.  -- justin
>> 
> 
> 
> 




Re: JIRA and Wiki

Posted by Chris Mattmann <ma...@apache.org>.
Over the weekend is fine by me, gives me more time to generate the *.dmp and
upload. Will let you guys know when it's ready, and attached to the JIRA
issue...thanks Justin!

Cheers,
Chris



On 2/10/10 8:53 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

> On Wed, Feb 10, 2010 at 8:46 AM, Mattmann, Chris A (388J)
> <ch...@jpl.nasa.gov> wrote:
>> > We're working on sizing the dumps right now, but a good estimate is ~7K
>> > revisions, with probably a tarred-up total <= 100 MB?
> 
> Ack.  Joe says we can import about 400 revs/hr in our current setup -
> so given the number of revisions, it's probably going to be best to do
> the import over a weekend.  -- justin
> 



Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Wed, Feb 10, 2010 at 8:46 AM, Mattmann, Chris A (388J)
<ch...@jpl.nasa.gov> wrote:
> We’re working on sizing the dumps right now, but a good estimate is ~7K
> revisions, with probably a tarred-up total <= 100 MB?

Ack.  Joe says we can import about 400 revs/hr in our current setup -
so given the number of revisions, it's probably going to be best to do
the import over a weekend.  -- justin
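
(At ~400 revs/hr, ~7K revisions works out to roughly 7000 / 400 = 17.5
hours of load time, hence the weekend window.)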

Re: JIRA and Wiki

Posted by "Mattmann, Chris A (388J)" <ch...@jpl.nasa.gov>.
Hey Justin,

Thanks so much for pushing these through!

We're working on sizing the dumps right now, but a good estimate is ~7K revisions, with probably a tarred-up total <= 100 MB?

I will work on putting these on my people.a.o account.

Thx!

Cheers,
Chris



On 2/10/10 8:42 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

On Mon, Feb 8, 2010 at 8:29 PM, Justin Erenkrantz <ju...@erenkrantz.com> wrote:
> Account requests have now been filed.  Account requests are usually
> done once a week, so we should have everyone with access by next week.
>  After the accounts are created, I need to assign Subversion karma.

Accounts have been created and karma assigned.  If you haven't
received your account information, please let me know ASAP.

In order to figure out when to do the import, the infra team wants to
know how many revisions we are talking about and the approximate size
of the dumps.  -- justin



++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: Chris.Mattmann@jpl.nasa.gov
WWW:   http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Mon, Feb 8, 2010 at 8:29 PM, Justin Erenkrantz <ju...@erenkrantz.com> wrote:
> Account requests have now been filed.  Account requests are usually
> done once a week, so we should have everyone with access by next week.
>  After the accounts are created, I need to assign Subversion karma.

Accounts have been created and karma assigned.  If you haven't
received your account information, please let me know ASAP.

In order to figure out when to do the import, the infra team wants to
know how many revisions we are talking about and the approximate size
of the dumps.  -- justin

Re: JIRA and Wiki

Posted by Chris Mattmann <ma...@apache.org>.
Let's do an SVN dump!

I'm happy to start the process. I've filed OODT-1 and OODT-2 to track the
progress on this...

Cheers,
Chris



On 2/8/10 8:29 PM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

> On Mon, Feb 8, 2010 at 8:03 AM, Chris Mattmann <ma...@apache.org> wrote:
>> > Thanks Justin!
> 
> Account requests have now been filed.  Account requests are usually
> done once a week, so we should have everyone with access by next week.
>  After the accounts are created, I need to assign Subversion karma.
> 
> While that process chugs along, we can begin the code import
> process - or we can wait until all of the accounts are created.  I
> believe you already filed the software grant as part of the CCLA, so
> there shouldn't be any more paperwork required.  If OODT is using
> Subversion now, we should be able to import a dump file (preserving
> history and logs) and then do cleanups after import.  Or, we can just
> do an import from an export or such (no history or logs).  Either way
> is acceptable on the infra end, so it's up to what y'all want to do.
> 
> WDYT?  -- justin
> 



Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Mon, Feb 8, 2010 at 8:03 AM, Chris Mattmann <ma...@apache.org> wrote:
> Thanks Justin!

Account requests have now been filed.  Account requests are usually
done once a week, so we should have everyone with access by next week.
 After the accounts are created, I need to assign Subversion karma.

While that process chugs along, we can begin the code import
process - or we can wait until all of the accounts are created.  I
believe you already filed the software grant as part of the CCLA, so
there shouldn't be any more paperwork required.  If OODT is using
Subversion now, we should be able to import a dump file (preserving
history and logs) and then do cleanups after import.  Or, we can just
do an import from an export or such (no history or logs).  Either way
is acceptable on the infra end, so it's up to what y'all want to do.
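
As a rough sketch of the two options (the repository path and URL are
hypothetical):

  # Option 1: full-history dump, loadable later with svnadmin load
  svnadmin dump /path/to/oodt-repo > oodt.dmp

  # Option 2: plain export of the current tree (no history or logs)
  svn export https://oodt.example.org/repos/trunk oodt-trunk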

WDYT?  -- justin

Re: JIRA and Wiki

Posted by Chris Mattmann <ma...@apache.org>.
Thanks Justin!

Cheers,
Chris


On 2/8/10 7:51 AM, "Justin Erenkrantz" <ju...@erenkrantz.com> wrote:

> On Mon, Feb 8, 2010 at 7:15 AM, Chris Mattmann <ma...@apache.org> wrote:
>> > I've begun fleshing out components and release versions in JIRA and have
>> > started to create issues to track the import of the code into SVN.
> 
> Cool.
> 
>> > Justin, any news on the account requests?
> 
> I plan to submit the account requests today.
> 
> Thanks.  -- justin
> 



Re: JIRA and Wiki

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Mon, Feb 8, 2010 at 7:15 AM, Chris Mattmann <ma...@apache.org> wrote:
> I've begun fleshing out components and release versions in JIRA and have
> started to create issues to track the import of the code into SVN.

Cool.

> Justin, any news on the account requests?

I plan to submit the account requests today.

Thanks.  -- justin