Posted to dev@accumulo.apache.org by Rob Tallis <ro...@gmail.com> on 2013/05/10 09:33:58 UTC

1.5 - how to build rpm; cdh3u4;

Hi - two quick questions:

1) How do I build an rpm for 1.5?
mvn rpm:rpm -N works on the 1.4.3 branch

on trunk and 1.5 it gives:
[ERROR] Failed to execute goal
org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:rpm (default-cli) on project
accumulo: The parameters 'mappings', 'group' for goal
org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:rpm are missing or invalid
-> [Help 1]

I can see mappings and group in the accumulo-assemble/pom.xml but I don't
know the mvn commands.

2) Is 1.5 compatible with cdh3u4 and how do I build it?
On the 1.5 and trunk branches, the following *doesn't* work:
mvn clean package -P assemble -DskipTests  -Dhadoop.version=0.20.2-cdh3u4
-Dzookeeper.version=3.3.5-cdh3u4

it gives:
[ERROR]
/home/rob/git/accumulo/start/src/test/java/org/apache/accumulo/test/AccumuloDFSBase.java:[87,30]
error: cannot find symbol

For info, changing it to cdh3u5, it *does* work:
mvn clean package -P assemble -DskipTests  -Dhadoop.version=0.20.2-cdh3u5
-Dzookeeper.version=3.3.5-cdh3u5

Well, it builds at least...

Thanks,
Rob

Re: 1.5 - how to build rpm; cdh3u4;

Posted by John Vines <vi...@apache.org>.
It appears that CDH3u4 is not supported by the 1.5 release, both because of
this compile error and because CDH3u4 lacks commons-io. CDH3u5 is fine, though.



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Christopher <ct...@apache.org>.
Minimally, 'mvn clean package -P native,rpm' should do the trick to
build the RPM. (You may need some dependencies installed, such as expect
and rpm-sign, to make the build happy.)
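
For anyone starting from a fresh box, a hedged example of pulling in those
prerequisites on an RPM-based system (the exact package names are
assumptions and vary by distribution; gcc-c++ and make are only needed for
the native profile):

sudo yum install -y rpm-build rpm-sign expect gcc-c++ make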

Somebody else will have to respond regarding cdh support.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Christopher <ct...@apache.org>.
Not to the extent that I've just stated, but the instructions for
rebuilding everything are somewhat self-documented in the
assemble/build.sh convenience script.

There is a ticket open to improve the README in 1.6 (ACCUMULO-1515)
and the discussion has centered around including instructions for
building.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Rob Tallis <ro...@gmail.com>.
Thanks Christopher, that worked.
Is any of this documented anywhere?



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Christopher <ct...@apache.org>.
Since 1.5 was released, the RPM now also expects at least one other
profile to be active: the thrift profile. This is because it was decided
during the review of the 1.5 release candidates that the thrift bindings
for several languages to the new proxy feature should be delivered with
the new proxy.

The correct command for building the entire RPM for 1.5 would be
(minimally, if we skip tests):
mvn package -DskipTests -P thrift,native,rpm

Typically, one would also activate the seal-jars profile and the docs
profile, as well as build the aggregate javadocs for packaging with
the monitor:
mvn clean compile javadoc:aggregate package -DskipTests -P
docs,seal-jars,thrift,native,rpm

Also, don't expect trunk to work the same way. ACCUMULO-210 is going
to result in changes to the way we build RPMs. Even if we make an
effort to continue to support building the monolithic RPM, there's no
guarantee that the maven profile prerequisites won't change, due to
other improvements in the build. For instance, the docs directory is
now a proper maven module and there are likely going to be changes due
to the discussion of consolidating documentation.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Rob Tallis <ro...@gmail.com>.
Dragging the rpm question up again, the instruction to create an rpm from
source was *mvn clean package -P native,rpm*

From a fresh clone, on both trunk and 1.5 I get:

*[ERROR] Failed to execute goal
org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:attached-rpm (build-bin-rpm)
on project accumulo: Unable to copy files for packaging: You must set at
least one file. -> [Help 1]*

and I can't decipher the build setup to figure this out. What am I doing
wrong?

Thanks, Rob



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Rob Tallis <ro...@gmail.com>.
That sorted it, thanks.



Re: 1.5 - how to build rpm; cdh3u4;

Posted by John Vines <vi...@apache.org>.
In the example files, specifically accumulo-env.sh, there are 2 commented
lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
out the old one and uncomment the one after the hadoop2 comment.

This is necessary because Accumulo puts the hadoop conf dir on the
classpath in order to load core-site.xml, which has the HDFS namenode
config. By default, this is file:///, so if the conf dir is not on the
classpath, Accumulo is going to default to the local file system. A quick
way to validate is to run bin/accumulo classpath and then look to see if
the conf dir (I don't recall what it is for CDH4) is there.
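
For reference, a minimal sketch of the relevant accumulo-env.sh section
(the paths are placeholders for a typical hadoop2 layout, so treat them as
assumptions and check your own example file for the exact lines):

# hadoop 1 layout (comment this line out):
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop 2 layout (uncomment this line):
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"

and the quick check described above (substitute your actual conf dir for
the grep pattern):

bin/accumulo classpath | grep etc/hadoop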



Re: 1.5 - how to build rpm; cdh3u4;

Posted by Rob Tallis <ro...@gmail.com>.
I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going on
cdh4.2.1 on a small hadoop cluster installed via cloudera manager. I built
the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the
classpath. This lets me init and start the processes, but I've got the
problem of the instance information being stored on local disk rather than
on hdfs (unable obtain instance id at /accumulo/instance_id).

I can see references to this problem elsewhere, but I can't figure out what
I'm doing wrong. Something wrong with my environment when I init, I guess?
(To be honest, it's the first time I've tried a cluster install rather than
a standalone one, so it might not have anything to do with the versions I'm
trying.)
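
For reference, the full build invocation implied above would look something
like the following (the assemble profile and -DskipTests are carried over
from the earlier commands in this thread, so treat them as assumptions):

mvn clean package -P assemble -DskipTests -Dhadoop.profile=2.0 \
    -Dhadoop.version=2.0.0-cdh4.2.1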

Rob




Re: 1.5 - how to build rpm; cdh3u4;

Posted by Rob Tallis <ro...@gmail.com>.
Perfect, thanks for the help



Re: 1.5 - how to build rpm; cdh3u4;

Posted by John Vines <jv...@gmail.com>.
It also appears that CDH3u* does not have commons-collections or
commons-configuration included, so you will need to manually add those jars
to the classpath, either in accumulo lib or hadoop lib. Without these
files, tserver and master will not start.
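
As a hedged illustration (the versions and download URLs below are
assumptions, not necessarily the exact versions 1.5 depends on), pulling
the two jars from Maven Central into Accumulo's lib directory would look
something like:

cd $ACCUMULO_HOME/lib
wget http://repo1.maven.org/maven2/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar
wget http://repo1.maven.org/maven2/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar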


-- 
Cheers
~John

Re: 1.5 - how to build rpm; cdh3u4;

Posted by Josh Elser <jo...@gmail.com>.
FWIW, if you don't run -DskipTests, you will get some failures on some 
of the newer MiniAccumuloCluster tests.

testPerTableClasspath(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
   test(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)

Just about the same thing as we were seeing on 
https://issues.apache.org/jira/browse/ACCUMULO-837. My guess would be 
that we're including the wrong test dependency.

On 5/10/13 3:33 AM, Rob Tallis wrote:
> For info, changing it to cdh3u5, it *does* work:
> mvn clean package -P assemble -DskipTests  -Dhadoop.version=0.20.2-cdh3u5
> -Dzookeeper.version=3.3.5-cdh3u5