Posted to user@bigtop.apache.org by Tim Harsch <th...@yarcdata.com> on 2014/09/23 22:31:11 UTC

smoke tests in 0.7.0

Hi all,
I am having trouble getting the smoke tests to run in 0.7.0.  I guess my first question is: which systems are these smokes validated against at release time?  I'm wondering if they should work on CentOS 6.

For installing Bigtop, I followed the instructions in the Hadoop for Dummies book.  I installed CentOS 6 in VMware Player, and installed Bigtop via the following:
"yum install hadoop\* mahout\* oozie\* hbase\* hive\* hue\* pig\* zookeeper\* giraph\*"

I believe I now have a working Hadoop system; "jps" confirms running services.  But almost none of the tests pass:
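A minimal sanity check along these lines (the daemon names listed are an assumption for a default single-node install, so yours may differ):

```shell
# List running JVM processes; the Hadoop daemons should appear by class name.
jps
# On a healthy single-node install, expect entries such as (illustrative):
#   NameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer
```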

I've read the top level README, and followed the wiki https://cwiki.apache.org/confluence/display/BIGTOP/Running+integration+and+system+tests.

README:
Following the first step for testing yields:

[root@localhost bigtop-0.7.0]# mvn install -DskipTests -DskipITs -DperformRelease -f bigtop-tests/test-execution/smokes/package/pom.xml
[INFO] Scanning for projects...
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project  (/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml) has 1 error
[ERROR]     Non-readable POM /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml: /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml (No such file or directory)
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException

From the error we see there is no package module.

WIKI:
It basically advises running a submodule…  hadoop seems like a good one.  If I run it, though, I get 8 of 8 tests failing.  Here is the first sign of trouble:

Running org.apache.bigtop.itest.hadoop.mapreduce.TestHadoopSmoke
Failed command: hadoop fs -rmr test.hadoopsmoke.1411495413183/cachefile/out
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D mapred.map.tasks=1 -D mapred.reduce.tasks=1 -D mapred.job.name=Experiment -cacheArchive file:////user/root/test.hadoopsmoke.1411495413183/cachefile/cachedir.jar#testlink -input test.hadoopsmoke.1411495413183/cachefile/input.txt -mapper map.sh -file map.sh -reducer cat -output test.hadoopsmoke.1411495413183/cachefile/out -verbose
error code: 2
stdout: [2014-09-23 20:03:38,522 WARN  [main] streaming.StreamJob (StreamJob.java:parseArgv(290)) - -file option is deprecated, please use generic option -files instead., 2014-09-23 20:03:38,913 WARN  [main] streaming.StreamJob (StreamJob.java:parseArgv(337)) - -cacheArchive option is deprecated, please use -archives instead., STREAM: addTaskEnvironment=HADOOP_ROOT_LOGGER=, STREAM: shippedCanonFiles_=[/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh], STREAM: shipped: true /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh, STREAM: cmd=map.sh, STREAM: cmd=null, STREAM: shipped:

Seems like I must have something wrong.

Any help appreciated.
Thanks,
Tim



Re: smoke tests in 0.7.0

Posted by Roman Shaposhnik <ro...@shaposhnik.org>.
On Wed, Sep 24, 2014 at 2:57 PM, Jay Vyas <ja...@gmail.com> wrote:
> Hi Tim.
>
> I generally deploy bigtop using the vagrant recipes, which set up the memory,
> users, and HDFS initialization automatically.  Meanwhile, when we test vendor
> distros, those settings are usually also set for you.
>
> And you're right, the pig tests aren't idempotent.  Would love to see a jira
> for test idempotency.  I think some others are also not idempotent.
>
> Glad the smoke-tests framework is working for you.  I think you're the 3rd
> person I know who has used it as a quick way to get up and running with
> bigtop smoke tests.
>
> Would you like to summarize your findings in a JIRA outlining improvements
> that you'd like to see? If so I can work on them this weekend.

I can't echo that suggestion enough: we soooo do need folks like you, Tim,
to help us with keeping things up-to-date.

To answer one of the questions you asked earlier: the smoke tests used
to be executed nightly (and most definitely on every release). What
has changed recently is that Bigtop has suffered quite a fundamental
meltdown of our EC2 infra. I'm rebuilding some of it right now, but the
progress is slow.

Not trying to come up with excuses -- just explaining the situation.

Given that we're standardizing our CI on Docker, though, Bigtop
is now a perfect place to get to play with that bit of coolness. So...
if you're interested -- I'd be more than happy to give you a lowdown
on what needs to be done.

Thanks,
Roman.

Re: smoke tests in 0.7.0

Posted by Jay Vyas <ja...@gmail.com>.
Hi Tim.

I generally deploy bigtop using the vagrant recipes, which set up the memory, users, and HDFS initialization automatically.  Meanwhile, when we test vendor distros, those settings are usually also set for you.

And you're right, the pig tests aren't idempotent.  Would love to see a jira for test idempotency.  I think some others are also not idempotent.

Glad the smoke-tests framework is working for you.  I think you're the 3rd person I know who has used it as a quick way to get up and running with bigtop smoke tests.

Would you like to summarize your findings in a JIRA outlining improvements that you'd like to see? If so I can work on them this weekend.


> On Sep 24, 2014, at 5:35 PM, Tim Harsch <th...@yarcdata.com> wrote:
> 
> I'm glad to report that I got the pig gradle test to work. 
> 
> 
> I then ran it as Unix user 'tom' (from the VMware VM used in Hadoop for Dummies).  Problem: no home directory.  Fixed that.
> 
> Next problem:
> /tmp/hadoop-yarn/staging has perms 700 (and is not addressed by /usr/lib/hadoop/libexec/init-hdfs.sh; should it be?)
> Fix:
> % hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn/staging
> 
> Next problem: pig jobs sit in "ACCEPTED" state and never move to running, with, wonderfully, no hints in the logs.  Solution: up the memory settings in mapred-site.xml:
>  <property>
>     <name>mapreduce.map.memory.mb</name>
>     <value>2400</value>
>   </property>
>   <property>
>     <name>mapreduce.map.java.opts</name>
>     <value>-Xmx2048m</value>
>   </property>
>   <property>
>     <name>mapreduce.reduce.memory.mb</name>
>     <value>4400</value>
>   </property>
>   <property>
>     <name>mapreduce.reduce.java.opts</name>
>     <value>-Xmx4096m</value>
>   </property>
> 
> I guess the defaults are just too low…
> 
> Next problem: the test aborts due to a test-resource cleanup issue.   Solved with the following change to the test:
> [tom@localhost pig]$ git diff -- TestPigSmoke.groovy
> diff --git a/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy b/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
> index 9902267..9511626 100644
> --- a/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
> +++ b/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
> @@ -41,6 +41,7 @@ class TestPigSmoke {
>    @AfterClass
>    public static void tearDown() {
>      sh.exec("hadoop fs -rmr -skipTrash pigsmoketest");
> +    sh.exec("hadoop fs -rmr -skipTrash pig-output-wordcount");
>    }
>  
>    @BeforeClass
> 
> 
> So, not casting stones… but when are the smoke tests run?   It seems like it would be optimal if they were run on a Bigtop distro prior to each Bigtop release, on each of the supported OSes (sounds like a lot of work… is there something like that?).  For users stepping into Bigtop for the first time (raises hand…) it would be nice if the docs said: install Bigtop, then add a non-privileged user, up the memory parameters, etc.  Now run all the tests and ensure they pass.
> 
> Tim
> 
> 
> From: Jay Vyas <ja...@gmail.com>
> Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
> Date: Wednesday, September 24, 2014 10:36 AM
> To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
> Subject: Re: smoke tests in 0.7.0
> 
> Hi Tim.  Great to hear you're making progress.
> 
> You're on the right track, but I forget the details.  But yes: you'll have to run some simple commands as user hdfs to set up permissions for "root".
> 
> You can try running your tests as user "hdfs". That is a good hammer to use since hdfs is super user on Hadoop systems that use HDFS as the file system.
> 
> In other systems like gluster, we usually have root as the super user.
> 
> Directory perms are always a pain in Hadoop setup.  If you have suggestions to make it more user-friendly, maybe create a jira.  Along those lines, we have done BIGTOP-1200, which encodes all the info in a JSON file so that any FileSystem can use the Bigtop provisioner.  I can discuss that with you later if you want (send me a private message).
> 
> I haven't merged that to replace init-hdfs, but it is functionally equivalent, and can be found in the code base (see JIRAs BIGTOP-952 and BIGTOP-1200 for details).
> 
> 
> On Sep 24, 2014, at 12:50 PM, Tim Harsch <th...@yarcdata.com> wrote:
> 
>> Thanks, that was helpful.   So I looked closely at the TestPigSmoke test and tried repeating its steps manually, which really helped.  I was able to track the issue down to a perms problem when running as user root.  See this:
>> 
>> [root@localhost pig]# hadoop fs -ls /
>> Found 6 items
>> drwxrwxrwx   - hdfs  supergroup          0 2014-09-24 00:32 /benchmarks
>> drwxr-xr-x   - hbase hbase               0 2014-09-24 00:32 /hbase
>> drwxr-xr-x   - solr  solr                0 2014-09-24 00:32 /solr
>> drwxrwxrwt   - hdfs  supergroup          0 2014-09-24 18:33 /tmp
>> drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:33 /user
>> drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:32 /var
>> 
>> [root@localhost pig]# hadoop fs -ls /tmp
>> Found 2 items
>> drwxrwxrwx   - mapred mapred              0 2014-09-24 00:37 /tmp/hadoop-yarn
>> drwxr-xr-x   - root   supergroup          0 2014-09-24 01:29 /tmp/temp-1450563950
>> 
>> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn
>> Found 1 items
>> drwxrwx---   - mapred mapred          0 2014-09-24 00:37 /tmp/hadoop-yarn/staging
>> 
>> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn/staging
>> ls: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---
>> 
>> OK, makes sense.  But I'm a little confused…  I thought all the directories would be set up correctly by the script /usr/lib/hadoop/libexec/init-hdfs.sh, which, as you can tell from the above output, I did run.  From the docs I've read, the assumption is that after running /usr/lib/hadoop/libexec/init-hdfs.sh all tests should pass… but perhaps I missed some instruction somewhere.
>> 
>> Tim
>> 
>> 
>> 
>> From: jay vyas <ja...@gmail.com>
>> Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
>> Date: Wednesday, September 24, 2014 5:46 AM
>> To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
>> Subject: Re: smoke tests in 0.7.0
>> 
>> Thanks, Tim.  It could be related to permissions on the DFS... depending on the user you are running the job as.
>> 
>> Can you paste the error you got?  In general the errors should be easy to track down in smoke-tests (you can just hack some print statements into the groovy script under pig/).
>> Also, the stack trace should give you some information.

Re: smoke tests in 0.7.0

Posted by Tim Harsch <th...@yarcdata.com>.
I'm glad to report that I got the pig gradle test to work.


I then ran it as Unix user 'tom' (from the VMware VM used in Hadoop for Dummies).  Problem: no home directory.  Fixed that.

Next problem:
/tmp/hadoop-yarn/staging has perms 700 (and is not addressed by /usr/lib/hadoop/libexec/init-hdfs.sh; should it be?)
Fix:
% hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn/staging

Next problem: pig jobs sit in "ACCEPTED" state and never move to running, with, wonderfully, no hints in the logs.  Solution: up the memory settings in mapred-site.xml:
 <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2400</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4400</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx4096m</value>
  </property>

I guess the defaults are just too low…

Next problem: the test aborts due to a test-resource cleanup issue.   Solved with the following change to the test:
[tom@localhost pig]$ git diff -- TestPigSmoke.groovy
diff --git a/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy b/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
index 9902267..9511626 100644
--- a/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
+++ b/bigtop-tests/smoke-tests/pig/TestPigSmoke.groovy
@@ -41,6 +41,7 @@ class TestPigSmoke {
   @AfterClass
   public static void tearDown() {
     sh.exec("hadoop fs -rmr -skipTrash pigsmoketest");
+    sh.exec("hadoop fs -rmr -skipTrash pig-output-wordcount");
   }

   @BeforeClass


So, not casting stones… but when are the smoke tests run?   It seems like it would be optimal if they were run on a Bigtop distro prior to each Bigtop release, on each of the supported OSes (sounds like a lot of work… is there something like that?).  For users stepping into Bigtop for the first time (raises hand…) it would be nice if the docs said: install Bigtop, then add a non-privileged user, up the memory parameters, etc.  Now run all the tests and ensure they pass.

Tim


From: Jay Vyas <ja...@gmail.com>
Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Date: Wednesday, September 24, 2014 10:36 AM
To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Subject: Re: smoke tests in 0.7.0

Hi Tim.  Great to hear you're making progress.

You're on the right track, but I forget the details.  But yes: you'll have to run some simple commands as user hdfs to set up permissions for "root".

You can try running your tests as user "hdfs". That is a good hammer to use since hdfs is super user on Hadoop systems that use HDFS as the file system.

In other systems like gluster, we usually have root as the super user.

Directory perms are always a pain in Hadoop setup.  If you have suggestions to make it more user-friendly, maybe create a jira.  Along those lines, we have done BIGTOP-1200, which encodes all the info in a JSON file so that any FileSystem can use the Bigtop provisioner.  I can discuss that with you later if you want (send me a private message).

I haven't merged that to replace init-hdfs, but it is functionally equivalent, and can be found in the code base (see JIRAs BIGTOP-952 and BIGTOP-1200 for details).


On Sep 24, 2014, at 12:50 PM, Tim Harsch <th...@yarcdata.com> wrote:

Thanks, that was helpful.   So I looked closely at the TestPigSmoke test and tried repeating its steps manually, which really helped.  I was able to track the issue down to a perms problem when running as user root.  See this:

[root@localhost pig]# hadoop fs -ls /
Found 6 items
drwxrwxrwx   - hdfs  supergroup          0 2014-09-24 00:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2014-09-24 00:32 /hbase
drwxr-xr-x   - solr  solr                0 2014-09-24 00:32 /solr
drwxrwxrwt   - hdfs  supergroup          0 2014-09-24 18:33 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:33 /user
drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:32 /var

[root@localhost pig]# hadoop fs -ls /tmp
Found 2 items
drwxrwxrwx   - mapred mapred              0 2014-09-24 00:37 /tmp/hadoop-yarn
drwxr-xr-x   - root   supergroup          0 2014-09-24 01:29 /tmp/temp-1450563950

[root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn
Found 1 items
drwxrwx---   - mapred mapred          0 2014-09-24 00:37 /tmp/hadoop-yarn/staging

[root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn/staging
ls: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---

OK, makes sense.  But I'm a little confused…  I thought all the directories would be set up correctly by the script /usr/lib/hadoop/libexec/init-hdfs.sh, which, as you can tell from the above output, I did run.  From the docs I've read, the assumption is that after running /usr/lib/hadoop/libexec/init-hdfs.sh all tests should pass… but perhaps I missed some instruction somewhere.

Tim



From: jay vyas <ja...@gmail.com>
Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Date: Wednesday, September 24, 2014 5:46 AM
To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Subject: Re: smoke tests in 0.7.0

Thanks, Tim.  It could be related to permissions on the DFS... depending on the user you are running the job as.

Can you paste the error you got?  In general the errors should be easy to track down in smoke-tests (you can just hack some print statements into the groovy script under pig/).
Also, the stack trace should give you some information.

Re: smoke tests in 0.7.0

Posted by Jay Vyas <ja...@gmail.com>.
Hi Tim.  Great to hear you're making progress.

You're on the right track, but I forget the details.  But yes: you'll have to run some simple commands as user hdfs to set up permissions for "root".
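Something along these lines should do it (a sketch, not the exact incantation: it assumes the HDFS superuser is 'hdfs' and a default Bigtop layout):

```shell
# Give root a home directory in HDFS, then open up the YARN staging dir.
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
sudo -u hdfs hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn/staging
```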

You can try running your tests as user "hdfs". That is a good hammer to use since hdfs is super user on Hadoop systems that use HDFS as the file system.

In other systems like gluster, we usually have root as the super user.

Directory perms are always a pain in Hadoop setup.  If you have suggestions to make it more user-friendly, maybe create a jira.  Along those lines, we have done BIGTOP-1200, which encodes all the info in a JSON file so that any FileSystem can use the Bigtop provisioner.  I can discuss that with you later if you want (send me a private message).

I haven't merged that to replace init-hdfs, but it is functionally equivalent, and can be found in the code base (see JIRAs BIGTOP-952 and BIGTOP-1200 for details).


> On Sep 24, 2014, at 12:50 PM, Tim Harsch <th...@yarcdata.com> wrote:
> 
> Thanks, that was helpful.   So I looked closely at the TestPigSmoke test and tried repeating its steps manually, which really helped.  I was able to track the issue down to a perms problem when running as user root.  See this:
> 
> [root@localhost pig]# hadoop fs -ls /
> Found 6 items
> drwxrwxrwx   - hdfs  supergroup          0 2014-09-24 00:32 /benchmarks
> drwxr-xr-x   - hbase hbase               0 2014-09-24 00:32 /hbase
> drwxr-xr-x   - solr  solr                0 2014-09-24 00:32 /solr
> drwxrwxrwt   - hdfs  supergroup          0 2014-09-24 18:33 /tmp
> drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:33 /user
> drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:32 /var
> 
> [root@localhost pig]# hadoop fs -ls /tmp
> Found 2 items
> drwxrwxrwx   - mapred mapred              0 2014-09-24 00:37 /tmp/hadoop-yarn
> drwxr-xr-x   - root   supergroup          0 2014-09-24 01:29 /tmp/temp-1450563950
> 
> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn
> Found 1 items
> drwxrwx---   - mapred mapred          0 2014-09-24 00:37 /tmp/hadoop-yarn/staging
> 
> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn/staging
> ls: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---
> 
> OK, makes sense.  But I'm a little confused…  I thought all the directories would be set up correctly by the script /usr/lib/hadoop/libexec/init-hdfs.sh, which, as you can tell from the above output, I did run.  From the docs I've read, the assumption is that after running /usr/lib/hadoop/libexec/init-hdfs.sh all tests should pass… but perhaps I missed some instruction somewhere.
> 
> Tim
> 
> 
> 
> From: jay vyas <ja...@gmail.com>
> Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
> Date: Wednesday, September 24, 2014 5:46 AM
> To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
> Subject: Re: smoke tests in 0.7.0
> 
> Thanks, Tim.  It could be related to permissions on the DFS... depending on the user you are running the job as.
> 
> Can you paste the error you got?  In general the errors should be easy to track down in smoke-tests (you can just hack some print statements into the groovy script under pig/).
> Also, the stack trace should give you some information.

Re: smoke tests in 0.7.0

Posted by Tim Harsch <th...@yarcdata.com>.
Thanks, that was helpful.   So I looked closely at the TestPigSmoke test and tried repeating its steps manually, which really helped.  I was able to track the issue down to a perms problem when running as user root.  See this:

[root@localhost pig]# hadoop fs -ls /
Found 6 items
drwxrwxrwx   - hdfs  supergroup          0 2014-09-24 00:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2014-09-24 00:32 /hbase
drwxr-xr-x   - solr  solr                0 2014-09-24 00:32 /solr
drwxrwxrwt   - hdfs  supergroup          0 2014-09-24 18:33 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:33 /user
drwxr-xr-x   - hdfs  supergroup          0 2014-09-24 00:32 /var

[root@localhost pig]# hadoop fs -ls /tmp
Found 2 items
drwxrwxrwx   - mapred mapred              0 2014-09-24 00:37 /tmp/hadoop-yarn
drwxr-xr-x   - root   supergroup          0 2014-09-24 01:29 /tmp/temp-1450563950

[root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn
Found 1 items
drwxrwx---   - mapred mapred          0 2014-09-24 00:37 /tmp/hadoop-yarn/staging

[root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn/staging
ls: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---

OK, makes sense.  But I'm a little confused…  I thought all the directories would be set up correctly by the script /usr/lib/hadoop/libexec/init-hdfs.sh, which, as you can tell from the above output, I did run.  From the docs I've read, the assumption is that after running /usr/lib/hadoop/libexec/init-hdfs.sh all tests should pass… but perhaps I missed some instruction somewhere.

Tim



From: jay vyas <ja...@gmail.com>
Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Date: Wednesday, September 24, 2014 5:46 AM
To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Subject: Re: smoke tests in 0.7.0

Thanks, Tim.  It could be related to permissions on the DFS... depending on the user you are running the job as.

Can you paste the error you got?  In general the errors should be easy to track down in smoke-tests (you can just hack some print statements into the groovy script under pig/).
Also, the stack trace should give you some information.

Re: smoke tests in 0.7.0

Posted by jay vyas <ja...@gmail.com>.
Thanks, Tim.  It could be related to permissions on the DFS... depending on
the user you are running the job as.

Can you paste the error you got?  In general the errors should be easy to
track down in smoke-tests (you can just hack some print statements into the
groovy script under pig/).
Also, the stack trace should give you some information.

Re: smoke tests in 0.7.0

Posted by Tim Harsch <th...@yarcdata.com>.
Hi Jay,
I looked at the README in master, and it has since been updated.  It was the README in the 0.7.0 distro that was the issue.

I tried running the gradle smokes in master and I am getting this:
:pig:test
ENV VARIABLE: PIG_HOME = /usr/lib/pig

org.apache.bigtop.itest.hadoop.mapreduce.TestPigSmoke > test FAILED
    java.lang.AssertionError at TestPigSmoke.groovy:56

1 test completed, 1 failed
:pig:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':pig:test'.
> There were failing tests. See the report at: file:///root/git/bigtop/bigtop-tests/smoke-tests/pig/build/reports/tests/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 32.62 secs


From: jay vyas <ja...@gmail.com>
Reply-To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Date: Tuesday, September 23, 2014 1:43 PM
To: "user@bigtop.apache.org" <us...@bigtop.apache.org>
Subject: Re: smoke tests in 0.7.0

Hi tim ,

1) Thanks for noting.  Looks like it's time for a README update; the "package" part needs to be modified.  Could you create a JIRA for that?

2) Also, IIRC, is there a /user/root/ directory in HDFS?  There needs to be one so that it can write the local files.

We've also recently completed a simpler way of running the tests, the "smoke-tests" package, which you can try (see the README).


On Tue, Sep 23, 2014 at 4:31 PM, Tim Harsch <th...@yarcdata.com> wrote:
Hi all,
I am having trouble getting the smoke tests to run in 0.7.0.  I guess my first question is: which systems are these smokes validated against at release time?  I'm wondering if they should work on CentOS 6.

For installing Bigtop, I followed the instructions in the Hadoop for Dummies book.  I installed CentOS 6 in VMware Player, and installed Bigtop via the following:
"yum install hadoop\* mahout\* oozie\* hbase\* hive\* hue\* pig\* zookeeper\* giraph\*"

I believe I now have a working Hadoop system; "jps" confirms running services.  But almost none of the tests pass:

I've read the top level README, and followed the wiki https://cwiki.apache.org/confluence/display/BIGTOP/Running+integration+and+system+tests.

README:
Following the first step for testing yields:

[root@localhost bigtop-0.7.0]# mvn install -DskipTests -DskipITs -DperformRelease -f bigtop-tests/test-execution/smokes/package/pom.xml
[INFO] Scanning for projects...
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project  (/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml) has 1 error
[ERROR]     Non-readable POM /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml: /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml (No such file or directory)
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException

From the error we see there is no package module.

WIKI:
It basically advises running a submodule…  hadoop seems like a good one.  If I run it, though, I get 8 of 8 tests failing.  Here is the first sign of trouble:

Running org.apache.bigtop.itest.hadoop.mapreduce.TestHadoopSmoke
Failed command: hadoop fs -rmr test.hadoopsmoke.1411495413183/cachefile/out
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D mapred.map.tasks=1 -D mapred.reduce.tasks=1 -D mapred.job.name=Experiment -cacheArchive file:////user/root/test.hadoopsmoke.1411495413183/cachefile/cachedir.jar#testlink -input test.hadoopsmoke.1411495413183/cachefile/input.txt -mapper map.sh -file map.sh -reducer cat -output test.hadoopsmoke.1411495413183/cachefile/out -verbose
error code: 2
stdout: [2014-09-23 20:03:38,522 WARN  [main] streaming.StreamJob (StreamJob.java:parseArgv(290)) - -file option is deprecated, please use generic option -files instead., 2014-09-23 20:03:38,913 WARN  [main] streaming.StreamJob (StreamJob.java:parseArgv(337)) - -cacheArchive option is deprecated, please use -archives instead., STREAM: addTaskEnvironment=HADOOP_ROOT_LOGGER=, STREAM: shippedCanonFiles_=[/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh], STREAM: shipped: true /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh, STREAM: cmd=map.sh, STREAM: cmd=null, STREAM: shipped:

Seems like I must have something wrong.

Any help appreciated.
Thanks,
Tim





--
jay vyas

Re: smoke tests in 0.7.0

Posted by jay vyas <ja...@gmail.com>.
Hi tim ,

1) Thanks for noting.  Looks like it's time for a README update; the "package"
part needs to be modified.  Could you create a JIRA for that?

2) Also, IIRC, is there a /user/root/ directory in HDFS?  There needs to
be one so that it can write the local files.
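If it's missing, creating it would look roughly like this (a sketch; assumes the HDFS superuser is 'hdfs'):

```shell
# Create root's home directory in HDFS and hand ownership to root.
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
```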

We've also recently completed a simpler way of running the tests, the
"smoke-tests" package, which you can try (see the README).


On Tue, Sep 23, 2014 at 4:31 PM, Tim Harsch <th...@yarcdata.com> wrote:

>  Hi all,
>  I am having trouble getting the smoke tests to run in 0.7.0.  I guess my
> first question is: which systems are these smokes validated against
> at release time?  I'm wondering if they should work on CentOS 6.
>
>  For installing Bigtop, I followed the instructions in the Hadoop for Dummies
> book.  I installed CentOS 6 in VMware Player, and installed Bigtop via the
> following:
>  "yum install hadoop\* mahout\* oozie\* hbase\* hive\* hue\* pig\*
> zookeeper\* giraph\*"
>
>  I believe I now have a working Hadoop system; "jps" confirms running
> services.  But almost none of the tests pass:
>
>  I've read the top level README, and followed the wiki
> https://cwiki.apache.org/confluence/display/BIGTOP/Running+integration+and+system+tests
> .
>
>  README:
>  Following the first step for testing yields:
>
>  [root@localhost bigtop-0.7.0]# mvn install -DskipTests -DskipITs
> -DperformRelease -f bigtop-tests/test-execution/smokes/package/pom.xml
> [INFO] Scanning for projects...
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project
>  (/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml)
> has 1 error
> [ERROR]     Non-readable POM
> /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml:
> /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/package/pom.xml
> (No such file or directory)
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
>
>  From the error we see there is no package module.
>
>  WIKI:
>  It basically advises running a submodule…  hadoop seems like a good one.  If
> I run it, though, I get 8 of 8 tests failing.  Here is the first sign of
> trouble:
>
>  Running org.apache.bigtop.itest.hadoop.mapreduce.TestHadoopSmoke
> Failed command: hadoop fs -rmr test.hadoopsmoke.1411495413183/cachefile/out
> hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D
> mapred.map.tasks=1 -D mapred.reduce.tasks=1 -D mapred.job.name=Experiment
> -cacheArchive
> file:////user/root/test.hadoopsmoke.1411495413183/cachefile/cachedir.jar#testlink
> -input test.hadoopsmoke.1411495413183/cachefile/input.txt -mapper map.sh
> -file map.sh -reducer cat -output
> test.hadoopsmoke.1411495413183/cachefile/out -verbose
> error code: 2
> stdout: [2014-09-23 20:03:38,522 WARN  [main] streaming.StreamJob
> (StreamJob.java:parseArgv(290)) - -file option is deprecated, please use
> generic option -files instead., 2014-09-23 20:03:38,913 WARN  [main]
> streaming.StreamJob (StreamJob.java:parseArgv(337)) - -cacheArchive option
> is deprecated, please use -archives instead., STREAM:
> addTaskEnvironment=HADOOP_ROOT_LOGGER=, STREAM:
> shippedCanonFiles_=[/root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh],
> STREAM: shipped: true
> /root/bigtop/bigtop-0.7.0/bigtop-tests/test-execution/smokes/hadoop/target/map.sh,
> STREAM: cmd=map.sh, STREAM: cmd=null, STREAM: shipped:
>
>  Seems like I must have something wrong.
>
>  Any help appreciated.
> Thanks,
> Tim
>
>
>


-- 
jay vyas