Posted to common-dev@hadoop.apache.org by "Bhat, Vijay (CONT)" <Vi...@capitalone.com> on 2014/12/02 01:59:57 UTC

Test case failures with Hadoop trunk

Hi all,

My name is Vijay Bhat and I am looking to contribute to the Hadoop YARN project. I have been using and benefiting from Hadoop ecosystem technologies for a few years now and I want to give back to the community that makes this happen.

I forked the apache/hadoop repository on GitHub and synced to the last commit (https://github.com/apache/hadoop/commit/1556f86a31a54733d6550363aa0e027acca7823b) that built successfully on the Apache build server (https://builds.apache.org/view/All/job/Hadoop-Yarn-trunk/758/).

However, I get test case failures when I build the Hadoop source code on a VM running Ubuntu 12.04 LTS.

The Maven command I am running from the Hadoop base directory is:

mvn clean install -U
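
If it is useful to narrow things down, I believe the build can also be limited to a single module from the top-level directory with something like the following (the hadoop-common module path is just my assumption about where the failures below occur, so treat this as a sketch rather than the documented procedure):

# build and install only hadoop-common plus the modules it depends on, without running tests
mvn install -U -DskipTests -pl hadoop-common-project/hadoop-common -am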

Console output

Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 2.392 sec <<< FAILURE! - in org.apache.hadoop.ipc.TestDecayRpcScheduler
testAccumulate(org.apache.hadoop.ipc.TestDecayRpcScheduler)  Time elapsed: 0.084 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.ipc.TestDecayRpcScheduler.testAccumulate(TestDecayRpcScheduler.java:136)

testPriority(org.apache.hadoop.ipc.TestDecayRpcScheduler)  Time elapsed: 0.052 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.ipc.TestDecayRpcScheduler.testPriority(TestDecayRpcScheduler.java:197)


Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 111.519 sec <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverControllerStress
testExpireBackAndForth(org.apache.hadoop.ha.TestZKFailoverControllerStress)  Time elapsed: 45.46 sec  <<< ERROR!
java.lang.Exception: test timed out after 40000 milliseconds
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.ha.MiniZKFCCluster.waitForHAState(MiniZKFCCluster.java:164)
at org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:236)
at org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:79)


Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 62.514 sec <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController)  Time elapsed: 15.062 sec  <<< ERROR!
java.lang.Exception: test timed out after 15000 milliseconds
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
at org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
at org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
at org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
at org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
at org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailoverFailBecomingStandby(TestZKFailoverController.java:532)
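
For anyone trying to reproduce this, a single failing class can be re-run on its own with something like the commands below (again, the module path is only my guess for where these tests live in trunk), and the full per-class output should end up under the module's target/surefire-reports directory:

# re-run only the DecayRpcScheduler tests in the hadoop-common module
mvn test -pl hadoop-common-project/hadoop-common -Dtest=TestDecayRpcScheduler
# full text report written by Surefire for that class
less hadoop-common-project/hadoop-common/target/surefire-reports/org.apache.hadoop.ipc.TestDecayRpcScheduler.txt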


When I skip the tests, the source code compiles successfully.

mvn clean install -U -DskipTests
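
If the goal is just to get a complete build while still seeing which tests fail, I believe Maven can also be told to keep going instead of skipping the tests entirely; both of these are standard Maven/Surefire options rather than anything Hadoop-specific:

# run the tests but let the install finish even if some of them fail
mvn clean install -U -Dmaven.test.failure.ignore=true
# or: let the reactor continue past modules whose build fails
mvn clean install -U -fn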

Is there something I’m doing incorrectly that’s causing the test cases to fail? I’d really appreciate any insight from folks who have gone through this process before. I’ve looked at the JIRAs labeled newbie (http://wiki.apache.org/hadoop/HowToContribute) but didn’t find promising leads.

Thanks for the help!
-Vijay





Re: Test case failures with Hadoop trunk

Posted by "Bhat, Vijay (CONT)" <Vi...@capitalone.com>.
Hi Ravi,

Thanks for the background! I like your idea of contributing to fix broken
CI specs and will keep that in mind.

-Vijay 

On 12/4/14, 10:36 AM, "Ravi Prakash" <ra...@ymail.com> wrote:

>Hi Vijay!
>Thanks a lot for your initiative. You are correct. There are often a lot
>of unit tests which fail in addition to flaky tests which fail
>intermittently. About two years ago I tried to bring the number of unit
>test failures down to zero, but since then a lot of features have been
>added and I'm guessing the number of test failures has crept back up.
>Most people just keep a list of tests which fail usually and ignore
>those. There also ought to be open JIRAs for failing tests usually.
>
>As a community we should try to bring these down to 0, but everyone is
>busy putting in new features and improving our CI unfortunately falls in
>priority. Prime areas for contributions ;-)
>HTH
>Ravi
> 
>




Re: Test case failures with Hadoop trunk

Posted by Ravi Prakash <ra...@ymail.com>.
Hi Vijay!
Thanks a lot for your initiative. You are correct: there are often a number of unit tests that fail outright, in addition to flaky tests that fail intermittently. About two years ago I tried to bring the number of unit test failures down to zero, but a lot of features have been added since then and I'm guessing the count has crept back up. Most people just keep a list of the tests that usually fail and ignore those (a rough sketch of what I mean is below). There also ought to be open JIRAs for the tests that usually fail.
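
Just to illustrate the kind of list I mean, something along these lines is enough (the file names and the grep pattern here are only a sketch I am making up for the example, not anything we have checked in):

# run the whole build but do not stop on test failures
mvn clean install -Dmaven.test.failure.ignore=true
# collect the classes that failed this run from the Surefire text reports
find . -path '*/target/surefire-reports/*.txt' | xargs grep -lE '(FAILURE|ERROR)!' | sort > failing-now.txt
# compare against a locally maintained list of known-flaky tests
diff known-flaky-tests.txt failing-now.txt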

As a community we should try to bring these down to 0, but everyone is busy putting in new features, and improving our CI unfortunately falls down the priority list. Prime areas for contributions ;-)
HTH
Ravi
 

