Posted to dev@mesos.apache.org by 王瑜 <wa...@nfs.iscas.ac.cn> on 2013/04/08 11:45:50 UTC

../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Hi all,
There is another problem: when I run "make check", it cannot get through the ZooKeeper tests. What is the problem here?

Note: Google Test filter = *-
[==========] Running 212 tests from 44 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::HierarchicalAllocatorProcess<mesos::internal::master::DRFSorter, mesos::internal::master::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:202: Failure
Failed
Waited too long for 'shutdownMsg'



From: Vinod Kone
Date: 2013-04-05 01:25
To: mesos-dev@incubator.apache.org
Subject: Re: Re: Caused by: java.io.IOException: Task process exit with nonzero status of 1.
Hi Wang,

The version of Mesos on trunk is 0.12.0. We have recently refactored our
Hadoop port, so I suggest you get the latest copy from trunk and build
Hadoop. You can do it as follows:

$ git clone git://git.apache.org/mesos.git
$ cd mesos
$ ./bootstrap
$ ./configure
$ make
$ cd hadoop
$ make hadoop-0.20.205.0
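
If you want to confirm that the clone really is the 0.12.0 trunk mentioned above before building, the checked-out commit and the version declared to autoconf can be inspected; a minimal sketch, assuming the version is set via AC_INIT in configure.ac, as in Mesos checkouts of that era:

$ git log -1 --oneline             # confirm the checkout is at the latest trunk commit
$ grep -m1 'AC_INIT' configure.ac  # the printed line should name the expected Mesos version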


@vinodkone


On Thu, Apr 4, 2013 at 4:43 AM, Wang Yu <wa...@nfs.iscas.ac.cn> wrote:

> Hi Vinod,
>
> Yes, I am running trunk, which is version 0.9.0. I deploy Hadoop using the
> default version, and I just want to run map-reduce programs on Hadoop, using
> Mesos's resource scheduling.
>
> There are 3 servers:
> 192.168.0.2: master
> 192.168.0.3: slave1
> 192.168.0.7: slave5
>
> I set "master" as mesos master, "master""slave1""slave5" as mesos's
> slaves. I deploy hadoop using the same setting.
>
> Do you need any other information? Have I described it clearly? Sorry for my
> poor English...
>
> 2013-04-04
>
>
>
> Wang Yu
>
>
>
> From: Vinod Kone <vi...@twitter.com>
> Date: 2013-04-04 13:27
> Subject: Re: Caused by: java.io.IOException: Task process exit with nonzero
> status of 1.
> To: "mesos-dev@incubator.apache.org"<me...@incubator.apache.org>
> Cc: "mesos-dev"<me...@incubator.apache.org>,"Benjamin Hindman"<
> benh@berkeley.edu>
>
> Are you running off trunk? Can you explain your setup and what you are
> trying to achieve?
>
> @vinodkone
> Sent from my mobile
>
> On Apr 3, 2013, at 7:59 PM, "Wang Yu" <wa...@nfs.iscas.ac.cn> wrote:
>
> > Hi all,
> > Have you ever seen this problem? Please help me, thanks very much!
> >
> > By the way, it works well on a single server; when I deploy Hadoop using
> the Mesos scheduler on 3 servers, the problem occurs.
> >
> >
> > [root@master hadoop-0.20.205.0]# bin/hadoop jar
> hadoop-examples-0.20.205.0.jar randomwriter
> -Dtest.randomwrite.bytes_per_map=6710886
> -Dtest.randomwriter.maps_per_host=10 rand
> > Running 30 maps.
> > Job started: Thu Apr 04 10:54:25 CST 2013
> > 13/04/04 10:54:26 INFO mapred.JobClient: Running job:
> job_201304031018_0005
> > 13/04/04 10:54:27 INFO mapred.JobClient:  map 0% reduce 0%
> > 13/04/04 10:54:50 INFO mapred.JobClient: Task Id :
> attempt_201304031018_0005_m_000000_0, Status : FAILED
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > 13/04/04 10:54:50 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000000_0&filter=stdout
> > 13/04/04 10:54:50 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000000_0&filter=stderr
> > 13/04/04 10:54:51 INFO mapred.JobClient: Task Id :
> attempt_201304031018_0005_m_000001_0, Status : FAILED
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > 13/04/04 10:54:51 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000001_0&filter=stdout
> > 13/04/04 10:54:51 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000001_0&filter=stderr
> > 13/04/04 10:54:51 INFO mapred.JobClient: Task Id :
> attempt_201304031018_0005_m_000002_0, Status : FAILED
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > 13/04/04 10:54:51 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000002_0&filter=stdout
> > 13/04/04 10:54:51 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000002_0&filter=stderr
> > 13/04/04 10:54:53 INFO mapred.JobClient: Task Id :
> attempt_201304031018_0005_m_000003_0, Status : FAILED
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > java.lang.Throwable: Child Error
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:278)
> > Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:265)
> >
> > 13/04/04 10:54:53 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000003_0&filter=stdout
> > 13/04/04 10:54:53 WARN mapred.JobClient: Error reading task
> outputhttp://master:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000003_0&filter=stderr
> >
> > 2013-04-04
> >
> >
> >
> > Wang Yu
>
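
For the "Task process exit with nonzero status of 1" failures quoted above, the tasklog URLs that the JobClient could not read point at the TaskTracker web port of whichever machine actually ran the attempt, so the underlying logs can usually be read directly on that slave. A minimal sketch, assuming a stock Hadoop 0.20.205 layout where per-attempt logs sit under the log directory's userlogs/ subdirectory, and assuming the attempt ran on the slave1 host from the setup above (substitute whichever machine ran it, and your own HADOOP_HOME):

# On the slave that ran the failed attempt (the attempt id is in the JobClient output):
$ ls  $HADOOP_HOME/logs/userlogs/attempt_201304031018_0005_m_000000_0/
$ cat $HADOOP_HOME/logs/userlogs/attempt_201304031018_0005_m_000000_0/stderr

# Or fetch the same log through the TaskTracker's web port, pointing at the slave rather than the master:
$ curl 'http://slave1:50060/tasklog?plaintext=true&attemptid=attempt_201304031018_0005_m_000000_0&filter=stderr'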

Reply: Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Posted by 王瑜 <wa...@nfs.iscas.ac.cn>.
I see, that explains it.
Thanks very much!

From: Benjamin Mahler
Date: 2013-04-09 13:47
To: wangyu
Cc: mesos-dev
Subject: Re: Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure
The new webui runs on port 5050 by default. There is no longer a webui
running on the slave; the new webui is served from the master.


Re: Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Posted by Benjamin Mahler <be...@gmail.com>.
The new webui runs on port 5050 by default. There is no longer a webui
running on the slave; the new webui is served from the master.
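
A quick way to confirm which port the webui is actually listening on is to probe the master from a shell; a minimal sketch, assuming the master from the setup earlier in the thread is running on 192.168.0.2:

$ curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.0.2:5050/   # expect an HTTP status code from the new webui
$ curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.0.2:8080/   # nothing listens on the old port any more, so expect a connection error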


Reply: Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Posted by 王瑜 <wa...@nfs.iscas.ac.cn>.
Hi Ben,

Because I cannot see the web UI at master:8080, I thought there was some problem with my installation. Sorry for not knowing Mesos very well.

Yes, when I run the master and the slave, the console logs look OK, but then why can I not see the web UI? Thanks very much!

When I try "make install", it fails for me every time.





From: Benjamin Mahler
Date: 2013-04-09 00:42
To: mesos-dev@incubator.apache.org; wangyu
Subject: Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure
That looks like a successful install; what led you to think there was an error?


As for the allocator tests, those are known to be flaky and we are currently undertaking some work to make the tests more robust. Is it flaky, or is it failing for you all the time?




Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Posted by Benjamin Mahler <be...@gmail.com>.
That looks like a successful install; what led you to think there was an error?

As for the allocator tests, those are known to be flaky and we are
currently undertaking some work to make the tests more robust. Is it flaky,
or is it failing for you all the time?
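
One way to answer the flaky-versus-always-failing question is to run just the ZooKeeper allocator tests repeatedly in isolation; a minimal sketch, assuming the gtest-based test binary that "make check" builds is src/mesos-tests inside the build directory (the binary name and path may differ in your checkout):

$ cd /home/mesos/build/src
$ ./mesos-tests --gtest_filter='AllocatorZooKeeperTest*' --gtest_repeat=10

If the tests pass consistently in isolation, the failure during the full run is more likely the flakiness described above than a problem with your environment.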



Re: ../src/tests/allocator_zookeeper_tests.cpp:202: Failure

Posted by 王瑜 <wa...@nfs.iscas.ac.cn>.
When I go on to run "make install", the following error occurs; would anybody help me with this? Thanks very much!

/bin/mkdir -p '/home/mesos/build/var/mesos/conf'
 /usr/bin/install -c -m 644  ../../src/conf/mesos.conf.template '/home/mesos/build/var/mesos/conf'
test -z "/home/mesos/build/include/mesos" || /bin/mkdir -p "/home/mesos/build/include/mesos"
 /usr/bin/install -c -m 644 ../include/mesos/mesos.hpp mesos.pb.h '/home/mesos/build/include/mesos'
/usr/bin/install: "../include/mesos/mesos.hpp" 与"/home/mesos/build/include/mesos/mesos.hpp" 为同一文件
make[3]: *** [install-nodist_pkgincludeHEADERS] 错误 1
make[3]: Leaving directory `/home/mesos/build/src'
make[2]: *** [install-am] 错误 2
make[2]: Leaving directory `/home/mesos/build/src'
make[1]: *** [install] 错误 2
make[1]: Leaving directory `/home/mesos/build/src'
make: *** [install-recursive] 错误 1
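
The "are the same file" failure above typically means the install prefix resolves to the build tree itself, so install ends up copying a header onto itself. If an installed copy is really wanted, re-running configure with a prefix outside the build tree avoids the collision; a minimal sketch (the prefix path is only an example, and the layout assumes the out-of-tree build in /home/mesos/build shown above):

$ cd /home/mesos/build
$ ../configure --prefix=/usr/local/mesos   # any directory outside the source and build trees
$ make
$ sudo make install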
