Posted to user@hama.apache.org by changguanghui <ch...@huawei.com> on 2011/09/15 10:01:50 UTC

Hama help (how the distributed mode is working)

Hi,
I can't run the examples provided in the HAMA tarball on three machines.
The trouble is: how can I configure HAMA for distributed mode?
Could you give me some details on setting up HAMA on three machines? Thank you!

-----Original Message-----
From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
Sent: September 14, 2011, 18:00
To: Luis Eduardo Pineda Morales
Cc: hama-user@incubator.apache.org
Subject: Re: Hama help (Local mode not working)

Hi Luis,


> - For mere consistency of the page, you might want to use the tag <tt>
> (used in the rest of the document) instead of the <em> that you are using
> for names of files and configuration properties.
>

Thanks, I will take care of that.

> - I don't know if this is only my problem, but when I execute Hama with the
> Local configuration, the Master doesn't run (and neither does the Groom).
> They don't recognize "local" as a valid hostname, both fail with this
> exception:
>

"local" itself is not a hostname; there is a bug in our handling of this mode.
Actually, nothing should be launched in that case. I'll extend this in our wiki.
What you are looking for is the pseudo-distributed mode, which runs a
Master, a Groom and a ZooKeeper on your machine.
You then have to provide "localhost" or the real hostname of your machine.

> Is this maybe a problem with version 0.3? Would you suggest using 0.2
> instead?
>

0.2 has no local mode, so you won't face these problems there.
But since this is just a quirk in your configuration, which can be fixed by
using "localhost" instead of "local", you don't need to downgrade.
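For instance, a minimal pseudo-distributed hama-site.xml would then point the master at "localhost" instead of "local" (a sketch only; adjust the value to your own hostname if you use one):

```xml
<configuration>
  <property>
    <!-- hostname the BSPMaster binds to; "local" is not a valid hostname -->
    <name>bsp.master.address</name>
    <value>localhost</value>
  </property>
</configuration>
```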

I hope this helps.

Regards,
Thomas

2011/9/14 Luis Eduardo Pineda Morales <lu...@gmail.com>

> Thanks for your prompt reply, Thomas,
>
> The wiki is much clearer now that you have added the part about the modes.
> However, if I may, I have a couple of remarks:
>
> - For mere consistency of the page, you might want to use the tag <tt>
> (used in the rest of the document) instead of the <em> that you are using
> for names of files and configuration properties.
>
> - I don't know if this is only my problem, but when I execute Hama with the
> Local configuration, the Master doesn't run (and neither does the Groom).
> They don't recognize "local" as a valid hostname, both fail with this
> exception:
>
> From* bspmaster.log:*
>
> *FATAL org.apache.hama.BSPMasterRunner: java.net.UnknownHostException:
> Invalid hostname for server: local*
>         at org.apache.hadoop.ipc.Server.bind(Server.java:198)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:441)
>         at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:250)
>         at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>         at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>         at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>         at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>
>
> From *groom.log*
>
> ERROR org.apache.hama.bsp.GroomServer: Got fatal exception while
> reinitializing GroomServer: java.net.UnknownHostException: unknown host:
> local
>         at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:195)
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:850)
>         at org.apache.hadoop.ipc.Client.call(Client.java:720)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy4.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
>         at org.apache.hama.bsp.GroomServer.initialize(GroomServer.java:279)
>         at org.apache.hama.bsp.GroomServer.run(GroomServer.java:600)
>         at java.lang.Thread.run(Thread.java:680)
>
>
> I've tested it in Debian, Ubuntu and MacOS Terminal. Is this maybe a
> problem with version 0.3? Would you suggest using 0.2 instead?
>
>
> I'm copying this to the user mailing list too, hope you don't mind.
>
> Luis
>



-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: compared with MapReduce ,what is the advantage of HAMA?

Posted by Thomas Jungblut <th...@googlemail.com>.
Thanks for your tips, I transfer this to our dev-list for discussion.

2011/9/24 changguanghui <ch...@huawei.com>

> I think it may be important to find some algorithms or problems that are
> better suited to HAMA. Then people can compare the results between HAMA
> and MapReduce, because many people want to know why and when they should
> choose HAMA.
>
> -----Original Message-----
> From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> Sent: September 23, 2011, 19:39
> To: hama-user@incubator.apache.org
> Subject: Re: compared with MapReduce ,what is the advantage of HAMA?
>
> Hi,
> To state the advantage clearly: you have less overhead.
> Let me illustrate with an algorithm for mindist search, which I renamed to
> graph exploration. This applies to Shortest Paths, too.
> I wrote about it here:
>
> http://codingwiththomas.blogspot.com/2011/04/graph-exploration-with-hadoop-mapreduce.html
>
> Basically, the algorithm groups the components of the graph and assigns the
> lowest key of the group as an identifier for the component.
> Usually you solve graph problems in MapReduce with a technique called
> "message passing": you send messages to other vertices in every map step,
> then you have to shuffle, sort and reduce the vertices to compute the
> result. This isn't done in a single iteration, so you have to chain several
> map/reduce jobs.
>
> For each iteration you inherit the overhead of sorting and shuffling.
> Additionally, you have to do all of this on disk.
>
> Hama provides a message passing interface, so you don't have to take care
> of writing each message to HDFS.
> Each iteration, which in MapReduce is a full job execution, is called a
> superstep in BSP.
> Each superstep is faster than a full job execution in Hadoop, because you
> avoid the overhead of spilling to disk, job setup, sorting and shuffling.
> In addition, you can keep your whole graph in RAM, which speeds up the
> computation anyway. Hadoop does not offer this capability yet.
>
> But I also want to point out some less positive facts:
> currently no benchmarks against Hadoop or other frameworks like Giraph or
> GoldenORB exist, so we can't say we are the best/fastest/coolest.
> And graph algorithms are hard to code. As you can see, I had to write a lot
> of code to get this running, because I have to take care of the
> partitioning, vertex messaging and I/O myself.
> For that purpose we are going to release a Pregel API, which will make the
> development of graph algorithms much easier.
> You can get a sneak peek here:
> https://issues.apache.org/jira/browse/HAMA-409
>
> That was a lot of text, but I hope it clarifies a lot.
>
> Best Regards,
> Thomas
>
> 2011/9/23 changguanghui <ch...@huawei.com>
>
> > Hi Thomas,
> >
> > Could you provide a concrete example that illustrates the advantage of
> > HAMA over MapReduce?
> >
> > For example, SSSP on HAMA vs. SSSP on MapReduce, so I can catch the idea
> > of HAMA quickly.
> >
> > Thank you very much!
> >
> > Changguanghui
> >
>



-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: compared with MapReduce ,what is the advantage of HAMA?

Posted by changguanghui <ch...@huawei.com>.
I think it may be important to find some algorithms or problems that are better suited to HAMA. Then people can compare the results between HAMA and MapReduce, because many people want to know why and when they should choose HAMA.

-----Original Message-----
From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
Sent: September 23, 2011, 19:39
To: hama-user@incubator.apache.org
Subject: Re: compared with MapReduce ,what is the advantage of HAMA?

Hi,
To state the advantage clearly: you have less overhead.
Let me illustrate with an algorithm for mindist search, which I renamed to
graph exploration. This applies to Shortest Paths, too.
I wrote about it here:
http://codingwiththomas.blogspot.com/2011/04/graph-exploration-with-hadoop-mapreduce.html

Basically, the algorithm groups the components of the graph and assigns the
lowest key of the group as an identifier for the component.
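To make the grouping concrete, here is a framework-free toy version of that idea in Python (illustrative only, not Hama code): every vertex repeatedly adopts the smallest label among itself and its neighbors until the labels stop changing.

```python
# Toy "graph exploration" (mindist search): each connected component
# ends up labeled with its lowest vertex id.
def explore(adjacency):
    labels = {v: v for v in adjacency}  # start: every vertex labels itself
    changed = True
    while changed:  # one pass ~ one MapReduce iteration / one BSP superstep
        changed = False
        for vertex, neighbors in adjacency.items():
            best = min([labels[vertex]] + [labels[n] for n in neighbors])
            if best < labels[vertex]:
                labels[vertex] = best
                changed = True
    return labels

# Two components: {1, 2, 3} and {4, 5}
print(explore({1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}))
# {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

In the MapReduce version each while-pass is a full job; in BSP it is a single superstep.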
Usually you solve graph problems in MapReduce with a technique called
"message passing": you send messages to other vertices in every map step,
then you have to shuffle, sort and reduce the vertices to compute the
result. This isn't done in a single iteration, so you have to chain several
map/reduce jobs.

For each iteration you inherit the overhead of sorting and shuffling.
Additionally, you have to do all of this on disk.

Hama provides a message passing interface, so you don't have to take care of
writing each message to HDFS.
Each iteration, which in MapReduce is a full job execution, is called a
superstep in BSP.
Each superstep is faster than a full job execution in Hadoop, because you
avoid the overhead of spilling to disk, job setup, sorting and shuffling.
In addition, you can keep your whole graph in RAM, which speeds up the
computation anyway. Hadoop does not offer this capability yet.
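The superstep pattern itself can be sketched in a few lines of Python (the names are illustrative, not the Hama API): peers compute locally and queue messages, and a barrier delivers all messages at once before the next superstep.

```python
# Minimal in-memory BSP round-loop: each peer computes locally and queues
# messages; the "barrier" at the end of each superstep delivers all queued
# messages at once -- nothing is spilled to disk between supersteps.
def bsp_run(peers, num_supersteps):
    inboxes = {p: [] for p in peers}
    for step in range(num_supersteps):
        outboxes = {}
        for peer, compute in peers.items():
            # compute(step, inbox) returns (destination, message) pairs
            outboxes[peer] = compute(step, inboxes[peer])
        # barrier: sends from this superstep become next superstep's inboxes
        inboxes = {p: [] for p in peers}
        for sent in outboxes.values():
            for dest, msg in sent:
                inboxes[dest].append(msg)
    return inboxes

# Peer "a" messages "b" once per superstep; "b" only receives.
final = bsp_run({"a": lambda step, inbox: [("b", step)],
                 "b": lambda step, inbox: []}, num_supersteps=3)
print(final)  # {'a': [], 'b': [2]}
```

In real Hama the peers run on different machines and the barrier is an explicit sync call; the point of this sketch is only that messages stay in memory between supersteps instead of going through HDFS, sort and shuffle.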

But I also want to point out some less positive facts:
currently no benchmarks against Hadoop or other frameworks like Giraph or
GoldenORB exist, so we can't say we are the best/fastest/coolest.
And graph algorithms are hard to code. As you can see, I had to write a lot
of code to get this running, because I have to take care of the
partitioning, vertex messaging and I/O myself.
For that purpose we are going to release a Pregel API, which will make the
development of graph algorithms much easier.
You can get a sneak peek here:
https://issues.apache.org/jira/browse/HAMA-409

That was a lot of text, but I hope it clarifies a lot.

Best Regards,
Thomas

2011/9/23 changguanghui <ch...@huawei.com>

> Hi Thomas,
>
> Could you provide a concrete example that illustrates the advantage of HAMA
> over MapReduce?
>
> For example, SSSP on HAMA vs. SSSP on MapReduce, so I can catch the idea of
> HAMA quickly.
>
> Thank you very much!
>
> Changguanghui
>

Re: compared with MapReduce ,what is the advantage of HAMA?

Posted by Thomas Jungblut <th...@googlemail.com>.
Hi,
To state the advantage clearly: you have less overhead.
Let me illustrate with an algorithm for mindist search, which I renamed to
graph exploration. This applies to Shortest Paths, too.
I wrote about it here:
http://codingwiththomas.blogspot.com/2011/04/graph-exploration-with-hadoop-mapreduce.html

Basically, the algorithm groups the components of the graph and assigns the
lowest key of the group as an identifier for the component.
Usually you solve graph problems in MapReduce with a technique called
"message passing": you send messages to other vertices in every map step,
then you have to shuffle, sort and reduce the vertices to compute the
result. This isn't done in a single iteration, so you have to chain several
map/reduce jobs.

For each iteration you inherit the overhead of sorting and shuffling.
Additionally, you have to do all of this on disk.

Hama provides a message passing interface, so you don't have to take care of
writing each message to HDFS.
Each iteration, which in MapReduce is a full job execution, is called a
superstep in BSP.
Each superstep is faster than a full job execution in Hadoop, because you
avoid the overhead of spilling to disk, job setup, sorting and shuffling.
In addition, you can keep your whole graph in RAM, which speeds up the
computation anyway. Hadoop does not offer this capability yet.

But I also want to point out some less positive facts:
currently no benchmarks against Hadoop or other frameworks like Giraph or
GoldenORB exist, so we can't say we are the best/fastest/coolest.
And graph algorithms are hard to code. As you can see, I had to write a lot
of code to get this running, because I have to take care of the
partitioning, vertex messaging and I/O myself.
For that purpose we are going to release a Pregel API, which will make the
development of graph algorithms much easier.
You can get a sneak peek here:
https://issues.apache.org/jira/browse/HAMA-409

That was a lot of text, but I hope it clarifies a lot.

Best Regards,
Thomas

2011/9/23 changguanghui <ch...@huawei.com>

> Hi Thomas,
>
> Could you provide a concrete example that illustrates the advantage of HAMA
> over MapReduce?
>
> For example, SSSP on HAMA vs. SSSP on MapReduce, so I can catch the idea of
> HAMA quickly.
>
> Thank you very much!
>
> Changguanghui
>

Re: compared with MapReduce ,what is the advantage of HAMA?

Posted by changguanghui <ch...@huawei.com>.
Hi Lin,

Thank you very much. When we run some algorithm with both BSP and MapReduce, I just want results that show the differences in overhead, runtime, and so on.
That way I can demonstrate HAMA's advantage straightforwardly to others who may not have deep knowledge of parallel computing and only know MapReduce.

Best Regards

Changguanghui

-----Original Message-----
From: Chia-Hung Lin [mailto:clin4j@googlemail.com]
Sent: September 23, 2011, 18:07
To: hama-user@incubator.apache.org
Subject: Re: compared with MapReduce ,what is the advantage of HAMA?

My understanding is that Hama is based on the BSP model, which is suitable
for iterative algorithms, whereas MapReduce is suited to dealing with
bipartite graphs.

2011/9/23 changguanghui <ch...@huawei.com>:
> Hi Thomas,
>
> Could you provide a concrete example that illustrates the advantage of HAMA over MapReduce?
>
> For example, SSSP on HAMA vs. SSSP on MapReduce, so I can catch the idea of HAMA quickly.
>
> Thank you very much!
>
> Changguanghui
>
> -----Original Message-----
> From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> Sent: September 19, 2011, 23:17
> To: Luis Eduardo Pineda Morales
> Cc: hama-user@incubator.apache.org
> Subject: Re: Hama help (how the distributed mode is working)
>
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>
> No problem, that's my "job" ;)). That is great. Have fun!
>
> 2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi all!
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
>> stable version).
>> Hope these settings are useful for you.
>>
>> Luis
>>
>>
>> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>>
>> Hey, I'm sorry, the IPv6 was misleading.
>> On your screenshot I see that you are using an Append version of Hadoop.
>> Did you try it with 0.20.2?
>>
>> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>
>>> Hi Thomas, apparently IPv6 wasn't the problem, since Hadoop is now running
>>> on IPv4 and I still get the same exceptions in Hama.
>>>
>>> pineda@server00:~/hadoop$ jps
>>> 10592 NameNode
>>> 10922 Jps
>>> 10695 DataNode
>>> 10844 SecondaryNameNode
>>>
>>> pineda@server00:~/hadoop$ lsof -i
>>> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>>> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
>>> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
>>> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
>>> java    10592 pineda   71u  IPv4 2559771       TCP
>>> server00:54310->server00:51666 (ESTABLISHED)
>>> java    10592 pineda   72u  IPv4 2559810       TCP
>>> server00:51668->server00:54310 (ESTABLISHED)
>>> java    10592 pineda   73u  IPv4 2559811       TCP
>>> server00:54310->server00:51668 (ESTABLISHED)
>>> java    10592 pineda   77u  IPv4 2560218       TCP
>>> server00:54310->server00:51671 (ESTABLISHED)
>>> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
>>> java    10695 pineda   52u  IPv4 2559764       TCP
>>> server00:51666->server00:54310 (ESTABLISHED)
>>> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
>>> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
>>> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
>>> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
>>> java    10844 pineda   52u  IPv4 2560217       TCP
>>> server00:51671->server00:54310 (ESTABLISHED)
>>> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
>>>
>>>
>>> Also, the web interface doesn't show any errors, and I'm able to run
>>> Hadoop shell commands. Any other idea? :-/
>>>
>>> Luis
>>>
>>>
>>>
>>>
>>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>>
>>> > Hi Luis,
>>> >
>>> > Just because there is no exception doesn't mean that it is working.
>>> > Thanks for appending your lsof output, because Hadoop does not support
>>> > IPv6.
>>> >
>>> > Please set up Hadoop correctly [1] and then use Hama.
>>> > For example here is my lsof -i output:
>>> >
>>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>>> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
>>> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
>>> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001
>>> (LISTEN)
>>> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
>>> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
>>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>>> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
>>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>>> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
>>> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
>>> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
>>> >>
>>> >
>>> > There are two ways to determine if Hadoop is setup correctly:
>>> >
>>> >   1. Look at the web interface of the NameNode [2] and check that there
>>> >   is no Safemode message or missing DataNode.
>>> >   2. Or run a sample MapReduce job, for example WordCount [3].
>>> >
>>> > If Hama is not working afterwards, feel free to ask your question again.
>>> >
>>> > Thanks and good luck :)
>>> >
>>> > [1]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>> > [2]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>>> > [3]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>>> >
>>> >
>>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>> >
>>> >> Hi all,
>>> >>
>>> >> I am attempting to run the distributed mode. I have HDFS running in a
>>> >> single machine (pseudo-distributed mode):
>>> >>
>>> >> pineda@server00:~/hadoop$ jps
>>> >> 472 SecondaryNameNode
>>> >> 1429 Jps
>>> >> 32733 NameNode
>>> >> 364 DataNode
>>> >>
>>> >> pineda@server00:~/hadoop$ lsof -i
>>> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>>> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
>>> >> java      364 pineda   52u  IPv6 2533275       TCP
>>> >> server00:42445->server00:54310 (ESTABLISHED)
>>> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
>>> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
>>> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
>>> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
>>> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
>>> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
>>> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310
>>> (LISTEN)
>>> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
>>> >> java    32733 pineda   76u  IPv6 2533276       TCP
>>> >> server00:54310->server00:42445 (ESTABLISHED)
>>> >>
>>> >> i.e.    fs.default.name  =  hdfs://server00:54310/
>>> >>
>>> >> then I run hama in server04 (groom in server03, zookeeper in server05):
>>> >>
>>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>>> >> server05: starting zookeeper, logging to
>>> >> /logs/hama-pineda-zookeeper-server05.out
>>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>>> >> org.mortbay.log.StdErrLog
>>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>>> >> server03: starting groom, logging to
>>> /logs/hama-pineda-groom-server03.out
>>> >>
>>> >> this is my hama-site.xml file:
>>> >>
>>> >> <configuration>
>>> >> <property>
>>> >>   <name>bsp.master.address</name>
>>> >>    <value>server04</value>
>>> >>  </property>
>>> >>
>>> >> <property>
>>> >>   <name>fs.default.name</name>
>>> >>    <value>hdfs://server00:54310</value>
>>> >>  </property>
>>> >>
>>> >> <property>
>>> >>   <name>hama.zookeeper.quorum</name>
>>> >>    <value>server05</value>
>>> >> </property>
>>> >> </configuration>
>>> >>
>>> >>
>>> >> In theory I can connect to the HDFS, because I don't get any
>>> >> ConnectException, but Hama doesn't run, and I get this Exception trace
>>> in my
>>> >> bspmaster.log after the Jetty is bound:
>>> >>
>>> >>
>>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>>> bound
>>> >> to port 40013
>>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>>> >> cleaning system directory: null
>>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>>> local
>>> >> exception: java.io.EOFException
>>> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>> >>       at $Proxy4.getProtocolVersion(Unknown Source)
>>> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>> >>        at
>>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>> >>       at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>> >>       at
>>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> >>       at
>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>>> >>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>>> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>>> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>>> >> Caused by: java.io.EOFException
>>> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>> >>       at
>>> >>
>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>> >>
>>> >>
>>> >> Do you know how to fix this? Do you know which directory it is trying
>>> >> to clean?
>>> >>
>>> >> Any idea is welcome!
>>> >>
>>> >> Thanks,
>>> >> Luis.
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thomas Jungblut
>>> > Berlin
>>> >
>>> > mobile: 0170-3081070
>>> >
>>> > business: thomas.jungblut@testberichte.de
>>> > private: thomas.jungblut@gmail.com
>>>
>>>
>>>
>>
>>
>> --
>> Thomas Jungblut
>> Berlin
>>
>> mobile: 0170-3081070
>>
>> business: thomas.jungblut@testberichte.de
>> private: thomas.jungblut@gmail.com
>>
>>
>>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>

Re: compared with MapReduce ,what is the advantage of HAMA?

Posted by Chia-Hung Lin <cl...@googlemail.com>.
My understanding is that Hama is based on the BSP model, which is suitable
for iterative algorithms, whereas MapReduce is suited to dealing with
bipartite graphs.

2011/9/23 changguanghui <ch...@huawei.com>:
> Hi Thomas,
>
> Could you provide a concrete example that illustrates the advantage of HAMA over MapReduce?
>
> For example, SSSP on HAMA vs. SSSP on MapReduce, so I can catch the idea of HAMA quickly.
>
> Thank you very much!
>
> Changguanghui
>
> -----Original Message-----
> From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> Sent: September 19, 2011, 23:17
> To: Luis Eduardo Pineda Morales
> Cc: hama-user@incubator.apache.org
> Subject: Re: Hama help (how the distributed mode is working)
>
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>
> No problem, that's my "job" ;)). That is great. Have fun!
>
> 2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi all!
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
>> stable version).
>> Hope these settings are useful for you.
>>
>> Luis
>>
>>
>> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>>
>> Hey, I'm sorry, the IPv6 was misleading.
>> On your screenshot I see that you are using an Append version of Hadoop.
>> Did you try it with 0.20.2?
>>
>> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>
>>> Hi Thomas, apparently IPv6 wasn't the problem, since Hadoop is now running
>>> on IPv4 and I still get the same exceptions in Hama.
>>>
>>> pineda@server00:~/hadoop$ jps
>>> 10592 NameNode
>>> 10922 Jps
>>> 10695 DataNode
>>> 10844 SecondaryNameNode
>>>
>>> pineda@server00:~/hadoop$ lsof -i
>>> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>>> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
>>> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
>>> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
>>> java    10592 pineda   71u  IPv4 2559771       TCP
>>> server00:54310->server00:51666 (ESTABLISHED)
>>> java    10592 pineda   72u  IPv4 2559810       TCP
>>> server00:51668->server00:54310 (ESTABLISHED)
>>> java    10592 pineda   73u  IPv4 2559811       TCP
>>> server00:54310->server00:51668 (ESTABLISHED)
>>> java    10592 pineda   77u  IPv4 2560218       TCP
>>> server00:54310->server00:51671 (ESTABLISHED)
>>> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
>>> java    10695 pineda   52u  IPv4 2559764       TCP
>>> server00:51666->server00:54310 (ESTABLISHED)
>>> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
>>> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
>>> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
>>> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
>>> java    10844 pineda   52u  IPv4 2560217       TCP
>>> server00:51671->server00:54310 (ESTABLISHED)
>>> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
>>>
>>>
>>> Also, the web interface doesn't show any errors, and I'm able to run
>>> Hadoop shell commands. Any other idea? :-/
>>>
>>> Luis
>>>
>>>
>>>
>>>
>>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>>
>>> > Hi Luis,
>>> >
>>> > Just because there is no exception doesn't mean that it is working.
>>> > Thanks for appending your lsof output, because Hadoop does not support
>>> > IPv6.
>>> >
>>> > Please set up Hadoop correctly [1] and then use Hama.
>>> > For example here is my lsof -i output:
>>> >
>>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>>> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
>>> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
>>> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001
>>> (LISTEN)
>>> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
>>> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
>>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>>> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
>>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>>> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
>>> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
>>> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
>>> >>
>>> >
>>> > There are two ways to determine if Hadoop is setup correctly:
>>> >
>>> >   1. Look at the web interface of the NameNode [2] and check that there
>>> >   is no Safemode message or missing DataNode.
>>> >   2. Or run a sample MapReduce job, for example WordCount [3].
>>> >
>>> > If Hama is not working afterwards, feel free to ask your question again.
>>> >
>>> > Thanks and good luck :)
>>> >
>>> > [1]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>> > [2]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>>> > [3]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>>> >
>>> >
>>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>> >
>>> >> Hi all,
>>> >>
>>> >> I am attempting to run the distributed mode. I have HDFS running in a
>>> >> single machine (pseudo-distributed mode):
>>> >>
>>> >> pineda@server00:~/hadoop$ jps
>>> >> 472 SecondaryNameNode
>>> >> 1429 Jps
>>> >> 32733 NameNode
>>> >> 364 DataNode
>>> >>
>>> >> pineda@server00:~/hadoop$ lsof -i
>>> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>>> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
>>> >> java      364 pineda   52u  IPv6 2533275       TCP
>>> >> server00:42445->server00:54310 (ESTABLISHED)
>>> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
>>> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
>>> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
>>> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
>>> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
>>> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
>>> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310
>>> (LISTEN)
>>> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
>>> >> java    32733 pineda   76u  IPv6 2533276       TCP
>>> >> server00:54310->server00:42445 (ESTABLISHED)
>>> >>
>>> >> i.e.    fs.default.name  =  hdfs://server00:54310/
>>> >>
>>> >> Then I run Hama on server04 (groom on server03, zookeeper on server05):
>>> >>
>>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>>> >> server05: starting zookeeper, logging to
>>> >> /logs/hama-pineda-zookeeper-server05.out
>>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>>> >> org.mortbay.log.StdErrLog
>>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>>> >> server03: starting groom, logging to
>>> /logs/hama-pineda-groom-server03.out
>>> >>
>>> >> this is my hama-site.xml file:
>>> >>
>>> >> <configuration>
>>> >> <property>
>>> >>   <name>bsp.master.address</name>
>>> >>    <value>server04</value>
>>> >>  </property>
>>> >>
>>> >> <property>
>>> >>   <name>fs.default.name</name>
>>> >>    <value>hdfs://server00:54310</value>
>>> >>  </property>
>>> >>
>>> >> <property>
>>> >>   <name>hama.zookeeper.quorum</name>
>>> >>    <value>server05</value>
>>> >> </property>
>>> >> </configuration>
>>> >>
>>> >>
>>> >> In theory I can connect to HDFS, because I don't get any
>>> >> ConnectException, but Hama doesn't run, and I get this exception trace
>>> >> in my bspmaster.log after Jetty is bound:
>>> >>
>>> >>
>>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>>> bound
>>> >> to port 40013
>>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>>> >> cleaning system directory: null
>>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>>> local
>>> >> exception: java.io.EOFException
>>> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>> >>       at $Proxy4.getProtocolVersion(Unknown Source)
>>> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>> >>        at
>>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>> >>       at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>> >>       at
>>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> >>       at
>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>>> >>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>>> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>>> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>>> >> Caused by: java.io.EOFException
>>> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>> >>       at
>>> >>
>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>> >>
>>> >>
>>> >> Do you know how to fix this? Do you know what directory it is
>>> >> trying to clean?
>>> >>
>>> >> Any idea is welcome!
>>> >>
>>> >> Thanks,
>>> >> Luis.
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thomas Jungblut
>>> > Berlin
>>> >
>>> > mobile: 0170-3081070
>>> >
>>> > business: thomas.jungblut@testberichte.de
>>> > private: thomas.jungblut@gmail.com
>>>
>>>
>>>
>>
>>
>> --
>> Thomas Jungblut
>> Berlin
>>
>> mobile: 0170-3081070
>>
>> business: thomas.jungblut@testberichte.de
>> private: thomas.jungblut@gmail.com
>>
>>
>>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>

Compared with MapReduce, what is the advantage of Hama?

Posted by changguanghui <ch...@huawei.com>.
Hi Thomas,

Could you provide a concrete example to illustrate the advantage of Hama over MapReduce?

For example, SSSP on Hama vs. SSSP on MapReduce, so I can grasp the idea of Hama quickly.

Thank you very much!

Changguanghui
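A way to make the question concrete: in MapReduce, every SSSP iteration is a separate job that rereads and rewrites the whole graph through HDFS, while in Hama's BSP model the graph stays resident and only messages cross the superstep barriers. Below is a minimal, framework-free sketch of that superstep loop; the graph and vertex names are invented for illustration, and no Hama API is used.

```python
# Conceptual sketch of BSP-style SSSP in a single process (no Hama involved).
# Each superstep delivers the messages produced in the previous superstep;
# the loop ends when no vertex sends anything, i.e. no distance improved.
graph = {  # hypothetical adjacency list: vertex -> [(neighbor, edge weight)]
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
    "d": [],
}

INF = float("inf")
dist = {v: INF for v in graph}
messages = {"a": [0]}  # seed: the source vertex receives distance 0

superstep = 0
while messages:  # in Hama this loop would be the sequence of supersteps
    outbox = {}
    for vertex, incoming in messages.items():
        candidate = min(incoming)
        if candidate < dist[vertex]:  # only improved vertices send messages
            dist[vertex] = candidate
            for neighbor, weight in graph[vertex]:
                outbox.setdefault(neighbor, []).append(candidate + weight)
    messages = outbox  # barrier: messages become visible in the next superstep
    superstep += 1

print(dist)  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

In the MapReduce formulation, each pass of this `while` loop would be a full job launch plus a read and write of the graph on HDFS; that per-iteration overhead is what the BSP model avoids.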

-----Original Message-----
From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
Sent: 19 September 2011 23:17
To: Luis Eduardo Pineda Morales
Cc: hama-user@incubator.apache.org
Subject: Re: Hama help (how the distributed mode is working)

>
> I finally managed to set up and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>

No problem, that's my "job" ;)). That is great. Have fun!

2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>

> Hi all!
>
> I finally managed to set up and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>
> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
> stable version).
> Hope these settings are useful for you.
>
> Luis
>
>
> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>
> Hey, I'm sorry, the IPv6 was misleading.
> In your screenshot I see that you are using an Append version of Hadoop.
> Did you try it with 0.20.2?
>
> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi Thomas, apparently IPv6 wasn't the problem, since Hadoop is now running
>> on IPv4 and I still get the same exceptions in Hama.
>>
>> pineda@server00:~/hadoop$ jps
>> 10592 NameNode
>> 10922 Jps
>> 10695 DataNode
>> 10844 SecondaryNameNode
>>
>> pineda@server00:~/hadoop$ lsof -i
>> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
>> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
>> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
>> java    10592 pineda   71u  IPv4 2559771       TCP
>> server00:54310->server00:51666 (ESTABLISHED)
>> java    10592 pineda   72u  IPv4 2559810       TCP
>> server00:51668->server00:54310 (ESTABLISHED)
>> java    10592 pineda   73u  IPv4 2559811       TCP
>> server00:54310->server00:51668 (ESTABLISHED)
>> java    10592 pineda   77u  IPv4 2560218       TCP
>> server00:54310->server00:51671 (ESTABLISHED)
>> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
>> java    10695 pineda   52u  IPv4 2559764       TCP
>> server00:51666->server00:54310 (ESTABLISHED)
>> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
>> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
>> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
>> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
>> java    10844 pineda   52u  IPv4 2560217       TCP
>> server00:51671->server00:54310 (ESTABLISHED)
>> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
>>
>>
>> Also, the web interface doesn't show any errors, and I'm able to run
>> Hadoop shell commands.  Any other idea? :-/
>>
>> Luis
>>
>>
>>
>>
>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>
>> > Hi Luis,
>> >
>> > Just because there is no exception doesn't mean it is working.
>> > Thanks for appending your lsof output, because Hadoop does not
>> > support IPv6.
>> >
>> > Please set up Hadoop correctly [1] and then use Hama.
>> > For example here is my lsof -i output:
>> >
>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
>> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
>> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001
>> (LISTEN)
>> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
>> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
>> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
>> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
>> >>
>> >
>> > There are two ways to determine whether Hadoop is set up correctly:
>> >
>> >   1. Look at the web interface of the Namenode [2] and confirm that there
>> >   is no Safemode message or missing datanode.
>> >   2. Or run a sample MapReduce job, for example WordCount [3].
>> >
>> > If Hama is still not working afterwards, feel free to ask your question again.
>> >
>> > Thanks and good luck :)
>> >
>> > [1]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>> > [2]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>> > [3]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>> >
>> >
>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>> >
>> >> Hi all,
>> >>
>> >> I am attempting to run the distributed mode. I have HDFS running in a
>> >> single machine (pseudo-distributed mode):
>> >>
>> >> pineda@server00:~/hadoop$ jps
>> >> 472 SecondaryNameNode
>> >> 1429 Jps
>> >> 32733 NameNode
>> >> 364 DataNode
>> >>
>> >> pineda@server00:~/hadoop$ lsof -i
>> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
>> >> java      364 pineda   52u  IPv6 2533275       TCP
>> >> server00:42445->server00:54310 (ESTABLISHED)
>> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
>> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
>> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
>> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
>> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
>> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
>> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310
>> (LISTEN)
>> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
>> >> java    32733 pineda   76u  IPv6 2533276       TCP
>> >> server00:54310->server00:42445 (ESTABLISHED)
>> >>
>> >> i.e.    fs.default.name  =  hdfs://server00:54310/
>> >>
>> >> Then I run Hama on server04 (groom on server03, zookeeper on server05):
>> >>
>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>> >> server05: starting zookeeper, logging to
>> >> /logs/hama-pineda-zookeeper-server05.out
>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>> >> org.mortbay.log.StdErrLog
>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>> >> server03: starting groom, logging to
>> /logs/hama-pineda-groom-server03.out
>> >>
>> >> this is my hama-site.xml file:
>> >>
>> >> <configuration>
>> >> <property>
>> >>   <name>bsp.master.address</name>
>> >>    <value>server04</value>
>> >>  </property>
>> >>
>> >> <property>
>> >>   <name>fs.default.name</name>
>> >>    <value>hdfs://server00:54310</value>
>> >>  </property>
>> >>
>> >> <property>
>> >>   <name>hama.zookeeper.quorum</name>
>> >>    <value>server05</value>
>> >> </property>
>> >> </configuration>
>> >>
>> >>
>> >> In theory I can connect to HDFS, because I don't get any
>> >> ConnectException, but Hama doesn't run, and I get this exception trace
>> >> in my bspmaster.log after Jetty is bound:
>> >>
>> >>
>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>> bound
>> >> to port 40013
>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>> >> cleaning system directory: null
>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>> local
>> >> exception: java.io.EOFException
>> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> >>       at $Proxy4.getProtocolVersion(Unknown Source)
>> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> >>        at
>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>> >>       at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>> >>       at
>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> >>       at
>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>> >>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>> >> Caused by: java.io.EOFException
>> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
>> >>       at
>> >>
>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>> >>
>> >>
>> >> Do you know how to fix this? Do you know what directory it is
>> >> trying to clean?
>> >>
>> >> Any idea is welcome!
>> >>
>> >> Thanks,
>> >> Luis.
>> >
>> >
>> >
>> >
>> > --
>> > Thomas Jungblut
>> > Berlin
>> >
>> > mobile: 0170-3081070
>> >
>> > business: thomas.jungblut@testberichte.de
>> > private: thomas.jungblut@gmail.com
>>
>>
>>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>
>
>


-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: Hama help (how the distributed mode is working)

Posted by Thomas Jungblut <th...@googlemail.com>.
>
> I finally managed to set up and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>

No problem, that's my "job" ;)). That is great. Have fun!

2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>

> Hi all!
>
> I finally managed to set up and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>
> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
> stable version).
> Hope these settings are useful for you.
>
> Luis
>
>
> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>
> Hey, I'm sorry, the IPv6 was misleading.
> In your screenshot I see that you are using an Append version of Hadoop.
> Did you try it with 0.20.2?
>
> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi Thomas, apparently IPv6 wasn't the problem, since Hadoop is now running
>> on IPv4 and I still get the same exceptions in Hama.
>>
>> pineda@server00:~/hadoop$ jps
>> 10592 NameNode
>> 10922 Jps
>> 10695 DataNode
>> 10844 SecondaryNameNode
>>
>> pineda@server00:~/hadoop$ lsof -i
>> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
>> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
>> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
>> java    10592 pineda   71u  IPv4 2559771       TCP
>> server00:54310->server00:51666 (ESTABLISHED)
>> java    10592 pineda   72u  IPv4 2559810       TCP
>> server00:51668->server00:54310 (ESTABLISHED)
>> java    10592 pineda   73u  IPv4 2559811       TCP
>> server00:54310->server00:51668 (ESTABLISHED)
>> java    10592 pineda   77u  IPv4 2560218       TCP
>> server00:54310->server00:51671 (ESTABLISHED)
>> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
>> java    10695 pineda   52u  IPv4 2559764       TCP
>> server00:51666->server00:54310 (ESTABLISHED)
>> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
>> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
>> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
>> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
>> java    10844 pineda   52u  IPv4 2560217       TCP
>> server00:51671->server00:54310 (ESTABLISHED)
>> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
>>
>>
>> Also, the web interface doesn't show any errors, and I'm able to run
>> Hadoop shell commands.  Any other idea? :-/
>>
>> Luis
>>
>>
>>
>>
>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>
>> > Hi Luis,
>> >
> > Just because there is no exception doesn't mean it is working.
> > Thanks for appending your lsof output, because Hadoop does not
> > support IPv6.
> >
> > Please set up Hadoop correctly [1] and then use Hama.
>> > For example here is my lsof -i output:
>> >
>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
>> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
>> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001
>> (LISTEN)
>> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
>> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
>> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
>> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
>> >>
>> >
> > There are two ways to determine whether Hadoop is set up correctly:
> >
> >   1. Look at the web interface of the Namenode [2] and confirm that there
> >   is no Safemode message or missing datanode.
> >   2. Or run a sample MapReduce job, for example WordCount [3].
> >
> > If Hama is still not working afterwards, feel free to ask your question again.
> >
> > Thanks and good luck :)
>> >
>> > [1]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>> > [2]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>> > [3]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>> >
>> >
>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>> >
>> >> Hi all,
>> >>
>> >> I am attempting to run the distributed mode. I have HDFS running in a
>> >> single machine (pseudo-distributed mode):
>> >>
>> >> pineda@server00:~/hadoop$ jps
>> >> 472 SecondaryNameNode
>> >> 1429 Jps
>> >> 32733 NameNode
>> >> 364 DataNode
>> >>
>> >> pineda@server00:~/hadoop$ lsof -i
>> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
>> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
>> >> java      364 pineda   52u  IPv6 2533275       TCP
>> >> server00:42445->server00:54310 (ESTABLISHED)
>> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
>> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
>> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
>> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
>> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
>> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
>> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310
>> (LISTEN)
>> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
>> >> java    32733 pineda   76u  IPv6 2533276       TCP
>> >> server00:54310->server00:42445 (ESTABLISHED)
>> >>
>> >> i.e.    fs.default.name  =  hdfs://server00:54310/
>> >>
>> >> Then I run Hama on server04 (groom on server03, zookeeper on server05):
>> >>
>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>> >> server05: starting zookeeper, logging to
>> >> /logs/hama-pineda-zookeeper-server05.out
>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>> >> org.mortbay.log.StdErrLog
>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>> >> server03: starting groom, logging to
>> /logs/hama-pineda-groom-server03.out
>> >>
>> >> this is my hama-site.xml file:
>> >>
>> >> <configuration>
>> >> <property>
>> >>   <name>bsp.master.address</name>
>> >>    <value>server04</value>
>> >>  </property>
>> >>
>> >> <property>
>> >>   <name>fs.default.name</name>
>> >>    <value>hdfs://server00:54310</value>
>> >>  </property>
>> >>
>> >> <property>
>> >>   <name>hama.zookeeper.quorum</name>
>> >>    <value>server05</value>
>> >> </property>
>> >> </configuration>
>> >>
>> >>
>> >> In theory I can connect to HDFS, because I don't get any
>> >> ConnectException, but Hama doesn't run, and I get this exception trace
>> >> in my bspmaster.log after Jetty is bound:
>> >>
>> >>
>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>> bound
>> >> to port 40013
>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>> >> cleaning system directory: null
>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>> local
>> >> exception: java.io.EOFException
>> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> >>       at $Proxy4.getProtocolVersion(Unknown Source)
>> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> >>        at
>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>> >>       at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>> >>       at
>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> >>       at
>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>> >>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>> >> Caused by: java.io.EOFException
>> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
>> >>       at
>> >>
>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>> >>
>> >>
>> >> Do you know how to fix this? Do you know what directory it is
>> >> trying to clean?
>> >>
>> >> Any idea is welcome!
>> >>
>> >> Thanks,
>> >> Luis.
>> >
>> >
>> >
>> >
>> > --
>> > Thomas Jungblut
>> > Berlin
>> >
>> > mobile: 0170-3081070
>> >
>> > business: thomas.jungblut@testberichte.de
>> > private: thomas.jungblut@gmail.com
>>
>>
>>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>
>
>


-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: Hama help (how the distributed mode is working)

Posted by Luis Eduardo Pineda Morales <lu...@gmail.com>.
Hi all!

I finally managed to set up and run Hama in fully distributed mode (thanks a lot to Thomas Jungblut!)

I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Same settings didn't work with Hadoop 0.20.203 (said to be the most recent stable version).
Hope these settings are useful for you.

Luis
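
One lesson worth pulling out of this thread: a successful TCP connection to the NameNode port rules out DNS, firewall, and listen problems, but not an RPC version mismatch, which is why the EOFException above was so misleading. A hedged sketch of such a reachability check (server00:54310 is simply the endpoint from this thread, and the helper name is invented):

```python
# Sketch: TCP reachability check for the endpoint configured in fs.default.name.
# A successful connect does NOT prove the Hadoop RPC versions match; it only
# rules out DNS, firewall, and listen-socket problems.
import socket
from urllib.parse import urlparse

def namenode_reachable(fs_default_name, timeout=2.0):
    """Return True if a TCP connection to the fs.default.name host:port succeeds."""
    parsed = urlparse(fs_default_name)  # e.g. hdfs://server00:54310
    try:
        with socket.create_connection((parsed.hostname, parsed.port), timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(namenode_reachable("hdfs://server00:54310"))
```

If this returns True but the BSPMaster still dies with an EOFException, suspect a client/server version mismatch rather than connectivity.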


On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:

> Hey, I'm sorry, the IPv6 was misleading.
> In your screenshot I see that you are using an Append version of Hadoop.
> Did you try it with 0.20.2?
> 
> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
> Hi Thomas, apparently IPv6 wasn't the problem, since Hadoop is now running on IPv4 and I still get the same exceptions in Hama.
> 
> pineda@server00:~/hadoop$ jps
> 10592 NameNode
> 10922 Jps
> 10695 DataNode
> 10844 SecondaryNameNode
> 
> pineda@server00:~/hadoop$ lsof -i
> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
> java    10592 pineda   71u  IPv4 2559771       TCP server00:54310->server00:51666 (ESTABLISHED)
> java    10592 pineda   72u  IPv4 2559810       TCP server00:51668->server00:54310 (ESTABLISHED)
> java    10592 pineda   73u  IPv4 2559811       TCP server00:54310->server00:51668 (ESTABLISHED)
> java    10592 pineda   77u  IPv4 2560218       TCP server00:54310->server00:51671 (ESTABLISHED)
> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
> java    10695 pineda   52u  IPv4 2559764       TCP server00:51666->server00:54310 (ESTABLISHED)
> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
> java    10844 pineda   52u  IPv4 2560217       TCP server00:51671->server00:54310 (ESTABLISHED)
> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
> 
> 
> Also, the web interface doesn't show any errors, and I'm able to run Hadoop shell commands.  Any other idea? :-/
> 
> Luis
> 
> 
> 
> 
> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
> 
> > Hi Luis,
> >
> > Just because there is no exception doesn't mean it is working.
> > Thanks for appending your lsof output, because Hadoop does not support
> > IPv6.
> >
> > Please set up Hadoop correctly [1] and then use Hama.
> > For example here is my lsof -i output:
> >
> > hadoop@raynor:/home/thomasjungblut$ lsof -i
> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001 (LISTEN)
> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
> >> raynor:9001->findlay:35283 (ESTABLISHED)
> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
> >> raynor:9001->karrigan:57345 (ESTABLISHED)
> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
> >>
> >
> > There are two ways to determine whether Hadoop is set up correctly:
> >
> >   1. Look at the web interface of the Namenode [2] and confirm that there is
> >   no Safemode message or missing datanode.
> >   2. Or run a sample MapReduce job, for example WordCount [3].
> >
> > If Hama is still not working afterwards, feel free to ask your question again.
> >
> > Thanks and good luck :)
> >
> > [1]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
> > [2]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
> > [3]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
> >
> >
> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
> >
> >> Hi all,
> >>
> >> I am attempting to run the distributed mode. I have HDFS running in a
> >> single machine (pseudo-distributed mode):
> >>
> >> pineda@server00:~/hadoop$ jps
> >> 472 SecondaryNameNode
> >> 1429 Jps
> >> 32733 NameNode
> >> 364 DataNode
> >>
> >> pineda@server00:~/hadoop$ lsof -i
> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
> >> java      364 pineda   52u  IPv6 2533275       TCP
> >> server00:42445->server00:54310 (ESTABLISHED)
> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310 (LISTEN)
> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
> >> java    32733 pineda   76u  IPv6 2533276       TCP
> >> server00:54310->server00:42445 (ESTABLISHED)
> >>
> >> i.e.    fs.default.name  =  hdfs://server00:54310/
> >>
> >> Then I run Hama on server04 (groom on server03, zookeeper on server05):
> >>
> >> pineda@server04:~/hama$ bin/start-bspd.sh
> >> server05: starting zookeeper, logging to
> >> /logs/hama-pineda-zookeeper-server05.out
> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
> >> org.mortbay.log.StdErrLog
> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
> >> server03: starting groom, logging to /logs/hama-pineda-groom-server03.out
> >>
> >> this is my hama-site.xml file:
> >>
> >> <configuration>
> >> <property>
> >>   <name>bsp.master.address</name>
> >>    <value>server04</value>
> >>  </property>
> >>
> >> <property>
> >>   <name>fs.default.name</name>
> >>    <value>hdfs://server00:54310</value>
> >>  </property>
> >>
> >> <property>
> >>   <name>hama.zookeeper.quorum</name>
> >>    <value>server05</value>
> >> </property>
> >> </configuration>
> >>
> >>
> >> In theory I can connect to the HDFS, because I don't get any
> >> ConnectException, but Hama doesn't run, and I get this Exception trace in my
> >> bspmaster.log after the Jetty is bound:
> >>
> >>
> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty bound
> >> to port 40013
> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
> >> cleaning system directory: null
> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on local
> >> exception: java.io.EOFException
> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >>       at $Proxy4.getProtocolVersion(Unknown Source)
> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> >>        at
> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
> >>       at
> >> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
> >>       at
> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> >>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
> >>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> >> Caused by: java.io.EOFException
> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
> >>       at
> >> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> >>
> >>
> >> Do you know how to fix this? Do you know what is the directory that it is
> >> trying to clean?
> >>
> >> Any idea is welcomed!
> >>
> >> Thanks,
> >> Luis.
> >
> >
> >
> >
> > --
> > Thomas Jungblut
> > Berlin
> >
> > mobile: 0170-3081070
> >
> > business: thomas.jungblut@testberichte.de
> > private: thomas.jungblut@gmail.com
> 
> 
> 
> 
> 
> -- 
> Thomas Jungblut
> Berlin
> 
> mobile: 0170-3081070
> 
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com


Re: Hama help (how the distributed mode is working)

Posted by Thomas Jungblut <th...@googlemail.com>.
Hi Luis,

Just because there is no exception doesn't mean that it is working.
Thanks for appending your lsof output: all of your daemons are bound to
IPv6 sockets, and Hadoop does not support IPv6.

Please set up Hadoop correctly [1] and then use Hama.
For example here is my lsof -i output:

hadoop@raynor:/home/thomasjungblut$ lsof -i
> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001 (LISTEN)
> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
> java    1144 hadoop   52u  IPv4   9429      0t0  TCP
> raynor:9001->findlay:35283 (ESTABLISHED)
> java    1144 hadoop   53u  IPv4   9431      0t0  TCP
> raynor:9001->karrigan:57345 (ESTABLISHED)
> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
>
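If your output, unlike the one above, shows the daemons listening on IPv6 sockets, a common workaround (an assumption about your environment, not something discussed in this thread) is to force the JVM onto the IPv4 stack in Hadoop's conf/hadoop-env.sh:

```shell
# conf/hadoop-env.sh -- ask the JVM to prefer the IPv4 stack, so the
# Hadoop daemons bind their TCP sockets as IPv4 rather than IPv6.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
```

Restart the daemons afterwards and re-check with lsof -i.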

There are two ways to determine whether Hadoop is set up correctly:

   1. Look at the web interface of the Namenode [2] and check that there is
   no safemode message and no datanode is missing.
   2. Run a sample MapReduce job, for example WordCount [3].
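Besides those two checks, the jps listing from earlier in this thread can be checked mechanically. Here is a small sketch (plain POSIX shell; the helper name is made up) that reads jps-style output and verifies the HDFS daemons are present:

```shell
# check_hdfs_daemons: read `jps`-style output on stdin and verify that
# the NameNode and at least one DataNode process are listed.
check_hdfs_daemons() {
  input=$(cat)
  for daemon in NameNode DataNode; do
    # -w avoids matching "SecondaryNameNode" when looking for "NameNode"
    if ! printf '%s\n' "$input" | grep -qw "$daemon"; then
      echo "missing: $daemon"
      return 1
    fi
  done
  echo "HDFS daemons look ok"
}

# On the NameNode host you would run:  jps | check_hdfs_daemons
```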

If Hama is still not working afterwards, feel free to ask again.

Thanks and good luck :)

[1]
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
[2]
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
[3]
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job


2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>

> Hi all,
>
> I am attempting to run the distributed mode. I have HDFS running on a
> single machine (pseudo-distributed mode):
>
> pineda@server00:~/hadoop$ jps
> 472 SecondaryNameNode
> 1429 Jps
> 32733 NameNode
> 364 DataNode
>
> pineda@net-server00:~/hadoop$ lsof -i
> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
> java      364 pineda   52u  IPv6 2533275       TCP
> server00:42445->server00:54310 (ESTABLISHED)
> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310 (LISTEN)
> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
> java    32733 pineda   76u  IPv6 2533276       TCP
> server00:54310->server00:42445 (ESTABLISHED)
>
> i.e.    fs.default.name  =  hdfs://server00:54310/
>
> then I run hama in server04 (groom in server03, zookeeper in server05):
>
> pineda@server04:~/hama$ bin/start-bspd.sh
> server05: starting zookeeper, logging to
> /logs/hama-pineda-zookeeper-server05.out
> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
> org.mortbay.log.StdErrLog
> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
> server03: starting groom, logging to /logs/hama-pineda-groom-server03.out
>
> this is my hama-site.xml file:
>
> <configuration>
>  <property>
>    <name>bsp.master.address</name>
>     <value>server04</value>
>   </property>
>
>  <property>
>    <name>fs.default.name</name>
>     <value>hdfs://server00:54310</value>
>   </property>
>
>  <property>
>    <name>hama.zookeeper.quorum</name>
>     <value>server05</value>
>  </property>
> </configuration>
>
>
> In theory I can connect to the HDFS, because I don't get any
> ConnectException, but Hama doesn't run, and I get this Exception trace in my
> bspmaster.log after the Jetty is bound:
>
>
> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty bound
> to port 40013
> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
> cleaning system directory: null
> java.io.IOException: Call to server00/192.168.122.10:54310 failed on local
> exception: java.io.EOFException
>        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>        at org.apache.hadoop.ipc.Client.call(Client.java:743)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>        at $Proxy4.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>         at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>        at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>        at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>        at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>         at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>        at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>        at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> Caused by: java.io.EOFException
>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>        at
> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
>
> Do you know how to fix this? Do you know which directory it is trying to
> clean?
>
> Any idea is welcome!
>
> Thanks,
> Luis.




-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: Hama help (how the distributed mode is working)

Posted by Luis Eduardo Pineda Morales <lu...@gmail.com>.
Hi all,

I am attempting to run the distributed mode. I have HDFS running on a single machine (pseudo-distributed mode):

pineda@server00:~/hadoop$ jps
472 SecondaryNameNode
1429 Jps
32733 NameNode
364 DataNode

pineda@net-server00:~/hadoop$ lsof -i
COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
java      364 pineda   52u  IPv6 2533275       TCP server00:42445->server00:54310 (ESTABLISHED)
java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
java    32733 pineda   56u  IPv6 2533062       TCP server00:54310 (LISTEN)
java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
java    32733 pineda   76u  IPv6 2533276       TCP server00:54310->server00:42445 (ESTABLISHED)

i.e.    fs.default.name  =  hdfs://server00:54310/

then I run hama in server04 (groom in server03, zookeeper in server05):

pineda@server04:~/hama$ bin/start-bspd.sh 
server05: starting zookeeper, logging to /logs/hama-pineda-zookeeper-server05.out
starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
2011-09-15 17:08:43.349:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
server03: starting groom, logging to /logs/hama-pineda-groom-server03.out

this is my hama-site.xml file:

<configuration>
  <property>
    <name>bsp.master.address</name>
    <value>server04</value>
  </property>
  
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server00:54310</value>
  </property>

  <property>
    <name>hama.zookeeper.quorum</name>
    <value>server05</value>
  </property>
</configuration>


In theory I can connect to the HDFS, because I don't get any ConnectException, but Hama doesn't run, and I get this Exception trace in my bspmaster.log after the Jetty is bound:


2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty bound to port 40013
2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem cleaning system directory: null
java.io.IOException: Call to server00/192.168.122.10:54310 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
        at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
        at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)


Do you know how to fix this? Do you know which directory it is trying to clean?

Any idea is welcome!

Thanks,
Luis.

Re: Hama help (how the distributed mode is working)

Posted by Thomas Jungblut <th...@googlemail.com>.
Hi Changguanghui,

as far as I can see, you are missing your Namenode configuration. HDFS
(the Hadoop Distributed File System) is a requirement for Apache Hama.
If you look in the log files, I think you'll notice that the BSPMaster
never comes up.

If you did not set up Hadoop yet, you can use the tutorials linked here:
http://wiki.apache.org/hama/GettingStarted#Hadoop_Installation

Have a look at our wiki to see how the hama-site.xml should look [1].

For your 3-node cluster, I would recommend setting up a single
ZooKeeper.
You can have a look at my configuration:

<property>
>     <name>bsp.master.address</name>
>     <value>raynor:40000</value>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://raynor:9001</value>
>   </property>
>
>   <property>
>     <name>hama.zookeeper.quorum</name>
>     <value>raynor</value>
>   </property>
>

As you can see, I run on a host named "raynor" and my hadoop namenode is
available under "hdfs://raynor:9001".
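Since both hama-site.xml files in this thread hinge on fs.default.name matching the Namenode address exactly, a quick shape check can help. This is only a sketch (naive text matching on the XML, not a real parser; the helper name is made up):

```shell
# Pull the fs.default.name value out of a hama-site.xml read on stdin
# and check that it has the expected hdfs://host:port shape.
check_fs_default_name() {
  value=$(tr -d '\n' | sed -n 's|.*<name>fs\.default\.name</name>[^<]*<value>\([^<]*\)</value>.*|\1|p')
  case "$value" in
    hdfs://*:[0-9]*) echo "fs.default.name = $value" ;;
    *) echo "bad or missing fs.default.name: '$value'"; return 1 ;;
  esac
}

# Usage:  check_fs_default_name < conf/hama-site.xml
```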

If you configured it properly, you can verify this by having a look at your
logs or at our web interface, which is available at http://localhost:40013.
There you can check whether your slaves are listed.
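That check can also be scripted roughly: fetch the web interface and grep for the expected groom hostnames. A sketch follows (the helper name and the idea that groom hostnames appear verbatim in the page HTML are assumptions; the port 40013 comes from the logs earlier in the thread):

```shell
# Report which of the expected groom hostnames appear in the BSPMaster
# web UI page (HTML read on stdin).
check_grooms() {
  html=$(cat)
  for groom in "$@"; do
    if printf '%s\n' "$html" | grep -q "$groom"; then
      echo "found: $groom"
    else
      echo "MISSING: $groom"
    fi
  done
}

# Usage sketch:  curl -s http://localhost:40013/ | check_grooms server03
```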

Best regards,
Thomas

[1] http://wiki.apache.org/hama/GettingStarted#Settings

2011/9/15 changguanghui <ch...@huawei.com>

>  Hi,
>
> I run "hama-examples-0.3.0-incubating.jar test" on my machine, but it
> will block.
>
> The hama-site.xml you can see:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
> /**
> * Copyright 2007 The Apache Software Foundation
> *
> * Licensed to the Apache Software Foundation (ASF) under one
> * or more contributor license agreements.  See the NOTICE file
> * distributed with this work for additional information
> * regarding copyright ownership.  The ASF licenses this file
> * to you under the Apache License, Version 2.0 (the
> * "License"); you may not use this file except in compliance
> * with the License.  You may obtain a copy of the License at
> *
> *     http://www.apache.org/licenses/LICENSE-2.0
> *
> * Unless required by applicable law or agreed to in writing, software
> * distributed under the License is distributed on an "AS IS" BASIS,
> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> * See the License for the specific language governing permissions and
> * limitations under the License.
> */
> -->
> <configuration>
>   <property>
>     <name>bsp.master.address</name>
>     <value>zgq</value>
>     <description>The address of the bsp master server. Either the
>     literal string "local" or a host[:port] (where host is a name or
>     IP address) for distributed mode.
>     </description>
>   </property>
>
>   <property>
>     <name>hama.zookeeper.quorum</name>
>     <value>localhost,zxj,xwh</value>
>     <description>Comma separated list of servers in the ZooKeeper quorum.
>     For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
>     By default this is set to localhost for local and pseudo-distributed modes
>     of operation. For a fully-distributed setup, this should be set to a full
>     list of ZooKeeper quorum servers. If HAMA_MANAGES_ZK is set in hama-env.sh
>     this is the list of servers which we will start/stop ZooKeeper on.
>     </description>
>   </property>
>
> </configuration>
>
> ------------------------------------------------------------------------
>
> The following is the result when I run it; the supersteps number seems to
> stay 0 forever:
>
> zgq hama-0.3.0-incubating/bin# ./start-bspd.sh
> zxj: starting zookeeper, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-zookeeper-zxj.out
> xwh: starting zookeeper, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-zookeeper-xwh.out
> localhost: starting zookeeper, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-zookeeper-zgq.out
> starting bspmaster, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-bspmaster-zgq.out
> 2011-09-15 16:17:12.668:INFO::Logging to STDERR via
> org.mortbay.log.StdErrLog
> 2011-09-15 16:17:12.712:INFO::jetty-0.3.0-incubating
> 2011-09-15 16:17:12.844:INFO::Started SelectChannelConnector@zgq:40013
> zxj: starting groom, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-groom-zxj.out
> localhost: starting groom, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-groom-zgq.out
> xwh: starting groom, logging to
> /home/cgh/hama-0.3.0-incubating/bin/../logs/hama-root-groom-xwh.out
> zgq hama-0.3.0-incubating/bin# ./hama jar
> ../hama-examples-0.3.0-incubating.jar test
> 11/09/15 16:17:31 INFO bsp.BSPJobClient: Running job: job_201109151617_0001
> 11/09/15 16:17:34 INFO bsp.BSPJobClient: Current supersteps number: 0
>
> Thank you very much!
>
> Changguanghui
>
> *From:* Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> *Sent:* September 15, 2011 16:10
> *To:* hama-user@incubator.apache.org
> *Cc:* changguanghui
> *Subject:* Re: Hama help (how the distributed mode is working)
>
>
>
> Hi,
>
> there are several examples in the HAMA tar ball. Which version did you use?
> Which example did you use?
> How is your cluster configured?
> Please post your hama-site.xml of all hosts involved in your cluster.
>
> Thanks and best regards,
> Thomas
>
>
>
> 2011/9/15 changguanghui <ch...@huawei.com>
>
> Hi,
> I can't run the examples provided in the HAMA tar ball on three machines.
> The trouble is: how do I configure distributed HAMA?
> Could you give me some details on setting up HAMA on three machines? Thank
> you!
>
> -----Original Message-----
> From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> Sent: September 14, 2011 18:00
> To: Luis Eduardo Pineda Morales
> Cc: hama-user@incubator.apache.org
> Subject: Re: Hama help (Local mode not working)
>
> Hi Luis,
>
>
> > - For mere consistency of the page, you might want to use the tag <tt>
> > (used in the rest of the document) instead of the <em> that you are using
> > for names of files and configuration properties.
> >
>
> Thanks, I will take care of that.
>
> - I don't know if this is only my problem, but when I execute Hama with the
> > Local configuration, the Master doesn't run (and neither does the Groom).
> > They don't recognize "local" as a valid hostname, both fail with this
> > exception:
> >
>
> "local" itself is not a hostname; there is a bug in our handling of this mode.
> Actually nothing should be launched then. I'll extend this in our wiki.
> What you are searching for is the pseudo-distributed mode which runs a
> Master, Groom and Zookeeper on your machine.
> You then have to provide "localhost" as the hostname or the real hostname
> of
> your machine.
>
> Is this maybe a problem with version 0.3? Would you suggest me to use 0.2
> > instead?
> >
>
> In 0.2 there is no local mode, so you won't face these problems.
> Since this is just a tweak in your configuration, which can be solved by using
> "localhost" instead of "local", you don't need to downgrade.
>
> I hope it will help you.
>
> Regards,
> Thomas
>
> 2011/9/14 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
> > Thanks for your prompt reply, Thomas,
> >
> > The wiki is more clarifying now that you added the part of the Modes.
> > However, if I may, I have a couple of remarks to mention:
> >
> > - For mere consistency of the page, you might want to use the tag <tt>
> > (used in the rest of the document) instead of the <em> that you are using
> > for names of files and configuration properties.
> >
> > - I don't know if this is only my problem, but when I execute Hama with
> the
> > Local configuration, the Master doesn't run (and neither does the Groom).
> > They don't recognize "local" as a valid hostname, both fail with this
> > exception:
> >
> > From* bspmaster.log:*
> >
> > *FATAL org.apache.hama.BSPMasterRunner: java.net.UnknownHostException:
> > Invalid hostname for server: local*
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:198)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:441)
> >         at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:250)
> >         at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
> >         at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
> >         at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
> >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> >         at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> >
> >
> > From *groom.log*
> >
> > ERROR org.apache.hama.bsp.GroomServer: Got fatal exception while
> > reinitializing GroomServer: java.net.UnknownHostException: unknown host:
> > local
> >         at
> org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:195)
> >         at org.apache.hadoop.ipc.Client.getConnection(Client.java:850)
> >         at org.apache.hadoop.ipc.Client.call(Client.java:720)
> >         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >         at $Proxy4.getProtocolVersion(Unknown Source)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
> >         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
> >         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
> >         at
> org.apache.hama.bsp.GroomServer.initialize(GroomServer.java:279)
> >         at org.apache.hama.bsp.GroomServer.run(GroomServer.java:600)
> >         at java.lang.Thread.run(Thread.java:680)
> >
> >
> > I've tested it in Debian, Ubuntu and MacOS Terminal. Is this maybe a
> > problem with version 0.3? Would you suggest me to use 0.2 instead?
> >
> >
> > I'm copying this to the user mailing list too, hope you don't mind.
> >
> > Luis
> >
>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>
>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>



-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com

Re: Hama help (how the distributed mode is working)

Posted by Thomas Jungblut <th...@googlemail.com>.
Hi,

there are several examples in the HAMA tar ball. Which version did you use?
Which example did you use?
How is your cluster configured?
Please post your hama-site.xml of all hosts involved in your cluster.

Thanks and best regards,
Thomas


2011/9/15 changguanghui <ch...@huawei.com>

> Hi,
> I can't run the examples provided in the HAMA tar ball on three machines.
> The trouble is: how do I configure distributed HAMA?
> Could you give me some details on setting up HAMA on three machines? Thank
> you!
>



-- 
Thomas Jungblut
Berlin

mobile: 0170-3081070

business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com