Posted to user@hama.apache.org by changguanghui <ch...@huawei.com> on 2011/09/23 08:50:39 UTC
Compared with MapReduce, what is the advantage of HAMA?
Hi Thomas,
Could you provide a concrete example that illustrates the advantage of HAMA over MapReduce?
For example, SSSP on HAMA vs. SSSP on MapReduce, so I can grasp the idea of HAMA quickly.
Thank you very much!
Changguanghui
-----Original Message-----
From: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
Sent: September 19, 2011 23:17
To: Luis Eduardo Pineda Morales
Cc: hama-user@incubator.apache.org
Subject: Re: Hama help (how the distributed mode is working)
>
> I finally managed to setup and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>
No problem, that's my "job" ;)). That is great. Have fun!
2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>
> Hi all!
>
> I finally managed to setup and run Hama in fully distributed mode (thanks a
> lot to Thomas Jungblut!)
>
> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
> stable version).
> Hope these settings are useful for you.
>
> Luis
>
>
> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>
> Hey, I'm sorry, the IPv6 hint was misleading.
> On your screenshot I see that you are using an Append version of Hadoop.
> Did you try it with 0.20.2?
>
> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi Thomas, apparently IPv6 wasn't the problem: Hadoop is now running
>> on IPv4, and I still get the same exceptions in Hama.
>>
>> pineda@server00:~/hadoop$ jps
>> 10592 NameNode
>> 10922 Jps
>> 10695 DataNode
>> 10844 SecondaryNameNode
>>
>> pineda@server00:~/hadoop$ lsof -i
>> COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
>> java 10592 pineda 46u IPv4 2559447 TCP *:50272 (LISTEN)
>> java 10592 pineda 56u IPv4 2559684 TCP server00:54310 (LISTEN)
>> java 10592 pineda 67u IPv4 2559694 TCP *:50070 (LISTEN)
>> java 10592 pineda 71u IPv4 2559771 TCP
>> server00:54310->server00:51666 (ESTABLISHED)
>> java 10592 pineda 72u IPv4 2559810 TCP
>> server00:51668->server00:54310 (ESTABLISHED)
>> java 10592 pineda 73u IPv4 2559811 TCP
>> server00:54310->server00:51668 (ESTABLISHED)
>> java 10592 pineda 77u IPv4 2560218 TCP
>> server00:54310->server00:51671 (ESTABLISHED)
>> java 10695 pineda 46u IPv4 2559682 TCP *:44935 (LISTEN)
>> java 10695 pineda 52u IPv4 2559764 TCP
>> server00:51666->server00:54310 (ESTABLISHED)
>> java 10695 pineda 60u IPv4 2559892 TCP *:50010 (LISTEN)
>> java 10695 pineda 61u IPv4 2559899 TCP *:50075 (LISTEN)
>> java 10695 pineda 66u IPv4 2560208 TCP *:50020 (LISTEN)
>> java 10844 pineda 46u IPv4 2560204 TCP *:41188 (LISTEN)
>> java 10844 pineda 52u IPv4 2560217 TCP
>> server00:51671->server00:54310 (ESTABLISHED)
>> java 10844 pineda 59u IPv4 2560225 TCP *:50090 (LISTEN)
>>
>>
>> Also, the web interface doesn't show any errors, and I'm able to run
>> Hadoop shell commands. Any other idea? :-/
>>
>> Luis
>>
>>
>>
>>
>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>
>> > Hi Luis,
>> >
>> > just because there is no exception doesn't mean that it is working.
>> > Thanks for appending your lsof output, though; it points to the problem,
>> > because Hadoop does not support IPv6.
>> >
>> > Please setup Hadoop correctly [1] and then use Hama.
>> > For example here is my lsof -i output:
>> >
>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>> >> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
>> >> java 1144 hadoop 33u IPv4 8819 0t0 TCP *:49737 (LISTEN)
>> >> java 1144 hadoop 37u IPv4 9001 0t0 TCP raynor:9001
>> (LISTEN)
>> >> java 1144 hadoop 47u IPv4 9222 0t0 TCP *:50070 (LISTEN)
>> >> java 1144 hadoop 52u IPv4 9429 0t0 TCP
>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>> >> java 1144 hadoop 53u IPv4 9431 0t0 TCP
>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>> >> java 1249 hadoop 33u IPv4 8954 0t0 TCP *:54235 (LISTEN)
>> >> java 1249 hadoop 44u IPv4 9422 0t0 TCP *:50010 (LISTEN)
>> >> java 1249 hadoop 45u IPv4 9426 0t0 TCP *:50075 (LISTEN)
>> >>
>> >
>> > There are two ways to determine whether Hadoop is set up correctly:
>> >
>> > 1. Look at the web interface of the NameNode [2] and check that there is
>> > no safemode message and no missing DataNode.
>> > 2. Or run a sample MapReduce job, for example WordCount [3].
>> >
>> > If Hama still isn't working afterwards, just ask your next question here.
>> >
>> > Thanks and good luck :)
>> >
>> > [1]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>> > [2]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>> > [3]
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>> >
>> >
>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>> >
>> >> Hi all,
>> >>
>> >> I am attempting to run the distributed mode. I have HDFS running in a
>> >> single machine (pseudo-distributed mode):
>> >>
>> >> pineda@server00:~/hadoop$ jps
>> >> 472 SecondaryNameNode
>> >> 1429 Jps
>> >> 32733 NameNode
>> >> 364 DataNode
>> >>
>> >> pineda@server00:~/hadoop$ lsof -i
>> >> COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
>> >> java 364 pineda 46u IPv6 2532945 TCP *:41462 (LISTEN)
>> >> java 364 pineda 52u IPv6 2533275 TCP
>> >> server00:42445->server00:54310 (ESTABLISHED)
>> >> java 364 pineda 60u IPv6 2533307 TCP *:50010 (LISTEN)
>> >> java 364 pineda 61u IPv6 2533511 TCP *:50075 (LISTEN)
>> >> java 364 pineda 66u IPv6 2533518 TCP *:50020 (LISTEN)
>> >> java 472 pineda 46u IPv6 2533286 TCP *:43098 (LISTEN)
>> >> java 472 pineda 59u IPv6 2533536 TCP *:50090 (LISTEN)
>> >> java 32733 pineda 46u IPv6 2532751 TCP *:54763 (LISTEN)
>> >> java 32733 pineda 56u IPv6 2533062 TCP server00:54310
>> (LISTEN)
>> >> java 32733 pineda 67u IPv6 2533081 TCP *:50070 (LISTEN)
>> >> java 32733 pineda 76u IPv6 2533276 TCP
>> >> server00:54310->server00:42445 (ESTABLISHED)
>> >>
>> >> i.e. fs.default.name = hdfs://server00:54310/
>> >>
>> >> then I run hama in server04 (groom in server03, zookeeper in server05):
>> >>
>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>> >> server05: starting zookeeper, logging to
>> >> /logs/hama-pineda-zookeeper-server05.out
>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>> >> org.mortbay.log.StdErrLog
>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>> >> server03: starting groom, logging to
>> /logs/hama-pineda-groom-server03.out
>> >>
>> >> this is my hama-site.xml file:
>> >>
>> >> <configuration>
>> >> <property>
>> >> <name>bsp.master.address</name>
>> >> <value>server04</value>
>> >> </property>
>> >>
>> >> <property>
>> >> <name>fs.default.name</name>
>> >> <value>hdfs://server00:54310</value>
>> >> </property>
>> >>
>> >> <property>
>> >> <name>hama.zookeeper.quorum</name>
>> >> <value>server05</value>
>> >> </property>
>> >> </configuration>
>> >>
>> >>
>> >> In theory I can connect to HDFS, because I don't get any
>> >> ConnectException, but Hama doesn't run, and I get this exception trace
>> >> in my bspmaster.log after Jetty is bound:
>> >>
>> >>
>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>> bound
>> >> to port 40013
>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>> >> cleaning system directory: null
>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>> local
>> >> exception: java.io.EOFException
>> >> at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>> >> at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> >> at $Proxy4.getProtocolVersion(Unknown Source)
>> >> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> >> at
>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> >> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> >> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>> >> at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>> >> at
>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>> >> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> >> at
>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>> >> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>> >> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> >> at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>> >> at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>> >> at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>> >> at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>> >> at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>> >> Caused by: java.io.EOFException
>> >> at java.io.DataInputStream.readInt(DataInputStream.java:375)
>> >> at
>> >>
>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>> >> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>> >>
>> >>
>> >> Do you know how to fix this? Do you know what directory it is
>> >> trying to clean?
>> >>
>> >> Any idea is welcome!
>> >>
>> >> Thanks,
>> >> Luis.
>> >
>> >
>> >
>> >
>> > --
>> > Thomas Jungblut
>> > Berlin
>> >
>> > mobile: 0170-3081070
>> >
>> > business: thomas.jungblut@testberichte.de
>> > private: thomas.jungblut@gmail.com
>>
>>
>>
>
>
> --
> Thomas Jungblut
> Berlin
>
> mobile: 0170-3081070
>
> business: thomas.jungblut@testberichte.de
> private: thomas.jungblut@gmail.com
>
>
>
--
Thomas Jungblut
Berlin
mobile: 0170-3081070
business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com
Re: Compared with MapReduce, what is the advantage of HAMA?
Posted by Thomas Jungblut <th...@googlemail.com>.
Thanks for your tips; I'll forward this to our dev list for discussion.
2011/9/24 changguanghui <ch...@huawei.com>
> I think it is important to find an algorithm or a problem that is better
> suited to HAMA. Then people can compare the results between HAMA and
> MapReduce, because people want to know why they should choose HAMA, and
> when.
--
Thomas Jungblut
Berlin
mobile: 0170-3081070
business: thomas.jungblut@testberichte.de
private: thomas.jungblut@gmail.com
Re: Compared with MapReduce, what is the advantage of HAMA?
Posted by changguanghui <ch...@huawei.com>.
I think it is important to find an algorithm or a problem that is better suited to HAMA. Then people can compare the results between HAMA and MapReduce, because people want to know why they should choose HAMA, and when.
Re: Compared with MapReduce, what is the advantage of HAMA?
Posted by Thomas Jungblut <th...@googlemail.com>.
Hi,
to state the advantage clearly: you have less overhead.
Let me illustrate with an algorithm for mindist search, which I renamed to
graph exploration. This applies to shortest paths, too.
I wrote about it here:
http://codingwiththomas.blogspot.com/2011/04/graph-exploration-with-hadoop-mapreduce.html
Basically, the algorithm groups the connected components of the graph and
assigns the lowest key in each group as the identifier of its component.
Usually you solve graph problems in MapReduce with a technique called
"message passing": you send messages to other vertices in every map step,
and then you have to shuffle, sort and reduce the vertices to compute the
result. This isn't done in a single iteration, so you have to chain several
map/reduce jobs.
For each iteration you inherit the overhead of sorting and shuffling.
Additionally, you have to do all of this on disk.
Hama provides a message passing interface, so you don't have to take care of
writing each message to HDFS.
Each iteration, which in MapReduce is a full job execution, is called a
superstep in BSP. Each superstep is faster than a full job execution in
Hadoop, because you avoid the overhead of spilling to disk, job setup,
sorting and shuffling.
In addition, you can keep your whole graph in RAM, which speeds up the
computation anyway. Hadoop does not offer this capability yet.
But I also want to point out some facts that are not so positive:
Currently no benchmarks against Hadoop or other frameworks like Giraph or
GoldenORB exist, so we can't say: we are the best/fastest/coolest.
And graph algorithms are hard to code. As you can see, I had to write a lot
of code to get this running, because I have to take care of the
partitioning, vertex messaging and IO myself.
For that purpose we are going to release a Pregel API which will make the
development of graph algorithms a lot easier.
You can get a sneak peek here:
https://issues.apache.org/jira/browse/HAMA-409
That was a lot of text, but I hope it clarifies a lot.
Best Regards,
Thomas
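[Editor's note] The component-labeling ("graph exploration") algorithm described above can be simulated as a BSP-style loop in a few lines. This is a framework-free Python sketch, not Hama's actual API: each pass of the while-loop stands in for one superstep, and the inbox/outbox dicts stand in for Hama's message-passing interface and its barrier synchronization.

```python
def min_id_components(adjacency):
    """adjacency: dict mapping each vertex to its list of neighbors."""
    label = {v: v for v in adjacency}        # every vertex starts as its own component
    inbox = {v: [] for v in adjacency}
    for v, neighbors in adjacency.items():   # superstep 0: announce own id
        for n in neighbors:
            inbox[n].append(label[v])
    supersteps = 0
    while any(inbox.values()):               # run until no messages remain
        outbox = {v: [] for v in adjacency}
        for v, msgs in inbox.items():
            if not msgs:
                continue
            smallest = min(msgs)
            if smallest < label[v]:          # learned a smaller component id
                label[v] = smallest
                for n in adjacency[v]:       # propagate it to the neighbors
                    outbox[n].append(smallest)
        inbox = outbox                       # barrier: messages for next superstep
        supersteps += 1
    return label, supersteps

# Two components: {0, 1, 2} and {3, 4}
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
labels, steps = min_id_components(graph)
print(labels)  # {0: 0, 1: 0, 2: 0, 3: 3, 4: 3} -- lowest id labels each component
```

In MapReduce, each pass of that loop would be a full job (with its own sort, shuffle, and disk I/O); in BSP, it is one in-memory superstep ended by a sync barrier.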
Re: Compared with MapReduce, what is the advantage of HAMA?
Posted by changguanghui <ch...@huawei.com>.
Hi Lin,
Thank you very much. When we execute the same algorithm with both BSP and MapReduce, I just want a result that shows the differences in overhead, runtime, and so on.
Then I can show HAMA's advantage straightforwardly to others who may not have deep knowledge of parallel computing and only know MapReduce.
Best Regards
Changguanghui
Re: Compared with MapReduce, what is the advantage of HAMA?
Posted by Chia-Hung Lin <cl...@googlemail.com>.
My understanding is that Hama is based on the BSP model, which is well
suited to iterative algorithms, whereas MapReduce is better suited to
dealing with bipartite graphs.
2011/9/23 changguanghui <ch...@huawei.com>:
> Hi Thomas,
>
> Could you provide a concrete instance to illustrate the advantage of HAMA, when HAMA vs. MapReduce?
>
> For example,SSSP on HAMA vs. SSSP on MapReduce. So ,I can catch the idea of HAMA quickly.
>
> Thank you very much!
>
> Changguanghui
>
> -----邮件原件-----
> 发件人: Thomas Jungblut [mailto:thomas.jungblut@googlemail.com]
> 发送时间: 2011年9月19日 23:17
> 收件人: Luis Eduardo Pineda Morales
> 抄送: hama-user@incubator.apache.org
> 主题: Re: Hama help (how the distributed mode is working)
>
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>
> No problem, that's my "job" ;)). That is great. Have fun!
>
> 2011/9/19 Luis Eduardo Pineda Morales <lu...@gmail.com>
>
>> Hi all!
>>
>> I finally managed to setup and run Hama in fully distributed mode (thanks a
>> lot to Thomas Jungblut!)
>>
>> I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4 as in
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> Same settings didn't work with Hadoop 0.20.203 (said to be the most recent
>> stable version).
>> Hope these settings are useful for you.
>>
>> Luis
>>
>>
>> On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:
>>
>> Hey, I'm sorry, the IPv6 was misleading.
>> On your screenshot I see that you are using an Append version of Hadoop.
>> Did you try it with 0.20.2?
>>
>> 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>
>>> Hi Thomas, apparently IPv6 wasn't the problem, since now hadoop is running
>>> in IPv4 and i still get the same exceptions in hama.
>>>
>>> pineda@server00:~/hadoop$ jps
>>> 10592 NameNode
>>> 10922 Jps
>>> 10695 DataNode
>>> 10844 SecondaryNameNode
>>>
>>> pineda@server00:~/hadoop$ lsof -i
>>> COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
>>> java 10592 pineda 46u IPv4 2559447 TCP *:50272 (LISTEN)
>>> java 10592 pineda 56u IPv4 2559684 TCP server00:54310 (LISTEN)
>>> java 10592 pineda 67u IPv4 2559694 TCP *:50070 (LISTEN)
>>> java 10592 pineda 71u IPv4 2559771 TCP
>>> server00:54310->server00:51666 (ESTABLISHED)
>>> java 10592 pineda 72u IPv4 2559810 TCP
>>> server00:51668->server00:54310 (ESTABLISHED)
>>> java 10592 pineda 73u IPv4 2559811 TCP
>>> server00:54310->server00:51668 (ESTABLISHED)
>>> java 10592 pineda 77u IPv4 2560218 TCP
>>> server00:54310->server00:51671 (ESTABLISHED)
>>> java 10695 pineda 46u IPv4 2559682 TCP *:44935 (LISTEN)
>>> java 10695 pineda 52u IPv4 2559764 TCP
>>> server00:51666->server00:54310 (ESTABLISHED)
>>> java 10695 pineda 60u IPv4 2559892 TCP *:50010 (LISTEN)
>>> java 10695 pineda 61u IPv4 2559899 TCP *:50075 (LISTEN)
>>> java 10695 pineda 66u IPv4 2560208 TCP *:50020 (LISTEN)
>>> java 10844 pineda 46u IPv4 2560204 TCP *:41188 (LISTEN)
>>> java 10844 pineda 52u IPv4 2560217 TCP
>>> server00:51671->server00:54310 (ESTABLISHED)
>>> java 10844 pineda 59u IPv4 2560225 TCP *:50090 (LISTEN)
>>>
>>>
>>> Also, the web interface doesn't show any errors, and I'm able to run
>>> Hadoop shell commands. Any other idea? :-/
>>>
>>> Luis
>>>
>>>
>>>
>>>
>>> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
>>>
>>> > Hi Luis,
>>> >
>>> > Just because there is no exception doesn't mean that it is working.
>>> > Thanks for appending your lsof output, because Hadoop does not
>>> > support IPv6.
>>> >
>>> > Please setup Hadoop correctly [1] and then use Hama.
>>> > For example here is my lsof -i output:
>>> >
>>> > hadoop@raynor:/home/thomasjungblut$ lsof -i
>>> >> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
>>> >> java 1144 hadoop 33u IPv4 8819 0t0 TCP *:49737 (LISTEN)
>>> >> java 1144 hadoop 37u IPv4 9001 0t0 TCP raynor:9001
>>> (LISTEN)
>>> >> java 1144 hadoop 47u IPv4 9222 0t0 TCP *:50070 (LISTEN)
>>> >> java 1144 hadoop 52u IPv4 9429 0t0 TCP
>>> >> raynor:9001->findlay:35283 (ESTABLISHED)
>>> >> java 1144 hadoop 53u IPv4 9431 0t0 TCP
>>> >> raynor:9001->karrigan:57345 (ESTABLISHED)
>>> >> java 1249 hadoop 33u IPv4 8954 0t0 TCP *:54235 (LISTEN)
>>> >> java 1249 hadoop 44u IPv4 9422 0t0 TCP *:50010 (LISTEN)
>>> >> java 1249 hadoop 45u IPv4 9426 0t0 TCP *:50075 (LISTEN)
>>> >>
>>> >
>>> > There are two ways to determine if Hadoop is setup correctly:
>>> >
>>> > 1. Look at the web interface of the NameNode [2] and check that there
>>> > is no safemode message and no missing datanode.
>>> > 2. Or run a sample MapReduce job, for example WordCount [3].
>>> >
>>> > If Hama is still not working afterwards, feel free to ask your question again.
>>> >
>>> > Thanks and good luck :)
>>> >
>>> > [1]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>> > [2]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
>>> > [3]
>>> >
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
>>> >
>>> >
>>> > 2011/9/15 Luis Eduardo Pineda Morales <lu...@gmail.com>
>>> >
>>> >> Hi all,
>>> >>
>>> >> I am attempting to run the distributed mode. I have HDFS running in a
>>> >> single machine (pseudo-distributed mode):
>>> >>
>>> >> pineda@server00:~/hadoop$ jps
>>> >> 472 SecondaryNameNode
>>> >> 1429 Jps
>>> >> 32733 NameNode
>>> >> 364 DataNode
>>> >>
>>> >> pineda@server00:~/hadoop$ lsof -i
>>> >> COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
>>> >> java 364 pineda 46u IPv6 2532945 TCP *:41462 (LISTEN)
>>> >> java 364 pineda 52u IPv6 2533275 TCP
>>> >> server00:42445->server00:54310 (ESTABLISHED)
>>> >> java 364 pineda 60u IPv6 2533307 TCP *:50010 (LISTEN)
>>> >> java 364 pineda 61u IPv6 2533511 TCP *:50075 (LISTEN)
>>> >> java 364 pineda 66u IPv6 2533518 TCP *:50020 (LISTEN)
>>> >> java 472 pineda 46u IPv6 2533286 TCP *:43098 (LISTEN)
>>> >> java 472 pineda 59u IPv6 2533536 TCP *:50090 (LISTEN)
>>> >> java 32733 pineda 46u IPv6 2532751 TCP *:54763 (LISTEN)
>>> >> java 32733 pineda 56u IPv6 2533062 TCP server00:54310
>>> (LISTEN)
>>> >> java 32733 pineda 67u IPv6 2533081 TCP *:50070 (LISTEN)
>>> >> java 32733 pineda 76u IPv6 2533276 TCP
>>> >> server00:54310->server00:42445 (ESTABLISHED)
>>> >>
>>> >> i.e. fs.default.name = hdfs://server00:54310/
>>> >>
>>> >> then I run hama in server04 (groom in server03, zookeeper in server05):
>>> >>
>>> >> pineda@server04:~/hama$ bin/start-bspd.sh
>>> >> server05: starting zookeeper, logging to
>>> >> /logs/hama-pineda-zookeeper-server05.out
>>> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
>>> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via
>>> >> org.mortbay.log.StdErrLog
>>> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
>>> >> server03: starting groom, logging to
>>> /logs/hama-pineda-groom-server03.out
>>> >>
>>> >> this is my hama-site.xml file:
>>> >>
>>> >> <configuration>
>>> >> <property>
>>> >> <name>bsp.master.address</name>
>>> >> <value>server04</value>
>>> >> </property>
>>> >>
>>> >> <property>
>>> >> <name>fs.default.name</name>
>>> >> <value>hdfs://server00:54310</value>
>>> >> </property>
>>> >>
>>> >> <property>
>>> >> <name>hama.zookeeper.quorum</name>
>>> >> <value>server05</value>
>>> >> </property>
>>> >> </configuration>
>>> >>
>>> >>
>>> >> In theory I can connect to the HDFS, because I don't get any
>>> >> ConnectException, but Hama doesn't run, and I get this Exception trace
>>> in my
>>> >> bspmaster.log after the Jetty is bound:
>>> >>
>>> >>
>>> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty
>>> bound
>>> >> to port 40013
>>> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem
>>> >> cleaning system directory: null
>>> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on
>>> local
>>> >> exception: java.io.EOFException
>>> >> at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>> >> at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>> >> at $Proxy4.getProtocolVersion(Unknown Source)
>>> >> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>> >> at
>>> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>> >> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>> >> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>> >> at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>> >> at
>>> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>> >> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> >> at
>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>> >> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>> >> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>> >> at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>>> >> at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>>> >> at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>>> >> at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>>> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>> >> at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
>>> >> Caused by: java.io.EOFException
>>> >> at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>> >> at
>>> >>
>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>> >> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>> >>
>>> >>
>>> >> Do you know how to fix this? Do you know which directory it is
>>> >> trying to clean?
>>> >>
>>> >> Any idea is welcomed!
>>> >>
>>> >> Thanks,
>>> >> Luis.
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thomas Jungblut
>>> > Berlin
>>> >
>>> > mobile: 0170-3081070
>>> >
>>> > business: thomas.jungblut@testberichte.de
>>> > private: thomas.jungblut@gmail.com
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>