Posted to dev@hama.apache.org by 狼侠 <pe...@foxmail.com> on 2012/12/24 10:01:56 UTC

Re: Re: Re: Re: Re: sssp experiment

The input data is 1 million vertices, each with 100 edges; its size is about 0.9 GB. When I set the maximum tasks per node to 3 and give each task 2000 MB of memory, it runs successfully. But when I set the maximum tasks per node to 5 and give each task 1000 MB of memory, it fails.
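
For reference, the two setups I compared could be expressed roughly like this
(a minimal sketch; the exact property names, bsp.tasks.maximum and
bsp.child.java.opts, are my assumption from the 0.6-era configuration, so
please check them against your build):

import org.apache.hama.HamaConfiguration;

public class TaskMemorySketch {
  public static void main(String[] args) {
    HamaConfiguration conf = new HamaConfiguration();

    // Working setup: 3 concurrent tasks per node x 2000 MB heap
    // = 6 GB of task heap per node.
    conf.setInt("bsp.tasks.maximum", 3);           // assumed property name
    conf.set("bsp.child.java.opts", "-Xmx2000m");  // assumed property name

    // Failing setup: 5 concurrent tasks per node x 1000 MB heap
    // = 5 GB of task heap per node, and each task gets only half the heap.
    // conf.setInt("bsp.tasks.maximum", 5);
    // conf.set("bsp.child.java.opts", "-Xmx1000m");

    System.out.println("max tasks per node = " + conf.getInt("bsp.tasks.maximum", -1)
        + ", child JVM opts = " + conf.get("bsp.child.java.opts"));
  }
}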




------------------ Original Message ------------------
From: "Edward J. Yoon"; 
Date: Monday, December 24, 2012, 4:48 PM
To: "dev"; 
Subject: Re: Re: Re: Re: Re: sssp experiment



Yeah, thanks, I am aware of it. In my case, to process the vertices within a
256 MB block, each task required 25~30 GB of memory.

Please test with small data for the moment. We're working on a fix.

On Mon, Dec 24, 2012 at 5:19 PM, 狼侠 <pe...@foxmail.com> wrote:
> My input data is 2 GB, and each of my nodes has 8 GB of memory, but the job still failed.
>
>
>
>
> ------------------ Original Message ------------------
> From: "Edward J. Yoon";
> Date: Monday, December 24, 2012, 3:56 PM
> To: "dev";
> Subject: Re: Re: Re: Re: sssp experiment
>
>
>
>> I set the maximum tasks of each node to 5, and each task has 1000 MB of memory
>
> The whole graph (input data) should fit into the memory of the cluster
> nodes. You have only 5 GB per node * 9 nodes = 45 GB of memory. Please check
> how large your input is.
>
> And, as I told you, an SSSP job on a graph with 1 billion edges requires the
> 600+ GB of total memory of a full rack. You can't run it using Hama 0.6 on a
> small cluster.
>
> On Mon, Dec 24, 2012 at 2:52 PM, 狼侠 <pe...@foxmail.com> wrote:
>> I set the maximum tasks of each node to 5, and each task has 1000 MB of memory, but it failed. Does memory consumption keep increasing as the supersteps progress?
>>
>>
>>
>>
>> ------------------ Original Message ------------------
>> From: "Edward J. Yoon";
>> Date: Monday, December 24, 2012, 1:36 PM
>> To: "dev";
>> Cc: "pengchen0525";
>> Subject: Re: Re: Re: sssp experiment
>>
>>
>>
>> Using one rack of Oracle BDA (18 nodes, each with 48 GB of memory), I
>> was able to run the Hama 0.6 SSSP example on a graph with 1~2 billion
>> edges[1]. If you can partition the input manually, please increase the
>> number of splits. Then each task will consume less memory.
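>>
>> As a rough illustration of why (plain arithmetic only, using your numbers
>> of 10 million vertices with 100 edges each; this is just a sketch, not the
>> partitioner itself):
>>
>> public class SplitSizeSketch {
>>   public static void main(String[] args) {
>>     long vertices = 10_000_000L;
>>     long edgesPerVertex = 100L;
>>     // Doubling the number of splits halves the slice of the graph
>>     // that any single task has to hold in its heap.
>>     for (int splits : new int[] {36, 72, 144}) {
>>       long verticesPerTask = vertices / splits;
>>       long edgesPerTask = verticesPerTask * edgesPerVertex;
>>       System.out.printf("%d splits -> ~%,d vertices, ~%,d edges per task%n",
>>           splits, verticesPerTask, edgesPerTask);
>>     }
>>   }
>> }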
>>
>> Currently we're working on the partitioner and a spilling queue. Please wait
>> for the next major release.
>>
>> 1. http://wiki.apache.org/hama/Benchmarks
>>
>> On Mon, Dec 24, 2012 at 1:59 PM, 狼侠 <pe...@foxmail.com> wrote:
>>> But when I use 1 million vertices, each with 100 edges, and 45 tasks, it still does not run successfully. What is the largest input that Hama can handle?
>>>
>>>
>>>
>>>
>>> ------------------ Original Message ------------------
>>> From: "Edward J. Yoon";
>>> Date: Monday, December 24, 2012, 12:31 PM
>>> To: "dev";
>>> Subject: Re: Re: sssp experiment
>>>
>>>
>>>
>>>> The input graph has 10 million vertices. Each vertex has 100 edges.
>>>
>>> 10 million vertices and 10 billion edges? The input is too large.
>>> You'll need at least 2 racks.
>>>
>>> On Mon, Dec 24, 2012 at 11:00 AM, 狼侠 <pe...@foxmail.com> wrote:
>>>> 12/12/24 09:28:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>> 12/12/24 09:28:50 WARN snappy.LoadSnappy: Snappy native library not loaded
>>>> 12/12/24 09:28:51 INFO sync.ZKSyncClient: Initializing ZK Sync Client
>>>> 12/12/24 09:28:51 INFO sync.ZooKeeperSyncClientImpl: Start connecting to Zookeeper! At /192.168.1.211:61004
>>>> 12/12/24 09:28:51 ERROR sync.ZooKeeperSyncClientImpl: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bsp/job_201212240927_0001/peers
>>>> 12/12/24 09:28:52 INFO ipc.Server: Starting SocketReader
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server Responder: starting
>>>> 12/12/24 09:28:52 INFO message.HadoopMessageManagerImpl:  BSPPeer address:datanode01 port:61004
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server listener on 61004: starting
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server handler 0 on 61004: starting
>>>> 12/12/24 09:37:39 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ------------------ Original Message ------------------
>>>> From: "Suraj Menon" <me...@gmail.com>;
>>>> Date: Monday, December 24, 2012, 12:20 AM
>>>> To: "dev" <de...@hama.apache.org>;
>>>>
>>>> Subject: Re: sssp experiment
>>>>
>>>>
>>>>
>>>> Please share the error you observe in the logs area -
>>>> $HAMA_HOME/logs/tasklogs/job_201212231344_0002/*.log
>>>>
>>>> -Suraj
>>>>
>>>> On Sun, Dec 23, 2012 at 3:24 AM, 狼侠 <pe...@foxmail.com> wrote:
>>>>
>>>>> Hi, there is something wrong with my SSSP experiment. My Hama cluster
>>>>> has 9 nodes. The input graph has 10 million vertices, and each vertex has 100
>>>>> edges. The number of tasks is 36, but the job failed. What happened?
>>>>> The error message is as follows:
>>>>> [hadoop@namenode hama]$ bin/hama jar hama-examples-0.6.0.jar sssp 0
>>>>> sssp_graph/10m_100e_36s sssp_graph_output/10m_100e_36s 36
>>>>> 12/12/23 14:49:30 INFO bsp.FileInputFormat: Total input paths to process :
>>>>> 36
>>>>> 12/12/23 14:49:31 INFO bsp.BSPJobClient: Running job: job_201212231344_0002
>>>>> 12/12/23 14:49:34 INFO bsp.BSPJobClient: Current supersteps number: 0
>>>>> 12/12/23 14:49:49 INFO bsp.BSPJobClient: Current supersteps number: 1
>>>>> 12/12/23 14:49:52 INFO bsp.BSPJobClient: Current supersteps number: 2
>>>>> 12/12/23 14:50:55 INFO bsp.BSPJobClient: Current supersteps number: 3
>>>>> 12/12/23 14:51:55 INFO bsp.BSPJobClient: Current supersteps number: 4
>>>>> 12/12/23 14:52:01 INFO bsp.BSPJobClient: Current supersteps number: 5
>>>>> 12/12/23 14:52:04 INFO bsp.BSPJobClient: Current supersteps number: 6
>>>>> 12/12/23 14:52:07 INFO bsp.BSPJobClient: Current supersteps number: 7
>>>>> 12/12/23 14:52:10 INFO bsp.BSPJobClient: Current supersteps number: 8
>>>>> 12/12/23 14:53:34 INFO bsp.BSPJobClient: Current supersteps number: 9
>>>>> 12/12/23 14:56:38 INFO bsp.BSPJobClient: Job failed.
>>>>> [hadoop@namenode hama]$
>>>
>>>
>>>
>>> --
>>> Best Regards, Edward J. Yoon
>>> @eddieyoon
>>
>>
>>
>> --
>> Best Regards, Edward J. Yoon
>> @eddieyoon
>
>
>
> --
> Best Regards, Edward J. Yoon
> @eddieyoon



-- 
Best Regards, Edward J. Yoon
@eddieyoon