Posted to user@hama.apache.org by Jimmy Ong <b1...@yahoo.co.uk> on 2013/12/20 08:11:53 UTC

Number of supersteps and messages

Hi,

I'm new to Hama and have a seemingly dumb question:

Suppose I have only a single BSP task. Why does the following code result in only 5 messages being received by peerId?

for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        peer.send(peerId, new IntWritable(1));
    }
    peer.sync();
}

Even though the system reports 25 messages sent and received, peer.getNumCurrentMessages() returns only 5.
Also, why is the total number of supersteps 4 and not 5? (This is running in local mode; distributed mode is fine.)

Am I missing something here?
Please kindly advise.

Thanks.


13/12/20 15:03:30 INFO bsp.BSPJobClient: Current supersteps number: 4
13/12/20 15:03:30 INFO bsp.BSPJobClient: The total number of supersteps: 4
13/12/20 15:03:30 INFO bsp.BSPJobClient: Counters: 7
13/12/20 15:03:30 INFO bsp.BSPJobClient:   org.apache.hama.bsp.JobInProgress$JobCounter
13/12/20 15:03:30 INFO bsp.BSPJobClient:     SUPERSTEPS=4
13/12/20 15:03:30 INFO bsp.BSPJobClient:     LAUNCHED_TASKS=1
13/12/20 15:03:30 INFO bsp.BSPJobClient:   org.apache.hama.bsp.BSPPeerImpl$PeerCounter
13/12/20 15:03:30 INFO bsp.BSPJobClient:     SUPERSTEP_SUM=5
13/12/20 15:03:30 INFO bsp.BSPJobClient:     TIME_IN_SYNC_MS=0
13/12/20 15:03:30 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_SENT=25
13/12/20 15:03:30 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_RECEIVED=25
13/12/20 15:03:30 INFO bsp.BSPJobClient:     TASK_OUTPUT_RECORDS=1

Re: Aggregator Problem (?)

Posted by Anastasis Andronidis <an...@hotmail.com>.
FYI
https://issues.apache.org/jira/browse/HAMA-833

Anastasis



Re: Aggregator Problem (?)

Posted by "Edward J. Yoon" <ed...@apache.org>.
Quite complex...

Do you mean that the inactive (halted) vertex should not be activated
by an aggregator (global) message? If so, I agree. It's a logical bug.




-- 
Best Regards, Edward J. Yoon
@eddieyoon

Re: Aggregator Problem (?)

Posted by Ηλίας Καπουράνης <ik...@csd.auth.gr>.
Hey,

Yeah, these two are better, because having a vertex halted but still
being aggregated is a bit improper in my opinion.
Will check back again!




Re: Aggregator Problem (?)

Posted by Anastasis Andronidis <an...@hotmail.com>.
Hi again,

I'm sending you this link for further info on the subject:

https://issues.apache.org/jira/browse/HAMA-588

The voteToHalt() function marks the vertex as halted for the next superstep, not the current one! I agree that we should document this behavior more thoroughly to avoid future problems.

On the other hand, you pinpoint a very interesting subject. I agree with you that more cases should be handled, such as:

1) voteToStop(): immediately stop the vertex's compute() and suppress any further calculations on top of it (e.g. aggregation).
2) voteToTerminate(): immediately stop the vertex's compute(), suppress any further calculations on top of it, and deactivate the vertex, so that even if a message reaches it, it will not come alive.

I will open a JIRA ticket with the proposal, feel free to comment :) Thanks in advance!

Cheers,
Anastasis
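
To make the timing concrete, here is a minimal sketch of a vertex that halts at superstep 1; the class name and halting condition are illustrative, and the compute() signature follows the Hama 0.6-era Graph API (newer releases take an Iterable):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hama.graph.Vertex;

public class HaltingVertex extends Vertex<Text, NullWritable, DoubleWritable> {

  @Override
  public void compute(Iterator<DoubleWritable> messages) throws IOException {
    if (getSuperstepCount() == 1) {
      // Takes effect at the NEXT superstep: this vertex is still visible
      // to the aggregator phase that runs right after superstep 1.
      voteToHalt();
      return;
    }
    // normal processing otherwise ...
  }
}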



Re: Aggregator Problem (?)

Posted by Ηλίας Καπουράνης <ik...@csd.auth.gr>.
Hey,

yeah I know about the corner case. What do you mean by "the aggregated
results from superstep number 1"? Between supersteps there are the
"aggregator" supersteps, and they interleave like this:
- node superstep No. 1
- aggregator superstep No. 1
- node superstep No. 2, etc.

So if a node votes to halt at "node superstep No. 1", it shouldn't be
included in the aggregator phase that comes next, right?

My question is:
why does the node get aggregated if it has voted to halt? Doesn't "vote
to halt" mean that it wants to stop?





Re: Aggregator Problem (?)

Posted by Anastasis Andronidis <an...@hotmail.com>.
Hello,

What you actually see is expected behavior from the aggregators: the results you get in superstep number 2 are the aggregated results from superstep number 1.

There is a small corner case, though: in superstep 0 the aggregators are off. This will change in the next release.

Cheers,
Anastasis
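
In code, the one-superstep lag looks like this; a minimal sketch where aggregator index 0 is assumed to have been registered on the job, and getLastAggregatedValue() is the accessor used in the Hama examples:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hama.graph.Vertex;

public class LaggedAggregateVertex extends
    Vertex<Text, NullWritable, DoubleWritable> {

  @Override
  public void compute(Iterator<DoubleWritable> messages) throws IOException {
    // The value visible here was aggregated over the PREVIOUS superstep,
    // which is why a vertex that halted in superstep 1 still shows up in
    // the aggregate read during superstep 2.
    DoubleWritable lastAggregate = getLastAggregatedValue(0);
    if (lastAggregate != null) { // null until an aggregation has run
      // react to the previous superstep's global value ...
    }
  }
}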



Aggregator Problem (?)

Posted by ik...@csd.auth.gr.
Hello there,

I am using the Graph API and I have noticed something.
If a node votes to halt at a superstep, we assume that it won't be part of the aggregation phase.
BUT it is included in the aggregation phase of the next superstep!

To be more precise:

- Imagine we have a graph with 10 nodes.
- At superstep 1, node K votes to halt.
- At superstep 2, we check the number of nodes aggregated and it's 10 (it should have been 9).
- At superstep 3, we check the number of nodes aggregated again and this time it is 9, which is correct!

This persists only with the aggregators; node K does no work at superstep 2.

Can someone confirm that this is a problem, or am I missing something?
Thanks
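
For reference, a minimal sketch of how an aggregator gets wired into a job like this, along the lines of the Hama PageRank example; the vertex class, paths, and the AverageAggregator stand-in are illustrative:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hama.HamaConfiguration;
import org.apache.hama.graph.AverageAggregator;
import org.apache.hama.graph.GraphJob;
import org.apache.hama.graph.Vertex;

public class AggregatorJobSetup {

  // Trivial vertex, just to make the wiring compile; the real compute()
  // logic would go here.
  public static class MyVertex extends Vertex<Text, NullWritable, DoubleWritable> {
    @Override
    public void compute(Iterator<DoubleWritable> messages) throws IOException {
      voteToHalt();
    }
  }

  public static void main(String[] args) throws Exception {
    HamaConfiguration conf = new HamaConfiguration();
    GraphJob job = new GraphJob(conf, AggregatorJobSetup.class);
    job.setJobName("aggregator test");
    job.setVertexClass(MyVertex.class);
    // The registered aggregator runs between vertex supersteps; index 0
    // corresponds to the first (and only) registered aggregator.
    job.setAggregatorClass(AverageAggregator.class);
    job.setInputPath(new Path("/tmp/graph-in"));   // hypothetical paths
    job.setOutputPath(new Path("/tmp/graph-out"));
    job.waitForCompletion(true);
  }
}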


Re: Number of supersteps and messages

Posted by "Edward J. Yoon" <ed...@apache.org>.
Hi Jimmy,

The unconsumed messages are automatically removed from the queue before
the next superstep. That's why getNumCurrentMessages() returns only 5.

We're thinking about adding a persistent queue:
https://issues.apache.org/jira/browse/HAMA-734

Hope this helps.
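
For illustration, here is a minimal sketch of consuming every message after sync() so nothing is silently dropped; the class name and self-addressed messaging are assumptions, and the calls follow the standard Hama BSP API:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

public class DrainQueueBSP extends
    BSP<NullWritable, NullWritable, NullWritable, NullWritable, IntWritable> {

  @Override
  public void bsp(
      BSPPeer<NullWritable, NullWritable, NullWritable, NullWritable, IntWritable> peer)
      throws IOException, SyncException, InterruptedException {
    String self = peer.getPeerName(); // send to ourselves, as in the question
    for (int i = 0; i < 5; i++) {
      for (int j = 0; j < 5; j++) {
        peer.send(self, new IntWritable(1));
      }
      peer.sync();
      // Drain the queue delivered for this superstep; messages left
      // unconsumed here are discarded before the next superstep.
      IntWritable msg;
      while ((msg = peer.getCurrentMessage()) != null) {
        // process msg ...
      }
    }
  }
}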





-- 
Best Regards, Edward J. Yoon
@eddieyoon