Posted to dev@flink.apache.org by Andra Lungu <lu...@gmail.com> on 2015/06/29 23:24:36 UTC

[Runtime] Division by Zero Exception

From the same series of experiments:

I am basically running an algorithm that simulates a Gather-Sum-Apply
iteration to perform Triangle Count. (Why simulate it? Because only a
single superstep is needed, so the runGatherSumApply function in Graph
adds useless overhead.)
What happens, at a high level:
1) Select neighbors with an ID greater than that of the current vertex;
2) Propagate the received values to neighbors with a higher ID;
3) Compute the number of triangles by checking whether the target
vertex's value contains the source vertex's ID:

trgVertex.getValue().get(srcVertex.getId());

As you can see, I do *not* perform any division at all. The code is here:
https://github.com/andralungu/gelly-partitioning/blob/master/src/main/java/example/GSATriangleCount.java
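In plain Java, the three steps boil down to something like this (a simplified, sequential stand-in for the Gelly code linked above, not the actual job; the toy graph and class name are made up for illustration):

```java
import java.util.*;

// Simplified stand-in for the GSA triangle count: a triangle (u, v, w)
// with u < v < w is found by u sending its ID to the higher-ID neighbor v
// (step 1), v forwarding it to its own higher-ID neighbor w (step 2), and
// w checking whether u is a direct neighbor (step 3).
public class TriangleCountSketch {

    public static int countTriangles(Map<Integer, Set<Integer>> adj) {
        int triangles = 0;
        for (int u : adj.keySet()) {
            for (int v : adj.get(u)) {
                if (v <= u) continue;             // step 1: higher-ID neighbors only
                for (int w : adj.get(v)) {
                    if (w <= v) continue;         // step 2: forward to higher IDs only
                    if (adj.get(w).contains(u)) { // step 3: membership check
                        triangles++;
                    }
                }
            }
        }
        return triangles;
    }

    public static void main(String[] args) {
        // Toy graph: a triangle 1-2-3 plus a pendant edge 3-4.
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        adj.put(1, Set.of(2, 3));
        adj.put(2, Set.of(1, 3));
        adj.put(3, Set.of(1, 2, 4));
        adj.put(4, Set.of(3));
        System.out.println(countTriangles(adj)); // prints 1
    }
}
```

Ordering by ID means each triangle is counted exactly once, from its lowest-ID vertex.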

Now, for small graphs (50 MB at most), the computation finishes nicely with
the correct result. For a 10 GB graph, however, I get this:

java.lang.ArithmeticException: / by zero
    at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:836)
    at org.apache.flink.runtime.operators.hash.MutableHashTable.buildTableFromSpilledPartition(MutableHashTable.java:819)
    at org.apache.flink.runtime.operators.hash.MutableHashTable.prepareNextPartition(MutableHashTable.java:508)
    at org.apache.flink.runtime.operators.hash.MutableHashTable.nextRecord(MutableHashTable.java:544)
    at org.apache.flink.runtime.operators.hash.NonReusingBuildFirstHashMatchIterator.callWithNextKey(NonReusingBuildFirstHashMatchIterator.java:104)
    at org.apache.flink.runtime.operators.MatchDriver.run(MatchDriver.java:173)
    at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
    at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:722)

See the full log here:
https://gist.github.com/andralungu/984774f6348269df7951
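From the trace, the division seems to happen inside Flink's hash table while it re-partitions a spilled partition, not in my code. Purely as an illustration of that kind of failure mode (made-up names, not Flink's actual code): a bucket count derived from the memory buffers still available becomes a zero divisor when spilling exhausts them.

```java
// Hypothetical sketch of how a spilling hash table can raise "/ by zero":
// the in-memory bucket count is derived from the buffers still available,
// and a record's bucket is then picked by a modulo on that count.
public class HashBucketSketch {

    public static int assignBucket(int hashCode, int availableBuffers, int bucketsPerBuffer) {
        int numBuckets = availableBuffers * bucketsPerBuffer;
        // If heavy spilling on a large input ever drives availableBuffers
        // to 0, this modulo throws java.lang.ArithmeticException: / by zero.
        return Math.abs(hashCode % numBuckets);
    }

    public static void main(String[] args) {
        System.out.println(assignBucket(42, 4, 8)); // 42 % 32 = 10
        // assignBucket(42, 0, 8) would throw ArithmeticException
    }
}
```

That would also explain why only the 10 GB input triggers it: the small inputs never spill.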

Could you help me detect the cause of this?

Thanks!
Andra

Re: [Runtime] Division by Zero Exception

Posted by Andra Lungu <lu...@gmail.com>.
Sure! FLINK-2293

On Tue, Jun 30, 2015 at 10:22 AM, Fabian Hueske <fh...@gmail.com> wrote:

> That looks like a serious bug. :-(
> Can you open a JIRA for that?
>
> Thanks, Fabian

Re: [Runtime] Division by Zero Exception

Posted by Fabian Hueske <fh...@gmail.com>.
That looks like a serious bug. :-(
Can you open a JIRA for that?

Thanks, Fabian
