Posted to issues@flink.apache.org by "Fabian Hueske (JIRA)" <ji...@apache.org> on 2015/06/30 10:36:04 UTC

[jira] [Commented] (FLINK-2293) Division by Zero Exception

    [ https://issues.apache.org/jira/browse/FLINK-2293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607950#comment-14607950 ] 

Fabian Hueske commented on FLINK-2293:
--------------------------------------

Is it possible to share the 10GB input data (download link)? 
Could you also share a few details about your execution setup (local machine, cluster, #machines, #slots, amount of memory, ...) that would help to reproduce the problem? Thanks!

> Division by Zero Exception
> --------------------------
>
>                 Key: FLINK-2293
>                 URL: https://issues.apache.org/jira/browse/FLINK-2293
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>    Affects Versions: 0.9, 0.10
>            Reporter: Andra Lungu
>            Priority: Critical
>             Fix For: 0.9.1
>
>
> I am basically running an algorithm that simulates a Gather-Sum-Apply iteration performing Triangle Count (why simulate it? Because only a single superstep is needed, so using the runGatherSumApply function in Graph adds useless overhead).
> What happens, at a high level:
> 1). Select neighbors with an ID greater than that of the current vertex;
> 2). Propagate the received values to neighbors with a higher ID;
> 3). Compute the number of triangles by checking whether
> trgVertex.getValue().get(srcVertex.getId()) returns a value.
> As you can see, I *do not* perform any division at all;
> code is here: https://github.com/andralungu/gelly-partitioning/blob/master/src/main/java/example/GSATriangleCount.java
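> For illustration only, here is a minimal standalone sketch of the three steps above in plain Java, using in-memory adjacency sets instead of Gelly/Flink operators; the class name, example graph and variable names are assumptions made for the sketch and are not taken from GSATriangleCount.java:
>
>     import java.util.*;
>
>     // Minimal standalone sketch of the three steps described above, using plain
>     // in-memory adjacency sets instead of Gelly/Flink operators. The class name,
>     // the example graph and all variable names are illustrative.
>     public class TriangleCountSketch {
>
>         public static void main(String[] args) {
>             // Undirected edges of a tiny example graph: one triangle (1,2,3) plus edge 3-4.
>             int[][] edges = { {1, 2}, {1, 3}, {2, 3}, {3, 4} };
>
>             // Build symmetric adjacency sets.
>             Map<Integer, Set<Integer>> adj = new HashMap<>();
>             for (int[] e : edges) {
>                 adj.computeIfAbsent(e[0], k -> new HashSet<>()).add(e[1]);
>                 adj.computeIfAbsent(e[1], k -> new HashSet<>()).add(e[0]);
>             }
>
>             // Step 1: every vertex keeps only its neighbors with a higher ID.
>             Map<Integer, Set<Integer>> higher = new HashMap<>();
>             adj.forEach((v, ns) -> {
>                 Set<Integer> h = new HashSet<>();
>                 for (int n : ns) {
>                     if (n > v) {
>                         h.add(n);
>                     }
>                 }
>                 higher.put(v, h);
>             });
>
>             // Steps 2 + 3: send each vertex's higher-ID neighbor set to those same
>             // neighbors; the receiver counts how many of the received IDs are also
>             // in its own higher-ID set. Every match closes exactly one triangle.
>             long triangles = 0;
>             for (Map.Entry<Integer, Set<Integer>> e : higher.entrySet()) {
>                 for (int receiver : e.getValue()) {
>                     for (int candidate : e.getValue()) {
>                         if (candidate != receiver && higher.get(receiver).contains(candidate)) {
>                             triangles++;
>                         }
>                     }
>                 }
>             }
>             System.out.println("Triangles: " + triangles); // prints 1; no division anywhere
>         }
>     }
>
> The sketch counts each triangle exactly once: only the lowest-ID vertex of a triangle sends both of the other two IDs, and only at the lower of those two receivers does the membership check succeed.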
> Now, for small graphs (50MB at most), the computation finishes nicely with the correct result. For a 10GB graph, however, I got this:
> java.lang.ArithmeticException: / by zero
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:836)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.buildTableFromSpilledPartition(MutableHashTable.java:819)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.prepareNextPartition(MutableHashTable.java:508)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.nextRecord(MutableHashTable.java:544)
>     at org.apache.flink.runtime.operators.hash.NonReusingBuildFirstHashMatchIterator.callWithNextKey(NonReusingBuildFirstHashMatchIterator.java:104)
>     at org.apache.flink.runtime.operators.MatchDriver.run(MatchDriver.java:173)
>     at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
>     at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>     at java.lang.Thread.run(Thread.java:722)
> see the full log here: https://gist.github.com/andralungu/984774f6348269df7951



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)