Posted to issues@flink.apache.org by "Stephan Ewen (JIRA)" <ji...@apache.org> on 2015/08/18 21:09:45 UTC
[jira] [Commented] (FLINK-2545) NegativeArraySizeException while creating hash table bloom filters
[ https://issues.apache.org/jira/browse/FLINK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701802#comment-14701802 ]
Stephan Ewen commented on FLINK-2545:
-------------------------------------
Thanks for reporting this!
As a quick fix, you can disable bloom filters by adding {{taskmanager.runtime.hashjoin-bloom-filters: false}} to the Flink config.
See here for a reference: https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#runtime-algorithms
The bloom filters are a relatively new addition in 0.10.
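For reference, the workaround is a one-line addition to flink-conf.yaml (a minimal sketch; the comment lines are illustrative, only the key itself comes from this issue):

{code}
# flink-conf.yaml
# Work around FLINK-2545: disable bloom filters in the hybrid hash join
# until the NegativeArraySizeException in buildBloomFilterForBucket is fixed.
taskmanager.runtime.hashjoin-bloom-filters: false
{code}

The setting takes effect after restarting the TaskManagers.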
> NegativeArraySizeException while creating hash table bloom filters
> ------------------------------------------------------------------
>
> Key: FLINK-2545
> URL: https://issues.apache.org/jira/browse/FLINK-2545
> Project: Flink
> Issue Type: Bug
> Components: Distributed Runtime
> Affects Versions: master
> Reporter: Greg Hogan
>
> The following exception occurred a second time when I immediately re-ran my application, though after recompiling and restarting Flink the subsequent execution ran without error.
> java.lang.Exception: The data preparation for task '...' , caused an error: null
> at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:465)
> at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucket(MutableHashTable.java:1160)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucketsInPartition(MutableHashTable.java:1143)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.spillPartition(MutableHashTable.java:1117)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.insertBucketEntry(MutableHashTable.java:946)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:868)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildInitialTable(MutableHashTable.java:692)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.open(MutableHashTable.java:455)
> at org.apache.flink.runtime.operators.hash.ReusingBuildSecondHashMatchIterator.open(ReusingBuildSecondHashMatchIterator.java:93)
> at org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:195)
> at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:459)
> ... 3 more
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)