Posted to issues@flink.apache.org by "Jark Wu (Jira)" <ji...@apache.org> on 2020/01/07 10:44:00 UTC
[jira] [Resolved] (FLINK-15465) Avoid failing when required memory calculation not accurate in BinaryHashTable
[ https://issues.apache.org/jira/browse/FLINK-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jark Wu resolved FLINK-15465.
-----------------------------
Resolution: Fixed
1.11.0: 69ed6feef09d36df48b2e849888f9faebdaa2981
1.10.0: 81b18957da8e35b414b6c6017d13720157340d59
> Avoid failing when required memory calculation not accurate in BinaryHashTable
> ------------------------------------------------------------------------------
>
> Key: FLINK-15465
> URL: https://issues.apache.org/jira/browse/FLINK-15465
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Runtime
> Reporter: Jingsong Lee
> Assignee: Jingsong Lee
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.10.0
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In BinaryHashBucketArea.insertToBucket:
> When BinaryHashTable.buildTableFromSpilledPartition builds the in-memory hash table, it requires that the reserved memory can hold all records; if it cannot, the job fails.
> Because of the linked-list hash-conflict resolution, the required-memory calculation is not accurate. In this case we should allocate the missing memory from the heap instead of failing.
> And we must be careful: the stolen memory must not be returned to the table's memory pool.
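The fix described above can be sketched as follows. This is a hypothetical, simplified illustration (the class and method names below are invented for the sketch and are not Flink's actual `BinaryHashTable` internals): a table normally takes fixed-size segments from a managed memory pool; when the pool runs dry mid-build because the size estimate missed the extra segments needed by conflict chains, it "steals" a segment from the heap rather than failing, and on release a stolen segment is simply dropped for GC instead of being returned to the pool.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the FLINK-15465 fix; not Flink's actual code.
class SegmentPool {
    private int freeSegments;
    SegmentPool(int freeSegments) { this.freeSegments = freeSegments; }

    byte[] nextSegment() {
        if (freeSegments == 0) {
            return null;                      // managed pool exhausted
        }
        freeSegments--;
        return new byte[32 * 1024];
    }

    void returnSegment(byte[] seg) { freeSegments++; }

    int free() { return freeSegments; }
}

public class HeapFallbackDemo {
    // Segments stolen from the heap; these must never go back to the pool.
    static final List<byte[]> stolen = new ArrayList<>();

    // Allocate from the managed pool if possible, otherwise fall back to
    // a plain heap allocation instead of failing the build.
    static byte[] allocate(SegmentPool pool) {
        byte[] seg = pool.nextSegment();
        if (seg == null) {
            seg = new byte[32 * 1024];        // heap fallback
            stolen.add(seg);                  // remember its origin
        }
        return seg;
    }

    // Release: pooled segments go back to the pool; stolen segments are
    // dropped so the pool never grows beyond its original budget.
    static void release(SegmentPool pool, byte[] seg) {
        if (stolen.remove(seg)) {
            return;                           // let the GC reclaim it
        }
        pool.returnSegment(seg);
    }

    public static void main(String[] args) {
        SegmentPool pool = new SegmentPool(1);
        byte[] a = allocate(pool);            // served from the pool
        byte[] b = allocate(pool);            // pool empty -> stolen from heap
        release(pool, a);
        release(pool, b);
        System.out.println(pool.free());      // pool budget unchanged: prints 1
    }
}
```

The key invariant is the last line: after releasing both segments the pool reports the same number of free segments it started with, because the heap-stolen segment was never returned to it.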
--
This message was sent by Atlassian Jira
(v8.3.4#803005)