Posted to issues@flink.apache.org by "zhu.qing (JIRA)" <ji...@apache.org> on 2018/01/31 11:04:00 UTC
[jira] [Updated] (FLINK-8534) Inserting too many bucket entries into one
bucket during a join in an iteration causes an error (Caused by:
java.io.FileNotFoundException: release file error)
[ https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhu.qing updated FLINK-8534:
----------------------------
Summary: Inserting too many bucket entries into one bucket during a join in an iteration causes an error (Caused by: java.io.FileNotFoundException: release file error)  (was: if insert too much BucketEntry into one bucket in join of iteration. Will cause Caused : java.io.FileNotFoundException release file error)
> Inserting too many bucket entries into one bucket during a join in an iteration causes an error (Caused by: java.io.FileNotFoundException: release file error)
> -----------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-8534
> URL: https://issues.apache.org/jira/browse/FLINK-8534
> Project: Flink
> Issue Type: Bug
> Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU, Flink 1.4.0
> Reporter: zhu.qing
> Priority: Major
>
> When too many entries are inserted into one bucket, the hash table calls
> spillPartition(), which creates a block channel writer for the build side:
> this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);
> Then, in prepareNextPartition() of ReOpenableMutableHashTable,
> furtherPartitioning = true;
> is set, so finalizeProbePhase() releases the buffers and deletes the spill files:
> freeMemory.add(this.probeSideBuffer.getCurrentSegment());
> // delete the spill files
> this.probeSideChannel.close();
> System.out.println("HashPartition probeSideRecordCounter Delete");
> this.buildSideChannel.deleteChannel();
> this.probeSideChannel.deleteChannel();
> After deleteChannel() has removed the spill files, the next iteration fails
> with java.io.FileNotFoundException when it tries to reopen them.
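> The failure mode can be sketched outside Flink with plain java.io (a minimal
> illustration, not Flink's IOManager API; the class and file names below are
> hypothetical): once the file backing a spilled partition has been deleted,
> any later attempt to reopen it throws FileNotFoundException, which is what
> surfaces as the "release file error" above.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class SpillFileReopenSketch {
    public static void main(String[] args) throws IOException {
        // Simulate a spilled partition: write build-side records to a temp file.
        File spill = File.createTempFile("spill-partition", ".tmp");
        try (FileOutputStream out = new FileOutputStream(spill)) {
            out.write(new byte[] {1, 2, 3});
        }

        // Simulate finalizeProbePhase() with furtherPartitioning == true:
        // the channel's backing file is deleted.
        spill.delete();

        // The next iteration tries to reopen the same channel and fails,
        // because the backing file no longer exists.
        try (FileInputStream in = new FileInputStream(spill)) {
            System.out.println("unexpected: spill file still present");
        } catch (FileNotFoundException e) {
            System.out.println("reopen failed as expected: " + e.getMessage());
        }
    }
}
```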
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)