Posted to issues@flink.apache.org by "zhu.qing (JIRA)" <ji...@apache.org> on 2018/01/31 11:03:00 UTC
[jira] [Created] (FLINK-8534) Inserting too many bucket entries into one
bucket during a join inside an iteration causes a
java.io.FileNotFoundException when releasing the spill file
zhu.qing created FLINK-8534:
-------------------------------
Summary: Inserting too many bucket entries into one bucket during a join inside an iteration causes a java.io.FileNotFoundException when releasing the spill file
Key: FLINK-8534
URL: https://issues.apache.org/jira/browse/FLINK-8534
Project: Flink
Issue Type: Bug
Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU. Flink 1.4.0
Reporter: zhu.qing
When too many entries are inserted into a single bucket, the partition is spilled via spillPartition(), which creates a block channel writer for the build side:

    this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);

Then, in prepareNextPartition() of ReOpenableMutableHashTable, further partitioning is enabled:

    furtherPartitioning = true;

As a consequence, finalizeProbePhase() releases the buffers and deletes the spill files:

    freeMemory.add(this.probeSideBuffer.getCurrentSegment());
    // delete the spill files
    this.probeSideChannel.close();
    System.out.println("HashPartition probeSideRecordCounter Delete");
    this.buildSideChannel.deleteChannel();
    this.probeSideChannel.deleteChannel();

After deleteChannel() the spill files no longer exist on disk, so the next iteration fails with a java.io.FileNotFoundException when it tries to reopen them.
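The failure mode can be sketched in isolation, without Flink: once deleteChannel() removes the spill file, any later attempt to reopen that path fails. The following minimal Java sketch (class and file names are made up for illustration; it only mimics the file lifecycle, not the hash table itself) shows the same delete-then-reopen pattern:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;

    // Hypothetical stand-in for the spill-file lifecycle described above:
    // iteration 1 spills a partition to disk and then deletes the channel,
    // iteration 2 tries to reopen the same file and hits FileNotFoundException.
    public class SpillChannelReuse {

        static String reopenAfterDelete() throws IOException {
            // stands in for the file behind createBlockChannelWriter(...)
            File spill = File.createTempFile("spill-partition", ".tmp");

            // finalizeProbePhase(): deleteChannel() removes the file on disk
            if (!spill.delete()) {
                throw new IOException("could not delete spill file");
            }

            // next iteration: the reopened table tries to read the spill file
            try (FileInputStream in = new FileInputStream(spill)) {
                return "reopened";
            } catch (FileNotFoundException e) {
                return "FileNotFoundException";
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(reopenAfterDelete()); // prints "FileNotFoundException"
        }
    }

In the real code path the fix would have to either keep the spill files alive across iterations or recreate the channels before the next iteration reopens the table.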
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)