Posted to common-issues@hadoop.apache.org by "Mukul Kumar Singh (JIRA)" <ji...@apache.org> on 2017/12/05 13:45:00 UTC
[jira] [Commented] (HADOOP-15074) SequenceFile#Writer flush does not update the length of the written file.
[ https://issues.apache.org/jira/browse/HADOOP-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278559#comment-16278559 ]
Mukul Kumar Singh commented on HADOOP-15074:
--------------------------------------------
Hi [~stevel@apache.org], yes, the length of the file wasn't updated after an hsync/hflush on the writer. This happens because the {{UPDATE_LENGTH}} flag isn't passed as part of the flush/sync, so the NameNode-visible length isn't refreshed by the sync request.
{code}
@Override
public void hsync() throws IOException {
  try (TraceScope ignored = dfsClient.newPathTraceScope("hsync", src)) {
    flushOrSync(true, EnumSet.noneOf(SyncFlag.class));
  }
}
{code}
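The empty flag set is the crux: {{EnumSet.noneOf(SyncFlag.class)}} contains no flags, so {{UPDATE_LENGTH}} is never requested from the NameNode. A minimal, self-contained sketch of that semantics (the {{SyncFlag}} enum below is a stand-in for {{HdfsDataOutputStream.SyncFlag}} so it compiles without Hadoop on the classpath; this is an illustration, not the committed patch):

```java
import java.util.EnumSet;

public class SyncFlagDemo {
    // Stand-in for org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag.
    enum SyncFlag { UPDATE_LENGTH, END_BLOCK }

    public static void main(String[] args) {
        // What the hsync() above passes today: an empty flag set,
        // so no length update is requested.
        EnumSet<SyncFlag> current = EnumSet.noneOf(SyncFlag.class);

        // What a length-updating sync would need to pass instead.
        EnumSet<SyncFlag> withUpdate = EnumSet.of(SyncFlag.UPDATE_LENGTH);

        System.out.println(current.contains(SyncFlag.UPDATE_LENGTH));    // false
        System.out.println(withUpdate.contains(SyncFlag.UPDATE_LENGTH)); // true
    }
}
```

With the empty set, the sync durably flushes data to the DataNodes but leaves the file length recorded at the NameNode stale until the block is completed.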
> SequenceFile#Writer flush does not update the length of the written file.
> -------------------------------------------------------------------------
>
> Key: HADOOP-15074
> URL: https://issues.apache.org/jira/browse/HADOOP-15074
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
>
> SequenceFile#Writer flush does not update the length of the file. This happens because, as part of the flush, the {{UPDATE_LENGTH}} flag is not passed to DFSOutputStream#hsync.
--