Posted to common-issues@hadoop.apache.org by "Jonathan Hsieh (JIRA)" <ji...@apache.org> on 2009/10/28 01:45:59 UTC

[jira] Commented: (HADOOP-6339) SequenceFile writer does not properly flush stream with external DataOutputStream

    [ https://issues.apache.org/jira/browse/HADOOP-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12770732#action_12770732 ] 

Jonathan Hsieh commented on HADOOP-6339:
----------------------------------------

Devaraj: If that is the case, I think the javadoc should be updated to explain these semantics, because they differ from the close semantics when the writer is created via SequenceFile.createWriter(..., Path, ...).
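
For concreteness, here is a minimal sketch (not taken from the report; the key/value classes are stand-ins) contrasting the two creation paths, assuming, per the above, that the stream-based writer leaves the supplied stream under the caller's control:

{code}
// Sketch only: assumes the usual org.apache.hadoop.conf/fs/io/io.compress
// imports and existing fs, conf, and path objects.

// Path-based factory: the writer opens the output stream itself, and
// Writer.close() is expected to close it as well.
SequenceFile.Writer pathWriter = SequenceFile.createWriter(
    fs, conf, path, Text.class, Text.class);

// Stream-based factory (the variant in this issue): the caller supplies the
// FSDataOutputStream, so closing the writer is assumed not to close the stream.
FSDataOutputStream out = fs.create(path);
SequenceFile.Writer streamWriter = SequenceFile.createWriter(conf, out,
    Text.class, Text.class, SequenceFile.CompressionType.NONE,
    new DefaultCodec());
{code}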

Internally, SequenceFile.Writer.close() eventually calls flush() on the FSDataOutputStream. Note that this test case is actually writing to the local file system (file:///tmp/testfile). Is that flush() call supposed to do nothing, to stay consistent with the semantics when writing to HDFS?
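
If that is indeed the intended behaviour, a minimal workaround sketch for the test case quoted below (assuming the caller owns the stream it passed in) is simply to close the FSDataOutputStream after closing the writer:

{code}
// Workaround sketch: when the writer was created around an external stream,
// close that stream explicitly after the writer so the buffered bytes
// actually reach the local file.
writer.append(key, value);
writer.sync();
writer.close();
dos.close();  // caller-owned stream; the quoted test only passes with this line
{code}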

Thanks,
Jon.

> SequenceFile writer does not properly flush stream with external DataOutputStream
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-6339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6339
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.20.1
>            Reporter: Jonathan Hsieh
>
> When using the SequenceFile.createWriter(.., FSDataOutputStream, ...) method to create a Writer, data is not flushed to the file when the Writer is closed.
> Example test case skeleton:
> {code}
> public void testWhyFail() throws IOException {
>     // There was a failure case using:
>     Configuration conf = ... ;
>     Path path = new Path("file:///tmp/testfile");
>     FileSystem hdfs = path.getFileSystem(conf);
>     // writing
>     FSDataOutputStream dos = hdfs.create(path);
>     hdfs.deleteOnExit(path);
>     // it is specifically with this writer.
>     Writer writer = SequenceFile.createWriter(conf, dos,
>         WriteableEventKey.class, WriteableEvent.class,
>         SequenceFile.CompressionType.NONE, new DefaultCodec());
>     Writable value = ...;
>     Writable key = ...;
>     writer.append(key, value);
>     writer.sync();
>     writer.close();
>     // Test fails unless I close the underlying FSDataOutputStream handle with the line below.
>     //    dos.close(); 
>     
>     // WTF: nothing written by this writer!
>     FileStatus stats = hdfs.getFileStatus(path);
>     assertTrue(stats.getLen() > 0);
>     // it should have written something but it failed.
>   }
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.