Posted to issues@orc.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2016/03/23 00:20:25 UTC

[jira] [Commented] (ORC-44) How to flush orc writer?

    [ https://issues.apache.org/jira/browse/ORC-44?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207504#comment-15207504 ] 

Owen O'Malley commented on ORC-44:
----------------------------------

Use Writer.writeIntermediateFooter(). It writes a temporary footer to the file (so that the file is readable) and flushes it to HDFS. The size it returns should be passed to ReaderOptions.maxLength(long) when you read the file.
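A rough sketch of how that flow might look, assuming the same org.apache.hadoop.hive.ql.io.orc API as in the question below; the path, the inspector variable, and the surrounding row-writing code are illustrative assumptions, not part of the original report:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.Writer;

Configuration conf = new Configuration();
Path file = new Path("/tmp/data.orc"); // illustrative path

// `inspector` would come from the serde, as in the question below.
Writer writer = OrcFile.createWriter(file,
    OrcFile.writerOptions(conf).inspector(inspector));

// ... writer.addRow(...) calls ...

// Flush buffered rows to HDFS without closing the writer.
// The file is now readable up to `len` bytes.
long len = writer.writeIntermediateFooter();

// A concurrent reader must be capped at the flushed length,
// since bytes past it may be an incomplete stripe:
Reader reader = OrcFile.createReader(file,
    OrcFile.readerOptions(conf).maxLength(len));

// ... keep writing; call writeIntermediateFooter() again as needed,
// and writer.close() once at the very end.
{code}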


> How to flush orc writer?
> ------------------------
>
>                 Key: ORC-44
>                 URL: https://issues.apache.org/jira/browse/ORC-44
>             Project: Orc
>          Issue Type: Bug
>         Environment: hadoop version: 2.5.0-cdh5.3.2
> hive version: 0.13.1
>            Reporter: Tao Li
>
> I am using the org.apache.hadoop.hive.ql.io.orc.Writer API to generate an ORC file. I want to flush the in-memory data to HDFS. The close() method works for me, but it also closes the ORC file. Is there a method like flush() that I can use to flush the memory without closing the ORC file?
> {code:java}
> OrcFile.WriterOptions writerOptions = OrcFile.writerOptions(conf);
> writerOptions.inspector(deserializer.getObjectInspector());
> Writer writer = OrcFile.createWriter(new Path(file), writerOptions);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)