Posted to dev@parquet.apache.org by "wxmimperio (Jira)" <ji...@apache.org> on 2020/08/05 11:40:00 UTC

[jira] [Commented] (PARQUET-1559) Add way to manually commit already written data to disk

    [ https://issues.apache.org/jira/browse/PARQUET-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171434#comment-17171434 ] 

wxmimperio commented on PARQUET-1559:
-------------------------------------

[~gszadovszky]

Hi, if I need to write multiple files and each file is very large (for example when writing from Flink to HDFS), there is not enough memory to buffer all the data, which results in serious GC pressure.

I don't need the files to be readable yet; I just want to flush the buffered data to disk.

I could not find any place where PositionOutputStream.flush() is called, and I think this is the main reason.

columnStore.flush() and pageStore.flushToFileWriter() only move the buffered data into the PositionOutputStream, not to disk:
{code:java}
// InternalParquetRecordWriter.flushRowGroupToStore() in parquet-mr
private void flushRowGroupToStore()
    throws IOException {
  recordConsumer.flush();
  LOG.debug("Flushing mem columnStore to file. allocated memory: {}", columnStore.getAllocatedSize());
  if (columnStore.getAllocatedSize() > (3 * rowGroupSizeThreshold)) {
    LOG.warn("Too much memory used: {}", columnStore.memUsageString());
  }

  if (recordCount > 0) {
    parquetFileWriter.startBlock(recordCount);
    columnStore.flush();                            // flush buffered values into the page store
    pageStore.flushToFileWriter(parquetFileWriter); // write the pages to the PositionOutputStream
    recordCount = 0;
    parquetFileWriter.endBlock();
    this.nextRowGroupSize = Math.min(
        parquetFileWriter.getNextRowGroupSize(),
        rowGroupSizeThreshold);
  }

  columnStore = null;
  pageStore = null;
}
{code}
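
For illustration only (HFlushingPositionOutputStream below is not an existing parquet-mr class, just a sketch of what I have in mind): a delegating PositionOutputStream whose flush() also calls hflush() on the underlying FSDataOutputStream would push the already-written bytes to the HDFS datanodes, and hsync() would force them to disk.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.parquet.io.PositionOutputStream;

// Sketch only: a PositionOutputStream that persists buffered bytes on flush().
public class HFlushingPositionOutputStream extends PositionOutputStream {
  private final FSDataOutputStream out;

  public HFlushingPositionOutputStream(FSDataOutputStream out) {
    this.out = out;
  }

  @Override
  public long getPos() throws IOException {
    return out.getPos();
  }

  @Override
  public void write(int b) throws IOException {
    out.write(b);
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    out.write(b, off, len);
  }

  @Override
  public void flush() throws IOException {
    out.flush();
    out.hflush(); // push buffered bytes to the datanodes; use hsync() to force them to disk
  }

  @Override
  public void close() throws IOException {
    out.close();
  }
}
{code}
Even then the bytes on disk would still lack the footer, so the file would not be readable until it is properly closed, which is fine for my case.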

> Add way to manually commit already written data to disk
> -------------------------------------------------------
>
>                 Key: PARQUET-1559
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1559
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>    Affects Versions: 1.10.1
>            Reporter: Victor
>            Priority: Major
>
> I'm not exactly sure this is compliant with the way parquet works, but I have the following need:
>  * I'm using parquet-avro to write to a parquet file during a long-running process
>  * I would like to be able from time to time to access the already written data
> So I was expecting to be able to manually flush the file to ensure the data is on disk, and then copy the file for preliminary analysis.
> If it's contradictory to the way parquet works (for example there is something about metadata being at the footer of the file), what would then be the alternative?
> Closing the file and opening a new one to continue writing?
> Could this be supported directly by parquet-mr maybe? It would then write multiple files in that case.


