Posted to dev@parquet.apache.org by Brian Bowman <Br...@sas.com.INVALID> on 2020/05/06 13:52:03 UTC

Java API that matches C++ low-level write_batch

Here’s some low-level Parquet C++ API code used to write a batch of IEEE doubles into a RowGroup. Is there an equivalent public Java API for writing Parquet files?

// Append a RowGroup with a specific number of rows.
parquet::RowGroupWriter* rg_writer = file_writer->AppendRowGroup();

// ...

// Get the next column writer and cast it to the double writer.
auto *double_writer =
    static_cast<parquet::DoubleWriter *>(rg_writer->NextColumn());

// Write num_rows values along with their definition levels; no repetition levels.
double_writer->WriteBatch(num_rows, defLevels, nullptr, (double *) double_rows);
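
(For reference, the closest thing we've found on the Java side so far is the example Group object model via ExampleParquetWriter. A rough sketch is below; the schema, data, and output path are purely illustrative, and row groups are sized by the writer rather than appended explicitly as in the C++ code above.)

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class WriteDoublesExample {
  public static void main(String[] args) throws Exception {
    // Illustrative schema: a single required double column.
    MessageType schema = MessageTypeParser.parseMessageType(
        "message doubles { required double value; }");

    double[] rows = {1.0, 2.5, 3.75};  // illustrative data

    SimpleGroupFactory factory = new SimpleGroupFactory(schema);
    try (ParquetWriter<Group> writer = ExampleParquetWriter
        .builder(new Path("doubles.parquet"))  // illustrative output path
        .withType(schema)
        .build()) {
      for (double d : rows) {
        writer.write(factory.newGroup().append("value", d));
      }
    }
  }
}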

Thanks,

Brian



Re: Java API that matches C++ low-level write_batch

Posted by Brian Bowman <Br...@sas.com.INVALID>.
To clarify … is there a public Parquet Java API that supports writing Definition Levels the way the C++ low-level API does?

https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/column/ColumnWriter.java appears to be an internal API. Are Definition Levels exposed in a public API, and are there Java examples that use Definition Levels?

We have a Java use case that needs to write Definition Levels.
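
If it helps frame the question: for a flat optional double column, the shape of usage we'd want is roughly the sketch below, written against the ColumnWriter interface linked above. The helper class is hypothetical, and it assumes a writer obtained from parquet-column's internal ColumnWriteStore machinery, with a maximum definition level of 1 (value present) and repetition level 0 (non-repeated column).

import org.apache.parquet.column.ColumnWriter;

/**
 * Hypothetical helper: write a batch of nullable doubles through
 * parquet-column's ColumnWriter (currently an internal API).
 * Assumes a flat optional column: definition level 1 = value present,
 * 0 = null; repetition level is always 0 since the column is not repeated.
 */
public final class DoubleBatchWriter {
  private DoubleBatchWriter() {}

  public static void writeBatch(ColumnWriter writer, double[] values, boolean[] isNull) {
    for (int i = 0; i < values.length; i++) {
      if (isNull[i]) {
        writer.writeNull(0, 0);         // null: repetition 0, definition 0
      } else {
        writer.write(values[i], 0, 1);  // value: repetition 0, definition 1
      }
    }
  }
}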

Thanks,

Brian

From: Brian Bowman <Br...@sas.com>
Date: Wednesday, May 6, 2020 at 9:52 AM
To: "dev@parquet.apache.org" <de...@parquet.apache.org>
Cc: Karl Moss <Ka...@sas.com>, Paul Tomas <Pa...@sas.com>
Subject: Java API that matches C++ low-level write_batch

Here’s some low-level Parquet C++ API code used to write a batch of IEEE doubles into a RowGroup. Is there an equivalent public Java API for writing Parquet files?

// Append a RowGroup with a specific number of rows.
parquet::RowGroupWriter* rg_writer = file_writer->AppendRowGroup();

// ...

// Get the next column writer and cast it to the double writer.
auto *double_writer =
    static_cast<parquet::DoubleWriter *>(rg_writer->NextColumn());

// Write num_rows values along with their definition levels; no repetition levels.
double_writer->WriteBatch(num_rows, defLevels, nullptr, (double *) double_rows);

Thanks,

Brian