Posted to commits@hudi.apache.org by "Udit Mehrotra (Jira)" <ji...@apache.org> on 2021/08/25 08:49:00 UTC
[jira] [Updated] (HUDI-722) IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when writing parquet
[ https://issues.apache.org/jira/browse/HUDI-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Udit Mehrotra updated HUDI-722:
-------------------------------
Fix Version/s: (was: 0.9.0)
0.10.0
> IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when writing parquet
> -----------------------------------------------------------------------------------------
>
> Key: HUDI-722
> URL: https://issues.apache.org/jira/browse/HUDI-722
> Project: Apache Hudi
> Issue Type: Bug
> Components: Writer Core
> Affects Versions: 0.9.0
> Reporter: Alexander Filipchik
> Assignee: sivabalan narayanan
> Priority: Major
> Fix For: 0.10.0
>
>
> Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array range: X to X inside the MessageColumnIORecordConsumer.addBinary call.
> Specifically, this call fails: getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());
> It fails because the size of r is the same as currentLevel, so the index is out of bounds. What could be causing it?
>
> It gets executed via ParquetWriter.write(IndexedRecord). Library version: 1.10.1. The Avro record is a very complex object (~2.5k columns, highly nested, with arrays of unions present).
> But what is surprising is that it fails to write a top-level field: PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time], which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711", "_hoodie_commit_seqno": "20200317215711_0_650",
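
For readers unfamiliar with the failure mode described above: in parquet-mr, r is an array of repetition levels with one slot per nesting level of the column, and addBinary indexes it with the writer's current level. The sketch below is not Hudi or parquet-mr code; it is a minimal, self-contained illustration (with hypothetical variable names mirroring the snippet in the report) of why an access like r[currentLevel] throws when the array's length equals currentLevel.

```java
public class RepetitionLevelBoundsSketch {
    public static void main(String[] args) {
        // Hypothetical stand-in for the per-column repetition-level array:
        // sized for nesting levels 0 and 1 only (length 2).
        int[] r = new int[2];

        // If the writer's notion of the column's depth equals r.length,
        // the index is one past the end of the array.
        int currentLevel = 2;

        try {
            int repetitionLevel = r[currentLevel]; // same shape as r[currentLevel] in addBinary
            System.out.println("repetitionLevel=" + repetitionLevel);
        } catch (ArrayIndexOutOfBoundsException e) {
            // Valid indices are 0..r.length-1, so index == length always throws.
            System.out.println("out of bounds: index " + currentLevel
                    + ", length " + r.length);
        }
    }
}
```

This matches the symptom in the report: the exception indicates a mismatch between the nesting depth the writer computed for the column and the depth the level array was allocated for, rather than a problem with the binary value itself.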
--
This message was sent by Atlassian Jira
(v8.3.4#803005)