Posted to jira@arrow.apache.org by "Neal Richardson (Jira)" <ji...@apache.org> on 2021/01/13 19:48:00 UTC
[jira] [Updated] (ARROW-5845) [Java] Implement converter between Arrow record batches and Avro records
[ https://issues.apache.org/jira/browse/ARROW-5845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Neal Richardson updated ARROW-5845:
-----------------------------------
Fix Version/s: (was: 3.0.0)
> [Java] Implement converter between Arrow record batches and Avro records
> ------------------------------------------------------------------------
>
> Key: ARROW-5845
> URL: https://issues.apache.org/jira/browse/ARROW-5845
> Project: Apache Arrow
> Issue Type: New Feature
> Components: Java
> Reporter: Ji Liu
> Assignee: Ji Liu
> Priority: Major
>
> It would be useful for applications that need to convert Avro data to Arrow data.
> This is an adapter that converts data through an existing API (like the JDBC adapter) rather than a native reader (like the ORC reader).
> We implement this function through the Avro Java project, receiving parameters such as Avro's Decoder/Schema/DatumReader and returning a VectorSchemaRoot. For each data type we have a consumer class, as below, that reads Avro data and writes it directly into a vector to avoid boxing/unboxing (e.g. GenericRecord#get returns Object):
> {code:java}
> public class AvroIntConsumer implements Consumer {
>
>   private final IntWriter writer;
>
>   public AvroIntConsumer(IntVector vector) {
>     this.writer = new IntWriterImpl(vector);
>   }
>
>   @Override
>   public void consume(Decoder decoder) throws IOException {
>     // Read a primitive int from the Avro decoder, write it into the
>     // Arrow vector, and advance to the next slot -- no boxing involved.
>     writer.writeInt(decoder.readInt());
>     writer.setPosition(writer.getPosition() + 1);
>   }
> }
> {code}
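> For illustration, here is a minimal, self-contained sketch of the typed-consumer pattern above. The Avro and Arrow classes (Decoder, IntVector, IntWriterImpl) are replaced with simple stand-ins, so the names IntDecoder, IntBuffer and ConsumerSketch are hypothetical; the point is that values flow as primitive ints end to end, with no Object boxing:
> {code:java}
> import java.io.IOException;
>
> public class ConsumerSketch {
>
>   // Stand-in for Avro's Decoder: yields primitive ints directly.
>   interface IntDecoder {
>     int readInt() throws IOException;
>   }
>
>   // Stand-in for the adapter's Consumer interface.
>   interface Consumer {
>     void consume(IntDecoder decoder) throws IOException;
>   }
>
>   // Stand-in for an Arrow IntVector: a growable primitive buffer.
>   static final class IntBuffer {
>     int[] data = new int[4];
>     int count = 0;
>
>     void set(int index, int value) {
>       if (index >= data.length) {
>         int[] bigger = new int[data.length * 2];
>         System.arraycopy(data, 0, bigger, 0, data.length);
>         data = bigger;
>       }
>       data[index] = value;
>       count = Math.max(count, index + 1);
>     }
>   }
>
>   // Analogous to AvroIntConsumer: reads a primitive int, writes it at the
>   // current position, then advances the position.
>   static final class IntConsumer implements Consumer {
>     private final IntBuffer vector;
>     private int position = 0;
>
>     IntConsumer(IntBuffer vector) {
>       this.vector = vector;
>     }
>
>     @Override
>     public void consume(IntDecoder decoder) throws IOException {
>       vector.set(position, decoder.readInt());
>       position++;
>     }
>   }
>
>   public static void main(String[] args) throws IOException {
>     int[] source = {7, 8, 9};
>     int[] cursor = {0};
>     IntDecoder decoder = () -> source[cursor[0]++];
>
>     IntBuffer vector = new IntBuffer();
>     Consumer consumer = new IntConsumer(vector);
>     for (int i = 0; i < source.length; i++) {
>       consumer.consume(decoder);
>     }
>     System.out.println(vector.count);   // 3
>     System.out.println(vector.data[2]); // 9
>   }
> }
> {code}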
> We intend to support primitive and complex types (null values are represented via a union type containing the null type); size limits and field selection could be offered as options for users.
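> For example, an Avro field that may be null is declared as a union with the null type in the standard way, which the adapter would map to a nullable Arrow vector (the record and field names below are illustrative):
> {code:json}
> {
>   "type": "record",
>   "name": "Example",
>   "fields": [
>     {"name": "id", "type": ["null", "int"], "default": null}
>   ]
> }
> {code}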
--
This message was sent by Atlassian Jira
(v8.3.4#803005)