Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2020/12/19 17:50:54 UTC

[GitHub] [arrow] nevi-me edited a comment on pull request #8968: ARROW-10979: [Rust] Basic Kafka Reader

nevi-me edited a comment on pull request #8968:
URL: https://github.com/apache/arrow/pull/8968#issuecomment-748503983


   > The stretch goal here could be support for integration with a schema registry, but I haven't worked much with that.
   
   > The only concern I have is with inconsistent schemas between messages in the same RecordBatch, there may be some merging needed.
   
   You could address this by only allowing subscription to a single topic.
   Perhaps it'd then make sense to convert the `payload` into a `StructArray` or `RecordBatch`, with the other fields remaining part of `KafkaBatch`?
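   
   To make the single-topic idea concrete, here is a minimal sketch, assuming a hypothetical decoded payload schema and hypothetical Kafka metadata columns (the actual `KafkaBatch` fields in this PR may differ), of how one batch of messages could become a `RecordBatch` with the payload as a `StructArray` column:
   
   ```rust
   use std::sync::Arc;
   
   use arrow::array::{ArrayRef, Int64Array, StringArray, StructArray};
   use arrow::datatypes::{DataType, Field, Schema};
   use arrow::error::Result;
   use arrow::record_batch::RecordBatch;
   
   fn kafka_batch_to_record_batch() -> Result<RecordBatch> {
       // Kafka metadata for two messages from a single topic (hypothetical values).
       let offsets: ArrayRef = Arc::new(Int64Array::from(vec![42, 43]));
       let keys: ArrayRef = Arc::new(StringArray::from(vec![Some("user-1"), Some("user-2")]));
   
       // Decoded payload columns; with a single topic every message shares this
       // schema, so the payloads can live in one StructArray.
       let payload_fields = vec![
           Field::new("id", DataType::Int64, false),
           Field::new("name", DataType::Utf8, true),
       ];
       let ids: ArrayRef = Arc::new(Int64Array::from(vec![1, 2]));
       let names: ArrayRef = Arc::new(StringArray::from(vec![Some("a"), Some("b")]));
       let payload: ArrayRef = Arc::new(StructArray::from(vec![
           (payload_fields[0].clone(), ids),
           (payload_fields[1].clone(), names),
       ]));
   
       // Kafka metadata and the decoded payload sit side by side in one schema.
       let schema = Arc::new(Schema::new(vec![
           Field::new("offset", DataType::Int64, false),
           Field::new("key", DataType::Utf8, true),
           Field::new("payload", DataType::Struct(payload_fields), true),
       ]));
   
       RecordBatch::try_new(schema, vec![offsets, keys, payload])
   }
   ```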
   
   If you subscribe to multiple topics, you'd end up with a different schema per topic for the `StructArray`, as you observe, which isn't in the spirit of Arrow: every array should be homogeneous. A `UnionArray` could address that, but in my opinion it's too much effort.
   
   Converting `Field::new("payload", DataType::Binary, true)` to a `StructArray` will also require every payload to have the same structure.
   
   ___
   
   More rabbit-hole kinds of ideas:
   
   - We could support a `JSONArray` that is an extension of `BinaryArray` but holds JSON data (a sketch follows below). I believe it's already a thing in Parquet, but I haven't looked at it in detail. Extension types are part of the Arrow spec, but we haven't implemented them in Rust yet.
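   
   A hypothetical sketch of that idea, assuming nothing more than wrapping a `BinaryArray`; the Rust crate has no JSON or extension array today, so the `JsonArray` type and its methods below are made up for illustration:
   
   ```rust
   use arrow::array::{Array, BinaryArray};
   use serde_json::Value;
   
   /// Hypothetical wrapper that interprets a BinaryArray's values as UTF-8 JSON documents.
   struct JsonArray {
       inner: BinaryArray,
   }
   
   impl JsonArray {
       fn new(inner: BinaryArray) -> Self {
           Self { inner }
       }
   
       /// Parse the i-th value as JSON: None for nulls, Err for malformed documents.
       fn value(&self, i: usize) -> Option<serde_json::Result<Value>> {
           if self.inner.is_null(i) {
               None
           } else {
               Some(serde_json::from_slice(self.inner.value(i)))
           }
       }
   }
   ```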


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org