Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/08/03 17:27:14 UTC

[GitHub] [iceberg] rdblue commented on a change in pull request #1145: Implement the flink stream writer to accept the row data and emit the complete data files event to downstream

rdblue commented on a change in pull request #1145:
URL: https://github.com/apache/iceberg/pull/1145#discussion_r464555059



##########
File path: core/src/main/java/org/apache/iceberg/BaseFile.java
##########
@@ -360,7 +360,7 @@ public ByteBuffer keyMetadata() {
     if (list != null) {
       List<E> copy = Lists.newArrayListWithExpectedSize(list.size());
       copy.addAll(list);
-      return Collections.unmodifiableList(copy);

Review comment:
       I think we might want to convert the field to an array instead of a `List`. Lists are causing serialization problems, but arrays are fine. This would be similar to how we handle `keyMetadata`, which uses `byte[]` for the field, but returns `ByteBuffer` through the API.
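       For illustration, a minimal sketch of that pattern (class and field names here are hypothetical, not the actual `BaseFile` code): the field is stored as a plain array, which serializes without trouble, while the accessor exposes it through the richer collection type.
       
       ```java
       import java.io.Serializable;
       import java.nio.ByteBuffer;
       import java.util.Arrays;
       import java.util.Collections;
       import java.util.List;
       
       // Sketch only: stores fields as arrays for serialization safety,
       // but returns collection views through the API.
       class ArrayBackedFile implements Serializable {
         private Long[] splitOffsets = null;
         private byte[] keyMetadata = null;
       
         void setSplitOffsets(List<Long> offsets) {
           // copy the incoming List into an array-backed field
           this.splitOffsets = offsets == null ? null : offsets.toArray(new Long[0]);
         }
       
         List<Long> splitOffsets() {
           // expose the array as an unmodifiable List view
           return splitOffsets == null
               ? null
               : Collections.unmodifiableList(Arrays.asList(splitOffsets));
         }
       
         ByteBuffer keyMetadata() {
           // analogous pattern: byte[] field, ByteBuffer through the API
           return keyMetadata == null ? null : ByteBuffer.wrap(keyMetadata);
         }
       }
       ```
       
       The callers never see the array, so the public API is unchanged; only the serialized representation switches from a `List` implementation to an array.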




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org