Posted to reviews@spark.apache.org by HeartSaVioR <gi...@git.apache.org> on 2018/09/03 10:51:10 UTC

[GitHub] spark pull request #22282: [SPARK-23539][SS] Add support for Kafka headers i...

Github user HeartSaVioR commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22282#discussion_r214618743
  
    --- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaRecordToUnsafeRowConverter.scala ---
    @@ -44,6 +44,11 @@ private[kafka010] class KafkaRecordToUnsafeRowConverter {
           5,
           DateTimeUtils.fromJavaTimestamp(new java.sql.Timestamp(record.timestamp)))
         rowWriter.write(6, record.timestampType.id)
    +    val keys = record.headers.toArray.map(_.key())
    --- End diff --
    
    Might be better to assign `record.headers.toArray` to a local value, because each call creates a new array when `headers` is non-empty. It also guarantees a consistent view when extracting keys and values, though we know `headers` is unlikely to be modified during this.
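    A minimal, self-contained sketch of the suggested refactor. Note that `Header` and `Record` below are stand-in types for illustration, not the actual Kafka/Spark classes; the point is only that `toArray` is called once and both keys and values are read from the same snapshot:

    ```scala
    // Stand-in types (hypothetical, for illustration only).
    final case class Header(key: String, value: Array[Byte])
    final case class Record(headers: Iterable[Header])

    object HeaderExtraction {
      // Materialize the headers once, so the keys and values are
      // extracted from the same array rather than two separate
      // copies produced by two toArray calls.
      def extract(record: Record): (Array[String], Array[Array[Byte]]) = {
        val headers = record.headers.toArray
        (headers.map(_.key), headers.map(_.value))
      }
    }
    ```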


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org