Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/07/22 06:39:48 UTC

[GitHub] [hudi] danny0405 commented on a diff in pull request #6020: [HUDI-4348] fix merge into sql data quality in concurrent scene

danny0405 commented on code in PR #6020:
URL: https://github.com/apache/hudi/pull/6020#discussion_r927336771


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/payload/SqlTypedRecord.scala:
##########
@@ -53,6 +53,11 @@ object SqlTypedRecord {
 
   private val avroDeserializerCache = CacheBuilder.newBuilder().build[Schema, HoodieAvroDeserializer]()
 
+  private val avroDeserializerCacheLocal = new ThreadLocal[Cache[Schema, HoodieAvroDeserializer]] {
+    override def initialValue(): Cache[Schema, HoodieAvroDeserializer] =
+      CacheBuilder.newBuilder().maximumSize(16).build[Schema, HoodieAvroDeserializer]()

Review Comment:
   So, what are we trying to fix here? Is the issue that keying the shared cache by schema does not work correctly in the multi-threaded use case?
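
The diff under discussion swaps a single shared Guava cache for a per-thread one, so each concurrent task builds and reuses its own deserializer per schema instead of contending on a shared entry. Below is a minimal, self-contained sketch of that pattern; it assumes Guava's CacheBuilder is on the classpath, and AvroDeserializer is a hypothetical stand-in for HoodieAvroDeserializer used only to keep the example runnable.

```scala
import java.util.concurrent.Callable

import com.google.common.cache.{Cache, CacheBuilder}
import org.apache.avro.Schema

// Hypothetical stand-in for HoodieAvroDeserializer, only to keep this sketch self-contained.
class AvroDeserializer(val schema: Schema)

object ThreadLocalDeserializerCache {

  // Each thread gets its own small cache, so concurrent tasks never share
  // (or evict) a deserializer that was built by another thread.
  private val cacheLocal = new ThreadLocal[Cache[Schema, AvroDeserializer]] {
    override def initialValue(): Cache[Schema, AvroDeserializer] =
      CacheBuilder.newBuilder().maximumSize(16).build[Schema, AvroDeserializer]()
  }

  // Returns the deserializer cached for this thread and schema, building it on first use.
  def deserializerFor(schema: Schema): AvroDeserializer =
    cacheLocal.get().get(schema, new Callable[AvroDeserializer] {
      override def call(): AvroDeserializer = new AvroDeserializer(schema)
    })
}
```

Compared with the shared avroDeserializerCache, the thread-local variant may duplicate deserializers across threads, but it removes any cross-thread contention on the cache itself.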



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org