Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/08/16 01:54:36 UTC

[GitHub] [spark] cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available

cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available
URL: https://github.com/apache/spark/pull/25460#discussion_r314559804
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFsRelation.scala
 ##########
 @@ -72,7 +72,8 @@ case class HadoopFsRelation(
     val compressionFactor = sqlContext.conf.fileCompressionFactor
     val defaultSize = (location.sizeInBytes * compressionFactor).toLong
     location match {
-      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled =>
+      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled
+        && defaultSize == sqlContext.conf.defaultSizeInBytes =>
 
 Review comment:
   Ah, good point! Basically there is no way to tell at this point whether the table stats are available. `sqlContext.conf.defaultSizeInBytes` is configurable, and it's possible that the table stats just happen to equal `sqlContext.conf.defaultSizeInBytes`.
   
   #24715 seems to be able to fix it.
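   
   To make the ambiguity concrete, here is a minimal stand-alone sketch in plain Scala (not Spark's actual classes; all names below are illustrative stand-ins, and the CatalogFileIndex match from the patch is omitted). The only signal available at this point is whether the size reported by the file index equals `sqlContext.conf.defaultSizeInBytes`, so a table whose real stats happen to equal that value is indistinguishable from a table with no stats at all:
   
   // Illustrative stand-ins only; not the Spark API.
   object FallbackHeuristicSketch {
   
     // Stand-in for sqlContext.conf.defaultSizeInBytes (user-configurable).
     val defaultSizeInBytes: Long = 8L * 1024 * 1024
   
     // Stand-in for location.sizeInBytes from a CatalogFileIndex: the stored
     // table stats if present, otherwise the configured default.
     def reportedSize(tableStats: Option[Long]): Long =
       tableStats.getOrElse(defaultSizeInBytes)
   
     // Simplified version of the guard added in this patch: fall back to
     // scanning HDFS only when the reported size equals the default,
     // i.e. when stats *appear* to be missing.
     def shouldFallBackToHdfs(tableStats: Option[Long], fallbackEnabled: Boolean): Boolean =
       fallbackEnabled && reportedSize(tableStats) == defaultSizeInBytes
   
     def main(args: Array[String]): Unit = {
       // Missing stats: the fallback correctly triggers.
       println(shouldFallBackToHdfs(tableStats = None, fallbackEnabled = true))                     // true
       // Stats present but coincidentally equal to the default: the fallback
       // still triggers, because the two cases cannot be told apart here.
       println(shouldFallBackToHdfs(tableStats = Some(defaultSizeInBytes), fallbackEnabled = true)) // true
     }
   }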

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org