Posted to issues@spark.apache.org by "caoxuewen (JIRA)" <ji...@apache.org> on 2017/08/16 10:27:00 UTC

[jira] [Created] (SPARK-21746) Handle nondeterministic expressions correctly for filter predicates

caoxuewen created SPARK-21746:
---------------------------------

             Summary: Handle nondeterministic expressions correctly for filter predicates
                 Key: SPARK-21746
                 URL: https://issues.apache.org/jira/browse/SPARK-21746
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.0
            Reporter: caoxuewen


Currently, filter predicates are sometimes evaluated with InterpretedPredicate (for example when pruning partitions), but this breaks when the filter contains a nondeterministic expression: Nondeterministic expressions must be initialized before eval, and InterpretedPredicate never initializes them, so evaluation fails with the exception below. The accompanying PR solves this problem by adding an initialize method to InterpretedPredicate.
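
For context, here is a rough reproduction sketch of the kind of query that can hit this code path (the table name, schema, and data are made up for illustration; assumes a Hive-enabled spark-shell session on the affected version). A filter on a partition column that contains rand() can reach InterpretedPredicate.eval during partition pruning:

  // Hypothetical setup: a partitioned data source table registered in the metastore.
  spark.sql("CREATE TABLE repro_part (a INT, p INT) USING parquet PARTITIONED BY (p)")
  spark.sql("INSERT INTO repro_part VALUES (1, 1)")

  // The nondeterministic rand() in the partition filter is evaluated by
  // InterpretedPredicate while pruning partitions, without being initialized first.
  spark.sql("SELECT * FROM repro_part WHERE p > rand() * 10").show()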

java.lang.IllegalArgumentException:
java.lang.IllegalArgumentException: requirement failed: Nondeterministic expression org.apache.spark.sql.catalyst.expressions.Rand should be initialized before eval.
	at scala.Predef$.require(Predef.scala:224)
	at org.apache.spark.sql.catalyst.expressions.Nondeterministic$class.eval(Expression.scala:291)
	at org.apache.spark.sql.catalyst.expressions.RDG.eval(randomExpressions.scala:34)
	at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:415)
	at org.apache.spark.sql.catalyst.expressions.InterpretedPredicate.eval(predicates.scala:38)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$$anonfun$prunePartitionsByFilter$1.apply(ExternalCatalogUtils.scala:158)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$$anonfun$prunePartitionsByFilter$1.apply(ExternalCatalogUtils.scala:157)
	at scala.collection.immutable.Stream.filter(Stream.scala:519)
	at scala.collection.immutable.Stream.filter(Stream.scala:202)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$.prunePartitionsByFilter(ExternalCatalogUtils.scala:157)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1129)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1119)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
	at org.apache.spark.sql.hive.HiveExternalCatalog.listPartitionsByFilter(HiveExternalCatalog.scala:1119)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listPartitionsByFilter(SessionCatalog.scala:925)
	at org.apache.spark.sql.execution.datasources.CatalogFileIndex.filterPartitions(CatalogFileIndex.scala:73)
	at org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:60)
	at org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:27)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
	at org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$.apply(PruneFileSourcePartitions.scala:27)
	at org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$.apply(PruneFileSourcePartitions.scala:26)
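
A minimal sketch of the proposed direction (not necessarily the exact shape of the committed patch): give InterpretedPredicate an initialize method that walks the bound predicate and initializes every Nondeterministic sub-expression before the first eval. The class below only mirrors the shape of Catalyst's InterpretedPredicate in predicates.scala for illustration:

  import org.apache.spark.sql.catalyst.InternalRow
  import org.apache.spark.sql.catalyst.expressions.{Expression, Nondeterministic}

  // Sketch only: an InterpretedPredicate with an added initialize method.
  case class InterpretedPredicate(expression: Expression) extends (InternalRow => Boolean) {

    // Initialize every nondeterministic sub-expression before evaluation,
    // as Nondeterministic.eval requires.
    def initialize(partitionIndex: Int): Unit = {
      expression.foreach {
        case n: Nondeterministic => n.initialize(partitionIndex)
        case _ =>
      }
    }

    override def apply(row: InternalRow): Boolean =
      expression.eval(row).asInstanceOf[Boolean]
  }

Callers such as ExternalCatalogUtils.prunePartitionsByFilter would then invoke initialize on the predicate once before evaluating it against each partition row.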





