Posted to reviews@spark.apache.org by maropu <gi...@git.apache.org> on 2017/08/07 05:04:46 UTC

[GitHub] spark pull request #18576: [SPARK-21351][SQL] Update nullability based on ch...

GitHub user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18576#discussion_r131573104
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala ---
    @@ -94,27 +94,14 @@ case class FilterExec(condition: Expression, child: SparkPlan)
         case _ => false
       }
     
    -  // If one expression and its children are null intolerant, it is null intolerant.
    -  private def isNullIntolerant(expr: Expression): Boolean = expr match {
    -    case e: NullIntolerant => e.children.forall(isNullIntolerant)
    -    case _ => false
    -  }
    -
    -  // The columns that will be filtered out by `IsNotNull` could be considered as not nullable.
    -  private val notNullAttributes = notNullPreds.flatMap(_.references).distinct.map(_.exprId)
    -
       // Mark this as empty. We'll evaluate the input during doConsume(). We don't want to evaluate
       // all the variables at the beginning to take advantage of short circuiting.
       override def usedInputs: AttributeSet = AttributeSet.empty
     
    +  // Since some plan rewrite rules (e.g., python.ExtractPythonUDFs) can change the child's output
    +  // from that of the optimized logical plan, we need to adjust the filter's output here.
       override def output: Seq[Attribute] = {
    -    child.output.map { a =>
    -      if (a.nullable && notNullAttributes.contains(a.exprId)) {
    -        a.withNullability(false)
    -      } else {
    -        a
    -      }
    -    }
    +    child.output.map { attr => outputAttrs.find(_.exprId == attr.exprId).getOrElse(attr) }
       }
    --- End diff --
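
    For context, the removed `isNullIntolerant` check above propagates null intolerance bottom-up: an expression is null intolerant when a null in any input makes its result null, so a row with a null in a referenced column can never pass such a filter predicate, and the referenced attributes can be marked non-nullable. Here is a self-contained toy model of that recursion (plain Scala, runnable in a REPL; these types are illustrative stand-ins, not Spark's actual `Expression` hierarchy):
    ```
    // Toy stand-ins for Spark's Expression / NullIntolerant hierarchy.
    trait Expr { def children: Seq[Expr] }
    trait NullIntolerant extends Expr  // null in any input => null result

    case class Attr(name: String) extends Expr with NullIntolerant {
      val children: Seq[Expr] = Nil
    }
    case class GreaterThan(left: Expr, right: Expr) extends Expr with NullIntolerant {
      val children: Seq[Expr] = Seq(left, right)
    }
    case class Coalesce(args: Seq[Expr]) extends Expr {  // tolerates null inputs
      val children: Seq[Expr] = args
    }

    // If an expression and all of its children are null intolerant, any null
    // input nulls out the whole predicate, so the filter drops the row.
    def isNullIntolerant(expr: Expr): Boolean = expr match {
      case e: NullIntolerant => e.children.forall(isNullIntolerant)
      case _ => false
    }

    println(isNullIntolerant(GreaterThan(Attr("a"), Attr("b"))))  // true
    println(isNullIntolerant(Coalesce(Seq(Attr("a")))))           // false
    ```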
    
    I simply tried dropping the nullability update and reusing the output attributes `outputAttrs` of the optimized logical plan here, but some Python tests failed (all the Scala tests passed). I looked into it and found the cause: in the Python planner path, there are cases where an operator's output changes between the optimized logical plan and the physical plan.
    For example:
    ```
    sql("""SELECT strlen(a) FROM test WHERE strlen(a) > 1""")
    
    // pyspark
    >>> spark.sql("SELECT strlen(a) FROM test WHERE strlen(a) > 1").explain(True)
    ...
    == Optimized Logical Plan ==
    Project [strlen(a#0) AS strlen(a)#30]
    +- Filter (strlen(a#0) > 1)
       +- LogicalRDD [a#0]
    
    == Physical Plan ==
    *Project [pythonUDF0#34 AS strlen(a)#30]
    +- BatchEvalPython [strlen(a#0)], [a#0, pythonUDF0#34]
       +- *Filter (pythonUDF0#33 > 1), [a#0]
          +- BatchEvalPython [strlen(a#0)], [a#0, pythonUDF0#33]
             +- Scan ExistingRDD[a#0]
    ```
    So, I added code to handle the difference between `outputAttrs` and `child.output`, as sketched below.
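
    A minimal sketch of that adjustment (assuming `spark-catalyst` is on the classpath, e.g. in the spark-shell; the attributes below are illustrative, not the ones from the plan above): attributes whose nullability the optimizer refined are carried over by `exprId`, while attributes injected by rewrite rules (like `pythonUDF0#34`) fall back to the child's version.
    ```
    import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
    import org.apache.spark.sql.types.{IntegerType, StringType}

    // `outputAttrs` stands in for the optimized logical plan's output;
    // `childOutput` for the physical child's output after plan rewriting.
    def adjustOutput(outputAttrs: Seq[Attribute], childOutput: Seq[Attribute]): Seq[Attribute] =
      childOutput.map { attr => outputAttrs.find(_.exprId == attr.exprId).getOrElse(attr) }

    val a = AttributeReference("a", StringType)()
    val aNotNull = a.withNullability(false)  // nullability refined by the optimizer; exprId unchanged
    val udfCol = AttributeReference("pythonUDF0", IntegerType)()  // e.g. injected by ExtractPythonUDFs

    // `a` picks up the non-null refinement; the injected column keeps the child's attribute.
    adjustOutput(Seq(aNotNull), Seq(a, udfCol)).foreach { attr =>
      println(s"${attr.name}#${attr.exprId.id} nullable=${attr.nullable}")
    }
    ```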
    Could you give me some insight on this? @gatorsmile

