Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/04/12 20:18:30 UTC

[GitHub] [spark] rdblue edited a comment on issue #24129: [SPARK-27190][SQL] add table capability for streaming

URL: https://github.com/apache/spark/pull/24129#issuecomment-482707941
 
 
   The check you linked to is done after the plan is analyzed because it is written as a rule that [transforms the `analyzedPlan`](https://github.com/apache/spark/blob/v2.4.1/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala#L83). I think that an analyzer rule would actually catch problems earlier, but only slightly.
   
   But the main point is not when this is caught. The point is to avoid rules and validations scattered throughout the codebase. Certainly, we should validate that the execution mode is compatible with the plan when the execution mode is determined. But we also need to check that the plan is internally consistent to the extent possible, because that consistency is what the analyzer is supposed to guarantee.
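   To illustrate the idea (this is a hypothetical sketch, not Spark's actual API or the rule discussed above): an analysis-time check walks the whole plan once and rejects inconsistencies before any execution path is chosen, instead of each execution mode re-validating the plan on its own. The `Plan`, `Scan`, and capability names below are all made up for the example.

   ```scala
   // Hypothetical, self-contained sketch of an analysis-time check rule.
   // None of these types are Spark's; they only model the shape of the idea.
   sealed trait Plan { def children: Seq[Plan] }
   case class Scan(table: String, capabilities: Set[String]) extends Plan {
     def children: Seq[Plan] = Nil
   }
   case class Project(child: Plan) extends Plan {
     def children: Seq[Plan] = Seq(child)
   }

   // Walk the analyzed plan once; fail fast if any source lacks the
   // capability the query requires, regardless of which execution mode
   // is eventually selected.
   def checkStreamingCapabilities(plan: Plan): Either[String, Unit] = {
     def scans(p: Plan): Seq[Scan] = p match {
       case s: Scan => Seq(s)
       case other   => other.children.flatMap(scans)
     }
     scans(plan).find(s => !s.capabilities.contains("MICRO_BATCH_READ")) match {
       case Some(s) => Left(s"Table ${s.table} does not support micro-batch reads")
       case None    => Right(())
     }
   }
   ```

   Running the check against a consistent plan succeeds, while a plan containing a source without the capability is rejected at analysis time, before execution begins.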

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org