Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/08/01 13:00:09 UTC

[GitHub] [spark] maryannxue commented on issue #25308: [SPARK-28576][SQL] fix the dead lock issue when enable new adaptive execution

URL: https://github.com/apache/spark/pull/25308#issuecomment-517277006
 
 
   @JkSelf As @cloud-fan explained, a subquery does not have its own execution id. The worst consequence of this bug is that, when updating the UI, the `QueryExecution` object a subquery gets is actually that of the main query, which leads to a logical dead loop and can manifest as either the deadlock you observed or a stack overflow.
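   To make the failure mode concrete, here is a hypothetical toy model (plain Python, deliberately not actual Spark internals; the registry and `update_ui` function are invented for illustration) of how an id-sharing lookup turns into a dead loop:

```python
# Toy model of the bug: the subquery never registers its own
# execution id, so a lookup by id always resolves to the MAIN
# query, and walking the main query's subqueries re-enters the
# same lookup again and again.

registry = {}  # execution id -> query; the subquery has no entry of its own

def update_ui(execution_id, seen=frozenset()):
    # BUG modeled here: the subquery falls back to the parent's
    # execution id, so this lookup returns the main query again.
    query = registry[execution_id]
    if query["name"] in seen:
        # Guard added only so the toy model terminates; without it,
        # the recursion below never ends (stack overflow).
        return [f"dead loop detected at {query['name']}"]
    visited = [query["name"]]
    for _sub in query["subqueries"]:
        visited += update_ui(execution_id, seen | {query["name"]})
    return visited

registry[1] = {
    "name": "main",
    "subqueries": [{"name": "subquery", "subqueries": []}],
}
print(update_ui(1))  # never reaches "subquery"; loops back to "main"
```

   The point of the sketch is that the subquery's UI update never sees the subquery itself: every lookup bounces back to the main query, which is the logical dead loop described above.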
   
   To your other question: 
   > Then only when all the 4 newStages done (currently the subquery thread is not done?), the createQueryStages method is called again
   
   That is generally not the case: replanning happens as soon as any one stage finishes, not after all of them do.
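   A minimal sketch of that scheduling behavior (a toy simulation, not Spark's actual `createQueryStages` loop; stage names and finish times are made up):

```python
# Toy simulation: stages finish at given times, and a replan event
# fires at each INDIVIDUAL stage completion, rather than waiting for
# the whole batch of in-flight stages to finish.
import heapq

def adaptive_loop(stage_finish_times):
    """stage_finish_times: dict of stage name -> finish time.
    Returns the replan events in the order they would fire."""
    heap = [(t, name) for name, t in stage_finish_times.items()]
    heapq.heapify(heap)
    replans = []
    while heap:
        t, name = heapq.heappop(heap)  # earliest completion wakes the planner
        replans.append((t, f"replan triggered by {name}"))
    return replans

print(adaptive_loop({"s1": 3, "s2": 1, "s3": 2}))
# three replan events, one per completion, starting with the first
# stage to finish (s2), not one event after all three are done
```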
   
   Anyway, let me test the UI and get back to you.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org