Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/16 00:11:32 UTC

[GitHub] [spark] erenavsarogullari opened a new pull request #35536: [SPARK-38222][SQL] Expose Node Description attribute in SQL Rest API

erenavsarogullari opened a new pull request #35536:
URL: https://github.com/apache/spark/pull/35536


   ### What changes were proposed in this pull request?
   Currently, the public SQL REST API does not expose the `node description`, which is useful for getting more detail at the query level, for example:
   - Join operators (BHJ, SMJ, SHJ): correlating the join operator with its join type and, for BHJ, which side is built.
   - HashAggregate: the aggregate keys and aggregate functions.
   - The list can be extended for other physical operators.
   Current Sample JSON Result:
   ```
   {
     "nodeId" : 14,
     "nodeName" : "BroadcastHashJoin",
     "wholeStageCodegenId" : 3,
     "stageIds" : [ 5 ],
     "metrics" : [ {
       "name" : "number of output rows",
       "value" : {
         "amount" : "2"
       }
     } ]
   },
   ...
   {
     "nodeId" : 8,
     "nodeName" : "HashAggregate",
     "wholeStageCodegenId" : 4,
     "stageIds" : [ 8 ],
     "metrics" : [ {
       "name" : "spill size",
       "value" : {
         "amount" : "0.0"
       }
     } ]
   }
   ```
   New Sample JSON Result:
   ```
   {
     "nodeId" : 14,
     "nodeName" : "BroadcastHashJoin",
     "nodeDesc" : "BroadcastHashJoin [id#4], [id#24], Inner, BuildLeft, false",
     "wholeStageCodegenId" : 3,
     "stageIds" : [ 5 ],
     "metrics" : [ {
       "name" : "number of output rows",
       "value" : {
         "amount" : "2"
       }
     } ]
   },
   ...
   {
     "nodeId" : 8,
     "nodeName" : "HashAggregate",
     "nodeDesc" : "HashAggregate(keys=[name#5, age#6, salary#18], functions=[avg(cast(age#6 as bigint)), avg(salary#18)])",
     "wholeStageCodegenId" : 4,
     "stageIds" : [ 8 ],
     "metrics" : [ {
       "name" : "spill size",
       "value" : {
         "amount" : "0.0"
       }
     } ]
   }
   ```
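   
   For reference, here is a minimal client-side sketch of how the new field could be consumed. It is a sketch only: it assumes the standard monitoring endpoint `/api/v1/applications/{app-id}/sql/{execution-id}` on a local Spark UI at port 4040, and the application/execution ids below are placeholders.
   ```
   # Sketch: read the proposed "nodeDesc" field from the SQL REST API.
   # Assumes a local Spark UI at http://localhost:4040; adjust host and ids as needed.
   import json
   from urllib.request import urlopen

   BASE = "http://localhost:4040/api/v1"

   def sql_nodes(app_id, execution_id):
       """Yield (nodeName, nodeDesc) pairs for one SQL execution."""
       url = f"{BASE}/applications/{app_id}/sql/{execution_id}?details=true"
       with urlopen(url) as resp:
           execution = json.load(resp)
       for node in execution.get("nodes", []):
           # "nodeDesc" is the attribute proposed by this PR; fall back to the
           # node name when the field is not present (older Spark versions).
           yield node["nodeName"], node.get("nodeDesc", node["nodeName"])

   if __name__ == "__main__":
       for name, desc in sql_nodes("app-20220216001132-0000", 0):  # placeholder ids
           print(f"{name}: {desc}")
   ```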
   
   ### Why are the changes needed?
   It is useful to have more detail at the query level, such as the join type, which side is built for BHJ, the aggregate keys, the aggregate functions, etc.
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   Add more coverage to existing UTs
   


[GitHub] [spark] srowen commented on pull request #35536: [SPARK-38222][SQL] Expose Node Description attribute in SQL Rest API

Posted by GitBox <gi...@apache.org>.
srowen commented on pull request #35536:
URL: https://github.com/apache/spark/pull/35536#issuecomment-1041937271


   I don't have enough context to really have an opinion. While it seems reasonable, I wonder in general whether it makes sense to return the description all the time (this would return the description in many responses, right?), where it's just overhead in a lot of cases.



[GitHub] [spark] erenavsarogullari commented on pull request #35536: [SPARK-38222][SQL] Expose Node Description attribute in SQL Rest API

Posted by GitBox <gi...@apache.org>.
erenavsarogullari commented on pull request #35536:
URL: https://github.com/apache/spark/pull/35536#issuecomment-1042392854


   Thanks @srowen for the quick feedback. Please find some samples per operator below. I have also attached the whole SQL response to the Jira to give a clearer view for future reference:
   https://issues.apache.org/jira/secure/attachment/13040136/Spark_SQL_REST_Result_with-nodeDesc
   
   I am thinking about the following options (a small sketch of how they could interact follows this list); please feel free to extend them:
   - We can exclude `WholeStageCodegen` nodes, and any node where `nodeName == nodeDesc`, for all use cases.
   - **Option 1:** We can expose a SQLConf to include white-listed operators. With this option, the user can define the operator white-list. (We can also define a default white-list that the user can extend with this SQLConf.) For example:
   ```
   spark.sql.rest.api.include.operators.from.desc=SortMergeJoin,HashAggregate,Project
   ```
   - **Option 2:** We can also expose another SQLConf for when the end user wants to see the `nodeDesc` of all operators. For example:
   ```
   spark.sql.rest.api.enable.all.operators.desc=false(default)
   ```
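   
   To make the interaction between the exclusion rule and the two options concrete, here is a small illustrative sketch (plain Python, not Spark code; the config keys are just the ones proposed above and do not exist yet):
   ```
   # Illustration of the proposal only: decide which description to expose for a node.
   DEFAULT_CONFS = {
       "spark.sql.rest.api.include.operators.from.desc": "SortMergeJoin,HashAggregate,Project",
       "spark.sql.rest.api.enable.all.operators.desc": "false",
   }

   def node_desc_or_none(node_name, node_desc, confs=DEFAULT_CONFS):
       """Return the description to expose, or None to omit it from the response."""
       # Exclusion rule: skip WholeStageCodegen wrappers and nodes whose
       # description adds nothing beyond the name.
       if node_name.startswith("WholeStageCodegen") or node_name == node_desc:
           return None
       # Option 2: expose every remaining operator's description.
       if confs.get("spark.sql.rest.api.enable.all.operators.desc", "false") == "true":
           return node_desc
       # Option 1: expose only white-listed operators.
       allowed = {op.strip() for op in
                  confs.get("spark.sql.rest.api.include.operators.from.desc", "").split(",")}
       return node_desc if node_name in allowed else None

   print(node_desc_or_none("SortMergeJoin", "SortMergeJoin [id#4], [id#24], Inner"))  # included
   print(node_desc_or_none("Project", "Project"))  # None: name and description are identical
   ```
   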
   **Sample Node/Operator Descriptions:**
   ```
   "nodeName" : "SerializeFromObject",
   "nodeDesc" : "SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.status.api.v1.sql.Salary, true])).personId AS personId#17, knownnotnull(assertnotnull(input[0, org.apache.spark.status.api.v1.sql.Salary, true])).salary AS salary#18]",
   
   "nodeName" : "Project",
   "nodeDesc" : "Project [personId#17 AS id#24, salary#18]",
   
   "nodeName" : "Exchange",
   "nodeDesc" : "Exchange hashpartitioning(name#5, age#6, salary#18, 5), ENSURE_REQUIREMENTS, [id=#87]",
       
   "nodeName" : "Sort",
   "nodeDesc" : "Sort [id#24 ASC NULLS FIRST], false, 0",
       
   "nodeName" : "Scan",
   "nodeDesc" : "Scan[obj#3]",
       
   "nodeName" : "SortMergeJoin",
   "nodeDesc" : "SortMergeJoin [id#4], [id#24], Inner",
       
   "nodeName" : "HashAggregate",
   "nodeDesc" : "HashAggregate(keys=[name#5, age#6, knownfloatingpointnormalized(normalizenanandzero(salary#18)) AS salary#18], functions=[partial_avg(age#6), partial_avg(salary#18)])",
           
   "nodeName" : "Filter",
   "nodeDesc" : "Filter org.apache.spark.status.api.v1.sql.SqlResourceWithActualMetricsSuite$$Lambda$1666/2062184524@72c9ebfa.apply",
   
   // Following ones need exclusion
   "nodeName" : "WholeStageCodegen (3)",
   "nodeDesc" : "WholeStageCodegen (3)",
   
   "nodeName" : "Project",
   "nodeDesc" : "Project",
   ```



[GitHub] [spark] erenavsarogullari commented on pull request #35536: [SPARK-38222][SQL] Expose Node Description attribute in SQL Rest API

Posted by GitBox <gi...@apache.org>.
erenavsarogullari commented on pull request #35536:
URL: https://github.com/apache/spark/pull/35536#issuecomment-1041910719


   cc @gengliangwang @srowen 



[GitHub] [spark] srowen commented on pull request #35536: [SPARK-38222][SQL] Expose Node Description attribute in SQL Rest API

Posted by GitBox <gi...@apache.org>.
srowen commented on pull request #35536:
URL: https://github.com/apache/spark/pull/35536#issuecomment-1042412795


   No to new configs, IMHO. Is this info not available from some other API that tells you about nodes, rather than returning it as a side effect of other calls? Maybe I misunderstand.

