Posted to reviews@spark.apache.org by "ryan-johnson-databricks (via GitHub)" <gi...@apache.org> on 2023/03/07 18:06:58 UTC

[GitHub] [spark] ryan-johnson-databricks opened a new pull request, #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

ryan-johnson-databricks opened a new pull request, #40321:
URL: https://github.com/apache/spark/pull/40321

   
   
   ### What changes were proposed in this pull request?
   
   The `AddMetadataColumns` analyzer rule is designed to resolve metadata columns using `LogicalPlan.metadataOutput` -- even if the plan already contains projections that did not specifically mention the metadata column.
   
   Meanwhile, the `SubqueryAlias` plan node intentionally does _NOT_ propagate metadata columns automatically from a non-leaf/non-subquery child node, because the following should _NOT_ work:
   ```scala
   spark.read.table("t").select("a", "b").as("s").select("_metadata")
   ```
   
   However, the current implementation is too strict: it breaks the metadata chain even when the child node's output already includes the metadata column:
   ```scala
   // expected to work (and does)
   spark.read.table("t")
     .select("a", "b").select("_metadata")
   
   // by extension, this should also work (but does not)
   spark.read.table("t").select("a", "b", "_metadata").as("s")
     .select("a", "b").select("_metadata")
   ```
   
   The solution is for `SubqueryAlias` to propagate metadata columns that are already in the child's output, thus preserving the `metadataOutput` chain for that column.
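   
   Concretely, the fix adjusts `SubqueryAlias.metadataOutput` roughly as follows (a sketch of the change; the qualifier handling is unchanged from the existing implementation):
   ```scala
   override def metadataOutput: Seq[Attribute] = {
     // Keep propagating non-hidden metadata columns from leaf nodes through
     // chains of `SubqueryAlias`, as before...
     val keepChildMetadata = child.isInstanceOf[LeafNode] || child.isInstanceOf[SubqueryAlias]
     val qualifierList = identifier.qualifier :+ alias
     child.metadataOutput
       // ...but also keep any metadata column the child already outputs explicitly.
       .filter { a => child.outputSet.contains(a) || (keepChildMetadata && !a.qualifiedAccessOnly) }
       .map(_.withQualifier(qualifierList))
   }
   ```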
   
   ### Why are the changes needed?
   
   The current implementation of `SubqueryAlias` breaks the intended behavior of metadata column propagation. 
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes. The following now works, where previously it did not:
   ```scala
   spark.read.table("t").select("a", "b", "_metadata").as("s")
     .select("a", "b").select("_metadata")
   ```
   
   ### How was this patch tested?
   
   New unit tests verify the expected behavior holds, with and without subqueries in the plan.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] cloud-fan commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131939457


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   > but it's not available because the SubqueryAlias blocked it, this rule kept endlessly (re)appending the metadata column
   
   Makes sense. So this change is a safeguard.




[GitHub] [spark] github-actions[bot] commented on pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #40321:
URL: https://github.com/apache/spark/pull/40321#issuecomment-1601849947

   We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
   If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!



[GitHub] [spark] ryan-johnson-databricks commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "ryan-johnson-databricks (via GitHub)" <gi...@apache.org>.
ryan-johnson-databricks commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131931758


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   I hit a weird endless loop with this while debugging this `SubqueryAlias` issue. Basically, if the plan root already has a metadata attribute, but it's not available because the `SubqueryAlias` blocked it, this rule kept endlessly (re)appending the metadata column to the projections below the `SubqueryAlias`. Once the rule ran 100 times (leaving 100 copies of `_metadata` in the `Project` output), the endless loop detector kicked in and killed it.
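   
   One way to guard against that endless re-appending is to inject only metadata columns that are both required and not already present in the project list, along the lines of this hypothetical sketch (`p.copy(...)` stands in for however the rule actually rebuilds the `Project`):
   ```scala
   case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
     // Only append metadata columns that were requested and are not already
     // in the project list, so repeated runs of the rule are no-ops.
     val missingMetadata = p.metadataOutput
       .filter(a => requiredAttrIds.contains(a.exprId))
       .filterNot(a => p.projectList.exists(_.exprId == a.exprId))
     p.copy(projectList = p.projectList ++ missingMetadata)
   ```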




[GitHub] [spark] cloud-fan commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131938647


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   yup




[GitHub] [spark] ryan-johnson-databricks commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "ryan-johnson-databricks (via GitHub)" <gi...@apache.org>.
ryan-johnson-databricks commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131931758


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   I hit a weird endless loop with this while debugging this `SubqueryAlias` issue. Basically, if the plan root already has a metadata attribute (perhaps added manually by a query rewrite), but it's not available because the `SubqueryAlias` blocked it, this rule kept endlessly (re)appending the metadata column to the projections below the `SubqueryAlias`. Once the rule ran 100 times (leaving 100 copies of `_metadata` in the `Project` output), the endless loop detector kicked in and killed it.
   
   I don't think filtering by `inputAttrs` helps, when the problem is what's already in the `output` we're appending to?




[GitHub] [spark] cloud-fan commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131869082


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   do we hit a real issue with this? If the metadata col is already in the output, this code path should not be triggered as we have `val metaCols = getMetadataAttributes(node).filterNot(inputAttrs.contains)`
   
   What we can improve is to only include metadata columns that are included in `requiredAttrIds`.




[GitHub] [spark] cloud-fan commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131870627


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -281,6 +281,53 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
     )
   }
 
+  metadataColumnsTest("metadata propagates through projections automatically",

Review Comment:
   This change is for the general metadata col framework, not file source metadata columns, we can add the tests in `MetadataColumnSuite`





[GitHub] [spark] olaky commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "olaky (via GitHub)" <gi...@apache.org>.
olaky commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1134507196


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -281,6 +281,53 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
     )
   }
 
+  metadataColumnsTest("metadata propagates through projections automatically",
+    schema) { (df, f0, f1) =>
+
+    checkAnswer(
+      df.select("name")
+        .withColumn("m", col("_metadata").getField("file_name")),
+      Seq(
+        Row("jack", f0(METADATA_FILE_NAME)),
+        Row("lily", f1(METADATA_FILE_NAME))
+      )
+    )
+  }
+
+  metadataColumnsTest("metadata propagates through subqueries only manually",
+    schema) { (df, f0, f1) =>
+
+    // Metadata columns do not automatically propagate through subqueries.
+    checkError(
+      exception = intercept[AnalysisException] {
+        df.select("name").as("s").select("name")
+          .withColumn("m", col("_metadata").getField("file_name"))
+      },
+      errorClass = "UNRESOLVED_COLUMN.WITH_SUGGESTION",
+      parameters = Map("objectName" -> "`_metadata`", "proposal" -> "`s`.`name`"))
+
+    // A metadata column manually propagated through a subquery is available.
+    checkAnswer(
+      df.select("name", "_metadata").as("s").select("name")
+        .withColumn("m", col("_metadata").getField("file_name")),
+      Seq(
+        Row("jack", f0(METADATA_FILE_NAME)),
+        Row("lily", f1(METADATA_FILE_NAME))
+      )
+    )
+
+    // A metadata column manually propagated multiple subqueries is available.
+    checkAnswer(
+      df.select("name", "_metadata").as("s")
+        .select("name", "_metadata").as("t").select("name")
+        .withColumn("m", col("_metadata").getField("file_name")),
+      Seq(
+        Row("jack", f0(METADATA_FILE_NAME)),
+        Row("lily", f1(METADATA_FILE_NAME))
+      )
+    )
+  }
+

Review Comment:
   Maybe one case where this is a node between the subquery and the leaf? So maybe
   
   df.select("name", "_metadata").as("s").filter(...).select("name", "_metadata").as("t").select("name")



##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala:
##########
@@ -1685,14 +1685,13 @@ case class SubqueryAlias(
   }
 
   override def metadataOutput: Seq[Attribute] = {
-    // Propagate metadata columns from leaf nodes through a chain of `SubqueryAlias`.
-    if (child.isInstanceOf[LeafNode] || child.isInstanceOf[SubqueryAlias]) {
-      val qualifierList = identifier.qualifier :+ alias
-      val nonHiddenMetadataOutput = child.metadataOutput.filter(!_.qualifiedAccessOnly)
-      nonHiddenMetadataOutput.map(_.withQualifier(qualifierList))
-    } else {
-      Nil
-    }
+    // Propagate any metadata column that is already in the child's output. Also propagate
+    // non-hidden metadata columns from leaf nodes through a chain of `SubqueryAlias`.
+    val keepChildMetadata = child.isInstanceOf[LeafNode] || child.isInstanceOf[SubqueryAlias]
+    val qualifierList = identifier.qualifier :+ alias
+    child.metadataOutput
+      .filter { a => child.outputSet.contains(a) || (keepChildMetadata && !a.qualifiedAccessOnly) }

Review Comment:
   Thinking out loud: Is it possible that not the attribute, but for example an attribute reference to the attribute is part of the outputSet?



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -281,6 +281,53 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
     )
   }
 
+  metadataColumnsTest("metadata propagates through projections automatically",
+    schema) { (df, f0, f1) =>
+
+    checkAnswer(
+      df.select("name")
+        .withColumn("m", col("_metadata").getField("file_name")),
+      Seq(
+        Row("jack", f0(METADATA_FILE_NAME)),
+        Row("lily", f1(METADATA_FILE_NAME))
+      )
+    )
+  }
+
+  metadataColumnsTest("metadata propagates through subqueries only manually",
+    schema) { (df, f0, f1) =>
+
+    // Metadata columns do not automatically propagate through subqueries.
+    checkError(
+      exception = intercept[AnalysisException] {
+        df.select("name").as("s").select("name")
+          .withColumn("m", col("_metadata").getField("file_name"))
+      },
+      errorClass = "UNRESOLVED_COLUMN.WITH_SUGGESTION",
+      parameters = Map("objectName" -> "`_metadata`", "proposal" -> "`s`.`name`"))
+
+    // A metadata column manually propagated through a subquery is available.
+    checkAnswer(
+      df.select("name", "_metadata").as("s").select("name")
+        .withColumn("m", col("_metadata").getField("file_name")),
+      Seq(
+        Row("jack", f0(METADATA_FILE_NAME)),
+        Row("lily", f1(METADATA_FILE_NAME))
+      )
+    )
+
+    // A metadata column manually propagated multiple subqueries is available.

Review Comment:
   ```suggestion
       // A metadata column manually propagated through multiple subqueries is available.
   ```




[GitHub] [spark] github-actions[bot] closed pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] closed pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs
URL: https://github.com/apache/spark/pull/40321



[GitHub] [spark] ryan-johnson-databricks commented on pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "ryan-johnson-databricks (via GitHub)" <gi...@apache.org>.
ryan-johnson-databricks commented on PR #40321:
URL: https://github.com/apache/spark/pull/40321#issuecomment-1459045889

   Something went wrong with [Run spark on kubernetes integration test](https://github.com/ryan-johnson-databricks/spark/actions/runs/4358877500/jobs/7620022040):
   ```
   [info] *** Test still running after 2 minutes, 57 seconds: suite name: KubernetesSuite, test name: Test decommissioning with dynamic allocation & shuffle cleanups. 
   [info] - Test decommissioning with dynamic allocation & shuffle cleanups (3 minutes, 3 seconds)
   [info] - Test decommissioning timeouts (1 minute)
   [info] - SPARK-37576: Rolling decommissioning (1 minute, 11 seconds)
   [info] org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite *** ABORTED *** (25 minutes, 32 seconds)
   [info]   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.49.2:8443/api/v1/namespaces. Message: object is being deleted: namespaces "spark-6bff7607e9884740a4bac53b1fb655ae" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=namespaces, name=spark-6bff7607e9884740a4bac53b1fb655ae, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=object is being deleted: namespaces "spark-6bff7607e9884740a4bac53b1fb655ae" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
   [info]   at io.fabric8.kubernetes.client.KubernetesClientException.copyAsCause(KubernetesClientException.java:238)
   [info]   at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:538)
      ...
   [info]   at io.fabric8.kubernetes.client.dsl.internal.CreateOnlyResourceOperation.create(CreateOnlyResourceOperation.java:42)
   [info]   at org.apache.spark.deploy.k8s.integrationtest.KubernetesTestComponents.createNamespace(KubernetesTestComponents.scala:51)
   [info]   at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.setUpTest(KubernetesSuite.scala:202)
      ...
   [info]   at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.runTest(KubernetesSuite.scala:45)
      ...
   [info]   Cause: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.49.2:8443/api/v1/namespaces. Message: object is being deleted: namespaces "spark-6bff7607e9884740a4bac53b1fb655ae" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=namespaces, name=spark-6bff7607e9884740a4bac53b1fb655ae, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=object is being deleted: namespaces "spark-6bff7607e9884740a4bac53b1fb655ae" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
   ```
   (not sure how that could be related to this PR?)



[GitHub] [spark] ryan-johnson-databricks commented on a diff in pull request #40321: [SPARK-42704] SubqueryAlias propagates metadata columns that child outputs

Posted by "ryan-johnson-databricks (via GitHub)" <gi...@apache.org>.
ryan-johnson-databricks commented on code in PR #40321:
URL: https://github.com/apache/spark/pull/40321#discussion_r1131932553


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -1033,9 +1033,12 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         requiredAttrIds.contains(a.exprId)) =>
         s.withMetadataColumns()
       case p: Project if p.metadataOutput.exists(a => requiredAttrIds.contains(a.exprId)) =>
+        // Inject the requested metadata columns into the project's output, if not already present.

Review Comment:
   Re the "only include" comment, do you mean something like this?
   ```scala
   val missingMetadata = p.metadataOutput
     .filter(a => requiredAttrIds.contains(a.exprId))
     .filterNot(a => p.projectList.exists(_.exprId == a.exprId))
   ```


