Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2023/01/18 10:33:38 UTC

[GitHub] [iceberg] alex2308 opened a new issue, #6615: Merge into does not work with spark temp table

alex2308 opened a new issue, #6615:
URL: https://github.com/apache/iceberg/issues/6615

   ### Apache Iceberg version
   
   1.1.0 (latest release)
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   I'm trying to update an existing table with a MERGE INTO operation, using a local dataframe as the source for my updates. When I do that I get the error "requirement failed". Any idea how to fix this?
   
   Here is the merge statement:
   ```sql
   MERGE INTO iceberg_catalog.dummy.affinity_scores_iceberg a
       USING affinity
       ON a.mediauser_marketingid = affinity.mediauser_marketingid
           AND a.touchpoint_touchpointbrand = affinity.touchpoint_touchpointbrand
           AND a.affinity_subject = affinity.affinity_subject
       WHEN MATCHED AND a.affinity_score <> affinity.affinity_score THEN UPDATE SET a.affinity_score = affinity.affinity_score
       WHEN NOT MATCHED THEN INSERT *
   ```
   
   Where `affinity` is a dataframe that is registered as a temp view with the following line:
   `affinity_scores_latest.createTempView("affinity")`
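   
   For context, here is a minimal, self-contained sketch of this pattern (the catalog config, schema, and sample row below are illustrative assumptions rather than the reporter's actual setup, and the target Iceberg table is assumed to already exist):
   
   ```python
   # Minimal sketch of the reported pattern: register a local dataframe as a
   # temp view, then use it as the MERGE source. Assumes the matching
   # iceberg-spark-runtime jar is on the classpath.
   from pyspark.sql import SparkSession
   
   spark = (
       SparkSession.builder
       .config("spark.sql.extensions",
               "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
       .config("spark.sql.catalog.iceberg_catalog",
               "org.apache.iceberg.spark.SparkCatalog")
       .config("spark.sql.catalog.iceberg_catalog.type", "hadoop")
       .config("spark.sql.catalog.iceberg_catalog.warehouse", "/tmp/warehouse")
       .getOrCreate()
   )
   
   # Source rows held in a local dataframe, exposed to SQL as a temp view.
   affinity_scores_latest = spark.createDataFrame(
       [("u1", "brandA", "sports", 0.9)],
       ["mediauser_marketingid", "touchpoint_touchpointbrand",
        "affinity_subject", "affinity_score"],
   )
   affinity_scores_latest.createTempView("affinity")
   
   # The statement that fails with "requirement failed" for the reporter.
   spark.sql("""
       MERGE INTO iceberg_catalog.dummy.affinity_scores_iceberg a
       USING affinity
       ON a.mediauser_marketingid = affinity.mediauser_marketingid
       WHEN MATCHED THEN UPDATE SET a.affinity_score = affinity.affinity_score
       WHEN NOT MATCHED THEN INSERT *
   """)
   ```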




[GitHub] [iceberg] tenstriker commented on issue #6615: Merge into does not work with spark temp table

Posted by "tenstriker (via GitHub)" <gi...@apache.org>.
tenstriker commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1644606159

   I'm getting the same error. Here's the exception; `updates` is the temp view created using `df.createOrReplaceTempView("updates")`:
   
   ```
   pyspark.sql.utils.AnalysisException: Table or view not found: updates; line 3 pos 14;
   'MergeIntoTable ('dest.id = 'source.id), [insertstaraction(None)]
   :- SubqueryAlias dest
   :  +- SubqueryAlias iceberg_catalog.data_platform_silver.player_events_ado_2
   :     +- RelationV2[event#81, id#82, date#83, tenant#84, id_field#85, schema_version#86] iceberg_catalog.data_platform_silver.player_events_ado_2
   +- 'SubqueryAlias source
      +- 'UnresolvedRelation [updates], [], false
   ```
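   
   The `'UnresolvedRelation [updates]'` node in that plan means the analyzer never found the view at all, which typically happens when the view was registered on a different SparkSession than the one executing the SQL (or the SQL runs before the registration). A quick sanity check, sketched below (assumes Spark 3.3+ for `tableExists`; `df` and `spark` are the dataframe and session from the comment above):
   
   ```python
   # Temp views are scoped to the SparkSession that created them. Verify the
   # view is visible to the session that will run the MERGE before executing.
   df.createOrReplaceTempView("updates")
   
   assert spark.catalog.tableExists("updates"), "temp view not visible here"
   
   # Listing temp views also helps when a notebook juggles several sessions.
   for t in spark.catalog.listTables():
       if t.isTemporary:
           print(t.name)
   ```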




[GitHub] [iceberg] RussellSpitzer commented on issue #6615: Merge into does not work with spark temp table

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1397252089

   Looks like Python is swallowing the error there; there may be an error in the JVM log or the Spark UI that actually has the real info in it.
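   
   One way to keep PySpark from hiding the JVM-side trace is the standard `spark.sql.pyspark.jvmStacktrace.enabled` conf (a sketch; `merge_sql` below is a stand-in for the MERGE statement in question):
   
   ```python
   # Surface the full JVM stack trace in PySpark exceptions instead of the
   # one-line "requirement failed" summary.
   spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", "true")
   
   try:
       spark.sql(merge_sql)  # merge_sql: the failing MERGE INTO statement
   except Exception as e:
       print(e)  # with the conf enabled, this now includes the JVM trace
   ```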




[GitHub] [iceberg] alex2308 commented on issue #6615: Merge into does not work with spark temp table

Posted by GitBox <gi...@apache.org>.
alex2308 commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1396778042

   Here is the full error I receive in my notebook when running the MERGE INTO:
   
   ```
   An error was encountered:
   requirement failed
   Traceback (most recent call last):
     File "/mnt/yarn/usercache/livy/appcache/application_1674031569861_0002/container_1674031569861_0002_01_000001/pyspark.zip/pyspark/sql/session.py", line 1034, in sql
       return DataFrame(self._jsparkSession.sql(sqlQuery), self)
     File "/mnt/yarn/usercache/livy/appcache/application_1674031569861_0002/container_1674031569861_0002_01_000001/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1322, in __call__
       answer, self.gateway_client, self.target_id, self.name)
     File "/mnt/yarn/usercache/livy/appcache/application_1674031569861_0002/container_1674031569861_0002_01_000001/pyspark.zip/pyspark/sql/utils.py", line 196, in deco
       raise converted from None
   pyspark.sql.utils.IllegalArgumentException: requirement failed
   ```




[GitHub] [iceberg] tenstriker commented on issue #6615: Merge into does not work with spark temp table

Posted by "tenstriker (via GitHub)" <gi...@apache.org>.
tenstriker commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1644620327

   Does anyone have a workaround?




[GitHub] [iceberg] jhchee commented on issue #6615: Merge into does not work with spark temp table

Posted by "jhchee (via GitHub)" <gi...@apache.org>.
jhchee commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1575452202

   @RussellSpitzer I'm able to reproduce this issue when creating an SCD type 2 table. If you have time, take a look here: https://github.com/apache/iceberg/issues/7766




[GitHub] [iceberg] mothukur commented on issue #6615: Merge into does not work with spark temp table

Posted by "mothukur (via GitHub)" <gi...@apache.org>.
mothukur commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1423710865

   I have faced the same error while testing the MERGE INTO operation. Any suggestions?
   ```
   Caused by: java.lang.IllegalArgumentException: requirement failed
   	at scala.Predef$.require(Predef.scala:268)
   	at org.apache.spark.sql.catalyst.plans.logical.View.<init>(basicLogicalOperators.scala:569)
   	at org.apache.spark.sql.catalyst.plans.logical.View.copy(basicLogicalOperators.scala:568)
   	at org.apache.spark.sql.catalyst.plans.logical.View.withNewChildInternal(basicLogicalOperators.scala:604)
   	at org.apache.spark.sql.catalyst.plans.logical.View.withNewChildInternal(basicLogicalOperators.scala:565)
   	at org.apache.spark.sql.catalyst.trees.UnaryLike.withNewChildrenInternal(TreeNode.scala:1242)
   	at org.apache.spark.sql.catalyst.trees.UnaryLike.withNewChildrenInternal$(TreeNode.scala:1240)
   	at org.apache.spark.sql.catalyst.plans.logical.View.withNewChildrenInternal(basicLogicalOperators.scala:565)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$withNewChildren$2(TreeNode.scala:462)
   	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.withNewChildren(TreeNode.scala:461)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:975)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.$anonfun$addMetadataCol$1(Analyzer.scala:975)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
   	at scala.collection.Iterator.foreach(Iterator.scala:943)
   	at scala.collection.Iterator.foreach$(Iterator.scala:943)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:975)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.$anonfun$addMetadataCol$1(Analyzer.scala:975)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
   	at scala.collection.Iterator.foreach(Iterator.scala:943)
   	at scala.collection.Iterator.foreach$(Iterator.scala:943)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:975)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:972)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.$anonfun$addMetadataCol$1(Analyzer.scala:975)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
   	at scala.collection.Iterator.foreach(Iterator.scala:943)
   	at scala.collection.Iterator.foreach$(Iterator.scala:943)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:975)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.$anonfun$addMetadataCol$1(Analyzer.scala:975)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
   	at scala.collection.Iterator.foreach(Iterator.scala:943)
   	at scala.collection.Iterator.foreach$(Iterator.scala:943)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.org$apache$spark$sql$catalyst$analysis$Analyzer$AddMetadataColumns$$addMetadataCol(Analyzer.scala:975)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$$anonfun$apply$12.applyOrElse(Analyzer.scala:933)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$$anonfun$apply$12.applyOrElse(Analyzer.scala:925)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:170)
   	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:170)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
   	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1228)
   	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1227)
   	at org.apache.spark.sql.catalyst.plans.logical.ReplaceIcebergData.mapChildren(ReplaceIcebergData.scala:32)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
   	at scala.collection.immutable.List.map(List.scala:297)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:698)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.apply(Analyzer.scala:925)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer$AddMetadataColumns$.apply(Analyzer.scala:920)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:211)
   	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
   	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
   	at scala.collection.immutable.List.foldLeft(List.scala:91)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:208)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:200)
   	at scala.collection.immutable.List.foreach(List.scala:431)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:200)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:227)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:223)
   	at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:172)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:223)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:187)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:179)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
   	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:179)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:208)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
   	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:207)
   	at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
   	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
   	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
   	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
   	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
   	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
   	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
   	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
   	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
   	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
   ```
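   
   For what it's worth, this trace pins the failure to a `require` in the `View` operator's constructor while the analyzer's `AddMetadataColumns` rule rewrites the view's children, i.e. it is the temp-view plan node itself that trips the check. A workaround sketch under that assumption (the table names are illustrative, not from this thread): stage the source rows as a real table so the MERGE never has to resolve a `View` node.
   
   ```python
   # Workaround sketch (assumption: the failure is tied to the temp-view plan
   # node). Stage the source dataframe as a real Iceberg table so the MERGE
   # resolves a plain table relation instead of a View operator.
   df.writeTo("iceberg_catalog.dummy.merge_source_staging").createOrReplace()
   
   spark.sql("""
       MERGE INTO iceberg_catalog.dummy.target_table t
       USING iceberg_catalog.dummy.merge_source_staging s
       ON t.id = s.id
       WHEN MATCHED THEN UPDATE SET *
       WHEN NOT MATCHED THEN INSERT *
   """)
   
   # Drop the staging table once the merge has committed.
   spark.sql("DROP TABLE IF EXISTS iceberg_catalog.dummy.merge_source_staging")
   ```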




[GitHub] [iceberg] RussellSpitzer commented on issue #6615: Merge into does not work with spark temp table

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1387375582

   Do you have the full error? There is nothing specific I can think of that would prohibit this.




[GitHub] [iceberg] sdaberdaku commented on issue #6615: Merge into does not work with spark temp table

Posted by "sdaberdaku (via GitHub)" <gi...@apache.org>.
sdaberdaku commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1401709057

   I faced the exact same error as @alex2308 and tried a couple of things. There were no useful logs in the Spark UI, so I downgraded the Spark runtime from 3.3.1 to 3.2.3 and switched from iceberg-spark-runtime-3.3_2.12:1.1.0 to iceberg-spark-runtime-3.2_2.12:1.1.0.
   After the downgrade, I saw the following error in the logs:
   `Caused by: java.lang.NoSuchMethodError: 'void software.amazon.awssdk.utils.IoUtils.closeQuietly(java.lang.AutoCloseable, software.amazon.awssdk.thirdparty.org.slf4j.Logger)'`
   This led me to bump the AWS SDK dependency from 2.17.257 to the latest version at the time, 2.19.21, which seems to have fixed the issue.
   I have not tried going back to Spark 3.3.1 yet.
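   
   For reference, a sketch of pinning those versions when building the session (the AWS SDK v2 `bundle` artifact is the standard all-in-one jar; treat the exact package set as an assumption about this setup rather than a confirmed fix):
   
   ```python
   # Sketch: pin the Iceberg runtime and a matching AWS SDK v2 bundle so the
   # NoSuchMethodError above cannot come from a stale SDK on the classpath.
   from pyspark.sql import SparkSession
   
   spark = (
       SparkSession.builder
       .config(
           "spark.jars.packages",
           ",".join([
               "org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:1.1.0",
               "software.amazon.awssdk:bundle:2.19.21",  # was 2.17.257
           ]),
       )
       .getOrCreate()
   )
   ```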




[GitHub] [iceberg] ThibauldC commented on issue #6615: Merge into does not work with spark temp table

Posted by "ThibauldC (via GitHub)" <gi...@apache.org>.
ThibauldC commented on issue #6615:
URL: https://github.com/apache/iceberg/issues/6615#issuecomment-1717634559

   Does anyone have a (temporary) solution for this? We are also facing issues with the `MERGE INTO` statement in combination with a temporary view. We had no issues in PySpark 3.2.1, but we hit this when trying to upgrade to PySpark 3.3. We tried 3.3.0 and 3.3.2, with the same result. We are unfortunately not able to upgrade to PySpark 3.4 (yet).
   
   Here is the piece of our code that is throwing the error:
   ```python
   dataframe.createOrReplaceTempView("example_subquery")
   spark.sql("""
         merge into test.test.example s
         using example_subquery d
         on s.id = d.id and s.lw = d.lw
         when matched then update set s.end_ts = d.end_ts,
                     s.current_flag = d.current_flag,
                     s.update_ts = current_timestamp()
        when not matched then  insert *
   """)
   ```
   

