Posted to commits@hudi.apache.org by "wulingqi (Jira)" <ji...@apache.org> on 2022/03/27 13:19:00 UTC

[jira] [Commented] (HUDI-3725) No primary key error when Spark reads a Flink Hudi table using the default uuid record key

    [ https://issues.apache.org/jira/browse/HUDI-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17512947#comment-17512947 ] 

wulingqi commented on HUDI-3725:
--------------------------------

{code:java}
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: There are no primary key in table `hudi_test`.`hudi_test_simple_uuid_rt`, cannot execute update operator
    at scala.Predef$.require(Predef.scala:281)
    at org.apache.spark.sql.hudi.catalog.ProvidesHoodieConfig.buildHoodieConfig(ProvidesHoodieConfig.scala:48)
    at org.apache.spark.sql.hudi.catalog.ProvidesHoodieConfig.buildHoodieConfig$(ProvidesHoodieConfig.scala:40)
    at org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis.buildHoodieConfig(HoodieSpark3Analysis.scala:44)
    at org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis$$anonfun$apply$1.applyOrElse(HoodieSpark3Analysis.scala:56)
    at org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis$$anonfun$apply$1.applyOrElse(HoodieSpark3Analysis.scala:47)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:170)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:170)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1128)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1127)
    at org.apache.spark.sql.catalyst.plans.logical.OrderPreservingUnaryNode.mapChildren(LogicalPlan.scala:206)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1128)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1127)
    at org.apache.spark.sql.catalyst.plans.logical.OrderPreservingUnaryNode.mapChildren(LogicalPlan.scala:206)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:160)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:159)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:30)
    at org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis.apply(HoodieSpark3Analysis.scala:47)
    at org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis.apply(HoodieSpark3Analysis.scala:44)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:211)
    at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
    at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
    at scala.collection.immutable.List.foldLeft(List.scala:89)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:208)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:200)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:200)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:215)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:209)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:172)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:179)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:179)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:193)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:192)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:196)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:196)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:86)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:78)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
    at org.example.HudiTest$.main(HudiTest.scala:132)
    at org.example.HudiTest.main(HudiTest.scala)
 {code}
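
The analysis fails in ProvidesHoodieConfig.buildHoodieConfig before the statement ever runs, because no record key field is recorded in hoodie.properties for the table. The exact statement at HudiTest.scala:132 is not shown; as a hedged sketch, a Spark SQL UPDATE of roughly this shape (table name taken from the error message, column names from the Flink DDL quoted below) hits the same check:

{code:sql}
-- Hypothetical repro: fails during analysis with
-- "There are no primary key in table ..., cannot execute update operator"
UPDATE hudi_test.hudi_test_simple_uuid_rt SET age = 21 WHERE uuid = 'id1';
{code}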

> No primary key error when Spark reads a Flink Hudi table using the default uuid record key
> ------------------------------------------------------------------------------------------
>
>                 Key: HUDI-3725
>                 URL: https://issues.apache.org/jira/browse/HUDI-3725
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: flink
>            Reporter: wulingqi
>            Priority: Minor
>
> Flink SQL like the following does not write the default record key uuid to hoodie.properties, so when Spark is later used to query or update the table, it throws a *...There are no primary key...* exception. A possible workaround is sketched after the DDL below.
> {code:java}
> CREATE TABLE t1(
>  uuid VARCHAR(20) ,
>  name VARCHAR(10),
>  age INT,
>  ts TIMESTAMP(3),
>  `partition` VARCHAR(20)
> )
> PARTITIONED BY (`partition`)
> WITH (
>  'connector' = 'hudi',
>  'path' = '${path}',
>  'table.type' = 'MERGE_ON_READ' -- this creates a MERGE_ON_READ table; the default is COPY_ON_WRITE
> ); {code}
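>
> A possible workaround until this is fixed (an untested sketch, not verified against this Hudi version): declare the record key explicitly in the Flink DDL so it is persisted, either via a PRIMARY KEY constraint or via the 'hoodie.datasource.write.recordkey.field' option:
> {code:sql}
> -- Same table as above, but with the record key declared explicitly
> CREATE TABLE t1(
>  uuid VARCHAR(20),
>  name VARCHAR(10),
>  age INT,
>  ts TIMESTAMP(3),
>  `partition` VARCHAR(20),
>  PRIMARY KEY (uuid) NOT ENFORCED
> )
> PARTITIONED BY (`partition`)
> WITH (
>  'connector' = 'hudi',
>  'path' = '${path}',
>  'table.type' = 'MERGE_ON_READ',
>  -- equivalently, set the record key option directly:
>  'hoodie.datasource.write.recordkey.field' = 'uuid'
> ); {code}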



--
This message was sent by Atlassian Jira
(v8.20.1#820001)