Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/09/01 11:21:49 UTC

[GitHub] [hudi] melin edited a comment on issue #3554: [SUPPORT] Support Apache Spark 3.1

melin edited a comment on issue #3554:
URL: https://github.com/apache/hudi/issues/3554#issuecomment-910186444


   SPARK-32976 (support a column list in the INSERT statement) added a `userSpecifiedCols` field to `InsertIntoStatement`, so the Spark3Adapter pattern match written against Spark 3.0 no longer compiles:
   ```
   /Spark3Adapter.scala:70: error: wrong number of arguments for pattern org.apache.spark.sql.catalyst.plans.logical.InsertIntoStatement(
       table: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,
       partitionSpec: Map[String,Option[String]],
       userSpecifiedCols: Seq[String],
       query: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,
       overwrite: Boolean,
       ifPartitionNotExists: Boolean)
   ```
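   A minimal sketch of a pattern that matches the six-field Spark 3.1 signature shown in the error above; this is only an illustration (the object and method names are made up here), not Hudi's actual Spark3Adapter code:
   ```scala
   import org.apache.spark.sql.catalyst.plans.logical.{InsertIntoStatement, LogicalPlan}

   object InsertMatchSketch {
     // Destructure the six-field Spark 3.1 InsertIntoStatement; the extra
     // userSpecifiedCols field (SPARK-32976) is why a five-argument pattern
     // written for Spark 3.0 fails with "wrong number of arguments for pattern".
     def describeInsert(plan: LogicalPlan): Option[String] = plan match {
       case InsertIntoStatement(table, partitionSpec, userSpecifiedCols, query,
                                overwrite, ifPartitionNotExists) =>
         Some(s"insert into $table cols=$userSpecifiedCols overwrite=$overwrite")
       case _ => None
     }
   }
   ```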
   
   The hudi-hive-sync tests hit a related change: the SparkSqlParser constructor no longer accepts a SQLConf:
   ```
   [ERROR] /Users/melin/Documents/codes/bigdata/hudi/hudi-sync/hudi-hive-sync/src/test/java/org/apache/hudi/hive/TestParquet2SparkSchemaUtils.java:[39,41] constructor SparkSqlParser in class org.apache.spark.sql.execution.SparkSqlParser cannot be applied to given types;
     required: no arguments
     found: org.apache.spark.sql.internal.SQLConf
     reason: actual and formal argument lists differ in length
   ```
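   A minimal sketch of the constructor change the error above points at, assuming Spark 3.1 is on the classpath (illustration only, not the actual test code):
   ```scala
   import org.apache.spark.sql.execution.SparkSqlParser

   object ParserSketch {
     // Spark 3.1: SparkSqlParser has a no-argument constructor; the SQLConf
     // parameter required by Spark 3.0 (what the test still passes) is gone.
     val parser = new SparkSqlParser()

     // Spark 3.0-style call that produces the compile error above:
     // val parser30 = new SparkSqlParser(org.apache.spark.sql.internal.SQLConf.get)

     // Sanity check that the parser still works.
     val plan = parser.parsePlan("SELECT 1 AS id")
   }
   ```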
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org