Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/10/02 08:35:00 UTC

[GitHub] [hudi] hotienvu opened a new issue #2138: [SUPPORT] Failed to sync to hive using nested partition columns

hotienvu opened a new issue #2138:
URL: https://github.com/apache/hudi/issues/2138


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   Specifying partitionPath as "part1:SIMPLE,part2:SIMPLE,part3:SIMPLE" is currently not working. One part of the problem is already being addressed by https://github.com/apache/hudi/pull/2093
   This issue is about syncing to Hive: it seems the partition field values are not cleaned up (they still read "part1:SIMPLE") before being passed down to the query generator.
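   
   For illustration, a hedged sketch of the kind of option mix that triggers this; it is a hypothetical condensation, not the exact gist code (see "To Reproduce" for that). The "field:TYPE" spec style belongs to the writer-side key generator; reusing that string for hive-sync is what reaches the DDL generator uncleaned:
   
   ```scala
   import org.apache.hudi.DataSourceWriteOptions
   
   // Hypothetical condensation of the failing configuration.
   val failingOpts = Map(
     // Writer side: CustomKeyGenerator takes "field:TYPE" pairs.
     DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY ->
       "org.apache.hudi.keygen.CustomKeyGenerator",
     DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY ->
       "year:SIMPLE,month:SIMPLE,day:SIMPLE",
     // Passing the same "field:TYPE" string to hive-sync makes it emit
     // PARTITIONED BY (`year:SIMPLE` String, ...), which Hive's parser rejects.
     DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY ->
       "year:SIMPLE,month:SIMPLE,day:SIMPLE"
   )
   ```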
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   https://gist.github.com/hotienvu/3a859c26c8b997b313e5a0a2b794391c
   
   
   **Expected behavior**
   
   Data should be written to /tmp/hoodie/hoodie_streaming_cow/year=y/month=m/day=d. This is currently not working, but should be addressed by https://github.com/apache/hudi/pull/2093.
   The Hive table should be created (if it does not exist yet) and its partitions should be synced. Currently this is not working.
   
   **Environment Description**
   
   * Hudi version : 0.6.1-SNAPSHOT
   
   * Spark version : 2.4.7
   
   * Hive version : 2.3.7
   
   * Hadoop version : 2.10.0
   
   * Storage (HDFS/S3/GCS..) : HDFS/local
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   Add any other context about the problem here.
   
   **Stacktrace**
   
   ```
   org.apache.hudi.hive.HoodieHiveSyncException: Failed in executing SQL CREATE EXTERNAL TABLE  IF NOT EXISTS `default`.`hoodie_streaming_cow`( `_hoodie_commit_time` string, `_hoodie_commit_seqno` string, `_hoodie_record_key` string, `_hoodie_partition_path` string, `_hoodie_file_name` string, `timestamp` bigint, `value` bigint, `year` int, `month` int, `day` int, `ts` bigint, `dt` DATE, `id` int) PARTITIONED BY (`year:SIMPLE` String,`month:SIMPLE` String,`day:SIMPLE` String) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' LOCATION '/tmp/hoodie/hoodie_streaming_cow'
   	at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:352)
   	at org.apache.hudi.hive.HoodieHiveClient.createTable(HoodieHiveClient.java:262)
   	at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:175)
   	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:130)
   	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:94)
   	at org.apache.hudi.HoodieSparkSqlWriter$.syncHive(HoodieSparkSqlWriter.scala:319)
   	at org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$metaSync$4(HoodieSparkSqlWriter.scala:361)
   	at org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$metaSync$4$adapted(HoodieSparkSqlWriter.scala:357)
   	at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
   	at org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:357)
   	at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:415)
   	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:204)
   	at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:74)
   	at scala.util.Try$.apply(Try.scala:213)
   	at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$1(HoodieStreamingSink.scala:73)
   	at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:144)
   	at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:72)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:537)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:80)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$14(MicroBatchExecution.scala:536)
   	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
   	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
   	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:535)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:198)
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
   	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
   	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:166)
   	at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
   	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
   	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
   	at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
   Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:364 Failed to recognize predicate ','. Failed rule: '[., :] can not be used in column name in create table statement.' in column specification
   	at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:267)
   	at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:253)
   	at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:313)
   	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:253)
   	at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:350)
   	... 35 more
   Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:364 Failed to recognize predicate ','. Failed rule: '[., :] can not be used in column name in create table statement.' in column specification
   	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
   	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:206)
   	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:290)
   	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:320)
   	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:530)
   	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:517)
   	at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
   	at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
   	at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:422)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
   	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
   	at com.sun.proxy.$Proxy39.executeStatementAsync(Unknown Source)
   	at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:310)
   	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:530)
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1437)
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1422)
   	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
   	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   	at java.lang.Thread.run(Thread.java:745)
   Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.parse.ParseException:line 1:364 Failed to recognize predicate ','. Failed rule: '[., :] can not be used in column name in create table statement.' in column specification
   	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:211)
   	at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:77)
   	at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:70)
   	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:468)
   	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
   	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1295)
   	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204)
   
   ```
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hotienvu closed issue #2138: [SUPPORT] Failed to sync to hive multi-partition table

Posted by GitBox <gi...@apache.org>.
hotienvu closed issue #2138:
URL: https://github.com/apache/hudi/issues/2138


   





[GitHub] [hudi] hotienvu commented on issue #2138: [SUPPORT] Failed to sync to hive multi-partition table

Posted by GitBox <gi...@apache.org>.
hotienvu commented on issue #2138:
URL: https://github.com/apache/hudi/issues/2138#issuecomment-702842659


   Thanks @bvaradar, it is working now. Closing this and the PR.
   IMHO, it would be great if the docs [here](https://hudi.apache.org/docs/writing_data.html) had an example of writing into a multi-partition table, as I spent quite a bit of time figuring out the right combination of PARTITIONPATH_FIELD_OPT_KEY, KEYGENERATOR_CLASS_OPT_KEY, HIVE_PARTITION_FIELDS_OPT_KEY and HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY. Happy to contribute this if possible.
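   
   For future readers, a sketch of one combination that matches the resolution of this thread. It is hedged, not authoritative: the writer-side spec assumes CustomKeyGenerator with the fix from PR #2093, and the hive-sync values follow the correction in the comment below:
   
   ```scala
   import org.apache.hudi.DataSourceWriteOptions
   
   val multiPartitionOpts = Map(
     // Writer side: "field:TYPE" pairs for CustomKeyGenerator.
     DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY ->
       "org.apache.hudi.keygen.CustomKeyGenerator",
     DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY ->
       "year:SIMPLE,month:SIMPLE,day:SIMPLE",
     // Hive-sync side: bare comma-separated column names, plus the multi-part
     // extractor so each path segment maps to one partition column.
     DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
     DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "year,month,day",
     DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
       "org.apache.hudi.hive.MultiPartKeysValueExtractor"
   )
   ```
   
   The same map should work for both batch (`df.write.format("hudi").options(multiPartitionOpts)`) and streaming writes.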





[GitHub] [hudi] bvaradar commented on issue #2138: [SUPPORT] Failed to sync to hive multi-partition table

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #2138:
URL: https://github.com/apache/hudi/issues/2138#issuecomment-702772462


   @hotienvu : In your code, the hive-sync config is set incorrectly.
   
   DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY() should be set to  "year,month,day" instead of "year:SIMPLE,month:SIMPLE,day:SIMPLE". 
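   
   A minimal hedged sketch of just that change (assuming the options are collected in a Scala map):
   
   ```scala
   import org.apache.hudi.DataSourceWriteOptions
   
   val hiveSyncOpts = Map(
     // Wrong: "year:SIMPLE,month:SIMPLE,day:SIMPLE" -- the ":SIMPLE" suffix is
     // writer-side key-generator syntax, not part of the Hive column names.
     DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "year,month,day"
   )
   ```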
   
   @pratyakshsharma : Can you get the PR landed when you get a chance ? 
   





[GitHub] [hudi] bvaradar edited a comment on issue #2138: [SUPPORT] Failed to sync to hive multi-partition table

Posted by GitBox <gi...@apache.org>.
bvaradar edited a comment on issue #2138:
URL: https://github.com/apache/hudi/issues/2138#issuecomment-702772462


   @hotienvu : In your code, the hive-sync config is set incorrectly.
   
   DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY() should be set to  "year,month,day" instead of "year:SIMPLE,month:SIMPLE,day:SIMPLE". 
   
   @pratyakshsharma : Can you get the PR #2093 landed when you get a chance ? 
   

