Posted to dev@hive.apache.org by "ShrekerNil (JIRA)" <ji...@apache.org> on 2019/05/11 02:15:00 UTC

[jira] [Created] (HIVE-21720) HiveException: partition spec is invalid; field <partition> does not exist or is empty

ShrekerNil created HIVE-21720:
---------------------------------

             Summary: HiveException: partition spec is invalid; field <partition> does not exist or is empty
                 Key: HIVE-21720
                 URL: https://issues.apache.org/jira/browse/HIVE-21720
             Project: Hive
          Issue Type: Bug
         Environment: apache-flume-1.7.0-bin
            Reporter: ShrekerNil


I'm new to Hive, and when I used Flume to sink data into Hive, the following error occurred:

2019-05-11 09:50:31,183 (hive-shive-call-runner-0) [ERROR - org.apache.hadoop.hive.ql.exec.DDLTask.failed(DDLTask.java:512)] org.apache.hadoop.hive.ql.metadata.HiveException: partition spec is invalid; field collection does not exist or is empty
 at org.apache.hadoop.hive.ql.metadata.Partition.createMetaPartitionObject(Partition.java:130)
 at org.apache.hadoop.hive.ql.metadata.Hive.convertAddSpecToMetaPartition(Hive.java:1662)
 at org.apache.hadoop.hive.ql.metadata.Hive.createPartitions(Hive.java:1638)
 at org.apache.hadoop.hive.ql.exec.DDLTask.addPartitions(DDLTask.java:900)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:339)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1638)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1397)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.runDDL(HiveEndPoint.java:404)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.createPartitionIfNotExists(HiveEndPoint.java:372)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:276)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:243)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:180)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:157)
 at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:110)
 at org.apache.flume.sink.hive.HiveWriter$8.call(HiveWriter.java:379)
 at org.apache.flume.sink.hive.HiveWriter$8.call(HiveWriter.java:376)
 at org.apache.flume.sink.hive.HiveWriter$11.call(HiveWriter.java:428)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
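
From the stack trace, the spec is rejected in Partition.createMetaPartitionObject, which as far as I understand happens when a field in the partition spec either is not a partition column of the target table or its value comes through empty. My understanding is that the values in the sink's hive.partition property have to line up, in order, with the table's PARTITIONED BY columns, and none of them may resolve to empty. Here is a minimal sketch of that mapping (the agent, sink, table, column names and metastore URI below are placeholders for illustration, not my actual setup):

 # Target table (created separately in Hive; streaming needs ORC, bucketed, transactional), e.g.:
 #   CREATE TABLE logs (id STRING, msg STRING)
 #     PARTITIONED BY (collection STRING)
 #     CLUSTERED BY (id) INTO 4 BUCKETS
 #     STORED AS ORC
 #     TBLPROPERTIES ('transactional' = 'true');
 a1.sinks.k1.type = hive
 a1.sinks.k1.hive.metastore = thrift://127.0.0.1:9083
 a1.sinks.k1.hive.database = default
 a1.sinks.k1.hive.table = logs
 # One value per partition column, in the same order as PARTITIONED BY.
 # If the "collection" event header is missing or blank, this may resolve to an
 # empty value, which would match "field collection does not exist or is empty".
 a1.sinks.k1.hive.partition = %{collection}
 a1.sinks.k1.serializer = DELIMITED
 a1.sinks.k1.serializer.delimiter = ","
 a1.sinks.k1.serializer.fieldnames = id,msg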


