Posted to issues@spark.apache.org by "yx91490 (Jira)" <ji...@apache.org> on 2020/08/05 06:41:00 UTC

[jira] [Created] (SPARK-32536) spark sql insert overwrite dynamic partition error

yx91490 created SPARK-32536:
-------------------------------

             Summary: spark sql insert overwrite dynamic partition error
                 Key: SPARK-32536
                 URL: https://issues.apache.org/jira/browse/SPARK-32536
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.2
         Environment: HDP version 2.3.2.3.1.4.0-315
            Reporter: yx91490


When executing an INSERT OVERWRITE TABLE statement into a dynamic partition:

{code:java}
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table tmp.id_name2 partition(dt) select * from tmp.id_name where dt='2001';
{code}
Output log:
{code:java}
20/08/05 14:38:05 ERROR Hive: Exception when loading partition with parameters  partPath=hdfs://nameservice/user/hive/warehouse/tmp.db/id_name2/.hive-staging_hive_2020-08-05_14-38-00_715_3629476922121193803-1/-ext-10000/dt=2001,  table=id_name2,  partSpec={dt=2001},  loadFileType=REPLACE_ALL,  listBucketingLevel=0,  isAcid=false,  resetStatistics=false
org.apache.hadoop.hive.ql.metadata.HiveException: Directory hdfs://nameservice/user/hive/warehouse/tmp.db/id_name2/dt=2001 could not be cleaned up.
    at org.apache.hadoop.hive.ql.metadata.Hive.deleteOldPathForReplace(Hive.java:4666)
    at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:4597)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:2132)
    at org.apache.hadoop.hive.ql.metadata.Hive$5.call(Hive.java:2588)
    at org.apache.hadoop.hive.ql.metadata.Hive$5.call(Hive.java:2579)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File hdfs://nameservice/user/hive/warehouse/tmp.db/id_name2/dt=2001 does not exist.
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1053)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1113)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1110)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910)
    at org.apache.hadoop.hive.ql.metadata.Hive.cleanUpOneDirectoryForReplace(Hive.java:4681)
    at org.apache.hadoop.hive.ql.metadata.Hive.deleteOldPathForReplace(Hive.java:4661)
    ... 8 more
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: Exception when loading 1 in table id_name2 with loadPath=hdfs://nameservice/user/hive/warehouse/tmp.db/id_name2/.hive-staging_hive_2020-08-05_14-38-00_715_3629476922121193803-1/-ext-10000;
{code}
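
Judging from the stack trace, Hive.cleanUpOneDirectoryForReplace calls FileSystem.listStatus on the old partition path (hdfs://nameservice/user/hive/warehouse/tmp.db/id_name2/dt=2001), and the resulting FileNotFoundException is not tolerated when the target partition directory does not exist yet (e.g. on the first overwrite into that partition). A sketch of possible workarounds follows; these are assumptions based on the trace, not a confirmed fix, and the column names id and name are guessed from the table name:

{code:sql}
-- Option 1 (hypothetical): pre-create the target partition so the
-- cleanup's listStatus call finds an existing directory.
alter table tmp.id_name2 add if not exists partition (dt='2001');

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table tmp.id_name2 partition(dt)
select * from tmp.id_name where dt='2001';

-- Option 2 (hypothetical): use a static partition spec, which goes
-- through a different load path; assumes tmp.id_name has columns
-- (id, name, dt), so dt is dropped from the select list.
insert overwrite table tmp.id_name2 partition (dt='2001')
select id, name from tmp.id_name where dt='2001';
{code}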



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
