Posted to issues@hive.apache.org by "Stamatis Zampetakis (Jira)" <ji...@apache.org> on 2022/04/14 07:10:00 UTC

[jira] [Resolved] (HIVE-26127) INSERT OVERWRITE throws FileNotFound when destination partition is deleted

     [ https://issues.apache.org/jira/browse/HIVE-26127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stamatis Zampetakis resolved HIVE-26127.
----------------------------------------
    Fix Version/s: 4.0.0-alpha-2
       Resolution: Fixed

Fixed in https://github.com/apache/hive/commit/260924050b11d3342b44091797d88b6f489dcaef. Thanks for the PR [~hsnusonic]!

> INSERT OVERWRITE throws FileNotFound when destination partition is deleted 
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-26127
>                 URL: https://issues.apache.org/jira/browse/HIVE-26127
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Yu-Wen Lai
>            Assignee: Yu-Wen Lai
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0-alpha-2
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  # create external table src (col int) partitioned by (year int);
>  # create external table dest (col int) partitioned by (year int);
>  # insert into src partition (year=2022) values (1);
>  # insert into dest partition (year=2022) values (2);
>  # hdfs dfs -rm -r ${hive.metastore.warehouse.external.dir}/dest/year=2022
>  # insert overwrite table dest select * from src;
> We will get FileNotFoundException as below.
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Directory file:/home/yuwen/workdir/upstream/hive/itests/qtest/target/localfs/warehouse/ext_part/par=1 could not be cleaned up.
>     at org.apache.hadoop.hive.ql.metadata.Hive.deleteOldPathForReplace(Hive.java:5387)
>     at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:5282)
>     at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:2657)
>     at org.apache.hadoop.hive.ql.metadata.Hive.lambda$loadDynamicPartitions$6(Hive.java:3143)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748) {code}
> This happens because Hive calls listStatus on a path that no longer exists. INSERT OVERWRITE should not fail when there is nothing to clean up.
> {code:java}
> fs.listStatus(path, pathFilter){code}
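> A minimal sketch of one possible guard, assuming only the Hadoop FileSystem API (the class and method names below are illustrative, not the committed patch): treat a missing destination directory as "nothing to clean up" instead of failing the load.
> {code:java}
> import java.io.FileNotFoundException;
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.PathFilter;
>
> final class ReplaceCleanupSketch {
>   // Returns an empty array when the path was already removed out-of-band
>   // (e.g. via "hdfs dfs -rm -r"), so the caller can skip cleanup instead
>   // of aborting the INSERT OVERWRITE.
>   static FileStatus[] listStatusIfExists(FileSystem fs, Path path, PathFilter filter)
>       throws IOException {
>     try {
>       return fs.listStatus(path, filter);
>     } catch (FileNotFoundException e) {
>       return new FileStatus[0];
>     }
>   }
> }
> {code}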
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)