Posted to issues@hive.apache.org by "Wang XL (JIRA)" <ji...@apache.org> on 2018/06/05 13:06:00 UTC

[jira] [Commented] (HIVE-19625) Archive partition can not be dropped

    [ https://issues.apache.org/jira/browse/HIVE-19625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501730#comment-16501730 ] 

Wang XL commented on HIVE-19625:
--------------------------------

ping [~ashutoshc], if you have time, can you give this a review? Thanks.

> Archive partition can not be dropped
> ------------------------------------
>
>                 Key: HIVE-19625
>                 URL: https://issues.apache.org/jira/browse/HIVE-19625
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 1.3.0, 1.0.2
>            Reporter: Wang XL
>            Priority: Major
>         Attachments: HIVE-19625-trunk.001.patch
>
>
> In our environment, we use the Hive archive partition command {{ALTER TABLE table_name ARCHIVE PARTITION partition_spec;}}. But when I try to drop such a partition with {{ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec}}, it fails with the following stack trace:
> {code:java}
> 2018-01-15 22:08:36,921 ERROR [fe67c601-9bd7-4d5b-8e6e-8aea50a1167e]: exec.DDLTask (DDLTask.java:failed(526)) - org.apache.hadoop.hive.ql.metadata.HiveException: Table partition not deleted
> since har:/nn01/warehouse/test.db/xiaolong_test/dt=20170826/hour=16/ctime=2017082616 is not writable by hadoop-data
>       at org.apache.hadoop.hive.ql.metadata.Hive.dropPartitions(Hive.java:1990)
>       at org.apache.hadoop.hive.ql.metadata.Hive.dropPartitions(Hive.java:1971)
>       at org.apache.hadoop.hive.ql.exec.DDLTask.dropPartitions(DDLTask.java:3718)
>       at org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3679)
>       at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:337)
>       at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:92)
>       at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1704)
>       at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1446)
>       at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1087)
>       at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1223)
>       at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1087)
>       at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1077)
>       at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>       at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:386)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:321)
>       at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:725)
>       at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:698)
>       at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:634)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:386)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:321)
>       at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:725)
>       at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:698)
>       at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:634)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
>       at org.apache.hadoop.security.SecurityUtil.doAsConfigUser(SecurityUtil.java:649)
>       at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: MetaException(message:Table partition not deleted since har:/nn01/warehouse/test.db/xiaolong_test/dt=20170826/hour=16/ctime=2017082616 is not writable by hadoop-data)
>       at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_partitions_req_result$drop_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
>       at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_partitions_req_result$drop_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
>       at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_partitions_req_result.read(ThriftHiveMetastore.java:65522)
>       at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
>       at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_drop_partitions_req(ThriftHiveMetastore.java:1833)
>       at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.drop_partitions_req(ThriftHiveMetastore.java:1820)
>       at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropPartitions(HiveMetaStoreClient.java:912)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
>       at com.sun.proxy.$Proxy4.dropPartitions(Unknown Source)
>       at org.apache.hadoop.hive.ql.metadata.Hive.dropPartitions(Hive.java:1984)
>       ... 31 more
> {code}
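> The har: path in the error comes from the partition's storage descriptor. As a minimal sketch (not part of the report; the database, table and partition values below are only illustrative), the archived partition's metadata can be inspected through the metastore client, which shows the har: location in the storage descriptor while the pre-archive directory is kept in the partition parameters:
> {code:java}
> import java.util.Arrays;
>
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
> import org.apache.hadoop.hive.metastore.api.Partition;
>
> public class InspectArchivedPartition {
>   public static void main(String[] args) throws Exception {
>     HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
>     Partition part = client.getPartition(
>         "test", "xiaolong_test", Arrays.asList("20170826", "16", "2017082616"));
>
>     // For an archived partition the storage descriptor points at the HAR (har:/...),
>     // while the original directory is preserved in the partition parameters.
>     System.out.println("location = " + part.getSd().getLocation());
>     System.out.println("params   = " + part.getParameters());
>
>     client.close();
>   }
> }
> {code}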
> The drop partition request is eventually handled by the HiveMetaStore, which invokes HiveMetaStore#drop_partitions_req:
> {code:java}
> for (Partition part : parts) {
>    if (!ignoreProtection && !MetaStoreUtils.canDropPartition(tbl, part)) {
>      throw new MetaException("Table " + tbl.getTableName()
>          + " Partition " + part + " is protected from being dropped");
>    }
>    firePreEvent(new PreDropPartitionEvent(tbl, part, deleteData, this));
>    if (colNames != null) {
>      partNames.add(FileUtils.makePartName(colNames, part.getValues()));
>    }
>    // Preserve the old behavior of failing when we cannot write, even w/o deleteData,
>    // and even if the table is external. That might not make any sense.
>    if (MetaStoreUtils.isArchived(part)) {
>      Path archiveParentDir = MetaStoreUtils.getOriginalLocation(part);
>      verifyIsWritablePath(archiveParentDir);
>      checkTrashPurgeCombination(archiveParentDir, dbName + "." + tblName + "." + part.getValues(), mustPurge);
>      archToDelete.add(archiveParentDir);
>    }
>    if ((part.getSd() != null) && (part.getSd().getLocation() != null)) {
>      Path partPath = new Path(part.getSd().getLocation());
>      verifyIsWritablePath(partPath);
>      checkTrashPurgeCombination(partPath, dbName + "." + tblName + "." + part.getValues(), mustPurge);
>      dirsToDelete.add(new PathAndPartValSize(partPath, part.getValues().size()));
>    }
>  }
>  ms.dropPartitions(dbName, tblName, partNames);
>  success = ms.commitTransaction();
>  DropPartitionsResult result = new DropPartitionsResult();
>  if (needResult) {
>    result.setPartitions(parts);
>  }
>  return result;
> } finally {
>  if (!success) {
>    ms.rollbackTransaction();
>  } else if (deleteData && !isExternal(tbl)) {
>    LOG.info( mustPurge?
>                "dropPartition() will purge partition-directories directly, skipping trash."
>              :  "dropPartition() will move partition-directories to trash-directory.");
>    // Archived partitions have har:/to_har_file as their location.
>    // The original directory was saved in params
>    for (Path path : archToDelete) {
>      wh.deleteDir(path, true, mustPurge);
>    }
>    for (PathAndPartValSize p : dirsToDelete) {
>      wh.deleteDir(p.path, true, mustPurge);
>      try {
>        deleteParentRecursive(p.path.getParent(), p.partValSize - 1, mustPurge);
>      } catch (IOException ex) {
>        LOG.warn("Error from deleteParentRecursive", ex);
>        throw new MetaException("Failed to delete parent: " + ex.getMessage());
>      }
>    }
>  }
>  if (parts != null) {
>    for (Partition part : parts) {
>      for (MetaStoreEventListener listener : listeners) {
>        DropPartitionEvent dropPartitionEvent =
>          new DropPartitionEvent(tbl, part, success, deleteData, this);
>        dropPartitionEvent.setEnvironmentContext(envContext);
>        listener.onDropPartition(dropPartitionEvent);
>      }
>    }
>  }
> }
> {code}
> In this function, if the partition is archived, part.getSd().getLocation() is a har: URI backed by the read-only HarFileSystem, so it can never pass the writability check in verifyIsWritablePath and the drop fails.
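> One possible guard, sketched here only for illustration (it may well differ from HIVE-19625-trunk.001.patch), is to skip the writability check on the har: location for archived partitions, since the original directory has already been verified and collected into archToDelete a few lines above:
> {code:java}
> // Only verify and collect the storage-descriptor location for non-archived
> // partitions; archived partitions are handled through archToDelete above, and
> // their har: location can never pass a write-permission check.
> if (!MetaStoreUtils.isArchived(part)
>     && (part.getSd() != null) && (part.getSd().getLocation() != null)) {
>   Path partPath = new Path(part.getSd().getLocation());
>   verifyIsWritablePath(partPath);
>   checkTrashPurgeCombination(partPath, dbName + "." + tblName + "." + part.getValues(), mustPurge);
>   dirsToDelete.add(new PathAndPartValSize(partPath, part.getValues().size()));
> }
> {code}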
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)