Posted to issues@hive.apache.org by "Lefty Leverenz (JIRA)" <ji...@apache.org> on 2015/08/28 01:10:45 UTC
[jira] [Commented] (HIVE-10978) Document fs.trash.interval wrt Hive and HDFS Encryption
[ https://issues.apache.org/jira/browse/HIVE-10978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717732#comment-14717732 ]
Lefty Leverenz commented on HIVE-10978:
---------------------------------------
[~ekoifman], can this go in the HiveServer2 doc or is it more general?
> Document fs.trash.interval wrt Hive and HDFS Encryption
> -------------------------------------------------------
>
> Key: HIVE-10978
> URL: https://issues.apache.org/jira/browse/HIVE-10978
> Project: Hive
> Issue Type: Bug
> Components: Documentation, Security
> Affects Versions: 1.2.0
> Reporter: Eugene Koifman
> Priority: Critical
> Labels: TODOC1.2
>
> This should be documented in 1.2.1 Release Notes
> When HDFS is encrypted (TDE is enabled), DROP TABLE and DROP PARTITION have unexpected behavior when Hadoop Trash feature is enabled.
> The latter is enabled by setting fs.trash.interval > 0 in core-site.xml.
> When Trash is enabled, the data files for the table should be "moved" to the Trash bin. If the table is inside an Encryption Zone, this "move" operation is not allowed.
> There are 2 ways to deal with this:
> 1. use PURGE, as in DROP TABLE blah PURGE. This skips the Trash bin even if enabled.
> 2. set fs.trash.interval = 0. It is critical that this change be made in core-site.xml. Setting it in hive-site.xml may lead to very strange behavior where the table metadata is deleted but the data files remain. This will lead to data corruption if a table with the same name is later created.
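> As a sketch, the two workarounds above might look like this (the table name is illustrative):
>
> ```sql
> -- Workaround 1: skip the Trash bin for this drop, even when Trash is enabled
> DROP TABLE encrypted_db.my_table PURGE;
> ```
>
> ```xml
> <!-- Workaround 2: in core-site.xml (NOT hive-site.xml), disable Trash cluster-wide -->
> <property>
>   <name>fs.trash.interval</name>
>   <value>0</value>
> </property>
> ```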
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)