Posted to issues@trafodion.apache.org by "Sandhya Sundaresan (JIRA)" <ji...@apache.org> on 2016/06/13 16:18:21 UTC

[jira] [Assigned] (TRAFODION-2036) Write access permission denied for user TRAFODION on "/hbase/archive/data/default"

     [ https://issues.apache.org/jira/browse/TRAFODION-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandhya Sundaresan reassigned TRAFODION-2036:
---------------------------------------------

    Assignee: Amanda Moran

> Write access permission denied for user TRAFODION on "/hbase/archive/data/default"
> ----------------------------------------------------------------------------------
>
>                 Key: TRAFODION-2036
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-2036
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-general
>            Reporter: Roberta Marton
>            Assignee: Amanda Moran
>
> Trafodion uses snapshots for loading data and building indexes. Today, it piggy-backs on the existing snapshot archive location - /hbase/archive/data/default. The ACL permissions for this location are not set correctly, and they are reset at times by HBase.
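> For illustration only, here is a minimal sketch (hypothetical class name, Hadoop FileSystem API) of the kind of ACL grant being discussed, using the path above and the TRAFODION user from the error later in this thread; as the Cloudera note below explains, HBase may clean up or recreate this directory at any time, so any such grant can be lost:
> import java.util.Arrays;
> import java.util.List;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.AclEntry;
> import org.apache.hadoop.fs.permission.AclEntryScope;
> import org.apache.hadoop.fs.permission.AclEntryType;
> import org.apache.hadoop.fs.permission.FsAction;
>
> public class GrantArchiveAcl {   // hypothetical name, not Trafodion installer code
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());   // assumes fs.defaultFS points at the HDFS in question
>     Path archive = new Path("/hbase/archive/data/default");
>
>     List<AclEntry> entries = Arrays.asList(
>         // ACCESS entry: lets the TRAFODION user read/write the directory now.
>         new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
>             .setType(AclEntryType.USER).setName("TRAFODION")
>             .setPermission(FsAction.ALL).build(),
>         // DEFAULT entry: children created later (e.g. by HBase) inherit the grant.
>         new AclEntry.Builder().setScope(AclEntryScope.DEFAULT)
>             .setType(AclEntryType.USER).setName("TRAFODION")
>             .setPermission(FsAction.ALL).build());
>
>     // Must be run as a user allowed to change ACLs here (e.g. hdfs or hbase).
>     fs.modifyAclEntries(archive, entries);
>   }
> }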
> From a discussion with Cloudera:
> <That directory is where HBase places the storefiles from a major compaction, deletion, snapshot drops - basically anything that would have caused HBase to move files. Yes, it is periodically cleaned up, and files that don't belong to a table being archived are targeted by that cleanup process*. This should be considered an HBase internal repository, and you shouldn't be putting things in there or changing permissions.
> *https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java>
> Condensed e-mail exchange on this issue:
> <Originating problem>
> Subject: hive/test018: RE: Trafodion master Daily Test Result - 224 - Failure
> *** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::scanOpen returned error HBASE_OPEN_ERROR(-704). Cause:
> > java.io.IOException: java.util.concurrent.ExecutionException: org.apache.hadoop.security.AccessControlException: Permission denied: user=TRAFODION, access=WRITE, inode="/hbase/archive/data/default":hbase:hbase:drwxr-xr-x:user:TRAFODION:rwx,group::r-x,default:user::rwx,default:user:TRAFODION:rwx,default:group::r-x,default:mask::rwx,default:other::r-x
> We tried to solve this issue with the installer but it can't be done.
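> As a quick way to reproduce just the failing permission check outside of Trafodion, here is a hedged sketch (invented class name, assumes Hadoop 2.6+): run it as the TRAFODION user and it asks the NameNode the same question, throwing the same AccessControlException if WRITE is denied.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsAction;
>
> public class CheckArchiveWrite {   // hypothetical diagnostic, not Trafodion code
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     Path archive = new Path("/hbase/archive/data/default");
>     // Throws AccessControlException (as in the log above) if WRITE is denied,
>     // and FileNotFoundException if the cleaner has already removed the path.
>     fs.access(archive, FsAction.WRITE);
>     System.out.println("WRITE access to " + archive + " is allowed");
>   }
> }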
>  <Amanda>
> The installer can create /hbase/archive but not /hbase/archive/data/default. Cloudera/HDFS/HDP goes and deletes these directories sometimes before we can even add the correct permissions to them (the next command). I don't know 'why' this happens... I think it has something to do with creating 3 levels of empty directories but I am not sure. 
>  I tested this and wrote long emails about it a few months ago... I even put in the changes anyway, but it caused the installer to fail when the directory was deleted before the next command ran (sometimes it deletes the folders quickly, sometimes more slowly), so I had to take it out.
>  I will go back to my original comment on this... who (trafodion, hive, hdfs?) is using this directory? Is this a hard coded value in our code? 
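> To make the timing race Amanda describes concrete, here is an illustrative Java sketch against the Hadoop FileSystem API (not the installer's actual logic; the helper name and retry count are made up): the directory has to be recreated and the ACL reapplied whenever HBase's cleaner removes it between the two steps.
> import java.io.FileNotFoundException;
> import java.util.List;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.AclEntry;
>
> public class ArchiveAclRace {   // hypothetical helper, for illustration only
>   static void createWithAcl(FileSystem fs, Path dir, List<AclEntry> acl) throws Exception {
>     for (int attempt = 1; attempt <= 3; attempt++) {
>       fs.mkdirs(dir);                   // no-op if the directory already exists
>       try {
>         fs.modifyAclEntries(dir, acl);  // fails if the cleaner removed dir in between
>         return;
>       } catch (FileNotFoundException raceLost) {
>         // Lost the race to the cleaner; loop and try again.
>       }
>     }
>     throw new IllegalStateException("could not keep ACLs on " + dir);
>   }
> }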
> <Selva>
> Snapshots are supposed to be used by the hbase user alone because they were mostly intended for admin purposes. In Trafodion's case, snapshots are taken as the Trafodion user. This requires that the folders used by snapshots have read and write permission for the Trafodion user, hence we use ACLs to provide access to Trafodion. Alternatively, in the early days of Esgyn I suggested that the Trafodion user be added to the hbase group and that the hbase group be given read/write permissions. I believe that fell through the cracks.
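> For the group-based alternative Selva mentions, a rough sketch (invented class name) of how it could be sanity-checked from a client node; note that HDFS resolves group membership on the NameNode, so a local check like this is only indicative, and the directory's group bits would also need write permission:
> import java.util.Arrays;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsAction;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class CheckGroupApproach {   // hypothetical, for illustration only
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     FileStatus st = fs.getFileStatus(new Path("/hbase/archive/data/default"));
>
>     // Resolve the TRAFODION user's groups with the local group mapping.
>     String[] groups = UserGroupInformation.createRemoteUser("TRAFODION").getGroupNames();
>     boolean inOwningGroup = Arrays.asList(groups).contains(st.getGroup());
>
>     // Group membership only helps if the group bits also allow write
>     // (the listing earlier in the thread shows drwxr-xr-x, i.e. group is read-only).
>     boolean groupCanWrite = st.getPermission().getGroupAction().implies(FsAction.WRITE);
>
>     System.out.println("in group " + st.getGroup() + ": " + inOwningGroup
>         + ", group has write: " + groupCanWrite);
>   }
> }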
> <Roberta>
> One thing to consider: for a customer who is concerned about security, will it be acceptable to make the Trafodion ID belong to the HBase group? A customer that has an HBase setup separate from Trafodion may not want to give the Trafodion user such elevated privileges.
> <Selva>
> In that case, we need to make the ACLs work somehow; otherwise we can run into problems when bulk loading or creating an index. A couple of times I got into a situation where I was not able to bring up HBase in lava @hp until I changed HDFS to give write permission to everyone. This issue needs to be addressed, and I hope it doesn't fall through the cracks again.
> <Sandhya>
> Thanks for all the feedback. If this issue has been around for such a long time, my question is why does it show up so infrequently? Today's tests have also failed, and we do need to address this issue ASAP. But it doesn't fail all the time.
> Were the ACLs set up manually on the build machines for that three-level-deep directory, and do those ACLs just stay around? Is this a new VM, and is that why it's showing up?
> Amanda, is there a Cloudera dev contact you or CLR have already worked with who can explain this issue? Or can you post the question in the places you usually look for answers about CDH and HDP?
> Roberta's reply seems to indicate we cannot grant the permission in all cases (for example, with Kerberos enabled).
> <reply from consultation with Cloudera – included earlier in JIRA – we should not be using this directory>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)