Posted to issues@hbase.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2019/02/01 22:11:00 UTC

[jira] [Commented] (HBASE-18620) Secure bulkload job fails when HDFS umask has limited scope

    [ https://issues.apache.org/jira/browse/HBASE-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758709#comment-16758709 ] 

Hadoop QA commented on HBASE-18620:
-----------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  0m  0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 53s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 37s{color} | {color:green} branch-1 passed with JDK v1.8.0_201 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 39s{color} | {color:green} branch-1 passed with JDK v1.7.0_201 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 23s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 40s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 28s{color} | {color:green} branch-1 passed with JDK v1.8.0_201 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 37s{color} | {color:green} branch-1 passed with JDK v1.7.0_201 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 38s{color} | {color:green} the patch passed with JDK v1.8.0_201 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 38s{color} | {color:green} the patch passed with JDK v1.7.0_201 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 49s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  1m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 29s{color} | {color:green} the patch passed with JDK v1.8.0_201 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 35s{color} | {color:green} the patch passed with JDK v1.7.0_201 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 40s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 49s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 |
| JIRA Issue | HBASE-18620 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920589/HBASE-18620-branch-1-v3.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux dd80e35164dc 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | branch-1 / dd9e7a1 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_201 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_201 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_201 |
|  Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/15844/testReport/ |
| Max. process+thread count | 4011 (vs. ulimit of 10000) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/15844/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Secure bulkload job fails when HDFS umask has limited scope
> -----------------------------------------------------------
>
>                 Key: HBASE-18620
>                 URL: https://issues.apache.org/jira/browse/HBASE-18620
>             Project: HBase
>          Issue Type: Bug
>          Components: security
>            Reporter: Pankaj Kumar
>            Assignee: Pankaj Kumar
>            Priority: Major
>             Fix For: 1.5.1
>
>         Attachments: HBASE-18620-branch-1-v2.patch, HBASE-18620-branch-1-v3.patch, HBASE-18620-branch-1.patch
>
>
> By default, the "hbase.fs.tmp.dir" parameter value is /user/$\{user.name}/hbase-staging.
> The RegionServer creates the staging directory (hbase.bulkload.staging.dir, which defaults to hbase.fs.tmp.dir) while opening a region when SecureBulkLoadEndpoint is configured in hbase.coprocessor.region.classes, as shown below:
> {noformat}
> drwx------ - hbase hadoop 0 2017-08-12 13:55 /user/xyz
> drwx--x--x - hbase hadoop 0 2017-08-12 13:55 /user/xyz/hbase-staging
> drwx--x--x - hbase hadoop 0 2017-08-12 13:55 /user/xyz/hbase-staging/DONOTERASE
> {noformat}
> Here,
> 1. The RegionServer is started as the "xyz" Linux user.
> 2. The HDFS umask (fs.permissions.umask-mode) has been set to 077, so file/directory permissions will be no wider than 700. The "/user/xyz" directory (which did not exist before) ends up with permission 700, while "/user/xyz/hbase-staging" gets 711, because we only set the permission of the staging directory itself, not of the parent directories that the RegionServer creates via fs.mkdirs().
> Secure bulkload then fails because other users do not have EXECUTE permission on the "/user/xyz" directory.
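The permission arithmetic above follows the standard POSIX rule, which HDFS also applies: effective mode = requested mode & ~umask. A minimal sketch of that arithmetic (plain Python for illustration, not HBase code):

```python
def effective_mode(requested: int, umask: int) -> int:
    """Apply a umask the way POSIX mkdir (and HDFS) does."""
    return requested & ~umask

# With fs.permissions.umask-mode=077, a default mkdir request of 777
# is narrowed to 700: owner-only, so other users lack EXECUTE (traversal).
print(oct(effective_mode(0o777, 0o077)))  # -> 0o700
```

The staging directory itself is later chmod'ed to 711 explicitly, but the umask-derived 700 remains on the parent /user/xyz, which is what blocks traversal.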
> *Steps to reproduce:*
> ==================
> 1. Configure org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint in "hbase.coprocessor.region.classes" on the client side.
> 2. Log in to the machine as the "root" Linux user.
> 3. kinit as any Kerberos user other than the RegionServer's Kerberos user (say, admin).
> 4. ImportTSV creates the user temp directory (hbase.fs.tmp.dir) while writing the partition file:
> {noformat}
> drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root
> drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root/hbase-staging
> {noformat}
> 5. During the LoadIncrementalHFiles job:
> - a. prepareBulkLoad() step - a random dir is created using the RegionServer's credentials:
> {noformat}
> drwxrwxrwx - hbase hadoop 0 2017-08-12 14:58 /user/xyz/hbase-staging/hbase__t1__e67b23m2ghe6fkn1bqrb95ak41ferj8957cdhsep4ebmpohm22nvi54vh8g3qh1
> {noformat}
> - b. secureBulkLoadHFiles() step - the family dir existence check and creation are done using the client user's credentials. Here the client operation fails as below:
> {noformat}
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=admin, access=EXECUTE, inode="/user/xyz/hbase-staging/admin__t1__e1f3m4r2prud9117thg5pdg91lkg0le0fdvtbbpg03epqg0f14lv54j8sqd8s0n6/cf1":hbase:hadoop:drwx------
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:342)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:279)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:223)
> 	at com.huawei.hadoop.adapter.hdfs.plugin.HWAccessControlEnforce.checkPermission(HWAccessControlEnforce.java:69)
> {noformat}
> So the root cause is that the "admin" user does not have EXECUTE permission on "/user/xyz": the RegionServer created this intermediate parent directory while opening a region (with SecureBulkLoadEndpoint configured), and its default permission was set to 700 based on the HDFS umask of 077.
> *Solution:*
> =========
> This can be worked around by creating /user/xyz manually and setting sufficient permissions explicitly, but HBase should handle it by setting sufficient permissions on the intermediate staging directories created by the RegionServer.
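The proposed fix can be sketched as follows. This is a local-filesystem Python analogue of the idea, not the actual branch-1 patch: after mkdirs creates any missing parents under the umask, walk back up and explicitly chmod each intermediate directory so other users retain EXECUTE (traversal) permission. The function and path names here are illustrative assumptions.

```python
import os
import stat
import tempfile

def mkdirs_with_traversal(path: str, base: str) -> None:
    """Create path (like fs.mkdirs) and then ensure every intermediate
    directory under base grants EXECUTE to group/other, regardless of
    the process umask."""
    os.makedirs(path, exist_ok=True)
    d = path
    while os.path.abspath(d) != os.path.abspath(base):
        mode = stat.S_IMODE(os.stat(d).st_mode)
        # add --x--x (group/other execute) so other users can traverse
        os.chmod(d, mode | stat.S_IXGRP | stat.S_IXOTH)
        d = os.path.dirname(d)

base = tempfile.mkdtemp()
old_umask = os.umask(0o077)  # mimic fs.permissions.umask-mode=077
try:
    staging = os.path.join(base, "user", "xyz", "hbase-staging")
    mkdirs_with_traversal(staging, base)
finally:
    os.umask(old_umask)

# The intermediate parent is no longer owner-only 700; traversal works.
parent = os.path.join(base, "user", "xyz")
print(oct(stat.S_IMODE(os.stat(parent).st_mode)))  # -> 0o711 on POSIX systems
```

Without the chmod walk, os.makedirs under umask 077 would leave /user/xyz at 700, reproducing the AccessControlException described above.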



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)