Posted to issues@ozone.apache.org by "Neil Joshi (Jira)" <ji...@apache.org> on 2022/06/16 03:14:00 UTC

[jira] [Commented] (HDDS-6868) Uploading file got permission denied

    [ https://issues.apache.org/jira/browse/HDDS-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554886#comment-17554886 ] 

Neil Joshi commented on HDDS-6868:
----------------------------------

One thing comes to mind on this issue with S3 authentication on bucket key writes: the S3_AUTH thread-local variable used to authenticate S3 requests is set on each read. Setting S3_AUTH is done on each S3 request - ServiceInfo, getVolume, getBucket - at least one of which is issued _*before*_ any write by the S3 gateway client. So the S3 credential must be getting set in the OM's S3_AUTH for each write, just as it was set in the sequence of OMRequests prior to the write, at least through the getVolume OMRequest.

There may still be a problem with the upload as described in the Jira issue description. It seems odd, as if the S3 credentials were missing from the upload request.
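
To make that concrete, below is a minimal, self-contained Java sketch of the per-request thread-local pattern described above. The class and method names (S3Auth, handleRequest, checkAcls) are simplified stand-ins for illustration, not the actual OzoneManager code; it also shows the failure mode the log suggests, where a missing credential leaves only the gateway's own Kerberos identity for the ACL check.

{code}
// Illustrative sketch only: hypothetical names, not the real OzoneManager API.
public final class S3AuthSketch {

  /** Hypothetical stand-in for the signed S3 credential carried by a request. */
  record S3Auth(String accessId) { }

  // Per-handler-thread credential for the request currently being processed;
  // set when the request arrives and cleared when handling finishes.
  private static final ThreadLocal<S3Auth> S3_AUTH = new ThreadLocal<>();

  static void handleRequest(S3Auth authFromRequest, String callerKerberosUser) {
    try {
      // Every S3-originated request (ServiceInfo, getVolume, getBucket, key
      // writes, ...) is expected to carry the credential and set it here.
      S3_AUTH.set(authFromRequest);
      checkAcls(callerKerberosUser);
    } finally {
      S3_AUTH.remove();
    }
  }

  static void checkAcls(String callerKerberosUser) {
    S3Auth auth = S3_AUTH.get();
    if (auth != null) {
      // With the credential present, ACLs are evaluated for the S3 user
      // mapped from the access id (test1/test1@XXX in this issue).
      System.out.println("checking ACLs for S3 user " + auth.accessId());
    } else {
      // With the credential missing, the only identity left is the RPC caller
      // itself, i.e. the gateway principal, which would produce the
      // "User s3g/s3g@... doesn't have READ permission" warning in the log.
      System.out.println("checking ACLs for caller " + callerKerberosUser);
    }
  }

  public static void main(String[] args) {
    handleRequest(new S3Auth("test1"), "s3g/s3g@EXAMPLE.COM"); // credential present
    handleRequest(null, "s3g/s3g@EXAMPLE.COM");                // credential missing
  }
}
{code}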

 

> Uploading file got permission denied
> ------------------------------------
>
>                 Key: HDDS-6868
>                 URL: https://issues.apache.org/jira/browse/HDDS-6868
>             Project: Apache Ozone
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Shawn
>            Assignee: Ritesh H Shukla
>            Priority: Major
>
> I am testing the tip of master (at this commit: https://github.com/apache/ozone/tree/34eb378399368dd17e8850282a0dea02abe28373) and found a major bug in Ozone: files cannot be uploaded through s3g. The cluster configuration: Kerberos authentication is enabled, ACLs are enabled, SCM HA and OM HA are enabled, and the cluster is deployed to k8s. The steps to reproduce are as follows:
> 1. create a new Kerberos user: test1/test1@XXX
> 2. give this user full ACLs on the s3v volume. On one of the OMs, log in to Kerberos as user om/om@XXX and run the following command:
> {code}
> ozone sh vol setacl -a user:test1/test1@XXX:a s3v
> {code}
> 3. generate the s3 secret for this user
> 4. use the AWS S3 CLI with this user's credentials to create a new bucket, s3://test. This step works without issue.
> 5. then upload a file to this bucket (command sketches for steps 3-5 are given after the log below). You will then see the following errors on the OM leader:
> {code}
> 2022-06-09 00:45:23 WARN  IPC Server handler 10 on default port 9862 ShellBasedUnixGroupsMapping:210 - unable to return groups for user s3g
> PartialGroupNameException The user name 's3g' is not found. id: s3g: no such user
> id: s3g: no such user
>         at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>         at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>         at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>         at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>         at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:387)
>         at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:321)
>         at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:270)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.get(LocalCache.java:3962)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985)
>         at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946)
>         at org.apache.hadoop.security.Groups.getGroups(Groups.java:228)
>         at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1734)
>         at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1722)
>         at org.apache.hadoop.ozone.om.helpers.OzoneAclUtil.checkAclRights(OzoneAclUtil.java:128)
>         at org.apache.hadoop.ozone.om.VolumeManagerImpl.checkAccess(VolumeManagerImpl.java:304)
>         at org.apache.hadoop.ozone.security.acl.OzoneNativeAuthorizer.checkAccess(OzoneNativeAuthorizer.java:140)
>         at org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:2539)
>         at org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:2525)
>         at org.apache.hadoop.ozone.om.OzoneAclUtils.checkAllAcls(OzoneAclUtils.java:119)
>         at org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:2379)
>         at org.apache.hadoop.ozone.om.OzoneManager.getBucketInfo(OzoneManager.java:2766)
>         at org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.preExecute(OMKeyCreateRequest.java:135)
>         at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:192)
>         at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:87)
>         at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:147)
>         at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.processCall(ProtobufRpcEngine.java:466)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:574)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
>         at java.base/java.security.AccessController.doPrivileged(Native Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)
> 2022-06-09 00:45:23 WARN  IPC Server handler 10 on default port 9862 OzoneManager:2547 - User s3g/s3g@DEV.OZONE.K8S.CLOUD.XYZ.COM doesn't have READ permission to access volume Volume:s3v Bucket:shawn-test
> {code}
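> For reference, steps 3-5 correspond roughly to the following commands. This is only a sketch: the kinit step, the s3g endpoint URL (http://s3g:9878), and the file name are assumptions, not taken from the cluster above.
> {code}
> # Step 3: kinit as the new user, then fetch the S3 access key/secret from the OM
> kinit test1/test1@XXX
> ozone s3 getsecret
>
> # Configure the AWS CLI with the returned awsAccessKey/awsSecret
> aws configure
>
> # Step 4: create the bucket through the S3 gateway (endpoint URL is an assumption)
> aws s3api create-bucket --bucket test --endpoint-url http://s3g:9878
>
> # Step 5: upload a file; this is the request that fails with permission denied
> aws s3 cp ./testfile.txt s3://test/ --endpoint-url http://s3g:9878
> {code}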



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org