Posted to common-issues@hadoop.apache.org by "Arun Ravi M V (Jira)" <ji...@apache.org> on 2021/08/30 07:50:00 UTC

[jira] [Updated] (HADOOP-17879) Unable to use custom SAS tokens for accessing files from ADLS gen2 storage accounts with hierarchical namespace enabled.

     [ https://issues.apache.org/jira/browse/HADOOP-17879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun Ravi M V updated HADOOP-17879:
-----------------------------------
    Description: 
I have some Parquet files in abfss://con@sa1.dfs.core.windows.net/folder1/. I generated a User Delegation SAS token with the following permissions on 'folder1':

SAS_SIGNED_PERMISSIONS -> "racwdxltmeop"

But when I read from "abfss://con@sa1.dfs.core.windows.net/folder1/", I get an *HTTP 403 error*. I believe this happens because the ABFS driver makes a `*getAclStatus*` API call to determine whether the storage account has hierarchical namespace (HNS) enabled.
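
For context, here is a minimal sketch of how the custom (here, pre-generated) user delegation SAS is handed to the ABFS driver, assuming the *SASTokenProvider* extension point in hadoop-azure; the FixedSASTokenProvider class and the example.fixed.sas.token key are made-up names used only for illustration.

{code:java}
package com.example; // hypothetical package for this sketch

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;

/**
 * Returns one pre-generated, folder-scoped user delegation SAS for every
 * request the ABFS driver makes, including the getAclStatus probe.
 */
public class FixedSASTokenProvider implements SASTokenProvider {

  private String sasToken;

  @Override
  public void initialize(Configuration configuration, String accountName) {
    // "example.fixed.sas.token" is a made-up key used only for this sketch.
    sasToken = configuration.get("example.fixed.sas.token");
  }

  @Override
  public String getSASToken(String account, String fileSystem, String path,
      String operation) {
    return sasToken;
  }
}
{code}

As far as I understand the ABFS documentation, the provider is then wired in with *fs.azure.account.auth.type* = SAS and *fs.azure.sas.token.provider.type* = com.example.FixedSASTokenProvider.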

 

I found a workaround: setting *fs.azure.account.hns.enabled* to *true*, which skips the getAclStatus call; this seems acceptable because folder-level SAS only works for HNS-enabled accounts anyway. May I know whether this behavior is expected, whether the workaround is stable for production use, and whether there are any hidden implications?
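
For reference, the workaround in a small driver program might look like the following (names reused from above; only the *fs.azure.account.hns.enabled* setting is the point, and the SAS auth wiring is assumed to be in place as sketched earlier):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HnsWorkaroundExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Declare the account as HNS enabled up front so the driver skips the
    // getAclStatus probe that returns 403 under the folder-scoped SAS.
    conf.setBoolean("fs.azure.account.hns.enabled", true);
    // SAS auth type and token provider configuration omitted here;
    // see the FixedSASTokenProvider sketch above.

    Path folder = new Path("abfss://con@sa1.dfs.core.windows.net/folder1/");
    FileSystem fs = folder.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(folder)) {
      System.out.println(status.getPath());
    }
  }
}
{code}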

 

Thank you in advance. 



> Unable to use custom SAS tokens for accessing files from ADLS gen2 storage accounts with hierarchical namespace enabled.
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17879
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17879
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure, tools
>            Reporter: Arun Ravi M V
>            Priority: Major
>



