Posted to common-dev@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2023/04/24 14:56:00 UTC

[jira] [Resolved] (HADOOP-18705) ABFS should exclude incompatible credential providers

     [ https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18705.
-------------------------------------
    Fix Version/s: 3.4.0
                   3.3.9
       Resolution: Fixed

Merged to branch-3.3 and trunk, with an updated title now that we patch the whole config. Thanks!

> ABFS should exclude incompatible credential providers
> -----------------------------------------------------
>
>                 Key: HADOOP-18705
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18705
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 3.4.0, 3.3.5
>            Reporter: Tamas Domok
>            Assignee: Tamas Domok
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.9
>
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() is bound with the untouched configuration, which may still contain a credential provider path (hadoop.security.credential.provider.path) pointing at incompatible credential providers (e.g. a jceks keystore stored on abfs itself). This results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if loading a JCEKS file, this means that the filesystem connector is trying to load the same file
> {quote}
> {code}
>         // abfsConfiguration has already had incompatible credential providers excluded...
>         this.delegationTokenManager = abfsConfiguration.getDelegationTokenManager();
>         // ...but bind() is handed the raw configuration, where the jceks-on-abfs entry survives
>         delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration already excludes the incompatible credential providers; the raw configuration passed to bind() does not.
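>
> A minimal sketch of the idea, assuming the org.apache.hadoop.security.ProviderUtils helper that AbfsConfiguration itself uses (illustrative only, not the committed patch):
> {code}
> // Sketch: hand bind() a configuration filtered the same way abfsConfiguration is.
> // ProviderUtils.excludeIncompatibleCredentialProviders() drops provider URIs whose
> // keystore lives on the filesystem being initialized (here abfs), breaking the cycle.
> Configuration cleaned = ProviderUtils.excludeIncompatibleCredentialProviders(
>     configuration, AzureBlobFileSystem.class);
> this.delegationTokenManager = abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), cleaned);
> {code}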
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar hadoop-mapreduce-examples.jar randomwriter "-Dmapreduce.randomwriter.totalbytes=100" "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
>     at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>     at org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
>     at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:85)
>     at org.apache.hadoop.security.alias.KeyStoreProvider.<init>(KeyStoreProvider.java:49)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:42)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:35)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
>     at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
>     at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
>     at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
>     at org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
>     at org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.<init>(AbstractIDBClient.java:139)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.<init>(AbfsIDBClient.java:74)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
>     at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
>     at org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
>     at org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
>     at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.getRollOverLogMaxSize(LogAggregationIndexedFileController.java:1164)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.initInternal(LogAggregationIndexedFileController.java:149)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.initialize(LogAggregationFileController.java:138)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.<init>(LogAggregationFileControllerFactory.java:77)
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addLogAggregationDelegationToken(YarnClientImpl.java:405)
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:321)
>     at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:303)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:331)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:252)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1576)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1573)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1573)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>     at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:282)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
>     at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:293)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> Caused by: org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if loading a JCEKS file, this means that the filesystem connector is trying to load the same file
> {code}
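>
> For context, the trace shows the cycle: Configuration.getPassword() loads the jceks provider, the keystore lives on abfs so the provider opens an AzureBlobFileSystem, and initializing that filesystem binds the Knox delegation token manager, which calls getPassword() on the same configuration again. A hypothetical standalone illustration (alias name invented, not code from the patch):
> {code}
> // Hypothetical illustration; assumes a Knox IDBroker delegation token binding is
> // configured, so AzureBlobFileSystem.initialize() itself calls conf.getPassword().
> Configuration conf = new Configuration();
> // The keystore lives on abfs, so loading it first initializes an AzureBlobFileSystem...
> conf.set("hadoop.security.credential.provider.path",
>     "jceks://abfs@a@b.c.d/tmp/a.jceks");
> // ...and that initialization re-reads the same keystore for a truststore password,
> // which the credential framework detects and rejects with PathIOException.
> char[] secret = conf.getPassword("gateway.truststore.password"); // illustrative alias
> {code}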



