Posted to common-issues@hadoop.apache.org by "Wei-Chiu Chuang (JIRA)" <ji...@apache.org> on 2018/04/25 10:28:00 UTC

[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

    [ https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16452021#comment-16452021 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
------------------------------------------

Hi Pablo, thanks for filing the issue.

What you describe is not a supported use case: the KMS cannot use HDFS as its backing storage. As you can imagine, if HDFS held the KMS keystore, then each HDFS client file access would loop through HDFS NameNode --> KMS --> HDFS NameNode --> KMS ..., a circular dependency.

The file-based KMS can use keystore files on the local file system.
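For reference, a minimal kms-site.xml entry along those lines, using the documented default local keystore location (the path is adjustable), looks like this:
{code:java}
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/${user.home}/kms.keystore</value>
  <description>
    URI of the backing KeyProvider for the KMS. The "file" scheme keeps
    the keystore on the KMS host's local file system.
  </description>
</property>
{code}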

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --------------------------------------------------------------
>
>                 Key: HADOOP-15412
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15412
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>    Affects Versions: 2.7.2, 2.9.0
>         Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>            Reporter: Pablo San José
>            Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key provider, but this functionality appears to be failing.
> Following the Hadoop docs on the matter, I added the following property to my kms-site.xml:
> {code:java}
> <property> 
>    <name>hadoop.kms.key.provider.uri</name>
>    <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value> 
>    <description> 
>       URI of the backing KeyProvider for the KMS. 
>    </description> 
> </property>{code}
> That path exists in HDFS, and I expected the KMS to create the file test.jceks for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started
> REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
> Stacktrace:
> ---------------------------------------------------
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>     at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>     at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>     at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>     at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
>     at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>     at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
>     at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
>     at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
>     at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>     at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
>     at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
>     at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
>     at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
>     at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
>     at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
>     at org.apache.catalina.core.StandardService.start(StandardService.java:525)
>     at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
>     at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
>     at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
> {code}
>  
> From what I can understand, this error means that no FileSystem implementation is registered for the "hdfs" scheme. I have looked up this error, but the reported causes always involve missing hdfs-client jars after an upgrade, which does not apply here (this is a fresh installation). I have tested with Hadoop 2.7.2 and 2.9.0.
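> If I read the stack trace correctly, the failure comes down to the scheme lookup sketched below (illustrative code; it assumes only hadoop-common on the classpath, which is exactly the situation that reproduces the error):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> 
> public class SchemeLookup {
>     public static void main(String[] args) throws Exception {
>         // FileSystem.getFileSystemClass resolves a scheme through the
>         // fs.<scheme>.impl configuration key and the FileSystem
>         // implementations discovered on the classpath; it throws
>         // UnsupportedFileSystemException when "hdfs" matches neither.
>         Configuration conf = new Configuration();
>         Class<? extends FileSystem> clazz =
>                 FileSystem.getFileSystemClass("hdfs", conf);
>         System.out.println("hdfs resolves to " + clazz.getName());
>     }
> }
> {code}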
> Thank you in advance.


