Posted to mapreduce-user@hadoop.apache.org by Philip Shon <ph...@gmail.com> on 2015/05/07 18:38:57 UTC

Testing HDFS TDE - "Failed to close inode"/"Illegal key size" error

I am testing out the transparent data encryption (TDE) feature of HDFS and am receiving the following error when trying to copy a file into the encryption zone.

[hdfs@svr501 ~]$ hdfs dfs -copyFromLocal 201502.txt.gz  /secure
copyFromLocal: java.security.InvalidKeyException: Illegal key size
15/05/07 10:59:23 ERROR hdfs.DFSClient: Failed to close inode 589242
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /secure/201502.txt.gz._COPYING_ (inode 589242): File does not exist. Holder DFSClient_NONMAPR66860818_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3519)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3607)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3577)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:700)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:526)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy14.complete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:443)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy15.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2251)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2235)
        at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:938)
        at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:976)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:899)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2687)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2704)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

I have the following keys and zones defined:

[hdfs@svr501 ~]$  hadoop key list -metadata
Listing keys for KeyProvider: KMSClientProvider[http://svr504.corp.xxxxx.com:16000/kms/v1/]
key1 : cipher: AES/CTR/NoPadding, length: 256, description: null, created: Thu May 07 10:58:00 CDT 2015, version: 1, attributes: [key.acl.name=key1]


[hdfs@svr501 ~]$ hdfs crypto -listZones
/secure  key1
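
For reference, a key and zone like these are typically created with commands along the following lines (an illustration of the usual steps, not taken from the original post):

    hadoop key create key1 -size 256 -cipher 'AES/CTR/NoPadding'
    hdfs dfs -mkdir /secure
    hdfs crypto -createZone -keyName key1 -path /secure

Note the 256-bit key length, which turns out to matter below.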

The following is from the kms.log file:

2015-05-07 11:31:03,992 WARN  AuthenticationFilter - Authentication exception: Anonymous requests are disallowed
org.apache.hadoop.security.authentication.client.AuthenticationException: Anonymous requests are disallowed
        at org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
        at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:347)
        at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
        at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:745)
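
The warning above comes from the KMS's PseudoAuthenticationHandler rejecting a request that carried no user name; under pseudo (simple) authentication, clients identify themselves with a user.name query parameter. A hypothetical illustration against the KMS endpoint listed earlier, not taken from the original post:

    # Anonymous request: rejected, logging the warning above
    curl 'http://svr504.corp.xxxxx.com:16000/kms/v1/keys/names'
    # The same request with a user name is accepted under pseudo authentication
    curl 'http://svr504.corp.xxxxx.com:16000/kms/v1/keys/names?user.name=hdfs'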

Any assistance would be greatly appreciated.

-Phil Shon

Re: Testing HDFS TDE - "Failed to close inode"/"Illegal key size" error

Posted by Philip Shon <ph...@gmail.com>.
Thanks Chris, that did the trick.

I guess that exception in the kms.log file is an unrelated issue, because that
exception was still thrown when the copy worked.

On Thu, May 7, 2015 at 12:21 PM, Chris Nauroth <cn...@hortonworks.com>
wrote:

>   Hi Philip,
>
>  I see that you used a key size of 256.  This would require installation
> of the JCE unlimited strength policy files.
>
>
> http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
>
>  Alternatively, if you're just testing right now and can accept a smaller
> key size, then you could test using a key size of 128 or 192.  You could
> then decide later whether or not your production usage requires use of a
> 256-bit key.
>
>  --Chris Nauroth
>

Re: Testing HDFS TDE - "Failed to close inode"/"Illegal key size" error

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi Philip,

I see that you used a key size of 256.  This would require installation of the JCE unlimited strength policy files.

http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
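
On a typical Linux installation, the two policy jars from that download are unpacked into the JRE used by the Hadoop daemons and clients, along these lines (file and directory names assumed from the JDK 7 package layout):

    unzip UnlimitedJCEPolicyJDK7.zip
    # Replace the default export-strength policy jars with the unlimited ones
    cp UnlimitedJCEPolicy/local_policy.jar UnlimitedJCEPolicy/US_export_policy.jar \
       "$JAVA_HOME/jre/lib/security/"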

Alternatively, if you're just testing right now and can accept a smaller key size, then you could test using a key size of 128 or 192.  You could then decide later whether or not your production usage requires use of a 256-bit key.
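
You can confirm which policy is in effect by asking the JVM for its maximum allowed AES key length, for example with the jrunscript tool shipped in the JDK (a quick check, not from the original message):

    # Prints 128 under the default policy, 2147483647 with unlimited strength
    jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'

If you test with a smaller key (e.g. hadoop key create key128 -size 128), keep in mind that an encryption zone's key is fixed when the zone is created, so the smaller key needs a freshly created zone.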

--Chris Nauroth
