Posted to user@ranger.apache.org by Loïc Chanel <lo...@telecomnancy.net> on 2016/09/13 09:20:34 UTC
Exception while creating encryption zone
Hi all,
As I was trying to test Ranger KMS, I ran into some trouble.
I created an AES-128 key named test_lchanel with Ranger KMS, and when I
wanted to use it to encrypt my home directory with hdfs crypto -createZone
-keyName test_lchanel -path /user/lchanel, I got the following exception:
16/09/13 11:11:26 WARN retry.RetryInvocationHandler: Exception while
invoking ClientNamenodeProtocolTranslatorPB.createEncryptionZone over null.
Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
        at org.apache.hadoop.ipc.Client.call(Client.java:1496)
        at org.apache.hadoop.ipc.Client.call(Client.java:1396)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy10.createEncryptionZone(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.createEncryptionZone(ClientNamenodeProtocolTranslatorPB.java:1426)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
        at com.sun.proxy.$Proxy11.createEncryptionZone(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.createEncryptionZone(DFSClient.java:3337)
        at org.apache.hadoop.hdfs.DistributedFileSystem.createEncryptionZone(DistributedFileSystem.java:2233)
        at org.apache.hadoop.hdfs.client.HdfsAdmin.createEncryptionZone(HdfsAdmin.java:307)
        at org.apache.hadoop.hdfs.tools.CryptoAdmin$CreateZoneCommand.run(CryptoAdmin.java:142)
        at org.apache.hadoop.hdfs.tools.CryptoAdmin.run(CryptoAdmin.java:73)
        at org.apache.hadoop.hdfs.tools.CryptoAdmin.main(CryptoAdmin.java:82)
RemoteException:
As I know the CPU must support AES to use such features, I checked each
server's iLO admin interface, and it seems my CPUs support AES-128. In
addition, hadoop checknative returns a correct result:
16/09/13 11:16:48 INFO bzip2.Bzip2Factory: Successfully loaded &
initialized native-bzip2 library system-native
16/09/13 11:16:48 INFO zlib.ZlibFactory: Successfully loaded & initialized
native-zlib library
Native library checking:
hadoop: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
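As a cross-check, AES-NI support can also be read from the OS itself; a minimal sketch, assuming a Linux host (the "aes-ni:" output labels are mine, not Hadoop's):

```shell
# Report whether the CPU advertises the AES-NI instruction set. Linux only:
# reads the flags line of /proc/cpuinfo; prints "aes-ni: yes" or "aes-ni: no".
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
  echo "aes-ni: yes"
else
  echo "aes-ni: no"
fi
```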
Does anyone see where my problem might come from?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
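For readers following along, the sequence being tested boils down to two commands; a sketch, assuming a client node with the Hadoop CLIs and a KMS configured as the key provider (key and path names taken from the message above):

```shell
# Create the AES-128 key in the configured KMS, then make /user/lchanel an
# encryption zone keyed by it. Guarded so the script is also safe to run on
# a machine without the Hadoop CLIs.
if command -v hdfs >/dev/null 2>&1; then
  hadoop key create test_lchanel -size 128
  hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel
  hdfs crypto -listZones   # verify the zone is registered
else
  echo "hadoop/hdfs CLIs not available; commands shown for reference"
fi
```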
Re: Exception while creating encryption zone
Posted by Loïc Chanel <lo...@telecomnancy.net>.
Once again you're right, thanks !
I didn't know that hdfs was blacklisted, but that's a great thing :-)
Regards,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-21 15:13 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>:
> Loïc:
>
> hdfs user is blacklisted to perform DECRYPT_EEK. Try accessing the data
> in encryption zone as other users (after providing necessary KMS and HDFS
> permissions).
>
> Thank you,
> Vel
>
>
> From: Loïc Chanel <lo...@telecomnancy.net>
> Reply-To: "user@ranger.incubator.apache.org" <
> user@ranger.incubator.apache.org>
> Date: Wednesday, September 21, 2016 at 8:46 AM
>
> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
> Subject: Re: Exception while creating encryption zone
>
> Now that my encryption zone is created (directory /user/lchanel/testdir),
> I'm trying to put a test file in it, but it seems there is a little bug.
>
> Even though in Ranger KMS I gave hdfs all rights on all keys, when I
> run hdfs dfs -put test.txt /user/lchanel/testdir, I get "put: User:hdfs
> not allowed to do 'DECRYPT_EEK' on 'test_lchanel'" followed by this stack trace:
>
> 16/09/21 14:22:51 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.complete over null. Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have any open files.
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1496)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>         at com.sun.proxy.$Proxy10.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
>         at com.sun.proxy.$Proxy11.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
>         at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
>         at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
>         at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
>         at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
>         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
>         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> 16/09/21 14:22:51 ERROR hdfs.DFSClient: Failed to close inode 3322459
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have any open files.
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1496)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>         at com.sun.proxy.$Proxy10.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
>         at com.sun.proxy.$Proxy11.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
>         at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
>         at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
>         at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
>         at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
>         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
>         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
> Did I miss something? I definitely gave hdfs the same rights as the
> keyadmin user within the interface of Ranger KMS.
> Thanks for your help,
>
>
> Loïc
>
> Loïc CHANEL
> System Big Data engineer
> MS&T - WASABI - Worldline (Villeurbanne, France)
>
> 2016-09-16 16:49 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
>
>> You were right indeed. Only the keyadmin user was granted these rights (I
>> thought hdfs was not subject to Ranger authorization), and that was the
>> root cause.
>> Thanks a lot !
>>
>> Regards,
>>
>>
>> Loïc
>>
>> Loïc CHANEL
>> System Big Data engineer
>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>
>> 2016-09-16 16:41 GMT+02:00 Velmurugan Periasamy <vperiasamy@hortonworks.com>:
>>
>>> The HDFS user is a superuser only for HDFS; for key operations it needs
>>> to have permissions. Log in to Ranger as keyadmin/keyadmin and see if
>>> there are KMS policies giving access to the “hdfs” user. If not, grant
>>> these permissions.
>>>
>>>
>>> From: Loïc Chanel <lo...@telecomnancy.net>
>>> Reply-To: "user@ranger.incubator.apache.org" <
>>> user@ranger.incubator.apache.org>
>>> Date: Friday, September 16, 2016 at 10:38 AM
>>>
>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org
>>> >
>>> Subject: Re: Exception while creating encryption zone
>>>
>>> As it's the superadmin user, it should be able to do so, right?
>>> If not, how can I test this?
>>>
>>> Loïc CHANEL
>>> System Big Data engineer
>>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>>
>>> 2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <vperiasamy@hortonworks.com>:
>>>
>>>> Loïc:
>>>>
>>>> Can you make sure hdfs user has permissions for key operations
>>>> (especially GENERATE_EEK and GET_METADATA) and try again?
>>>>
>>>> Thank you,
>>>> Vel
>>>>
>>>> From: Loïc Chanel <lo...@telecomnancy.net>
>>>> Reply-To: "user@ranger.incubator.apache.org" <
>>>> user@ranger.incubator.apache.org>
>>>> Date: Friday, September 16, 2016 at 8:53 AM
>>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.
>>>> org>
>>>> Subject: Re: Exception while creating encryption zone
>>>>
>>>> Hi all,
>>>>
>>>> Using tcpdump, I investigated a little more, and I found that there
>>>> isn't any call from the host where I run my "hdfs crypto -createZone
>>>> -keyName test_lchanel -path /user/lchanel" command to port 9292 of the
>>>> host where Ranger KMS is located.
>>>> So it seems to be a configuration or runtime problem.
>>>>
>>>> Does anyone have an idea about where to investigate next?
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> Loïc
>>>>
>>>> Loïc CHANEL
>>>> System Big Data engineer
>>>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>>>
>>>> 2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
>>>>
>>>>> Hi all,
>>>>>
>>>>> As I was trying to test Ranger KMS, I encountered some troubles.
>>>>> I created a AES-128 key with ranger KMS named test_lchanel, and as I
>>>>> wanted to use it to encrypt my home repository using : hdfs crypto
>>>>> -createZone -keyName test_lchanel -path /user/lchanel, I got the following
>>>>> exception :
>>>>>
>>>>> 16/09/13 11:11:26 WARN retry.RetryInvocationHandler: Exception while
>>>>> invoking ClientNamenodeProtocolTranslatorPB.createEncryptionZone over
>>>>> null. Not retrying because try once and fail.
>>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.secu
>>>>> rity.authorize.AuthorizationException):
>>>>> at org.apache.hadoop.ipc.Client.g
>>>>> etRpcResponse(Client.java:1552)
>>>>> at org.apache.hadoop.ipc.Client.call(Client.java:1496)
>>>>> at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>>>>> at org.apache.hadoop.ipc.Protobuf
>>>>> RpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>>>>> at com.sun.proxy.$Proxy10.createEncryptionZone(Unknown Source)
>>>>> at org.apache.hadoop.hdfs.protoco
>>>>> lPB.ClientNamenodeProtocolTranslatorPB.createEncryptionZone(
>>>>> ClientNamenodeProtocolTranslatorPB.java:1426)
>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> at sun.reflect.NativeMethodAccess
>>>>> orImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>> at sun.reflect.DelegatingMethodAc
>>>>> cessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>>>> at org.apache.hadoop.io.retry.Ret
>>>>> ryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
>>>>> at org.apache.hadoop.io.retry.Ret
>>>>> ryInvocationHandler.invoke(RetryInvocationHandler.java:194)
>>>>> at org.apache.hadoop.io.retry.Ret
>>>>> ryInvocationHandler.invoke(RetryInvocationHandler.java:176)
>>>>> at com.sun.proxy.$Proxy11.createEncryptionZone(Unknown Source)
>>>>> at org.apache.hadoop.hdfs.DFSClie
>>>>> nt.createEncryptionZone(DFSClient.java:3337)
>>>>> at org.apache.hadoop.hdfs.Distrib
>>>>> utedFileSystem.createEncryptionZone(DistributedFileSystem.java:2233)
>>>>> at org.apache.hadoop.hdfs.client.
>>>>> HdfsAdmin.createEncryptionZone(HdfsAdmin.java:307)
>>>>> at org.apache.hadoop.hdfs.tools.C
>>>>> ryptoAdmin$CreateZoneCommand.run(CryptoAdmin.java:142)
>>>>> at org.apache.hadoop.hdfs.tools.C
>>>>> ryptoAdmin.run(CryptoAdmin.java:73)
>>>>> at org.apache.hadoop.hdfs.tools.C
>>>>> ryptoAdmin.main(CryptoAdmin.java:82)
>>>>> RemoteException:
>>>>>
>>>>> As I know CPU must support AES to use such things, I checked on each
>>>>> server's ILO admin interface and it seems my CPU support AES-128. In
>>>>> addition, hadoop checknative returns a correct result :
>>>>>
>>>>> 16/09/13 11:16:48 INFO bzip2.Bzip2Factory: Successfully loaded &
>>>>> initialized native-bzip2 library system-native
>>>>> 16/09/13 11:16:48 INFO zlib.ZlibFactory: Successfully loaded &
>>>>> initialized native-zlib library
>>>>> Native library checking:
>>>>> hadoop: true /usr/hdp/2.5.0.0-1245/hadoop/l
>>>>> ib/native/libhadoop.so.1.0.0
>>>>> zlib: true /lib64/libz.so.1
>>>>> snappy: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libsnappy.so.1
>>>>> lz4: true revision:99
>>>>> bzip2: true /lib64/libbz2.so.1
>>>>> openssl: true /usr/lib64/libcrypto.so
>>>>>
>>>>> Does someone see where my problem might come from ?
>>>>>
>>>>> Thanks,
>>>>>
>>>>>
>>>>> Loïc
>>>>>
>>>>> Loïc CHANEL
>>>>> System Big Data engineer
>>>>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>>>>
>>>>
>>>>
>>>
>>
>
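The tcpdump check described earlier in the thread can be reproduced with something like the following sketch. KMS_HOST is a placeholder hostname, and 9292 is the Ranger KMS port mentioned in the thread; the capture command is printed rather than executed, since a live capture needs root and blocks.

```shell
# Build and print the capture command for watching traffic from this client
# host to the Ranger KMS port. Run the printed line as root while issuing
# "hdfs crypto -createZone ..." from another terminal: zero captured packets
# means the client never contacted the KMS, pointing at client-side key
# provider configuration (e.g. dfs.encryption.key.provider.uri).
KMS_HOST="kms.example.com"   # placeholder for the Ranger KMS hostname
KMS_PORT=9292
echo "tcpdump -i any -nn host ${KMS_HOST} and port ${KMS_PORT}"
```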
Re: Exception while creating encryption zone
Posted by Velmurugan Periasamy <vp...@hortonworks.com>.
Loïc:
hdfs user is blacklisted to perform DECRYPT_EEK. Try accessing the data in encryption zone as other users (after providing necessary KMS and HDFS permissions).
Thank you,
Vel
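For context, the blacklist Vel refers to is standard Hadoop KMS behaviour: the hdfs superuser is typically denied DECRYPT_EEK so that an HDFS administrator cannot read users' encrypted data. A sketch of the relevant property as it usually appears in the KMS configuration (the exact file and current value depend on the distribution; on HDP it is managed through Ambari):

```xml
<!-- Users listed here are denied DECRYPT_EEK even if KMS policies would
     otherwise allow it. Shown as an illustration; check your distribution's
     KMS configuration for the actual file and value. -->
<property>
  <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
  <value>hdfs</value>
</property>
```

In practice this means the put should be retried as a regular user (for example the lchanel account) after granting that user DECRYPT_EEK in the Ranger KMS policy, rather than as hdfs.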
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Wednesday, September 21, 2016 at 8:46 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
Now that my encryption zone is created (directory /user/lchanel/testdir), I'm trying to put a test file in it, but it seems there is a little bug.
Even though in Ranger KMS I gave hdfs all rights for all keys, when I make hdfs dfs -put test.txt /user/lchanel/testdir, I get "put: User:hdfs not allowed to do 'DECRYPT_EEK' on 'test_lchanel' " followed by that stack :
16/09/21 14:22:51 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.complete over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/09/21 14:22:51 ERROR hdfs.DFSClient: Failed to close inode 3322459
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Did I miss something ? Because I definitely gave hdfs the same rights than keyadmin user within the interface of Ranger KMS.
Thanks for your help,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:49 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>>:
You were right indeed. Only keyadmin user was granted these rights (as I thought hdfs was not submitted to Ranger authorizations), and it was the root issue.
Thanks a lot !
Regards,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:41 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>>:
HDFS user is superuser only for HDFS, for key operations it needs to have permissions. Login to Ranger using keyadmin/keyadmin and see if there are KMS policies giving access to “hdfs” user. If not, grant these permissions.
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Friday, September 16, 2016 at 10:38 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
As he's the superdamin user, he should be able to do so, right ?
If not, how can I test this ?
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>>:
Loïc:
Can you make sure hdfs user has permissions for key operations (especially GENERATE_EEK and GET_METADATA) and try again?
Thank you,
Vel
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Friday, September 16, 2016 at 8:53 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
Hi all,
Using TCPDUMP, I investigated a little bit more, and I found that there isn't any call from the host I make my "hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel" to the port 9292 of the host where Ranger KMS is located.
So it seems it is a configuration or runtime problem.
Does anyone have an idea about where to investigate next ?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>>:
Hi all,
As I was trying to test Ranger KMS, I encountered some troubles.
I created a AES-128 key with ranger KMS named test_lchanel, and as I wanted to use it to encrypt my home repository using : hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel, I got the following exception :
16/09/13 11:11:26 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.createEncryptionZone over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.createEncryptionZone(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.createEncryptionZone(ClientNamenodeProtocolTranslatorPB.java:1426)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.createEncryptionZone(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.createEncryptionZone(DFSClient.java:3337)
at org.apache.hadoop.hdfs.DistributedFileSystem.createEncryptionZone(DistributedFileSystem.java:2233)
at org.apache.hadoop.hdfs.client.HdfsAdmin.createEncryptionZone(HdfsAdmin.java:307)
at org.apache.hadoop.hdfs.tools.CryptoAdmin$CreateZoneCommand.run(CryptoAdmin.java:142)
at org.apache.hadoop.hdfs.tools.CryptoAdmin.run(CryptoAdmin.java:73)
at org.apache.hadoop.hdfs.tools.CryptoAdmin.main(CryptoAdmin.java:82)
RemoteException:
As I know CPU must support AES to use such things, I checked on each server's ILO admin interface and it seems my CPU support AES-128. In addition, hadoop checknative returns a correct result :
16/09/13 11:16:48 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/09/13 11:16:48 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
Does anyone see where my problem might come from?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
Re: Exception while creating encryption zone
Posted by Loïc Chanel <lo...@telecomnancy.net>.
Now that my encryption zone is created (directory /user/lchanel/testdir),
I'm trying to put a test file in it, but it seems there is a problem.
Even though in Ranger KMS I gave hdfs all rights for all keys, when I
run hdfs dfs -put test.txt /user/lchanel/testdir, I get "put: User:hdfs
not allowed to do 'DECRYPT_EEK' on 'test_lchanel'" followed by this stack:
16/09/21 14:22:51 WARN retry.RetryInvocationHandler: Exception while
invoking ClientNamenodeProtocolTranslatorPB.complete over null. Not
retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File
does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have
any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/09/21 14:22:51 ERROR hdfs.DFSClient: Failed to close inode 3322459
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /user/lchanel/testdir/test.txt._COPYING_ (inode 3322459): File
does not exist. Holder DFSClient_NONMAPREDUCE_1559190789_1 does not have
any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Did I miss something? Because I definitely gave hdfs the same rights as
the keyadmin user within the Ranger KMS interface.
Thanks for your help,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
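[Editor's note] The two errors in this thread suggest which KMS key operations an HDFS client action needs. The sketch below is illustrative only: the operation names (GET_METADATA, GENERATE_EEK, DECRYPT_EEK) are real Hadoop KMS ACL operations mentioned in the thread, but the per-action mapping is an assumption inferred from the errors, not official documentation.

```python
# Illustrative mapping, inferred from this thread (not an official API):
# which KMS key operations each HDFS action on an encryption zone needs
# the calling user to be allowed.
KMS_OPS = {
    # creating the zone failed until GET_METADATA/GENERATE_EEK were granted
    "createZone": ["GET_METADATA", "GENERATE_EEK"],
    # hdfs dfs -put failed with "not allowed to do 'DECRYPT_EEK'"
    "put": ["GENERATE_EEK", "DECRYPT_EEK"],
    # reading a file requires decrypting its encrypted data key
    "get": ["DECRYPT_EEK"],
}

def required_ops(action: str) -> list[str]:
    """Return the KMS key operations assumed necessary for an HDFS action."""
    return KMS_OPS.get(action, [])

print(required_ops("put"))  # ['GENERATE_EEK', 'DECRYPT_EEK']
```

So a Ranger KMS policy for the user running the put would need at least DECRYPT_EEK on the zone's key, not just HDFS-level rights.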
2016-09-16 16:49 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
> You were right indeed. Only keyadmin user was granted these rights (as I
> thought hdfs was not subject to Ranger authorization), and it was the
> root issue.
> Thanks a lot !
>
> Regards,
>
>
> Loïc
>
> Loïc CHANEL
> System Big Data engineer
> MS&T - WASABI - Worldline (Villeurbanne, France)
>
> 2016-09-16 16:41 GMT+02:00 Velmurugan Periasamy <
> vperiasamy@hortonworks.com>:
>
>> HDFS user is superuser only for HDFS, for key operations it needs to have
>> permissions. Login to Ranger using keyadmin/keyadmin and see if there are
>> KMS policies giving access to “hdfs” user. If not, grant these permissions.
>>
>>
>> From: Loïc Chanel <lo...@telecomnancy.net>
>> Reply-To: "user@ranger.incubator.apache.org" <
>> user@ranger.incubator.apache.org>
>> Date: Friday, September 16, 2016 at 10:38 AM
>>
>> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
>> Subject: Re: Exception while creating encryption zone
>>
>> As he's the superadmin user, he should be able to do so, right?
>> If not, how can I test this ?
>>
>> Loïc CHANEL
>> System Big Data engineer
>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>
>> 2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <
>> vperiasamy@hortonworks.com>:
>>
>>> Loïc:
>>>
>>> Can you make sure hdfs user has permissions for key operations
>>> (especially GENERATE_EEK and GET_METADATA) and try again?
>>>
>>> Thank you,
>>> Vel
>>>
>>> From: Loïc Chanel <lo...@telecomnancy.net>
>>> Reply-To: "user@ranger.incubator.apache.org" <
>>> user@ranger.incubator.apache.org>
>>> Date: Friday, September 16, 2016 at 8:53 AM
>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org
>>> >
>>> Subject: Re: Exception while creating encryption zone
>>>
>>> Hi all,
>>>
>>> Using TCPDUMP, I investigated a little bit more, and I found that there
>>> isn't any call from the host where I run my "hdfs crypto -createZone
>>> -keyName test_lchanel -path /user/lchanel" to the port 9292 of the host
>>> where Ranger KMS is located.
>>> So it seems it is a configuration or runtime problem.
>>>
>>> Does anyone have an idea about where to investigate next ?
>>>
>>> Thanks,
>>>
>>>
>>> Loïc
>>>
>>> Loïc CHANEL
>>> System Big Data engineer
>>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>>
>>> 2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
>
Re: Exception while creating encryption zone
Posted by Loïc Chanel <lo...@telecomnancy.net>.
You were right indeed. Only keyadmin user was granted these rights (as I
thought hdfs was not subject to Ranger authorization), and it was the
root issue.
Thanks a lot !
Regards,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:41 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>:
> HDFS user is superuser only for HDFS, for key operations it needs to have
> permissions. Login to Ranger using keyadmin/keyadmin and see if there are
> KMS policies giving access to “hdfs” user. If not, grant these permissions.
>
>
> From: Loïc Chanel <lo...@telecomnancy.net>
> Reply-To: "user@ranger.incubator.apache.org" <
> user@ranger.incubator.apache.org>
> Date: Friday, September 16, 2016 at 10:38 AM
>
> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
> Subject: Re: Exception while creating encryption zone
>
> As he's the superadmin user, he should be able to do so, right?
> If not, how can I test this ?
>
> Loïc CHANEL
> System Big Data engineer
> MS&T - WASABI - Worldline (Villeurbanne, France)
>
> 2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <
> vperiasamy@hortonworks.com>:
>
>> Loïc:
>>
>> Can you make sure hdfs user has permissions for key operations
>> (especially GENERATE_EEK and GET_METADATA) and try again?
>>
>> Thank you,
>> Vel
>>
>> From: Loïc Chanel <lo...@telecomnancy.net>
>> Reply-To: "user@ranger.incubator.apache.org" <
>> user@ranger.incubator.apache.org>
>> Date: Friday, September 16, 2016 at 8:53 AM
>> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
>> Subject: Re: Exception while creating encryption zone
>>
>> Hi all,
>>
>> Using TCPDUMP, I investigated a little bit more, and I found that there
>> isn't any call from the host where I run my "hdfs crypto -createZone -keyName
>> test_lchanel -path /user/lchanel" to the port 9292 of the host where
>> Ranger KMS is located.
>> So it seems it is a configuration or runtime problem.
>>
>> Does anyone have an idea about where to investigate next ?
>>
>> Thanks,
>>
>>
>> Loïc
>>
>> Loïc CHANEL
>> System Big Data engineer
>> MS&T - WASABI - Worldline (Villeurbanne, France)
>>
>> 2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
>
Re: Exception while creating encryption zone
Posted by Velmurugan Periasamy <vp...@hortonworks.com>.
The HDFS user is a superuser only for HDFS; for key operations it needs explicit permissions. Log in to Ranger using keyadmin/keyadmin and see if there are KMS policies giving access to the “hdfs” user. If not, grant these permissions.
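[Editor's note] The point above can be sketched as a small model: Ranger KMS authorizes each key operation per user and key, independently of HDFS superuser status. This is a minimal illustrative sketch, not Ranger's actual implementation; the policy entries shown are hypothetical.

```python
# Hypothetical KMS-style policy table: (user, key) -> allowed key operations.
# Before the fix in this thread, only keyadmin had an entry, so every
# check for the "hdfs" user failed regardless of its HDFS superuser role.
policies = {
    ("keyadmin", "test_lchanel"): {"CREATE", "DELETE", "ROLLOVER", "GET",
                                   "GET_KEYS", "GET_METADATA",
                                   "GENERATE_EEK", "DECRYPT_EEK"},
    ("hdfs", "test_lchanel"): {"GET_METADATA", "GENERATE_EEK",
                               "DECRYPT_EEK"},  # granted after Vel's advice
}

def is_allowed(user: str, key: str, op: str) -> bool:
    """True only if some policy explicitly grants `op` on `key` to `user`."""
    return op in policies.get((user, key), set())

print(is_allowed("hdfs", "test_lchanel", "GENERATE_EEK"))  # True
print(is_allowed("hdfs", "test_lchanel", "DELETE"))        # False
```

The key design point: there is no superuser bypass in the lookup, which is why granting the KMS policy (and not any HDFS-side right) resolved the thread's errors.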
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Friday, September 16, 2016 at 10:38 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
As he's the superadmin user, he should be able to do so, right?
If not, how can I test this ?
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>>:
Loïc:
Can you make sure hdfs user has permissions for key operations (especially GENERATE_EEK and GET_METADATA) and try again?
Thank you,
Vel
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Friday, September 16, 2016 at 8:53 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
Hi all,
Using TCPDUMP, I investigated a little bit more, and I found that there isn't any call from the host where I run my "hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel" to port 9292 of the host where Ranger KMS is located.
So it seems it is a configuration or runtime problem.
Does anyone have an idea about where to investigate next ?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>>:
Re: Exception while creating encryption zone
Posted by Loïc Chanel <lo...@telecomnancy.net>.
As he's the superadmin user, he should be able to do so, right?
If not, how can I test this ?
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-16 16:20 GMT+02:00 Velmurugan Periasamy <vp...@hortonworks.com>:
> Loïc:
>
> Can you make sure hdfs user has permissions for key operations
> (especially GENERATE_EEK and GET_METADATA) and try again?
>
> Thank you,
> Vel
>
> From: Loïc Chanel <lo...@telecomnancy.net>
> Reply-To: "user@ranger.incubator.apache.org" <
> user@ranger.incubator.apache.org>
> Date: Friday, September 16, 2016 at 8:53 AM
> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
> Subject: Re: Exception while creating encryption zone
>
> Hi all,
>
> Using TCPDUMP, I investigated a little bit more, and I found that there
> isn't any call from the host where I run my "hdfs crypto -createZone -keyName
> test_lchanel -path /user/lchanel" to the port 9292 of the host where
> Ranger KMS is located.
> So it seems it is a configuration or runtime problem.
>
> Does anyone have an idea about where to investigate next ?
>
> Thanks,
>
>
> Loïc
>
> Loïc CHANEL
> System Big Data engineer
> MS&T - WASABI - Worldline (Villeurbanne, France)
>
> 2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>:
Re: Exception while creating encryption zone
Posted by Velmurugan Periasamy <vp...@hortonworks.com>.
Loïc:
Can you make sure hdfs user has permissions for key operations (especially GENERATE_EEK and GET_METADATA) and try again?
Thank you,
Vel
From: Loïc Chanel <lo...@telecomnancy.net>>
Reply-To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Date: Friday, September 16, 2016 at 8:53 AM
To: "user@ranger.incubator.apache.org<ma...@ranger.incubator.apache.org>" <us...@ranger.incubator.apache.org>>
Subject: Re: Exception while creating encryption zone
Hi all,
Using TCPDUMP, I investigated a little bit more, and I found that there isn't any call from the host where I run my "hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel" to port 9292 of the host where Ranger KMS is located.
So it seems it is a configuration or runtime problem.
Does anyone have an idea about where to investigate next ?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)
2016-09-13 11:20 GMT+02:00 Loïc Chanel <lo...@telecomnancy.net>>:
Hi all,
As I was trying to test Ranger KMS, I encountered some troubles.
I created a AES-128 key with ranger KMS named test_lchanel, and as I wanted to use it to encrypt my home repository using : hdfs crypto -createZone -keyName test_lchanel -path /user/lchanel, I got the following exception :
16/09/13 11:11:26 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.createEncryptionZone over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.createEncryptionZone(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.createEncryptionZone(ClientNamenodeProtocolTranslatorPB.java:1426)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.createEncryptionZone(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.createEncryptionZone(DFSClient.java:3337)
at org.apache.hadoop.hdfs.DistributedFileSystem.createEncryptionZone(DistributedFileSystem.java:2233)
at org.apache.hadoop.hdfs.client.HdfsAdmin.createEncryptionZone(HdfsAdmin.java:307)
at org.apache.hadoop.hdfs.tools.CryptoAdmin$CreateZoneCommand.run(CryptoAdmin.java:142)
at org.apache.hadoop.hdfs.tools.CryptoAdmin.run(CryptoAdmin.java:73)
at org.apache.hadoop.hdfs.tools.CryptoAdmin.main(CryptoAdmin.java:82)
RemoteException:
As I understand the CPU must support AES to use such features, I checked each server's iLO admin interface, and it seems my CPUs do support AES-128. In addition, hadoop checknative returns a correct result:
16/09/13 11:16:48 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/09/13 11:16:48 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
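Also, since the RemoteException wraps an AuthorizationException, one thing I suspect (not yet verified) is the KMS proxy-user configuration, which as far as I know has to allow the HDFS service user to act on behalf of end users. In kms-site.xml that would look roughly like this (the user name and the wildcard values are illustrative, not my actual settings):

```xml
<!-- kms-site.xml: allow the 'hdfs' service user to proxy requests
     on behalf of end users (user name and values illustrative) -->
<property>
  <name>hadoop.kms.proxyuser.hdfs.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
```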
Does anyone see where my problem might come from?
Thanks,
Loïc
Loïc CHANEL
System Big Data engineer
MS&T - WASABI - Worldline (Villeurbanne, France)