Posted to common-user@hadoop.apache.org by Seonyeong Bak <re...@gmail.com> on 2013/02/26 06:10:08 UTC

Encryption in HDFS

Hello, I'm a university student.

I implemented AES and Triple DES as CompressionCodec implementations using the
Java Cryptography Architecture (JCA).
Encryption is performed by a client node using the Hadoop API.
Map tasks read blocks from HDFS, and each map task decrypts its own blocks.
I compared my implementation against plain (unencrypted) HDFS.
My cluster consists of 3 nodes (1 master node, 3 worker nodes), and each
machine has a quad-core processor (i7-2600) and 4 GB of memory.
The test input is 1 TB of text, made up of 32 text files of 32 GB each.
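The thread does not include the poster's actual codec code, so as a rough sketch of the JCA building blocks such an implementation would wrap: AES encryption on write via CipherOutputStream and decryption on read via CipherInputStream. The class name, the CBC/PKCS5Padding transformation, and the "random IV prepended as a header" layout are my own assumptions, not details from the thread.

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.SecureRandom;

/** Minimal JCA sketch: AES-encrypt a stream on write, decrypt it on read. */
public class AesStreamSketch {

    /** Encrypts data with AES/CBC; a fresh random IV is prepended as a header. */
    public static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bos.write(iv);                          // store the IV in the header
        try (CipherOutputStream cos = new CipherOutputStream(bos, cipher)) {
            cos.write(plain);                   // closing flushes the final block
        }
        return bos.toByteArray();
    }

    /** Reads the IV header back, then decrypts the remaining stream. */
    public static byte[] decrypt(SecretKey key, byte[] enc) throws Exception {
        ByteArrayInputStream bis = new ByteArrayInputStream(enc);
        byte[] iv = new byte[16];
        if (bis.read(iv) != 16) throw new IllegalArgumentException("missing IV");
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (CipherInputStream cis = new CipherInputStream(bis, cipher)) {
            byte[] buf = new byte[4096];
            for (int n; (n = cis.read(buf)) > 0; ) out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] plain = "hello hdfs".getBytes("UTF-8");
        byte[] roundtrip = decrypt(key, encrypt(key, plain));
        System.out.println(new String(roundtrip, "UTF-8"));
    }
}
```

A real codec would wrap streams like these behind Hadoop's CompressionCodec interface rather than operating on byte arrays.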

I expected encryption to take much more time than plain HDFS, but the
performance does not differ significantly.
The decryption step takes about 5-7% longer than plain HDFS.
The encryption step takes about 20-30% longer than plain HDFS because it is
single-threaded and runs on a single client node.
So the encryption step still has room for improvement.

Could there be an error in my test?

I know there are several implementations for encrypting files in HDFS.
Are these implementations enough to secure HDFS?

best regards,

seonpark

* Sorry for my bad english

Re: Encryption in HDFS

Posted by Michael Segel <mi...@hotmail.com>.
You can encrypt the splits separately. 

The issue of key management is actually a layer above this. 

Looks like the research is on the encryption process with a known key. 
The layer above would handle key management, which can be done a couple of different ways... 

On Feb 26, 2013, at 1:52 PM, java8964 java8964 <ja...@hotmail.com> wrote:

> I am also interested in your research. Can you share some insight about the following questions?
> 
> 1) When you use CompressionCodec, can the encrypted file split? From my understand, there is no encrypt way can make the file decryption individually by block, right?  For example, if I have 1G file, encrypted using AES, how do you or can you decrypt the file block by block, instead of just using one mapper to decrypt the whole file?
> 2) In your CompressionCodec implementation, do you use the DecompressorStream or BlockDecompressorStream? If BlockDecompressorStream, can you share some examples? Right now, I have some problems to use BlockDecompressorStream to do the exactly same thing as you did.
> 3) Do you have any plan to share your code, especially if you did use BlockDecompressorStream and make the encryption file decrypted block by block in the hadoop MapReduce job.
> 
> Thanks
> 
> Yong
> 
> From: renderaid@gmail.com
> Date: Tue, 26 Feb 2013 14:10:08 +0900
> Subject: Encryption in HDFS
> To: user@hadoop.apache.org
> 
> Hello, I'm a university student.
> 
> I implemented AES and Triple DES with CompressionCodec in java cryptography architecture (JCA)
> The encryption is performed by a client node using Hadoop API.
> Map tasks read blocks from HDFS and these blocks are decrypted by each map tasks.
> I tested my implementation with generic HDFS. 
> My cluster consists of 3 nodes (1 master node, 3 worker nodes) and each machines have quad core processor (i7-2600) and 4GB memory. 
> A test input is 1TB text file which consists of 32 multiple text files (1 text file is 32GB)
> 
> I expected that the encryption takes much more time than generic HDFS. 
> The performance does not differ significantly. 
> The decryption step takes about 5-7% more than generic HDFS. 
> The encryption step takes about 20-30% more than generic HDFS because it is implemented by single thread and executed by 1 client node. 
> So the encryption can get more performance. 
> 
> May there be any error in my test?
> 
> I know there are several implementation for encryting files in HDFS. 
> Are these implementations enough to secure HDFS?
> 
> best regards,
> 
> seonpark
> 
> * Sorry for my bad english 


RE: Encryption in HDFS

Posted by java8964 java8964 <ja...@hotmail.com>.
I am also interested in your research. Can you share some insight about the following questions?
1) When you use CompressionCodec, can the encrypted file be split? From my understanding, there is no encryption scheme that lets a file be decrypted block by block independently, right? For example, if I have a 1 GB file encrypted with AES, how do you (or can you) decrypt it block by block, instead of using one mapper to decrypt the whole file?
2) In your CompressionCodec implementation, do you use DecompressorStream or BlockDecompressorStream? If BlockDecompressorStream, can you share some examples? Right now I have some problems using BlockDecompressorStream to do exactly what you did.
3) Do you have any plan to share your code, especially if you did use BlockDecompressorStream and made the encrypted file decryptable block by block in a Hadoop MapReduce job?
Thanks
Yong
From: renderaid@gmail.com
Date: Tue, 26 Feb 2013 14:10:08 +0900
Subject: Encryption in HDFS
To: user@hadoop.apache.org

Hello, I'm a university student.

I implemented AES and Triple DES with CompressionCodec in java cryptography architecture (JCA)
The encryption is performed by a client node using Hadoop API.
Map tasks read blocks from HDFS and these blocks are decrypted by each map tasks.
I tested my implementation with generic HDFS.
My cluster consists of 3 nodes (1 master node, 3 worker nodes) and each machines have quad core processor (i7-2600) and 4GB memory.
A test input is 1TB text file which consists of 32 multiple text files (1 text file is 32GB)

I expected that the encryption takes much more time than generic HDFS. The performance does not differ significantly.
The decryption step takes about 5-7% more than generic HDFS. The encryption step takes about 20-30% more than generic HDFS because it is implemented by single thread and executed by 1 client node.
So the encryption can get more performance.

May there be any error in my test?
I know there are several implementation for encryting files in HDFS. Are these implementations enough to secure HDFS?

best regards,
seonpark
* Sorry for my bad english

Re: Encryption in HDFS

Posted by Ted Yu <yu...@gmail.com>.
The following JIRAs are related to your research:

HADOOP-9331: Hadoop crypto codec framework and crypto codec implementations
<https://issues.apache.org/jira/browse/hadoop-9331> and related sub-tasks

MAPREDUCE-5025: Key Distribution and Management for supporting crypto codec
in Map Reduce <https://issues.apache.org/jira/browse/mapreduce-5025> and
related JIRAs

On Mon, Feb 25, 2013 at 9:10 PM, Seonyeong Bak <re...@gmail.com> wrote:

> Hello, I'm a university student.
>
> I implemented AES and Triple DES with CompressionCodec in java
> cryptography architecture (JCA)
> The encryption is performed by a client node using Hadoop API.
> Map tasks read blocks from HDFS and these blocks are decrypted by each map
> tasks.
> I tested my implementation with generic HDFS.
> My cluster consists of 3 nodes (1 master node, 3 worker nodes) and each
> machines have quad core processor (i7-2600) and 4GB memory.
> A test input is 1TB text file which consists of 32 multiple text files (1
> text file is 32GB)
>
> I expected that the encryption takes much more time than generic HDFS.
> The performance does not differ significantly.
> The decryption step takes about 5-7% more than generic HDFS.
> The encryption step takes about 20-30% more than generic HDFS because it
> is implemented by single thread and executed by 1 client node.
> So the encryption can get more performance.
>
> May there be any error in my test?
>
> I know there are several implementation for encryting files in HDFS.
> Are these implementations enough to secure HDFS?
>
> best regards,
>
> seonpark
>
> * Sorry for my bad english
>
>

Re: Encryption in HDFS

Posted by Seonyeong Bak <re...@gmail.com>.
java8964

1) To my knowledge, there is no way to split an encrypted file in Apache
Hadoop. In Hadoop 1.1.x, however, it is possible to decrypt an encrypted
file block by block using SplittableCompressionCodec and
SplitCompressionInputStream.

SplittableCompressionCodec -
http://hadoop.apache.org/docs/r1.1.1/api/org/apache/hadoop/io/compress/SplittableCompressionCodec.html
SplitCompressionInputStream -
http://hadoop.apache.org/docs/r1.1.1/api/org/apache/hadoop/io/compress/SplitCompressionInputStream.html

2) I don't use DecompressorStream or BlockDecompressorStream. My
implementation was modeled on the bzip2 implementation.
If you study the bzip2 codec code, I think it will help a lot.

3) I plan to share my implementation after cleaning up the code.

Sorry for the late reply.

- seonpark
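The thread never says which cipher mode the poster used, but AES in CTR mode is one standard way to make encrypted data seekable, which is what per-block decryption requires: the keystream for any 16-byte-aligned offset can be computed independently by jumping the counter forward. The class and method names below are my own illustration, not the poster's code.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.security.SecureRandom;

/** Sketch: block-by-block decryption with AES/CTR (seekable keystream). */
public class CtrBlockDecrypt {
    static final int AES_BLOCK = 16;

    /** Returns iv + blocks (mod 2^128); CTR treats the IV as a big-endian counter. */
    static byte[] addCounter(byte[] iv, long blocks) {
        byte[] raw = new BigInteger(1, iv).add(BigInteger.valueOf(blocks)).toByteArray();
        byte[] out = new byte[AES_BLOCK];
        int src = Math.max(0, raw.length - AES_BLOCK);      // keep the low 16 bytes
        System.arraycopy(raw, src, out, AES_BLOCK - (raw.length - src), raw.length - src);
        return out;
    }

    public static byte[] encrypt(byte[] key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(plain);
    }

    /** Decrypts len bytes from a 16-byte-aligned offset, independently of the rest. */
    public static byte[] decryptFrom(byte[] key, byte[] iv, byte[] cipherText,
                                     long offset, int len) throws Exception {
        byte[] ctr = addCounter(iv, offset / AES_BLOCK);    // jump the counter forward
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(ctr));
        return c.doFinal(cipherText, (int) offset, len);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16];
        new SecureRandom().nextBytes(key);
        new SecureRandom().nextBytes(iv);
        byte[] plain = new byte[64];
        for (int i = 0; i < plain.length; i++) plain[i] = (byte) i;
        byte[] enc = encrypt(key, iv, plain);
        byte[] middle = decryptFrom(key, iv, enc, 16, 32);  // bytes 16..47 only
        System.out.println(middle[0] == plain[16] && middle[31] == plain[47]);
    }
}
```

A splittable codec could use the same idea: each input split starts at a known byte offset, so each mapper can derive its own counter and decrypt only its split.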

Re: Encryption in HDFS

Posted by Lance Norskog <go...@gmail.com>.
Excellent!

On 02/25/2013 10:43 PM, Mathias Herberts wrote:
> Encryption without proper key management only addresses the 'stolen
> hard drive' problem.
>
> So far I have not found 100% satisfactory solutions to this hard
> problem. I've written OSS (Open Secret Server) partly to address this
> problem in Pig, i.e. accessing encrypted data without embedding key
> info into the job description file. Proper encrypted data handling
> implies striict code review though, as in the case of Pig databags are
> spillable and you could end up with unencrypted data stored on disk
> without intent.
>
> OSS http://github.com/hbs/oss and the Pig specific code:
> https://github.com/hbs/oss/blob/master/src/main/java/com/geoxp/oss/pig/PigSecretStore.java
>
> On Tue, Feb 26, 2013 at 6:33 AM, Seonyeong Bak <re...@gmail.com> wrote:
>> I didn't handle a key distribution problem because I thought that this
>> problem is more difficult.
>> I simply hardcode a key into the code.
>>
>> A challenge related to security are handled in HADOOP-9331, MAPREDUCE-5025,
>> and so on.


Re: Encryption in HDFS

Posted by Seonyeong Bak <re...@gmail.com>.
Thank you so much for all your comments. :)

Re: Encryption in HDFS

Posted by Mathias Herberts <ma...@gmail.com>.
Encryption without proper key management only addresses the 'stolen
hard drive' problem.

So far I have not found 100% satisfactory solutions to this hard
problem. I've written OSS (Open Secret Server) partly to address this
problem in Pig, i.e. accessing encrypted data without embedding key
info into the job description file. Proper encrypted data handling
implies striict code review though, as in the case of Pig databags are
spillable and you could end up with unencrypted data stored on disk
without intent.

OSS http://github.com/hbs/oss and the Pig specific code:
https://github.com/hbs/oss/blob/master/src/main/java/com/geoxp/oss/pig/PigSecretStore.java

On Tue, Feb 26, 2013 at 6:33 AM, Seonyeong Bak <re...@gmail.com> wrote:
> I didn't handle a key distribution problem because I thought that this
> problem is more difficult.
> I simply hardcode a key into the code.
>
> A challenge related to security are handled in HADOOP-9331, MAPREDUCE-5025,
> and so on.

Re: Encryption in HDFS

Posted by Seonyeong Bak <re...@gmail.com>.
I didn't handle the key distribution problem because I thought it was more
difficult.
I simply hardcoded a key into the code.

Challenges related to security are handled in
HADOOP-9331 <https://issues.apache.org/jira/browse/HADOOP-9331>,
MAPREDUCE-5025 <https://issues.apache.org/jira/browse/MAPREDUCE-5025>, and
so on.

Re: Encryption in HDFS

Posted by lohit <lo...@gmail.com>.
Another challenge of encryption/decryption is key management.
Can you share how this is handled in your implementation/research?

2013/2/25 Seonyeong Bak <re...@gmail.com>

> Hello, I'm a university student.
>
> I implemented AES and Triple DES as a CompressionCodec using the Java
> Cryptography Architecture (JCA).
> The encryption is performed by a client node using the Hadoop API.
> Map tasks read blocks from HDFS, and these blocks are decrypted by each
> map task.
> I compared my implementation against generic HDFS.
> My cluster has 1 master node and 3 worker nodes, and each machine has a
> quad-core processor (i7-2600) and 4GB of memory.
> The test input is 1TB of text consisting of 32 text files (each text
> file is 32GB).
>
> I expected the encryption to take much more time than generic HDFS, but
> the performance does not differ significantly.
> The decryption step takes about 5-7% longer than generic HDFS.
> The encryption step takes about 20-30% longer than generic HDFS because
> it is implemented with a single thread and executed on one client node,
> so the encryption could be made faster.
>
> Could there be an error in my test?
>
> I know there are several implementations for encrypting files in HDFS.
> Are these implementations enough to secure HDFS?
>
> best regards,
>
> seonpark
>
> * Sorry for my bad English
>
>


-- 
Have a Nice Day!
Lohit
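
The codec approach described in the quoted message can be sketched with the JDK's own cipher streams; a Hadoop CompressionCodec's createOutputStream/createInputStream would wrap the raw HDFS streams in essentially the same way. This is illustrative code under that assumption, not the poster's implementation:

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.*;

// Sketch of the stream-wrapping idea behind an "encryption codec":
// writes pass through an encrypting stream, reads through a decrypting one.
public class CryptoStreams {

    static Cipher cipher(int mode, byte[] key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c;
    }

    public static byte[] encrypt(byte[] data, byte[] key, byte[] iv) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (OutputStream out =
                 new CipherOutputStream(sink, cipher(Cipher.ENCRYPT_MODE, key, iv))) {
            out.write(data);          // a codec would wrap the HDFS output stream here
        }
        return sink.toByteArray();
    }

    public static byte[] decrypt(byte[] data, byte[] key, byte[] iv) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (InputStream in =
                 new CipherInputStream(new ByteArrayInputStream(data),
                                       cipher(Cipher.DECRYPT_MODE, key, iv))) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) sink.write(buf, 0, n);
        }
        return sink.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16];
        byte[] enc = encrypt("hello hdfs".getBytes(), key, iv);
        System.out.println(new String(decrypt(enc, key, iv)));
    }
}
```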

Re: Encryption in HDFS

Posted by Ted Dunning <td...@maprtech.com>.
Most recent crypto libraries use the special AES-NI instructions on Intel
processors.

See for instance:
http://software.intel.com/en-us/articles/intel-advanced-encryption-standard-aes-instructions-set
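
Whether a given JDK actually benefits from those instructions can be checked empirically. A rough, hypothetical single-thread throughput probe using plain JCA (class name and iteration counts are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Rough single-thread throughput probe for JCA ciphers. On JDKs whose JIT
// uses AES-NI intrinsics, AES typically runs far faster than software-only
// DESede, consistent with the small AES overheads reported in this thread.
public class CipherBench {

    public static double mbPerSec(String transformation, String algorithm, int keyLen)
            throws Exception {
        byte[] keyBytes = new byte[keyLen];
        for (int i = 0; i < keyLen; i++) keyBytes[i] = (byte) (i + 1); // avoid weak DES keys
        Cipher c = Cipher.getInstance(transformation);
        c.init(Cipher.ENCRYPT_MODE,
               new SecretKeySpec(keyBytes, algorithm),
               new IvParameterSpec(new byte[c.getBlockSize()]));
        byte[] buf = new byte[1 << 20];                   // 1 MiB buffer
        for (int i = 0; i < 8; i++) c.update(buf);        // warm up the JIT
        long start = System.nanoTime();
        for (int i = 0; i < 64; i++) c.update(buf);       // encrypt 64 MiB
        return 64.0 / ((System.nanoTime() - start) / 1e9);
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("AES/CTR    : %.0f MB/s%n",
                mbPerSec("AES/CTR/NoPadding", "AES", 16));
        System.out.printf("DESede/CTR : %.0f MB/s%n",
                mbPerSec("DESede/CTR/NoPadding", "DESede", 24));
    }
}
```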


On Mon, Feb 25, 2013 at 9:10 PM, Seonyeong Bak <re...@gmail.com> wrote:

> Hello, I'm a university student.
>
> I implemented AES and Triple DES as a CompressionCodec using the Java
> Cryptography Architecture (JCA).
> The encryption is performed by a client node using the Hadoop API.
> Map tasks read blocks from HDFS, and these blocks are decrypted by each
> map task.
> I compared my implementation against generic HDFS.
> My cluster has 1 master node and 3 worker nodes, and each machine has a
> quad-core processor (i7-2600) and 4GB of memory.
> The test input is 1TB of text consisting of 32 text files (each text
> file is 32GB).
>
> I expected the encryption to take much more time than generic HDFS, but
> the performance does not differ significantly.
> The decryption step takes about 5-7% longer than generic HDFS.
> The encryption step takes about 20-30% longer than generic HDFS because
> it is implemented with a single thread and executed on one client node,
> so the encryption could be made faster.
>
> Could there be an error in my test?
>
> I know there are several implementations for encrypting files in HDFS.
> Are these implementations enough to secure HDFS?
>
> best regards,
>
> seonpark
>
> * Sorry for my bad English
>
>

RE: Encryption in HDFS

Posted by java8964 java8964 <ja...@hotmail.com>.
I am also interested in your research. Can you share some insight on the following questions?

1) When you use a CompressionCodec, can the encrypted file be split? From my
understanding, there is no encryption scheme that lets the file be decrypted
independently block by block, right? For example, if I have a 1G file
encrypted using AES, how do you (or can you) decrypt the file block by block,
instead of using one mapper to decrypt the whole file?
2) In your CompressionCodec implementation, do you use DecompressorStream or
BlockDecompressorStream? If BlockDecompressorStream, can you share some
examples? Right now, I have some problems getting BlockDecompressorStream to
do exactly what you did.
3) Do you have any plan to share your code, especially if you did use
BlockDecompressorStream and made the encrypted file decryptable block by
block in a Hadoop MapReduce job?

Thanks
Yong
From: renderaid@gmail.com
Date: Tue, 26 Feb 2013 14:10:08 +0900
Subject: Encryption in HDFS
To: user@hadoop.apache.org

Hello, I'm a university student.

I implemented AES and Triple DES as a CompressionCodec using the Java
Cryptography Architecture (JCA). The encryption is performed by a client
node using the Hadoop API. Map tasks read blocks from HDFS, and these
blocks are decrypted by each map task. I compared my implementation against
generic HDFS. My cluster has 1 master node and 3 worker nodes, and each
machine has a quad-core processor (i7-2600) and 4GB of memory. The test
input is 1TB of text consisting of 32 text files (each text file is 32GB).

I expected the encryption to take much more time than generic HDFS, but the
performance does not differ significantly. The decryption step takes about
5-7% longer than generic HDFS. The encryption step takes about 20-30%
longer than generic HDFS because it is implemented with a single thread and
executed on one client node, so the encryption could be made faster.

Could there be an error in my test?

I know there are several implementations for encrypting files in HDFS. Are
these implementations enough to secure HDFS?

best regards,
seonpark

* Sorry for my bad English
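
On question 1 above: with AES in CTR mode, each 16-byte block is encrypted with an independent counter value, so a mapper can start decrypting at any split offset by advancing the counter, without reading the whole file. A hypothetical sketch of that seek logic (class and method names invented for illustration, not the poster's code):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.util.Arrays;

// AES/CTR permits block-independent decryption: given a split's byte offset,
// compute the counter for that offset and discard the partial keystream block.
public class CtrSeek {
    static final int BLOCK = 16;   // AES block size in bytes

    public static Cipher cipherAt(SecretKeySpec key, byte[] iv, long offset, int mode)
            throws Exception {
        // counter value for the block containing `offset`
        BigInteger ctr = new BigInteger(1, iv).add(BigInteger.valueOf(offset / BLOCK));
        byte[] raw = ctr.toByteArray();
        byte[] iv16 = new byte[BLOCK];
        int src = Math.max(0, raw.length - BLOCK);       // right-align into 16 bytes
        System.arraycopy(raw, src, iv16, Math.max(0, BLOCK - raw.length),
                         raw.length - src);
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, key, new IvParameterSpec(iv16));
        c.update(new byte[(int) (offset % BLOCK)]);      // burn partial keystream
        return c;
    }

    public static void main(String[] args) throws Exception {
        SecretKeySpec key = new SecretKeySpec(new byte[16], "AES");
        byte[] iv = new byte[BLOCK];
        byte[] plain = new byte[100];
        for (int i = 0; i < plain.length; i++) plain[i] = (byte) i;

        byte[] enc = cipherAt(key, iv, 0, Cipher.ENCRYPT_MODE).doFinal(plain);
        // decrypt only bytes [37, 100) as if they were one input split
        byte[] tail = cipherAt(key, iv, 37, Cipher.DECRYPT_MODE)
                          .doFinal(Arrays.copyOfRange(enc, 37, enc.length));
        System.out.println(Arrays.equals(tail, Arrays.copyOfRange(plain, 37, plain.length)));
    }
}
```

Note that CTR provides confidentiality but no integrity, so a real design would add per-block authentication on top.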

Re: Encryption in HDFS

Posted by Ted Yu <yu...@gmail.com>.
The following JIRAs are related to your research:

HADOOP-9331: Hadoop crypto codec framework and crypto codec implementations
<https://issues.apache.org/jira/browse/hadoop-9331> and related sub-tasks

MAPREDUCE-5025: Key Distribution and Management for supporting crypto codec
in Map Reduce <https://issues.apache.org/jira/browse/mapreduce-5025> and
related JIRAs

On Mon, Feb 25, 2013 at 9:10 PM, Seonyeong Bak <re...@gmail.com> wrote:

> Hello, I'm a university student.
>
> I implemented AES and Triple DES as a CompressionCodec using the Java
> Cryptography Architecture (JCA).
> The encryption is performed by a client node using the Hadoop API.
> Map tasks read blocks from HDFS, and these blocks are decrypted by each
> map task.
> I compared my implementation against generic HDFS.
> My cluster has 1 master node and 3 worker nodes, and each machine has a
> quad-core processor (i7-2600) and 4GB of memory.
> The test input is 1TB of text consisting of 32 text files (each text
> file is 32GB).
>
> I expected the encryption to take much more time than generic HDFS, but
> the performance does not differ significantly.
> The decryption step takes about 5-7% longer than generic HDFS.
> The encryption step takes about 20-30% longer than generic HDFS because
> it is implemented with a single thread and executed on one client node,
> so the encryption could be made faster.
>
> Could there be an error in my test?
>
> I know there are several implementations for encrypting files in HDFS.
> Are these implementations enough to secure HDFS?
>
> best regards,
>
> seonpark
>
> * Sorry for my bad English
>
>
