Posted to user@hadoop.apache.org by Gavin Yue <yu...@gmail.com> on 2016/01/09 01:45:20 UTC

how to quickly fs -cp dir with thousand files?

I want to cp a dir with over 8000 files to another dir in the same HDFS,
but the copy is really slow since it copies the files one by one.
Is there a faster way to do this with the Java FileSystem or FileUtil API?

Thanks.
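
For reference, the one-at-a-time copy described above looks roughly like
this through the FileUtil API (a minimal sketch; the paths are
hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class SerialDirCopy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // FileUtil.copy recurses into the directory and copies each file
        // through this single client process, one file at a time -- which
        // is why a directory with 8000+ files is slow to copy this way.
        FileUtil.copy(fs, new Path("/user/hadoop/src"),
                      fs, new Path("/user/hadoop/dst"),
                      false /* deleteSource */, conf);
      }
    }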

Re: how to quickly fs -cp dir with thousand files?

Posted by Gavin Yue <yu...@gmail.com>.
Yes, I need two distinct copies. I tried Chris's solution, and distcp
indeed works.
Thank you all.

On Sun, Jan 10, 2016 at 3:00 PM, Chris Nauroth <cn...@hortonworks.com>
wrote:

> Yes, certainly, if you only need it in one spot, then -mv is a fast
> metadata-only operation.  I was under the impression that Gavin really
> wanted to achieve 2 distinct copies.  Perhaps I was mistaken.
>
> --Chris Nauroth
>
> From: sandeep vura <sa...@gmail.com>
> Date: Sunday, January 10, 2016 at 6:23 AM
> To: Chris Nauroth <cn...@hortonworks.com>
> Cc: Gavin Yue <yu...@gmail.com>, "user@hadoop.apache.org" <
> user@hadoop.apache.org>
> Subject: Re: how to quickly fs -cp dir with thousand files?
>
> Hi Chris,
>
> Instead of copying the files, use the mv command:
>
>
>    - hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
>
>
> Sandeep.v
>
>
> On Sat, Jan 9, 2016 at 9:55 AM, Chris Nauroth <cn...@hortonworks.com>
> wrote:
>
>> DistCp is capable of running large copies like this in distributed
>> fashion, implemented as a MapReduce job.
>>
>> http://hadoop.apache.org/docs/r2.7.1/hadoop-distcp/DistCp.html
>>
>> A lot of the literature on DistCp talks about use cases for copying
>> across different clusters, but it's also completely legitimate to run
>> DistCp within the same cluster.
>>
>> --Chris Nauroth
>>
>> From: Gavin Yue <yu...@gmail.com>
>> Date: Friday, January 8, 2016 at 4:45 PM
>> To: "user@hadoop.apache.org" <us...@hadoop.apache.org>
>> Subject: how to quickly fs -cp dir with thousand files?
>>
>> I want to cp a dir with over 8000 files to another dir in the same HDFS,
>> but the copy is really slow since it copies the files one by one.
>> Is there a faster way to do this with the Java FileSystem or FileUtil API?
>>
>> Thanks.
>>
>>
>

Re: how to quickly fs -cp dir with thousand files?

Posted by Chris Nauroth <cn...@hortonworks.com>.
Yes, certainly, if you only need it in one spot, then -mv is a fast metadata-only operation.  I was under the impression that Gavin really wanted to achieve 2 distinct copies.  Perhaps I was mistaken.

--Chris Nauroth
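
The same metadata-only move is available from the Java API as
FileSystem.rename; a minimal sketch, with hypothetical paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FastMove {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // rename only updates NameNode metadata; no block data is moved,
        // so it is fast no matter how many files the directory contains.
        boolean renamed = fs.rename(new Path("/user/hadoop/src"),
                                    new Path("/user/hadoop/dst"));
        System.out.println("renamed: " + renamed);
      }
    }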

From: sandeep vura <sa...@gmail.com>
Date: Sunday, January 10, 2016 at 6:23 AM
To: Chris Nauroth <cn...@hortonworks.com>
Cc: Gavin Yue <yu...@gmail.com>, "user@hadoop.apache.org" <us...@hadoop.apache.org>
Subject: Re: how to quickly fs -cp dir with thousand files?

Hi Chris,

Instead of copying the files, use the mv command:


   - hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2

Sandeep.v


On Sat, Jan 9, 2016 at 9:55 AM, Chris Nauroth <cn...@hortonworks.com> wrote:
DistCp is capable of running large copies like this in distributed fashion, implemented as a MapReduce job.

http://hadoop.apache.org/docs/r2.7.1/hadoop-distcp/DistCp.html

A lot of the literature on DistCp talks about use cases for copying across different clusters, but it's also completely legitimate to run DistCp within the same cluster.

--Chris Nauroth

From: Gavin Yue <yu...@gmail.com>
Date: Friday, January 8, 2016 at 4:45 PM
To: "user@hadoop.apache.org<ma...@hadoop.apache.org>" <us...@hadoop.apache.org>>
Subject: how to quickly fs -cp dir with thousand files?

I want to cp a dir with over 8000 files to another dir in the same HDFS, but the copy is really slow since it copies the files one by one.
Is there a faster way to do this with the Java FileSystem or FileUtil API?

Thanks.



Re: how to quickly fs -cp dir with thousand files?

Posted by sandeep vura <sa...@gmail.com>.
Hi Chris,

Instead of copying the files, use the mv command:


   - hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2


Sandeep.v


On Sat, Jan 9, 2016 at 9:55 AM, Chris Nauroth <cn...@hortonworks.com>
wrote:

> DistCp is capable of running large copies like this in distributed
> fashion, implemented as a MapReduce job.
>
> http://hadoop.apache.org/docs/r2.7.1/hadoop-distcp/DistCp.html
>
> A lot of the literature on DistCp talks about use cases for copying across
> different clusters, but it's also completely legitimate to run DistCp
> within the same cluster.
>
> --Chris Nauroth
>
> From: Gavin Yue <yu...@gmail.com>
> Date: Friday, January 8, 2016 at 4:45 PM
> To: "user@hadoop.apache.org" <us...@hadoop.apache.org>
> Subject: how to quickly fs -cp dir with thousand files?
>
> I want to cp a dir with over 8000 files to another dir in the same HDFS,
> but the copy is really slow since it copies the files one by one.
> Is there a faster way to do this with the Java FileSystem or FileUtil API?
>
> Thanks.
>
>

Re: how to quickly fs -cp dir with thousand files?

Posted by Chris Nauroth <cn...@hortonworks.com>.
DistCp is capable of running large copies like this in distributed fashion, implemented as a MapReduce job.

http://hadoop.apache.org/docs/r2.7.1/hadoop-distcp/DistCp.html

A lot of the literature on DistCp talks about use cases for copying across different clusters, but it's also completely legitimate to run DistCp within the same cluster.

--Chris Nauroth
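
A same-cluster run needs nothing special on the command line, e.g.
hadoop distcp /user/hadoop/src /user/hadoop/dst. It can also be driven
from Java; a minimal sketch against the Hadoop 2.x API (the
DistCpOptions constructor used here was replaced by a builder in later
releases, and the paths are hypothetical):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.tools.DistCp;
    import org.apache.hadoop.tools.DistCpOptions;

    public class SameClusterDistCp {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DistCpOptions options = new DistCpOptions(
            Collections.singletonList(new Path("/user/hadoop/src")),
            new Path("/user/hadoop/dst"));
        // execute() submits a MapReduce job that copies the files in
        // parallel across the cluster instead of through one client.
        new DistCp(conf, options).execute();
      }
    }

Note that the DistCp class lives in the hadoop-distcp artifact, which
has to be on the classpath.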

From: Gavin Yue <yu...@gmail.com>
Date: Friday, January 8, 2016 at 4:45 PM
To: "user@hadoop.apache.org<ma...@hadoop.apache.org>" <us...@hadoop.apache.org>>
Subject: how to quickly fs -cp dir with thousand files?

I want to cp a dir with over 8000 files to another dir in the same HDFS, but the copy is really slow since it copies the files one by one.
Is there a faster way to do this with the Java FileSystem or FileUtil API?

Thanks.

