Posted to common-dev@hadoop.apache.org by "Koji Noguchi (JIRA)" <ji...@apache.org> on 2007/09/10 18:37:29 UTC
[jira] Created: (HADOOP-1866) distcp requires large heapsize when copying many files
distcp requires large heapsize when copying many files
------------------------------------------------------
Key: HADOOP-1866
URL: https://issues.apache.org/jira/browse/HADOOP-1866
Project: Hadoop
Issue Type: Bug
Components: util
Reporter: Koji Noguchi
Priority: Minor
Trying to distcp 1.5 million files with a 1 GB client heapsize failed with an OutOfMemoryError.
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.compile(Pattern.java:1438)
at java.util.regex.Pattern.<init>(Pattern.java:1130)
at java.util.regex.Pattern.compile(Pattern.java:846)
at java.lang.String.replace(String.java:2208)
at org.apache.hadoop.fs.Path.normalizePath(Path.java:147)
at org.apache.hadoop.fs.Path.initialize(Path.java:137)
at org.apache.hadoop.fs.Path.<init>(Path.java:126)
at org.apache.hadoop.dfs.DfsPath.<init>(DfsPath.java:32)
at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.listPaths(DistributedFileSystem.java:214)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:483)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:496)
at org.apache.hadoop.fs.ChecksumFileSystem.listPaths(ChecksumFileSystem.java:539)
at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.setup(CopyFiles.java:327)
at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:762)
at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:808)
at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:818)
It would be nice if distcp didn't require gigabytes of heapsize when copying a large number of files.
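A rough back-of-envelope makes the failure plausible (the per-entry byte count below is an assumed figure for illustration, not a measurement of the actual Path/DfsPath classes):

```java
// Sketch: why buffering 1.5 million path entries can exhaust a 1 GB heap.
public class HeapEstimate {
    public static void main(String[] args) {
        long files = 1_500_000L;
        // Assumed cost per buffered entry: a path object plus its URI and
        // backing String can easily reach a few hundred bytes on the heap.
        long bytesPerEntry = 500;
        long totalMb = files * bytesPerEntry / (1024 * 1024);
        // ~715 MB of live objects, before any GC and regex-compilation overhead,
        // leaves little headroom in a 1 GB heap.
        System.out.println(totalMb + " MB");
    }
}
```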
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-1866) distcp requires large heapsize when copying many files
Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Owen O'Malley resolved HADOOP-1866.
-----------------------------------
Resolution: Duplicate
Fix Version/s: 0.15.0
Assignee: Chris Douglas
This was fixed by HADOOP-1569.
[jira] Commented: (HADOOP-1866) distcp requires large heapsize when copying many files
Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12526187 ]
Owen O'Malley commented on HADOOP-1866:
---------------------------------------
Koji, I assume this was on 0.13 or 0.14? The new distcp in 0.15 is almost completely re-written, so the version matters a lot.
[jira] Updated: (HADOOP-1866) distcp requires large heapsize when copying many files
Posted by "Koji Noguchi (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Koji Noguchi updated HADOOP-1866:
---------------------------------
Affects Version/s: 0.13.1
bq. Koji, I assume this was on 0.13 or 0.14? The new distcp in 0.15 is almost completely re-written, so the version matters a lot.
Sorry, this was on 0.13.1. Looks like the 0.15 distcp handles it much better (there is no longer an ArrayList finalPathList holding all the files).
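The difference between the two designs can be sketched in plain Java (hypothetical names, not the actual CopyFiles code): instead of collecting every discovered path into a finalPathList-style ArrayList, each path is handed to a sink as soon as it is produced, so only one path is ever live on the heap regardless of file count.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

public class StreamingListing {
    // Emits synthetic paths one at a time to a sink instead of
    // accumulating them in a list; heap use stays constant.
    static void listStreaming(int fileCount, Consumer<String> sink) {
        for (int i = 0; i < fileCount; i++) {
            sink.accept("/user/data/part-" + i);
        }
    }

    public static void main(String[] args) {
        AtomicLong count = new AtomicLong();
        // In a real tool the sink would append each path to a file on disk
        // (the 0.15 distcp writes its copy list out rather than buffering it).
        listStreaming(1_000_000, p -> count.incrementAndGet());
        System.out.println("paths seen: " + count.get());
    }
}
```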