Posted to common-dev@hadoop.apache.org by "Milind Bhandarkar (JIRA)" <ji...@apache.org> on 2007/10/26 00:23:51 UTC
[jira] Resolved: (HADOOP-1864) Support for big jar file (>2G)
[ https://issues.apache.org/jira/browse/HADOOP-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Milind Bhandarkar resolved HADOOP-1864.
---------------------------------------
Resolution: Won't Fix
> Support for big jar file (>2G)
> ------------------------------
>
> Key: HADOOP-1864
> URL: https://issues.apache.org/jira/browse/HADOOP-1864
> Project: Hadoop
> Issue Type: Bug
> Components: contrib/streaming
> Affects Versions: 0.14.1
> Reporter: Yiping Han
> Priority: Critical
>
> We have a huge binary that needs to be distributed onto the tasktracker nodes in Hadoop streaming mode. We've tried both the -file option and the -cacheArchive option, but it seems the tasktracker node cannot unjar jar files bigger than 2G. We are considering splitting our binary into multiple jars, but with -file it seems we cannot do that. We would also prefer the -cacheArchive option for performance reasons, but it seems -cacheArchive does not allow more than one appearance in the streaming options. Even if -cacheArchive supported multiple jars, we would still need a way to put the jars into a single directory tree, instead of using multiple symbolic links.
> So, in general, we need a feasible and efficient way to distribute large (>2G) binaries for Hadoop streaming. We don't know whether there is an existing solution that we either didn't find or used incorrectly, or whether some extra work would be needed to provide one.
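One workaround in the spirit of the splitting idea above (not part of the issue or of Hadoop itself, and the file names here are purely illustrative): instead of one oversized jar, ship the payload as fixed-size chunks produced by `split`, then have a setup step on the task node reassemble them with `cat` before the streaming job runs. A minimal sketch, using a tiny placeholder payload so the commands are easy to follow:

```shell
set -e

# Placeholder for the real multi-gigabyte binary (hypothetical name).
printf 'payload-bytes' > bigbinary

# Split into fixed-size chunks, each safely under any 2G limit.
# (-b 4 is only for this tiny demo; in practice use e.g. -b 1500m.)
split -b 4 bigbinary chunk_

# On the task node: reassemble the chunks; the glob preserves split's
# lexicographic suffix order, so the bytes come back in sequence.
cat chunk_* > rebuilt

# Verify the reassembled file matches the original byte-for-byte.
cmp bigbinary rebuilt && echo "reassembled OK"
```

Each chunk could then be shipped individually (e.g. via repeated -file options), sidestepping any per-archive size limit, at the cost of an extra reassembly step in the task's launch script.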
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.