Posted to common-dev@hadoop.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2009/05/15 01:04:45 UTC

[jira] Commented: (HADOOP-5175) Option to prohibit jars unpacking

    [ https://issues.apache.org/jira/browse/HADOOP-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12709622#action_12709622 ] 

Todd Lipcon commented on HADOOP-5175:
-------------------------------------

Agreed. We have seen this issue with the same root cause (a large number of libjars makes job cleanup very slow).
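
For context on why cleanup gets so slow: FileUtil.getDU (visible in the stack trace quoted below) stats every file under mapred/local and recurses into every subdirectory, so each pass costs time proportional to the total number of unpacked files. Roughly, as a simplified sketch (not the exact source):

{code}
import java.io.File;

// Simplified sketch of org.apache.hadoop.fs.FileUtil.getDU -- not the exact
// source, but it shows the shape of the problem: one stat per file, recursing
// into every subdirectory, so a pass over ~20,000 unpacked files is expensive.
public class DiskUsageSketch {
  public static long getDU(File dir) {
    if (!dir.isDirectory()) {
      return dir.length();
    }
    long size = dir.length();
    File[] children = dir.listFiles();
    if (children == null) {   // unreadable or concurrently deleted
      return size;
    }
    for (File child : children) {
      size += getDU(child);
    }
    return size;
  }
}
{code}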

It seems to me that it's an oversight that JobClient.java calls DistributedCache.addArchiveToClassPath(...) for the libjars arguments. Instead, it should use DistributedCache.addFileToClassPath for jar files.

Does anyone see any issue with that? In my opinion, libjars are explicitly supposed to stay self-contained - there's no reason to expand them.
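
Concretely, I mean something along these lines at the point in JobClient where the -libjars entries are staged (a sketch only; the surrounding variable and method names are assumed from memory, not copied from trunk):

{code}
// Sketch of the proposed change where -libjars entries are handled in
// JobClient (variable names are illustrative, not copied from trunk).
if (libjarsArr != null) {
  for (String jar : libjarsArr) {
    Path tmp = new Path(jar);
    Path newPath = copyRemoteFiles(fs, libjarsDir, tmp, job, replication);
    // Current behaviour: the jar is registered as an archive, so the
    // TaskTracker unpacks it under mapred/local:
    //   DistributedCache.addArchiveToClassPath(newPath, job);
    // Proposed: register it as a plain file so it ends up on the classpath
    // without ever being unpacked.
    DistributedCache.addFileToClassPath(newPath, job);
  }
}
{code}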

> Option to prohibit jars unpacking
> ---------------------------------
>
>                 Key: HADOOP-5175
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5175
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.19.0
>         Environment: Hadoop cluster of 5 servers, each with:
> HDD: two disks WDC WD1000FYPS-01ZKB0
> OS: Linux 2.6.26-1-686 #1 SMP
> FS: XFS
>            Reporter: Andrew Gudkov
>
> I've noticed that the task tracker unpacks all jars under 
> ${hadoop.tmp.dir}/mapred/local/taskTracker.
> We are using a lot of external libraries that are deployed via the "-libjars" 
> option. The total number of files after unpacking is about 20 thousand.
> After running a number of jobs, tasks start getting killed for timing out 
> ("Task attempt_200901281518_0011_m_000173_2 failed to report status for 601 
> seconds. Killing!"). All killed tasks are in the "initializing" state. I've 
> looked through the tasktracker logs and found messages like this one:
> {quote}
> Thread 20926 (Thread-10368):
>   State: BLOCKED
>   Blocked count: 3611
>   Waited count: 24
>   Blocked on java.lang.ref.Reference$Lock@e48ed6
>   Blocked by 20882 (Thread-10341)
>   Stack:
>     java.lang.StringCoding$StringEncoder.encode(StringCoding.java:232)
>     java.lang.StringCoding.encode(StringCoding.java:272)
>     java.lang.String.getBytes(String.java:947)
>     java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
>     java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
>     java.io.File.isDirectory(File.java:754)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:427)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
> {quote}
> The HADOOP-4780 patch introduces code that keeps a map of directories along 
> with their DU values, thus reducing the number of calls to getDU. However, the delete operation still takes too long: I manually deleted the unpacked archives after 10 jobs had run, and it took over 30 minutes on XFS.
> I suppose that an option to prohibit unpacking of jars would be helpful in my situation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.