Posted to mapreduce-issues@hadoop.apache.org by "Ivan Mitic (JIRA)" <ji...@apache.org> on 2013/06/09 02:50:20 UTC

[jira] [Commented] (MAPREDUCE-5278) Distributed cache is broken when JT staging dir is not on the default FS

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13678901#comment-13678901 ] 

Ivan Mitic commented on MAPREDUCE-5278:
---------------------------------------

Thanks Xi for posting the patch!

+1 on the proposal; I have largely reviewed this already and tested it out end-to-end (E2E).

A couple of additional comments below:
1. You’ll also have to provide a trunk-compatible patch for the new functionality.
2. TestMRWithDistributedCache#DistributedCacheCheckerJTStagingOnNondefaultFS: I would add validation that the localized dist cache entries are properly added to the classpath (the check below).
{code}
      // Check the class loaders
      LOG.info("Java Classpath: " + System.getProperty("java.class.path"));
      ClassLoader cl = Thread.currentThread().getContextClassLoader();
      // Both the file and the archive were added to classpath, so both
      // should be reachable via the class loader.
      TestCase.assertNotNull(cl.getResource("distributed.jar.inside2"));
      TestCase.assertNotNull(cl.getResource("distributed.jar.inside3"));
      TestCase.assertNull(cl.getResource("distributed.jar.inside4"));
{code}
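
For context, the marker resources above only become reachable through the context class loader when the corresponding file/archive has been added to the task classpath during job setup. A minimal sketch of that kind of setup, with an illustrative class name and path (not the actual test code):
{code}
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch only -- not the actual TestMRWithDistributedCache setup.
public class DistCacheClasspathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem localFs = FileSystem.getLocal(conf);

    // Write a tiny jar that contains a single marker entry.
    Path jarPath = new Path("/tmp/distcache-sketch/marker.jar"); // hypothetical path
    localFs.mkdirs(jarPath.getParent());
    JarOutputStream jar = new JarOutputStream(localFs.create(jarPath));
    jar.putNextEntry(new ZipEntry("distributed.jar.inside2"));
    jar.closeEntry();
    jar.close();

    // Adding the jar to the classpath is what makes the marker reachable via the
    // context class loader inside the task. A file registered only with
    // addCacheFile() is localized but never put on the classpath, which is why
    // the assertNull() check above is expected to pass.
    DistributedCache.addFileToClassPath(jarPath, conf);
  }
}
{code}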

It would be really good to get feedback on the approach from some more senior MR folks. 

                
> Distributed cache is broken when JT staging dir is not on the default FS
> ------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5278
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5278
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: distributed-cache
>    Affects Versions: 1-win
>         Environment: Windows
>            Reporter: Xi Fang
>            Assignee: Xi Fang
>             Fix For: 1-win
>
>         Attachments: MAPREDUCE-5278.patch
>
>
> Today, the JobTracker staging dir ("mapreduce.jobtracker.staging.root.dir") is set to point to HDFS, even when another file system (e.g. the Amazon S3 file system or the Windows ASV file system) is the default file system.
> For ASV, this configuration was chosen for a few reasons:
> 1. To prevent leaking the storage account credentials to the user’s storage account;
> 2. It uses HDFS for the transient job files, which is good for two reasons: a) it does not flood the user’s storage account with irrelevant data/files; b) it leverages HDFS locality for small files.
> However, this approach conflicts with how distributed cache caching works, completely negating the feature's functionality.
> When files are added to the distributed cache (through the files/archives/libjars Hadoop generic options), they are copied to the JobTracker staging dir only if they reside on a file system different from the JobTracker’s. Later on, this path is used as a "key" to cache the files locally on the TaskTracker’s machine and to avoid localization (download/unzip) of the distributed cache files if they are already localized.
> In this configuration, caching is completely disabled and we always end up copying dist cache files to the JobTracker’s staging dir first and localizing them on the TaskTracker machine second.
> This is especially problematic for Oozie scenarios, as Oozie uses the dist cache to distribute Hive/Pig jars throughout the cluster.
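
For illustration, here is a rough sketch of the copy/caching decision described in the report above. The class name, the helper name maybeCopyToStaging, and the scheme/authority comparison are my own simplifications for readability, not the actual JobClient code:
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Rough sketch of the decision described above -- not the real JobClient logic.
public class StagingCopySketch {
  static Path maybeCopyToStaging(Configuration conf, Path original, Path stagingDir)
      throws Exception {
    FileSystem srcFs = original.getFileSystem(conf);
    FileSystem jtFs = stagingDir.getFileSystem(conf);

    // Treat the file systems as "the same" when scheme and authority match
    // (a simplified stand-in for the job client's own comparison).
    URI srcUri = srcFs.getUri();
    URI jtUri = jtFs.getUri();
    boolean sameFs = String.valueOf(srcUri.getScheme())
            .equalsIgnoreCase(String.valueOf(jtUri.getScheme()))
        && String.valueOf(srcUri.getAuthority())
            .equalsIgnoreCase(String.valueOf(jtUri.getAuthority()));

    if (sameFs) {
      // Reused in place: the path is stable across jobs, so the TaskTracker's
      // localized copy (keyed by this path) can be reused.
      return original;
    }

    // Copied into the per-job staging dir: the path (i.e. the cache key) is new
    // for every job, so the TaskTracker localizes the file again each time.
    Path copy = new Path(stagingDir, original.getName());
    FileUtil.copy(srcFs, original, jtFs, copy, false, conf);
    return copy;
  }
}
{code}
With the staging dir on HDFS and the default file system on ASV/S3, the copy branch is always taken, which is exactly the cache-defeating behavior the report describes.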
