Posted to dev@pig.apache.org by "Sriranjan Manjunath (JIRA)" <ji...@apache.org> on 2009/11/17 02:21:39 UTC

[jira] Updated: (PIG-872) use distributed cache for the replicated data set in FR join

     [ https://issues.apache.org/jira/browse/PIG-872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriranjan Manjunath updated PIG-872:
------------------------------------

    Attachment: PIG_872.patch

I have verified that the job.xml has mapred.cache.files set to the replicated files.
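
For reference, a minimal frontend sketch of how the replicated file can be registered with the distributed cache so that mapred.cache.files shows up in job.xml. This is not the patch itself; it assumes the Hadoop 0.20-era DistributedCache/JobConf APIs, and the paths and class names are made up for illustration:

import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class FrJoinCacheSetup {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // DFS path of the replicated (small) side of the FR join -- hypothetical path.
        // The "#repl_side" fragment is the symlink name tasks would see in their working dir.
        URI replicated = new URI("hdfs:///user/pig/replicated_input/part-00000#repl_side");
        DistributedCache.addCacheFile(replicated, conf);
        DistributedCache.createSymlink(conf);
        // After this call, job.xml should carry the URI under mapred.cache.files.
        System.out.println(conf.get("mapred.cache.files"));
    }
}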

> use distributed cache for the replicated data set in FR join
> ------------------------------------------------------------
>
>                 Key: PIG-872
>                 URL: https://issues.apache.org/jira/browse/PIG-872
>             Project: Pig
>          Issue Type: Improvement
>            Reporter: Olga Natkovich
>         Attachments: PIG_872.patch
>
>
> Currently, the replicated file is read directly from DFS by all maps. If the number of concurrent maps is huge, we can overwhelm the NameNode with open calls.
> Using the distributed cache will address this issue and might also give a performance boost, since the file will be copied locally once and then reused by all tasks running on the same machine.
> The basic approach would be to use cacheArchive to place the file into the cache on the frontend; on the backend, the tasks would then refer to the data using its path in the cache (see the sketch below).
> Note that cacheArchive does not work in Hadoop local mode. (Not a problem for us right now as we don't use it.)
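
On the backend side, a complementary sketch (not Pig's actual implementation) of how a map task might pick up the task-local copy instead of opening the file on DFS. The mapper class and the loading helper are hypothetical; only the DistributedCache.getLocalCacheFiles lookup is the point here:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class FrJoinMapSide extends MapReduceBase {
    private Path replicatedLocalPath;

    @Override
    public void configure(JobConf job) {
        try {
            // Local, already-copied files that were registered on the frontend.
            Path[] localFiles = DistributedCache.getLocalCacheFiles(job);
            if (localFiles != null && localFiles.length > 0) {
                replicatedLocalPath = localFiles[0];
            }
        } catch (IOException e) {
            throw new RuntimeException("Could not locate replicated input in the cache", e);
        }
    }

    // The join would build its in-memory table from this local file, e.g. line by line.
    private void loadReplicatedSide() throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(replicatedLocalPath.toString()));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                // ... parse the tuple and insert it into the hash table ...
            }
        } finally {
            in.close();
        }
    }
}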

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.