Posted to dev@pig.apache.org by "Alan Gates (JIRA)" <ji...@apache.org> on 2010/07/22 19:50:50 UTC

[jira] Commented: (PIG-1511) Pig removes packages from its own jar when building the JAR to ship to Hadoop

    [ https://issues.apache.org/jira/browse/PIG-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12891261#action_12891261 ] 

Alan Gates commented on PIG-1511:
---------------------------------

We don't want to do this by default.  In a couple of cases keeping the size of this jar down is more important: one, when the number of tasks is very large, since the jar is copied once to each task; and two, when the job itself is quite small and the setup costs become a concern.

> Pig removes packages from its own jar when building the JAR to ship to Hadoop
> -----------------------------------------------------------------------------
>
>                 Key: PIG-1511
>                 URL: https://issues.apache.org/jira/browse/PIG-1511
>             Project: Pig
>          Issue Type: Bug
>    Affects Versions: 0.7.0
>            Reporter: Eric Tschetter
>         Attachments: pig-1511.diff
>
>
> Pig generates a new jar file to ship over to Hadoop.  It whitelists a couple of packages to include from its own jar and throws away everything else.
> I package all of my dependencies into a single jar file, and Pig is included in that jar, because my code needs to run reliably and reproducibly in production.  Pig throws away all of my dependencies when it builds the shipped jar.
> I don't know what the performance gain is of shaving ~5 MB off of a jar that is pushed to the job tracker once and then used to run over hundreds of GB of data.  The overhead is minimal on my cluster.
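For readers unfamiliar with the mechanism under discussion, the sketch below illustrates whitelist-style jar filtering of the kind described in the report: copy only entries whose paths match an allowed prefix into the new jar and drop everything else.  This is a hypothetical example, not Pig's actual implementation; the class name and package prefixes are invented for illustration.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;
    import java.util.jar.JarOutputStream;

    public class JarWhitelistFilter {

        // Only entries whose paths start with one of these prefixes are kept.
        // The prefixes here are made-up examples, not Pig's real whitelist.
        private static final String[] WHITELIST = {
            "org/apache/pig/",
            "org/apache/tools/bzip2r/"
        };

        public static void filter(String inJar, String outJar) throws IOException {
            try (JarFile in = new JarFile(inJar);
                 JarOutputStream out = new JarOutputStream(new FileOutputStream(outJar))) {
                Enumeration<JarEntry> entries = in.entries();
                byte[] buf = new byte[8192];
                while (entries.hasMoreElements()) {
                    JarEntry entry = entries.nextElement();
                    if (!isWhitelisted(entry.getName())) {
                        continue;  // everything outside the whitelist is dropped
                    }
                    // Create a fresh entry with just the name so compression
                    // metadata from the source jar is not carried over.
                    out.putNextEntry(new JarEntry(entry.getName()));
                    try (InputStream src = in.getInputStream(entry)) {
                        int n;
                        while ((n = src.read(buf)) != -1) {
                            out.write(buf, 0, n);
                        }
                    }
                    out.closeEntry();
                }
            }
        }

        private static boolean isWhitelisted(String name) {
            for (String prefix : WHITELIST) {
                if (name.startsWith(prefix)) {
                    return true;
                }
            }
            return false;
        }
    }

A common workaround for shipping extra dependencies is to register them explicitly in the Pig Latin script (REGISTER 'my-deps.jar'; where the jar name is a placeholder), which adds the jar to the job rather than relying on Pig's own jar being passed through intact.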

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.