Posted to dev@pig.apache.org by "Travis Crawford (Updated) (JIRA)" <ji...@apache.org> on 2012/03/08 22:07:58 UTC

[jira] [Updated] (PIG-2573) Automagically setting parallelism based on input file size does not work with HCatalog

     [ https://issues.apache.org/jira/browse/PIG-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Travis Crawford updated PIG-2573:
---------------------------------

    Attachment: PIG-2573_move_getinputbytes_to_loadfunc.diff

The attached patch shows what moving the "get size of input" logic into {{LoadFunc}} looks like.

I have some concerns about how invasive this approach is. Specifically, it requires loaders to implement a new {{getLocation}} method for the optimization to work. While I can update the loaders shipped with Pig, this breaks the optimization for all other loaders out there. Option: fall back to the current behavior if the loader returns null for {{getLocation}} (meaning it's likely using the default implementation).
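To make the fallback option concrete, here's a minimal sketch with illustrative names (this is not Pig's real {{LoadFunc}} API, just the shape of the idea): the new method defaults to null, and a null return tells the caller to keep the current behavior, so loaders written before the method existed are unaffected.

```java
// Sketch only: illustrative stand-ins, not Pig's actual classes.
public class LoaderFallbackSketch {

    /** Stand-in for LoadFunc with the proposed optional method. */
    static class Loader {
        // Default implementation signals "not implemented" with null,
        // so existing third-party loaders need no code changes.
        public String getLocation() { return null; }
    }

    /** A loader updated to report where its input actually lives. */
    static class HCatStyleLoader extends Loader {
        @Override public String getLocation() { return "hdfs://nn/warehouse/mytable"; }
    }

    /** Caller-side logic: prefer the loader's answer, else fall back
     *  to the location string from the script (current behavior). */
    static String effectiveLocation(Loader loader, String scriptLocation) {
        String loc = loader.getLocation();
        return (loc != null) ? loc : scriptLocation;
    }

    public static void main(String[] args) {
        // Old loader: caller sees the raw script location, as today.
        System.out.println(effectiveLocation(new Loader(), "default.mytable"));
        // Updated loader: caller gets a real path it knows how to size.
        System.out.println(effectiveLocation(new HCatStyleLoader(), "default.mytable"));
    }
}
```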

If we like this general approach, I can polish the patch and update all the included loaders. If we don't like this approach - any suggestions?
                
> Automagically setting parallelism based on input file size does not work with HCatalog
> --------------------------------------------------------------------------------------
>
>                 Key: PIG-2573
>                 URL: https://issues.apache.org/jira/browse/PIG-2573
>             Project: Pig
>          Issue Type: Bug
>            Reporter: Travis Crawford
>            Assignee: Travis Crawford
>         Attachments: PIG-2573_move_getinputbytes_to_loadfunc.diff
>
>
> PIG-2334 was helpful in understanding this issue. Short version: the input file size is only computed if the path begins with a whitelisted prefix, currently:
> * /
> * hdfs:
> * file:
> * s3n:
> As HCatalog locations use the form {{dbname.tablename}}, the input file size is not computed, and the size-based parallelism optimization breaks.
> DETAILS:
> I discovered this issue while comparing two runs of the same script: one loading regular HDFS paths, and one loading HCatalog db.table names. I just happened to notice the difference in the "Setting number of reducers" line.
> {code:title=Loading HDFS files reducers is set to 99}
> 2012-03-08 01:33:56,522 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=98406674162
> 2012-03-08 01:33:56,522 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 99
> {code}
> {code:title=Loading with an HCatalog db.table name}
> 2012-03-08 01:06:02,283 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=0
> 2012-03-08 01:06:02,283 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1
> {code}
> Possible fix: Pig should just ask the loader for the size of its inputs rather than special-casing certain location types.
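The whitelist behavior quoted above can be sketched as follows (illustrative names; the real check lives inside Pig's size-estimation code, not in a class with these names): only locations starting with one of the four listed prefixes get sized, so an HCatalog {{dbname.tablename}} location reports 0 bytes and the job falls back to a single reducer, matching the two log excerpts.

```java
// Sketch only: reproduces the prefix whitelist and the reducer estimate
// described above, with made-up class/method names.
public class WhitelistSketch {
    static final String[] PREFIXES = {"/", "hdfs:", "file:", "s3n:"};

    /** Only whitelisted location types are sized; everything else is 0. */
    static boolean isSizeable(String location) {
        for (String p : PREFIXES) {
            if (location.startsWith(p)) return true;
        }
        return false;
    }

    /** Reducer estimate: ceil(size / bytesPerReducer), capped at
     *  maxReducers, with a floor of 1 when the size is unknown (0). */
    static int reducers(long totalInputFileSize, long bytesPerReducer, int maxReducers) {
        int n = (int) Math.ceil((double) totalInputFileSize / bytesPerReducer);
        return Math.max(1, Math.min(n, maxReducers));
    }

    public static void main(String[] args) {
        System.out.println(isSizeable("hdfs://nn/user/travis/data")); // sized
        System.out.println(isSizeable("default.mytable"));            // not sized
        // Matches the first log: 98406674162 bytes / 1e9 -> 99 reducers.
        System.out.println(reducers(98406674162L, 1000000000L, 999));
        // Matches the second log: unsized HCatalog input -> 1 reducer.
        System.out.println(reducers(0L, 1000000000L, 999));
    }
}
```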

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira