Posted to dev@pig.apache.org by "Russell Jurney (JIRA)" <ji...@apache.org> on 2012/11/20 08:14:58 UTC

[jira] [Created] (PIG-3059) Global configurable minimum 'bad record' thresholds

Russell Jurney created PIG-3059:
-----------------------------------

             Summary: Global configurable minimum 'bad record' thresholds
                 Key: PIG-3059
                 URL: https://issues.apache.org/jira/browse/PIG-3059
             Project: Pig
          Issue Type: New Feature
          Components: impl
    Affects Versions: 0.11
            Reporter: Russell Jurney
            Assignee: Russell Jurney
             Fix For: site


See PIG-2614. 

Pig dies when a single record out of a billion in a LOAD fails to parse. This is almost certainly not the desired behavior. elephant-bird and some other storage UDFs apply minimum thresholds, in terms of both a percentage and an absolute count of bad records, that must be exceeded before a job fails outright. A minimal sketch of that style of check follows below.
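
For illustration only, here is a minimal sketch of that style of check (the class and method names are hypothetical, not the elephant-bird or Pig API): the job fails only once both a minimum bad-record count and a bad-record ratio ceiling have been exceeded.

// Sketch only: hypothetical helper illustrating the percent-plus-count check
// described above; names and wiring are assumptions, not an existing API.
public final class BadRecordPolicy {
    private final double maxBadRatio;   // e.g. 0.01 = tolerate up to 1% bad records
    private final long minBadRecords;   // ignore the ratio until this many errors are seen

    public BadRecordPolicy(double maxBadRatio, long minBadRecords) {
        this.maxBadRatio = maxBadRatio;
        this.minBadRecords = minBadRecords;
    }

    /** Returns true when the job should fail outright. */
    public boolean shouldFail(long badRecords, long totalRecords) {
        if (badRecords < minBadRecords || totalRecords == 0) {
            return false;   // too few errors so far to be meaningful
        }
        return ((double) badRecords / totalRecords) > maxBadRatio;
    }
}

A loader would increment a bad-record counter on each parse failure and consult shouldFail() before aborting, rather than dying on the first bad record.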

We need these limits to be configurable globally in Pig. Crashing on bad records is a major problem for new Pig users, and I believe this feature can greatly improve Pig.

An example configuration would look like:

pig.storage.bad.record.threshold=0.01
pig.storage.bad.record.min=100
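
As a sketch of how a storage func might pick these up (assuming the property names above; the defaults and the wiring into Pig are assumptions, not part of this proposal):

// Sketch only: reading the proposed properties from the Hadoop job configuration.
// A default of 0 means "keep today's fail-fast behavior".
import org.apache.hadoop.conf.Configuration;

public class BadRecordConfig {
    public static final String THRESHOLD_KEY = "pig.storage.bad.record.threshold";
    public static final String MIN_KEY = "pig.storage.bad.record.min";

    public static double threshold(Configuration conf) {
        return conf.getFloat(THRESHOLD_KEY, 0.0f);
    }

    public static long minBadRecords(Configuration conf) {
        return conf.getLong(MIN_KEY, 0L);
    }
}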

A thorough discussion of this issue is available here: http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
