Posted to issues@hive.apache.org by "Mithun Radhakrishnan (JIRA)" <ji...@apache.org> on 2015/04/03 22:59:54 UTC

[jira] [Updated] (HIVE-10213) MapReduce jobs using dynamic-partitioning fail on commit.

     [ https://issues.apache.org/jira/browse/HIVE-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mithun Radhakrishnan updated HIVE-10213:
----------------------------------------
    Attachment: HIVE-10213.1.patch

> MapReduce jobs using dynamic-partitioning fail on commit.
> ---------------------------------------------------------
>
>                 Key: HIVE-10213
>                 URL: https://issues.apache.org/jira/browse/HIVE-10213
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>         Attachments: HIVE-10213.1.patch
>
>
> I recently ran into a problem in {{TaskCommitContextRegistry}} when using dynamic partitioning.
> Consider a MapReduce program that reads {{HCatRecord}}s from a table (using {{HCatInputFormat}}) and writes them to another table (with an identical schema) using {{HCatOutputFormat}}. The Map-task fails with the following exception (a sketch of such a job follows the stack trace):
> {code}
> Error: java.io.IOException: No callback registered for TaskAttemptID:attempt_1426589008676_509707_m_000000_0@hdfs://crystalmyth.myth.net:8020/user/mithunr/mythdb/target/_DYN0.6784154320609959/grid=__HIVE_DEFAULT_PARTITION__/dt=__HIVE_DEFAULT_PARTITION__
>         at org.apache.hive.hcatalog.mapreduce.TaskCommitContextRegistry.commitTask(TaskCommitContextRegistry.java:56)
>         at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitTask(FileOutputCommitterContainer.java:139)
>         at org.apache.hadoop.mapred.Task.commit(Task.java:1163)
>         at org.apache.hadoop.mapred.Task.done(Task.java:1025)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}
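> For illustration, the kind of job described above might be wired up roughly as follows. This is only a sketch: the class, database and table names are made up, and the exact HCatalog signatures can vary a little between Hive versions.
> {code}
> import java.io.IOException;
> import java.util.ArrayList;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.io.IntWritable;
> import org.apache.hadoop.io.WritableComparable;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.apache.hadoop.mapreduce.Reducer;
> import org.apache.hive.hcatalog.data.DefaultHCatRecord;
> import org.apache.hive.hcatalog.data.HCatRecord;
> import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;
> import org.apache.hive.hcatalog.mapreduce.HCatOutputFormat;
> import org.apache.hive.hcatalog.mapreduce.OutputJobInfo;
>
> public class CopyTableJob {
>
>   // The mapper only reads and shuffles; it never touches HCatOutputFormat,
>   // so no commit callback is ever registered for the map task-attempt.
>   public static class ReadMapper
>       extends Mapper<WritableComparable, HCatRecord, IntWritable, DefaultHCatRecord> {
>     @Override
>     protected void map(WritableComparable key, HCatRecord value, Context context)
>         throws IOException, InterruptedException {
>       // Arbitrary shuffle key; the record is copied into a concrete Writable type.
>       context.write(new IntWritable(value.size()),
>           new DefaultHCatRecord(new ArrayList<Object>(value.getAll())));
>     }
>   }
>
>   // All writing happens in the reduce phase, so the DynamicPartitionFileRecordWriter
>   // (and its commit callback) is only exercised for reduce task-attempts.
>   public static class WriteReducer
>       extends Reducer<IntWritable, DefaultHCatRecord, WritableComparable, HCatRecord> {
>     @Override
>     protected void reduce(IntWritable key, Iterable<DefaultHCatRecord> values, Context context)
>         throws IOException, InterruptedException {
>       for (DefaultHCatRecord record : values) {
>         context.write(null, record);
>       }
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     Job job = Job.getInstance(new Configuration(), "hcat-copy");
>     job.setJarByClass(CopyTableJob.class);
>
>     // Read HCatRecords from the source table (names are illustrative).
>     HCatInputFormat.setInput(job, "mythdb", "source");
>     job.setInputFormatClass(HCatInputFormat.class);
>
>     // A null partition-value map asks for dynamic partitioning on the target table.
>     HCatOutputFormat.setOutput(job, OutputJobInfo.create("mythdb", "target", null));
>     HCatOutputFormat.setSchema(job, HCatOutputFormat.getTableSchema(job.getConfiguration()));
>     job.setOutputFormatClass(HCatOutputFormat.class);
>
>     job.setMapperClass(ReadMapper.class);
>     job.setReducerClass(WriteReducer.class);
>     job.setMapOutputKeyClass(IntWritable.class);
>     job.setMapOutputValueClass(DefaultHCatRecord.class);
>     job.setOutputKeyClass(WritableComparable.class);
>     job.setOutputValueClass(DefaultHCatRecord.class);
>
>     System.exit(job.waitForCompletion(true) ? 0 : 1);
>   }
> }
> {code}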
> {{TaskCommitContextRegistry::commitTask()}} uses call-backs registered by {{DynamicPartitionFileRecordWriter}}. But when {{HCatInputFormat}} and {{HCatOutputFormat}} are both used in the same job, the {{DynamicPartitionFileRecordWriter}} might only be exercised in the Reducer, so no callback is ever registered for the map task-attempts, and the map-side commit fails as above.
> I'm relaxing the IOException and logging a warning message instead of failing outright.
> (I'll post the fix shortly.)
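> Roughly, the idea is the following. This is just a self-contained sketch with made-up names, not the actual registry code; the attached patch has the real change.
> {code}
> import java.io.IOException;
> import java.util.HashMap;
> import java.util.Map;
>
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
>
> // Illustrative stand-in for the commit-callback lookup; not the actual HCatalog class.
> public class CommitCallbackRegistrySketch {
>
>   private static final Log LOG = LogFactory.getLog(CommitCallbackRegistrySketch.class);
>
>   /** Callback registered by the record-writer for a task-attempt that actually wrote output. */
>   public interface TaskCommitter {
>     void commitTask() throws IOException;
>   }
>
>   private final Map<String, TaskCommitter> taskCommitters = new HashMap<String, TaskCommitter>();
>
>   public synchronized void register(String key, TaskCommitter committer) {
>     taskCommitters.put(key, committer);
>   }
>
>   public synchronized void commitTask(String key) throws IOException {
>     TaskCommitter committer = taskCommitters.get(key);
>     if (committer == null) {
>       // Before: throw new IOException("No callback registered for TaskAttemptID:" + key);
>       // After: a task-attempt that never wrote through the dynamic-partition writer
>       // simply has nothing to commit, so warn and return instead of failing the task.
>       LOG.warn("No callback registered for TaskAttemptID:" + key + "; nothing to commit.");
>       return;
>     }
>     committer.commitTask();
>   }
> }
> {code}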



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)