Posted to dev@hive.apache.org by "qing yan (JIRA)" <ji...@apache.org> on 2009/02/20 04:32:01 UTC
[jira] Created: (HIVE-295) Handles error input
Handles error input
-------------------
Key: HIVE-295
URL: https://issues.apache.org/jira/browse/HIVE-295
Project: Hadoop Hive
Issue Type: New Feature
Reporter: qing yan
It is common for Hive to encounter bad records when processing massive data sets.
Currently the forked Hadoop job throws an exception, which ultimately causes the whole job to fail.
Hive should handle error input systematically instead of treating it as an exception.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HIVE-295) Handles error input
Posted by "Raghotham Murthy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HIVE-295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghotham Murthy resolved HIVE-295.
-----------------------------------
Resolution: Duplicate
Fixing HIVE-293 should also fix this issue: parse exceptions thrown by SerDes should be counted, and the query should fail only when a threshold is exceeded.
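The resolution comment describes threshold-based error handling: count parse failures instead of aborting on the first bad record, and fail the job only once the failure rate crosses a limit. A minimal sketch of that idea is below; the class and method names are illustrative only and are not part of Hive's actual SerDe API.

```java
import java.util.List;

// Hypothetical sketch: tolerate bad records up to a configurable failure
// ratio, then abort, as suggested in the resolution comment above.
public class BadRecordThreshold {
    private final double maxFailureRatio; // allowed fraction of bad records
    private long total;
    private long failed;

    public BadRecordThreshold(double maxFailureRatio) {
        this.maxFailureRatio = maxFailureRatio;
    }

    // Try to parse one record; count failures rather than rethrowing,
    // and abort only when the running failure ratio exceeds the limit.
    public void process(String record) {
        total++;
        try {
            // Stand-in for a SerDe deserialization step: require an int field.
            Integer.parseInt(record.trim());
        } catch (NumberFormatException e) {
            failed++;
            if ((double) failed / total > maxFailureRatio) {
                throw new IllegalStateException(
                    "Too many bad records: " + failed + "/" + total);
            }
        }
    }

    public long failedCount() { return failed; }

    public static void main(String[] args) {
        BadRecordThreshold t = new BadRecordThreshold(0.5);
        for (String r : List.of("1", "2", "oops", "4")) {
            t.process(r); // one bad record out of four stays under the limit
        }
        System.out.println("bad records: " + t.failedCount()); // prints "bad records: 1"
    }
}
```

With a threshold of 0.0 the sketch reproduces the old behavior (fail on the first bad record); higher thresholds give the tolerant behavior the issue asks for.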