Posted to commits@beam.apache.org by "Reuven Lax (JIRA)" <ji...@apache.org> on 2017/04/17 23:28:41 UTC

[jira] [Updated] (BEAM-190) Dead-letter drop for bad BigQuery records

     [ https://issues.apache.org/jira/browse/BEAM-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reuven Lax updated BEAM-190:
----------------------------

There might be a pending PR, but I don't know if it will be merged
immediately given the focus of Beam committers on getting the stable
release out.

On Mon, Apr 17, 2017 at 3:34 PM, Josh Forman-Gornall (JIRA) <jira@apache.org> wrote:



> Dead-letter drop for bad BigQuery records
> -----------------------------------------
>
>                 Key: BEAM-190
>                 URL: https://issues.apache.org/jira/browse/BEAM-190
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-core
>            Reporter: Mark Shields
>            Assignee: Reuven Lax
>
> If a BigQuery insert fails for data-specific rather than structural reasons (e.g. a date that cannot be parsed), the bundle will be retried indefinitely, first by BigQueryTableInserter.insertAll and then by the overall production retry logic of the underlying runner.
> Better would be to allow customers to specify a dead-letter store for such records, so that overall processing can continue while the bad records are quarantined.
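
The sketch below illustrates the dead-letter pattern the ticket asks for, using a multi-output ParDo from the Beam Java SDK: records that fail data-specific parsing are routed to a side output instead of failing the bundle and triggering indefinite retries. The class name, tag names, sample data, and the toy date-parsing step are illustrative assumptions for this sketch only; it shows the general quarantine technique, not the change tracked by BEAM-190.

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

public class DeadLetterSketch {
  // Two outputs: rows that parsed cleanly, and raw records quarantined to a
  // dead-letter destination. Tag names are illustrative only.
  static final TupleTag<TableRow> VALID = new TupleTag<TableRow>() {};
  static final TupleTag<String> DEAD_LETTER = new TupleTag<String>() {};

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Hypothetical input: CSV-ish lines whose first field must be a date.
    PCollection<String> input =
        p.apply(Create.of("2017-04-17,ok", "not-a-date,bad"));

    PCollectionTuple results = input.apply("ValidateRows",
        ParDo.of(new DoFn<String, TableRow>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            try {
              // Data-specific parsing that can fail (e.g. an unparseable date).
              String[] parts = c.element().split(",");
              TableRow row = new TableRow()
                  .set("ts", java.time.LocalDate.parse(parts[0]).toString())
                  .set("payload", parts[1]);
              c.output(row);
            } catch (Exception e) {
              // Quarantine the bad record instead of failing the whole bundle.
              c.output(DEAD_LETTER, c.element());
            }
          }
        }).withOutputTags(VALID, TupleTagList.of(DEAD_LETTER)));

    PCollection<TableRow> goodRows =
        results.get(VALID).setCoder(TableRowJsonCoder.of());
    PCollection<String> badRows = results.get(DEAD_LETTER);
    // goodRows would feed the BigQuery write; badRows would go to a
    // dead-letter sink such as a text file or a separate "errors" table.

    p.run();
  }
}

Because the bad records leave the main path before the BigQuery insert, the insert never sees them, the bundle can commit, and the quarantined records remain available for inspection and replay, which is the behavior the ticket describes.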



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)