Posted to issues@beam.apache.org by "Egbert (JIRA)" <ji...@apache.org> on 2019/04/18 08:02:00 UTC

[jira] [Commented] (BEAM-6886) Change batch handling in ElasticsearchIO to avoid necessity for GroupIntoBatches

    [ https://issues.apache.org/jira/browse/BEAM-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16820820#comment-16820820 ] 

Egbert commented on BEAM-6886:
------------------------------

[~echauchot] I'm not sure I can find the time to contribute a patch, but I'll keep it in mind.

For BigQueryIO I actually meant that there is a GroupByKey operation inside the write. I think it's intended for deduplication of records, but as a side effect it seems to batch together data from multiple bundles. In any case it's not something you have to do explicitly. The same holds for PubsubIO#writeMessages - it also collects messages over a short time period and publishes them in one batch. I didn't dive into the code to see how it's done there, though.

> Change batch handling in ElasticsearchIO to avoid necessity for GroupIntoBatches
> --------------------------------------------------------------------------------
>
>                 Key: BEAM-6886
>                 URL: https://issues.apache.org/jira/browse/BEAM-6886
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-elasticsearch
>    Affects Versions: 2.11.0
>            Reporter: Egbert
>            Priority: Major
>
> I have a streaming job inserting records into an Elasticsearch cluster. I set the batch size appropriately large, but it turned out to have no effect at all: all elements are inserted in batches of 1 or 2 elements.
> The reason seems to be that this is a streaming pipeline, which may produce tiny bundles. Since ElasticsearchIO uses `@FinishBundle` to flush a batch, the batches end up equally small.
> This results in a huge number of bulk requests with just one element, grinding the Elasticsearch cluster to a halt.
> I have now been able to work around this by using a `GroupIntoBatches` operation before the insert, but this takes 3 steps (mapping to a key, applying GroupIntoBatches, stripping the key and outputting all collected elements), making the process quite awkward.
> A much better approach would be to internalize this into the ElasticsearchIO write transform: use a timer that flushes the batch when it reaches the batch size or when the window ends, not at the end of a bundle.
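
For reference, here is a minimal sketch of the three-step workaround described in the issue, assuming the documents arrive as a windowed PCollection<String> of JSON. The names docs and conn, the key fan-out of 10 and the batch size of 1000 are illustrative assumptions, not taken from the issue:

    import org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO;
    import org.apache.beam.sdk.transforms.FlatMapElements;
    import org.apache.beam.sdk.transforms.GroupIntoBatches;
    import org.apache.beam.sdk.transforms.WithKeys;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.TypeDescriptors;

    // docs is a windowed PCollection<String> of JSON documents (assumed).
    PCollection<String> rebatched = docs
        // Step 1: assign an arbitrary key so GroupIntoBatches has
        // something to group on.
        .apply("AssignKey",
            WithKeys.of((String doc) -> Math.floorMod(doc.hashCode(), 10))
                .withKeyType(TypeDescriptors.integers()))
        // Step 2: buffer per key until the batch size is reached or
        // the window expires.
        .apply("GroupIntoBatches",
            GroupIntoBatches.<Integer, String>ofSize(1000))
        // Step 3: strip the key and re-emit the buffered documents; they
        // now reach the sink inside one bundle, so its @FinishBundle
        // flush sees a full batch instead of 1-2 documents.
        .apply("StripKey",
            FlatMapElements.into(TypeDescriptors.strings())
                .via((KV<Integer, Iterable<String>> kv) -> kv.getValue()));

    // conn is an assumed ElasticsearchIO.ConnectionConfiguration.
    rebatched.apply(ElasticsearchIO.write().withConnectionConfiguration(conn));

The fan-out in step 1 is a trade-off: fewer keys yield fuller batches, more keys yield more parallelism.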

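To illustrate what internalizing this could look like, here is a rough, hypothetical sketch of the timer-based approach: a stateful DoFn that buffers documents per key and flushes either at the batch size or when the window expires. flushBulkRequest is a made-up placeholder for issuing a single Elasticsearch _bulk request, not an existing API:

    import org.apache.beam.sdk.state.BagState;
    import org.apache.beam.sdk.state.StateSpec;
    import org.apache.beam.sdk.state.StateSpecs;
    import org.apache.beam.sdk.state.TimeDomain;
    import org.apache.beam.sdk.state.Timer;
    import org.apache.beam.sdk.state.TimerSpec;
    import org.apache.beam.sdk.state.TimerSpecs;
    import org.apache.beam.sdk.state.ValueState;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
    import org.apache.beam.sdk.values.KV;

    // Hypothetical: buffers documents per key in state and flushes a bulk
    // request when maxBatchSize is reached or when the window expires.
    class BufferingBulkWriteFn extends DoFn<KV<Integer, String>, Void> {

      private final int maxBatchSize;

      BufferingBulkWriteFn(int maxBatchSize) {
        this.maxBatchSize = maxBatchSize;
      }

      @StateId("buffer")
      private final StateSpec<BagState<String>> bufferSpec = StateSpecs.bag();

      @StateId("count")
      private final StateSpec<ValueState<Integer>> countSpec = StateSpecs.value();

      @TimerId("windowEnd")
      private final TimerSpec windowEndSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

      @ProcessElement
      public void process(
          ProcessContext ctx,
          BoundedWindow window,
          @StateId("buffer") BagState<String> buffer,
          @StateId("count") ValueState<Integer> count,
          @TimerId("windowEnd") Timer windowEnd) {
        // Arm the timer so a partial batch is flushed when the window closes.
        windowEnd.set(window.maxTimestamp());
        buffer.add(ctx.element().getValue());
        Integer c = count.read();
        int n = (c == null ? 0 : c) + 1;
        if (n >= maxBatchSize) {
          flushBulkRequest(buffer.read());
          buffer.clear();
          count.write(0);
        } else {
          count.write(n);
        }
      }

      @OnTimer("windowEnd")
      public void onWindowEnd(
          @StateId("buffer") BagState<String> buffer,
          @StateId("count") ValueState<Integer> count) {
        Integer c = count.read();
        if (c == null || c == 0) {
          return; // nothing buffered since the last flush
        }
        flushBulkRequest(buffer.read());
        buffer.clear();
        count.write(0);
      }

      private void flushBulkRequest(Iterable<String> docs) {
        // Placeholder: send all buffered documents in a single
        // Elasticsearch _bulk request; connection handling elided.
      }
    }

This is essentially what GroupIntoBatches does internally; folding it into the write transform would just spare users the keying and key-stripping boilerplate.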

