Posted to issues@beam.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2021/04/05 15:09:00 UTC

[jira] [Work logged] (BEAM-12093) Overhaul ElasticsearchIO#Write

     [ https://issues.apache.org/jira/browse/BEAM-12093?focusedWorklogId=576900&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576900 ]

ASF GitHub Bot logged work on BEAM-12093:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Apr/21 15:08
            Start Date: 05/Apr/21 15:08
    Worklog Time Spent: 10m 
      Work Description: egalpin commented on pull request #14347:
URL: https://github.com/apache/beam/pull/14347#issuecomment-813443632


   @echauchot I've made a Jira ticket and linked it. I'm working on getting the build to pass but struggling a bit to determine the cause of the new errors in the Java PreCommit build. All of the warnings relate to a single Kotlin example, which seems far removed from the changes here. I'll keep poking away at it though.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 576900)
    Remaining Estimate: 0h
            Time Spent: 10m

> Overhaul ElasticsearchIO#Write
> ------------------------------
>
>                 Key: BEAM-12093
>                 URL: https://issues.apache.org/jira/browse/BEAM-12093
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-elasticsearch
>            Reporter: Evan Galpin
>            Priority: P2
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current ElasticsearchIO#Write is great, but there are two related areas that could be improved:
>  # Separation of concerns
>  # Bulk API batch size optimization
>  
> Presently, the Write transform has two responsibilities which are coupled and which users cannot separate:
>  # Convert input documents into Bulk API entities, serializing based on user settings (partial update, delete, upsert, etc.)
>  # Batch the converted Bulk API entities together and interface with the target ES cluster
>  
> Because these two roles are tightly coupled, testing requires an available Elasticsearch cluster, which makes unit testing almost impossible. Exposing the serialized documents would make unit testing much easier for pipeline developers, among numerous other benefits of separating serialization from IO; a rough sketch of the serialization step follows below.
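> As a rough illustration only (not the proposed ElasticsearchIO API; the class and method names below are hypothetical), the serialization half of the work amounts to building Bulk API NDJSON entities, which is pure string/JSON manipulation and can be unit tested with no cluster at all:
> {code:java}
> /**
>  * Illustrative sketch only (not ElasticsearchIO code): the document-to-Bulk-entity
>  * serialization step, kept free of any IO so it can be covered by plain unit tests.
>  */
> public class BulkEntitySketch {
>
>   /** Wraps a JSON document in a Bulk API "index" action (an NDJSON pair of lines). */
>   static String toIndexEntity(String indexName, String docId, String docJson) {
>     String action =
>         String.format("{\"index\":{\"_index\":\"%s\",\"_id\":\"%s\"}}", indexName, docId);
>     return action + "\n" + docJson + "\n";
>   }
>
>   public static void main(String[] args) {
>     // A Bulk request body is just the concatenation of such entities.
>     StringBuilder bulkBody = new StringBuilder();
>     bulkBody.append(toIndexEntity("my-index", "1", "{\"user\":\"alice\"}"));
>     bulkBody.append(toIndexEntity("my-index", "2", "{\"user\":\"bob\"}"));
>     System.out.print(bulkBody);
>     // The output can be asserted on directly, with no Elasticsearch cluster involved.
>   }
> }
> {code}
> If serialization were exposed as its own transform, pipeline authors could write exactly this kind of test against its output before anything reaches the IO step.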
> Relatedly, the batching of entities when creating Bulk API payloads is currently limited by the lesser of Beam runner bundling semantics and the `ElasticsearchIO#Write#maxBatchSize` setting. This is understandable for portability between runners, but it also means most Bulk payloads contain only a few (1-5) entities. By using Stateful Processing to better adhere to the `ElasticsearchIO#Write#maxBatchSize` setting (sketched below), we have been able to reduce the number of indexing requests hitting an Elasticsearch cluster by 50-100x. Separating the roles of document serialization and IO also makes it possible to support multiple IO techniques with minimal, understandable code.
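> A minimal sketch of the Stateful Processing idea is below (assumed names and simplifications, not the actual implementation; a production version would also need timers to flush incomplete batches, similar to Beam's GroupIntoBatches):
> {code:java}
> import org.apache.beam.sdk.state.BagState;
> import org.apache.beam.sdk.state.StateSpec;
> import org.apache.beam.sdk.state.StateSpecs;
> import org.apache.beam.sdk.state.ValueState;
> import org.apache.beam.sdk.transforms.DoFn;
> import org.apache.beam.sdk.values.KV;
>
> /**
>  * Sketch only: buffer serialized Bulk entities in per-key state and emit them as one
>  * batch once maxBatchSize is reached, rather than relying on runner bundle sizes.
>  */
> class StatefulBatchingFn extends DoFn<KV<Integer, String>, Iterable<String>> {
>
>   private final int maxBatchSize;
>
>   StatefulBatchingFn(int maxBatchSize) {
>     this.maxBatchSize = maxBatchSize;
>   }
>
>   @StateId("buffer")
>   private final StateSpec<BagState<String>> bufferSpec = StateSpecs.bag();
>
>   @StateId("count")
>   private final StateSpec<ValueState<Integer>> countSpec = StateSpecs.value();
>
>   @ProcessElement
>   public void process(
>       @Element KV<Integer, String> element,
>       @StateId("buffer") BagState<String> buffer,
>       @StateId("count") ValueState<Integer> count,
>       OutputReceiver<Iterable<String>> out) {
>     buffer.add(element.getValue());
>     int newCount = (count.read() == null ? 0 : count.read()) + 1;
>     if (newCount >= maxBatchSize) {
>       // One output element == one Bulk API request downstream, so batch sizes approach
>       // maxBatchSize instead of whatever small bundles the runner happened to produce.
>       out.output(buffer.read());
>       buffer.clear();
>       count.clear();
>     } else {
>       count.write(newCount);
>     }
>   }
> }
> {code}
> Built this way, a batch of a few hundred entities replaces what would otherwise be hundreds of tiny Bulk requests, which is consistent with the 50-100x reduction described above.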



--
This message was sent by Atlassian Jira
(v8.3.4#803005)