Posted to commits@pinot.apache.org by "davecromberge (via GitHub)" <gi...@apache.org> on 2024/02/20 14:11:02 UTC

[I] Request: Flink connector enhancements [pinot]

davecromberge opened a new issue, #12448:
URL: https://github.com/apache/pinot/issues/12448

   **What needs to be done?**
   
   - Upgrade Flink version
   The version of Flink in the Pinot project POM should be updated from 1.14.6 to the latest release, 1.18.1.
   
   - Authentication
   The `org.apache.pinot.controller.helix.ControllerRequestClient` does not currently accept an `org.apache.pinot.spi.auth.AuthProvider`.
   Similarly, the `org.apache.pinot.connector.flink.sink.PinotSinkFunction<T>` does not accept an `AuthProvider` either (a minimal auth-header sketch appears after this list).
   
   - Error handling
   Verify that the underlying `SegmentWriter` handles errors while appending records to a segment and that those errors propagate correctly to the Flink runtime. Check that the `SegmentUploader` handles transmission errors and propagates them correctly (a fail-fast sketch appears after this list).
   
   - Schema and Data Type Mapping
   Correctly convert Flink timestamps with time zones to the Pinot equivalent.  See [this comment](https://docs.google.com/document/d/1GVoFHOHSDPs1MEDKEmKguKwWMqM1lwQKj2e64RAKDf8/edit?disco=AAAAH-h_wJc).
   Double / decimal conversions - see [this comment](https://docs.google.com/document/d/1GVoFHOHSDPs1MEDKEmKguKwWMqM1lwQKj2e64RAKDf8/edit?disco=AAAAH-h_wJU). (A conversion sketch appears after this list.)
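
   For the authentication item, the sketch below shows the `Authorization` header a Basic-Auth-secured controller expects, which is what threading an `AuthProvider` through the client and sink would ultimately produce. It deliberately uses only JDK classes rather than the connector's actual client; the endpoint, schema name and credentials are placeholders.

   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   import java.nio.charset.StandardCharsets;
   import java.util.Base64;

   // Sketch only: the Authorization header a Basic-Auth-secured controller
   // expects. Endpoint, schema name and credentials are placeholders.
   public final class ControllerBasicAuthSketch {
     public static void main(String[] args) throws Exception {
       String token = "Basic " + Base64.getEncoder()
           .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

       HttpRequest request = HttpRequest.newBuilder()
           .uri(URI.create("http://localhost:9000/schemas/myTable"))
           .header("Authorization", token)
           .GET()
           .build();

       HttpResponse<String> response = HttpClient.newHttpClient()
           .send(request, HttpResponse.BodyHandlers.ofString());
       System.out.println(response.statusCode());
     }
   }
   ```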
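
   For the error-handling item, the behaviour to verify looks roughly like the following: exceptions raised while appending must escape `invoke()` so that Flink fails the task and applies its restart strategy, instead of being logged and swallowed. `RecordAppender` is a hypothetical stand-in for the connector's `SegmentWriter`, not a real Pinot interface.

   ```java
   import java.io.Serializable;
   import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

   // Sketch of the propagation behaviour to verify: append failures must
   // reach the Flink runtime. RecordAppender is a hypothetical stand-in
   // for the connector's SegmentWriter.
   public class FailFastPinotSink<T> extends RichSinkFunction<T> {

     public interface RecordAppender extends Serializable {
       void append(Object record) throws Exception;
     }

     private final RecordAppender appender;

     public FailFastPinotSink(RecordAppender appender) {
       this.appender = appender;
     }

     @Override
     public void invoke(T value, Context context) {
       try {
         appender.append(value);
       } catch (Exception e) {
         // Rethrow rather than log-and-continue: a swallowed exception here
         // would silently drop records instead of failing the job.
         throw new RuntimeException("Failed to append record to Pinot segment", e);
       }
     }
   }
   ```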
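
   For the type-mapping item, a self-contained sketch of the two conversions in question, assuming Pinot's TIMESTAMP type stores epoch milliseconds in UTC (the values are illustrative):

   ```java
   import java.math.BigDecimal;
   import java.time.Instant;
   import java.time.OffsetDateTime;
   import java.time.ZoneOffset;

   // Illustrative values only; assumes Pinot TIMESTAMP = epoch millis, UTC.
   public final class TypeMappingSketch {
     public static void main(String[] args) {
       // Timestamp with time zone -> epoch millis: normalise to UTC so the
       // offset is applied rather than silently dropped.
       OffsetDateTime zoned =
           OffsetDateTime.of(2024, 2, 20, 14, 11, 0, 0, ZoneOffset.ofHours(2));
       long epochMillis = zoned.toInstant().toEpochMilli();
       System.out.println(epochMillis + " -> " + Instant.ofEpochMilli(epochMillis));

       // Decimal -> double loses precision beyond ~15-17 significant digits;
       // keeping BigDecimal end to end avoids that at some storage cost.
       BigDecimal exact = new BigDecimal("12345678901234567.89");
       System.out.println(exact + " vs " + exact.doubleValue());
     }
   }
   ```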
   
   Nice to haves:
   
   - Segment size parameter
   Currently `segmentFlushMaxNumRecords` controls when a segment is flushed, based on the number of ingested rows.  A dual to this could be `desiredSegmentSize`, used to flush a segment when the number of buffered bytes approaches or exceeds a size threshold (see the flush-policy sketch after this list).
   
   - Checkpoint support
   Understand the limitations behind supporting only batch-mode execution in Flink.  Can the current segment writer be serialized, and is there support for resuming from the serialized state? (See the flush-on-checkpoint sketch after this list.)
   
   - Connector assembly
   Package the connector as a single assembly with shaded dependencies so that it can be used within the Flink SQL environment.  This is done in other connectors such as Delta Lake and Google BigQuery.
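
   A minimal sketch of how a byte-based threshold could sit alongside the existing record-count trigger. `desiredSegmentSize` and the per-record byte estimate are assumptions, not existing connector parameters; producing a good estimate is the hard part and would have to come from the writer.

   ```java
   // Hypothetical flush policy combining the existing record-count trigger
   // (segmentFlushMaxNumRecords) with a proposed byte-size threshold
   // (desiredSegmentSize). The per-record byte estimate is assumed.
   public final class SegmentFlushPolicy {
     private final long maxNumRecords;        // existing segmentFlushMaxNumRecords
     private final long desiredSegmentBytes;  // proposed desiredSegmentSize

     private long bufferedRecords;
     private long bufferedBytes;

     public SegmentFlushPolicy(long maxNumRecords, long desiredSegmentBytes) {
       this.maxNumRecords = maxNumRecords;
       this.desiredSegmentBytes = desiredSegmentBytes;
     }

     /** Records one appended row; returns true when the segment should flush. */
     public boolean onRecord(long estimatedRecordBytes) {
       bufferedRecords++;
       bufferedBytes += estimatedRecordBytes;
       return bufferedRecords >= maxNumRecords
           || bufferedBytes >= desiredSegmentBytes;
     }

     /** Resets the counters after a flush. */
     public void reset() {
       bufferedRecords = 0;
       bufferedBytes = 0;
     }
   }
   ```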
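
   On the checkpointing question, one pattern worth discussing is flushing the in-progress segment at each checkpoint instead of trying to serialize the writer itself, so that restored state is empty by construction. Below is a hedged sketch against Flink's `CheckpointedFunction` interface; `SegmentFlusher` is a hypothetical stand-in for the connector's writer/uploader pair, not an existing Pinot class.

   ```java
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.runtime.state.FunctionInitializationContext;
   import org.apache.flink.runtime.state.FunctionSnapshotContext;
   import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
   import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

   // Sketch: flush-on-checkpoint avoids serializing the segment writer at all.
   // SegmentFlusher is hypothetical, standing in for the writer/uploader pair.
   public abstract class CheckpointAwarePinotSink<T> extends RichSinkFunction<T>
       implements CheckpointedFunction {

     public interface SegmentFlusher {
       void append(Object record) throws Exception;
       void flushAndUpload() throws Exception; // seal and upload current segment
     }

     private transient SegmentFlusher flusher;

     /** Subclasses supply the actual writer/uploader. */
     protected abstract SegmentFlusher createFlusher();

     @Override
     public void open(Configuration parameters) {
       flusher = createFlusher();
     }

     @Override
     public void invoke(T value, Context context) throws Exception {
       flusher.append(value);
     }

     @Override
     public void snapshotState(FunctionSnapshotContext context) throws Exception {
       // Every record up to this checkpoint lands in a durable segment, so a
       // restore never has to reconstruct writer state.
       flusher.flushAndUpload();
     }

     @Override
     public void initializeState(FunctionInitializationContext context) {
       // Nothing to restore: state is empty by construction after each checkpoint.
     }
   }
   ```

   The trade-off is that segment size becomes coupled to the checkpoint interval, which is worth weighing against the `desiredSegmentSize` idea above.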
   
   Other questions:
   - Is it possible to upload directly to a deep store and push metadata to the controller, or do we need the controller to implement the two-phase commit protocol?
   
   **Why the feature is needed**
   
   Our use case involves pre-aggregating data with Apache DataSketches before ingestion into Pinot.  The sketches are serialized as binary values that can be on the order of megabytes, and are appended to a Delta Lake table.  The idea is to stream records continuously from the Delta Lake using the [Flink Delta Connector](https://github.com/delta-io/delta/tree/master/connectors/flink) and have fine-grained control over Pinot segment generation.  These segments are to be uploaded directly to Pinot.  Our Pinot controllers are secured using Basic Authentication.
   
   It is possible to clone the existing connector and make these modifications privately, but some of the enhancements might benefit other users, so discussing them here seems better.
   
   **Initial idea/proposal**
   Discuss the points above and collaborate on implementation.




Re: [I] Request: Flink connector enhancements [pinot]

Posted by "davecromberge (via GitHub)" <gi...@apache.org>.
davecromberge commented on issue #12448:
URL: https://github.com/apache/pinot/issues/12448#issuecomment-1961400180

   @snleee we do not plan to support CDC directly through this interface.  This issue is more about using Flink to build segments and upload them directly to the controller, with the intention of giving the user more control over the ingestion process.
   Whilst a Delta Lake table might support the operations you mention, we currently only support INSERT/append through our connector.  In some sense we are tackling this problem in increments: eventually we will have to consider the Delta Lake semantics and synchronise state between Pinot and the Delta Lake by one of the two methods you describe.  However, that is out of scope for this issue, which is narrowed to Flink and Pinot.  It is entirely possible that this requirement could fall away if there were enough segment / input-file lineage to make the sync a reality - that could even be done via a Spark job which optimises the Delta Lake and applies mutations.  For our use case, having the building blocks in place allows us to replay the INSERT operations from the Delta Lake from a given checkpoint / version into Pinot, should we need to rebuild a table from scratch.




Re: [I] Request: Flink connector enhancements [pinot]

Posted by "snleee (via GitHub)" <gi...@apache.org>.
snleee commented on issue #12448:
URL: https://github.com/apache/pinot/issues/12448#issuecomment-1960673472

   @davecromberge Does the `FlinkDeltaConnector` emit a CDC (change data capture) stream? How is it going to handle record updates or deletes? In other words, what would be the strategy to sync data between the Delta Lake and Pinot using the Flink Delta Connector?




Re: [I] Request: Flink connector enhancements [pinot]

Posted by "sullis (via GitHub)" <gi...@apache.org>.
sullis commented on issue #12448:
URL: https://github.com/apache/pinot/issues/12448#issuecomment-2018687020

   This PR bumps the Flink version to 1.19.0:
   
   https://github.com/apache/pinot/pull/12659
   

