Posted to commits@inlong.apache.org by GitBox <gi...@apache.org> on 2022/10/26 09:22:05 UTC

[GitHub] [inlong] yunqingmoswu opened a new pull request, #6298: [INLONG-6296][Sort] Split one record to multiple records when the physical data has more records for KafkaLoadNode

yunqingmoswu opened a new pull request, #6298:
URL: https://github.com/apache/inlong/pull/6298

   ### Prepare a Pull Request
   *(Change the title to match the following example)*
   
   Title: [INLONG-6296][Sort] Split one record to multiple records when the physical data has more records for KafkaLoadNode
   
   *(The following *XYZ* should be replaced by the actual [GitHub Issue](https://github.com/apache/inlong/issues) number)*
   
   Fixes #6296
   
   ### Motivation
   
   Split one record into multiple records when the physical data contains multiple records, for KafkaLoadNode.
   It is only used in the multiple-sink scenario. For example, the raw data is:
   ```
   {
       "data":[
           {
               "id":352,
               "price":1.14,
               "currency":"US",
               "order_time":"2022-08-16 18:59:09"
           },
           {
               "id":353,
               "price":1.22,
               "currency":"US",
               "order_time":"2022-08-16 18:59:09"
           }
       ],
       "type":"INSERT",
       "pkNames":[
           "id"
       ],
       "database":"inlong",
       "ts":1666772931248,
       "table":"orders"
   }
   ```
   It will then be split into two records when written, as follows:
   ```
   {
       "data":[
           {
               "id":352,
               "price":1.14,
               "currency":"US",
               "order_time":"2022-08-16 18:59:09"
           }
       ],
       "type":"INSERT",
       "pkNames":[
           "id"
       ],
       "database":"inlong",
       "ts":1666772931248,
       "table":"orders"
   }
   {
       "data":[
           {
               "id":353,
               "price":1.22,
               "currency":"US",
               "order_time":"2022-08-16 18:59:09"
           }
       ],
       "type":"INSERT",
       "pkNames":[
           "id"
       ],
       "database":"inlong",
       "ts":1666772931248,
       "table":"orders"
   }
   ```
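
    A rough, self-contained sketch of this splitting idea, using plain Jackson. The class and method names here are hypothetical, and only the "data" array is handled; the PR itself also inspects the update-before/update-after nodes, as the diff further below shows:

    ```
    import java.util.ArrayList;
    import java.util.List;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.node.ArrayNode;
    import com.fasterxml.jackson.databind.node.ObjectNode;

    public class RecordSplitter {

        private static final ObjectMapper MAPPER = new ObjectMapper();

        /**
         * Splits a canal-JSON style record whose "data" array holds several
         * physical rows into one serialized record per row; the envelope
         * fields ("type", "pkNames", "database", "ts", "table") are copied
         * unchanged into every split record.
         */
        public static List<byte[]> split(byte[] rawRecord) throws Exception {
            JsonNode root = MAPPER.readTree(rawRecord);
            JsonNode data = root.get("data");
            List<byte[]> records = new ArrayList<>();
            if (data == null || !data.isArray() || data.size() <= 1) {
                // Nothing to split: pass the record through unchanged.
                records.add(rawRecord);
                return records;
            }
            for (JsonNode row : data) {
                // Deep-copy the envelope, then replace "data" with a
                // single-element array holding only the current row.
                ObjectNode copy = root.deepCopy();
                ArrayNode singleRow = MAPPER.createArrayNode();
                singleRow.add(row);
                copy.set("data", singleRow);
                records.add(MAPPER.writeValueAsBytes(copy));
            }
            return records;
        }
    }
    ```
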
   ### Modifications
   
   1. Add split and serializeForList handling to DynamicKafkaSerializationSchema
   2. Add serializeForList to FlinkKafkaProducer (see the sketch after this list)
   3. Update the visibility of the ObjectMapper in JsonDynamicSchemaFormat
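
   The sketch below shows one way the producer side could consume a list-returning serialize method. The wrapper class, the ListSerializationSchema interface, and the Row type parameter are hypothetical stand-ins for DynamicKafkaSerializationSchema and Flink's RowData; only the serializeForList name is taken from this PR, and the actual FlinkKafkaProducer change may differ, for example in error handling and transaction wiring:

   ```
   import java.util.List;

   import org.apache.kafka.clients.producer.Callback;
   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerRecord;

   public class MultipleSinkWriter<Row> {

       /** Hypothetical stand-in for a schema that can emit several records per row. */
       public interface ListSerializationSchema<T> {
           List<ProducerRecord<byte[], byte[]>> serializeForList(T value, Long timestamp);
       }

       private final ListSerializationSchema<Row> schema;
       private final KafkaProducer<byte[], byte[]> producer;
       private final Callback callback;

       public MultipleSinkWriter(ListSerializationSchema<Row> schema,
               KafkaProducer<byte[], byte[]> producer, Callback callback) {
           this.schema = schema;
           this.producer = producer;
           this.callback = callback;
       }

       /** Fan out every split record returned by serializeForList(). */
       public void write(Row value, Long timestamp) {
           for (ProducerRecord<byte[], byte[]> record : schema.serializeForList(value, timestamp)) {
               // Each split record is sent individually; the callback
               // collects asynchronous send failures.
               producer.send(record, callback);
           }
       }
   }
   ```
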
   
   ### Verifying this change
   
   *(Please pick either of the following options)*
   
   - [ ] This change is a trivial rework/code cleanup without any test coverage.
   
   - [x] This change is already covered by existing tests, such as:
     *(please describe tests)*
   
   - [ ] This change added tests and can be verified as follows:
   
     *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads (10MB)*
     - *Extended integration test for recovery after broker failure*
   
   ### Documentation
   
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
     - If a feature is not applicable for documentation, explain why?
     - If a feature is not documented yet in this PR, please create a follow-up issue for adding the documentation
   




[GitHub] [inlong] EMsnap commented on a diff in pull request #6298: [INLONG-6296][Sort] Split one record to multiple records when the physical data has more records for KafkaLoadNode

Posted by GitBox <gi...@apache.org>.
EMsnap commented on code in PR #6298:
URL: https://github.com/apache/inlong/pull/6298#discussion_r1006357137


##########
inlong-sort/sort-connectors/kafka/src/main/java/org/apache/inlong/sort/kafka/DynamicKafkaSerializationSchema.java:
##########
@@ -172,6 +189,101 @@ public ProducerRecord<byte[], byte[]> serialize(RowData consumedRow, @Nullable L
                 readMetadata(consumedRow, KafkaDynamicSink.WritableMetadata.HEADERS));
     }
 
+    /**
+     * Serialize to a list. It is used in multiple-sink scenarios when a record contains multiple real records.
+     *
+     * @param consumedRow The consumed row
+     * @param timestamp The timestamp
+     * @return List of ProducerRecord
+     */
+    public List<ProducerRecord<byte[], byte[]>> serializeForList(RowData consumedRow, @Nullable Long timestamp) {
+        if (!multipleSink) {
+            return Collections.singletonList(serialize(consumedRow, timestamp));
+        }
+        List<ProducerRecord<byte[], byte[]>> values = new ArrayList<>();
+        try {
+            JsonNode rootNode = jsonDynamicSchemaFormat.deserialize(consumedRow.getBinary(0));
+            boolean isDDL = jsonDynamicSchemaFormat.extractDDLFlag(rootNode);
+            if (isDDL) {
+                values.add(new ProducerRecord<>(
+                        jsonDynamicSchemaFormat.parse(rootNode, topicPattern),
+                        extractPartition(consumedRow, null, consumedRow.getBinary(0)),
+                        null,
+                        consumedRow.getBinary(0)));
+                return values;
+            }
+            JsonNode updateBeforeNode = jsonDynamicSchemaFormat.getUpdateBefore(rootNode);
+            JsonNode updateAfterNode = jsonDynamicSchemaFormat.getUpdateAfter(rootNode);
+            boolean splitRequired = (updateAfterNode != null && updateAfterNode.isArray()

Review Comment:
   Maybe a method for such an operation would make the code cleaner.
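
   One way the suggested extraction could look (the name is hypothetical, and it assumes the operands are Jackson JsonNode values, as in the surrounding code):

   ```
   // Hypothetical helper: true when the node is an array with more than one
   // element. The size check is an assumption, since the quoted diff above
   // is truncated mid-condition.
   private static boolean requiresSplit(JsonNode node) {
       return node != null && node.isArray() && node.size() > 1;
   }
   ```

   The truncated condition above would then become something like requiresSplit(updateAfterNode) || requiresSplit(updateBeforeNode), keeping the null and isArray checks in one place.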





[GitHub] [inlong] yunqingmoswu commented on a diff in pull request #6298: [INLONG-6296][Sort] Split one record to multiple records when the physical data has more records for KafkaLoadNode

Posted by GitBox <gi...@apache.org>.
yunqingmoswu commented on code in PR #6298:
URL: https://github.com/apache/inlong/pull/6298#discussion_r1006588475


##########
inlong-sort/sort-connectors/kafka/src/main/java/org/apache/inlong/sort/kafka/DynamicKafkaSerializationSchema.java:
##########
@@ -172,6 +189,101 @@ public ProducerRecord<byte[], byte[]> serialize(RowData consumedRow, @Nullable L
                 readMetadata(consumedRow, KafkaDynamicSink.WritableMetadata.HEADERS));
     }
 
+    /**
+     * Serialize to a list. It is used in multiple-sink scenarios when a record contains multiple real records.
+     *
+     * @param consumedRow The consumed row
+     * @param timestamp The timestamp
+     * @return List of ProducerRecord
+     */
+    public List<ProducerRecord<byte[], byte[]>> serializeForList(RowData consumedRow, @Nullable Long timestamp) {
+        if (!multipleSink) {
+            return Collections.singletonList(serialize(consumedRow, timestamp));
+        }
+        List<ProducerRecord<byte[], byte[]>> values = new ArrayList<>();
+        try {
+            JsonNode rootNode = jsonDynamicSchemaFormat.deserialize(consumedRow.getBinary(0));
+            boolean isDDL = jsonDynamicSchemaFormat.extractDDLFlag(rootNode);
+            if (isDDL) {
+                values.add(new ProducerRecord<>(
+                        jsonDynamicSchemaFormat.parse(rootNode, topicPattern),
+                        extractPartition(consumedRow, null, consumedRow.getBinary(0)),
+                        null,
+                        consumedRow.getBinary(0)));
+                return values;
+            }
+            JsonNode updateBeforeNode = jsonDynamicSchemaFormat.getUpdateBefore(rootNode);
+            JsonNode updateAfterNode = jsonDynamicSchemaFormat.getUpdateAfter(rootNode);
+            boolean splitRequired = (updateAfterNode != null && updateAfterNode.isArray()

Review Comment:
   It is a good idea.





[GitHub] [inlong] healchow merged pull request #6298: [INLONG-6296][Sort] Split one record to multiple records when the physical data has more records for KafkaLoadNode

Posted by GitBox <gi...@apache.org>.
healchow merged PR #6298:
URL: https://github.com/apache/inlong/pull/6298


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@inlong.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org