Posted to commits@inlong.apache.org by GitBox <gi...@apache.org> on 2022/03/01 06:55:37 UTC

[GitHub] [incubator-inlong] wardlican opened a new pull request #2797: [INLONG-2383][SDK]Sort-sdk support Kafka consumer of PB compression cache message protocol

wardlican opened a new pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797


   
   ### Title Name: [INLONG-2383][SDK]Sort-sdk support Kafka consumer of PB compression cache message protocol
   
   
   Fixes #2383
   
   ### Motivation
   
   The Sort SDK lacks a Kafka consumer for InLong's PB compressed cache message protocol, so sort tasks cannot read data cached in Kafka clusters. This change adds that support to sort-sdk.
   
   ### Modifications
   
   - Add `InLongKafkaFetcherImpl` under `inlong-sdk/sort-sdk/src/main/java/org/apache/inlong/sdk/sort/impl/kafka/`. It creates a `KafkaConsumer<byte[], byte[]>`, subscribes to the InLong topic with an `AckOffsetOnRebalance` listener, and starts a dedicated fetch thread.
   - Acked offsets arrive as `partition:offset` strings, are staged per `TopicPartition` in a `ConcurrentHashMap`, and are committed synchronously from the fetch thread; auto-commit is disabled.
   - The consumer's `auto.offset.reset` is derived from the SDK `ConsumeStrategy` (latest / earliest / none). A usage sketch follows below.
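   For reviewers, a rough usage sketch of the new fetcher API as it appears in this diff. The topic/context wiring and broker addresses below are hypothetical placeholders, not part of the change:

       import org.apache.inlong.sdk.sort.api.ClientContext;
       import org.apache.inlong.sdk.sort.entity.InLongTopic;
       import org.apache.inlong.sdk.sort.impl.kafka.InLongKafkaFetcherImpl;

       public class FetcherUsageSketch {
           public static void run(InLongTopic inLongTopic, ClientContext context) throws Exception {
               InLongKafkaFetcherImpl fetcher = new InLongKafkaFetcherImpl(inLongTopic, context);
               // init() takes the bootstrap servers as an Object and returns false
               // if the consumer could not be created or subscribed
               if (!fetcher.init("broker-1:9092,broker-2:9092")) {
                   throw new IllegalStateException("kafka fetcher init failed");
               }
               // offsets are acked as "partition:offset"; the fetch thread commits
               // the staged offsets synchronously on its next pass
               fetcher.ack("0:12345");
               fetcher.pause();   // sets stopConsume; polling is suspended
               fetcher.resume();
               fetcher.close();   // interrupts the fetch thread, closes the consumer
           }
       }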
   
   ### Verifying this change
   
   - [ ] Make sure that the change passes the CI checks.
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads (10MB)*
     - *Extended integration test for recovery after broker failure*
   
   ### Documentation
   
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
     - If documentation is not applicable for the feature, explain why.
     - If the feature is not yet documented in this PR, please create a follow-up issue for adding the documentation.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@inlong.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-inlong] dockerzhang merged pull request #2797: [INLONG-2383][SDK] Support Kafka to consume PB compressed message protocol

Posted by GitBox <gi...@apache.org>.
dockerzhang merged pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797


   





[GitHub] [incubator-inlong] wardlican commented on a change in pull request #2797: [INLONG-2383][SDK] Support Kafka to consume PB compressed message protocol

Posted by GitBox <gi...@apache.org>.
wardlican commented on a change in pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797#discussion_r820115252



##########
File path: inlong-sdk/sort-sdk/src/main/java/org/apache/inlong/sdk/sort/impl/kafka/InLongKafkaFetcherImpl.java
##########
@@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.inlong.sdk.sort.impl.kafka;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.inlong.sdk.sort.api.ClientContext;
+import org.apache.inlong.sdk.sort.api.InLongTopicFetcher;
+import org.apache.inlong.sdk.sort.api.SortClientConfig.ConsumeStrategy;
+import org.apache.inlong.sdk.sort.entity.InLongMessage;
+import org.apache.inlong.sdk.sort.entity.InLongTopic;
+import org.apache.inlong.sdk.sort.entity.MessageRecord;
+import org.apache.inlong.sdk.sort.util.StringUtil;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.clients.consumer.OffsetAndMetadata;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.header.Header;
+import org.apache.kafka.common.header.Headers;
+import org.apache.kafka.common.serialization.ByteArrayDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class InLongKafkaFetcherImpl extends InLongTopicFetcher {
+
+    private final Logger logger = LoggerFactory.getLogger(InLongKafkaFetcherImpl.class);
+    private final ConcurrentHashMap<TopicPartition, OffsetAndMetadata> commitOffsetMap = new ConcurrentHashMap<>();
+    private final AtomicLong ackOffsets = new AtomicLong(0);
+    private volatile boolean stopConsume = false;
+    private String bootstrapServers;
+    private KafkaConsumer<byte[], byte[]> consumer;
+
+    public InLongKafkaFetcherImpl(InLongTopic inLongTopic, ClientContext context) {
+        super(inLongTopic, context);
+    }
+
+    @Override
+    public boolean init(Object object) {
+        String bootstrapServers = (String) object;
+        try {
+            createKafkaConsumer(bootstrapServers);
+            if (consumer != null) {
+                consumer.subscribe(Collections.singletonList(inLongTopic.getTopic()),
+                        new AckOffsetOnRebalance(consumer, commitOffsetMap));
+            } else {
+                return false;
+            }
+            this.bootstrapServers = bootstrapServers;
+            String threadName = "sort_sdk_fetch_thread_" + StringUtil.formatDate(new Date(), "yyyy-MM-dd HH:mm:ss");
+            this.fetchThread = new Thread(new Fetcher(), threadName);
+            this.fetchThread.start();
+        } catch (Exception e) {
+            logger.error(e.getMessage(), e);
+            return false;
+        }
+        return true;
+    }
+
+    @Override
+    public void ack(String msgOffset) throws Exception {
+        String[] offset = msgOffset.split(":");
+        if (offset.length == 2) {
+            TopicPartition topicPartition = new TopicPartition(inLongTopic.getTopic(), Integer.parseInt(offset[0]));
+            OffsetAndMetadata offsetAndMetadata = new OffsetAndMetadata(Long.parseLong(offset[1]));
+            commitOffsetMap.put(topicPartition, offsetAndMetadata);
+        }
+    }
+
+    @Override
+    public void pause() {
+        this.stopConsume = true;
+    }
+
+    @Override
+    public void resume() {
+        this.stopConsume = false;
+    }
+
+    @Override
+    public boolean close() {
+        this.closed = true;
+        try {
+            if (fetchThread != null) {
+                fetchThread.interrupt();
+            }
+            if (consumer != null) {
+                consumer.close();
+            }
+        } catch (Throwable throwable) {
+            throwable.printStackTrace();
+        }
+        logger.info("closed {}", inLongTopic);
+        return true;
+    }
+
+    @Override
+    public boolean isClosed() {
+        return false;

Review comment:
       not complete yet
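       For reference, a minimal sketch of what the completed method could look like, assuming `closed` is the inherited flag that close() already sets:

           @Override
           public boolean isClosed() {
               // reflect the flag set in close() instead of hard-coding false
               return closed;
           }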







[GitHub] [incubator-inlong] wardlican commented on a change in pull request #2797: [INLONG-2383][SDK] Support Kafka to consume PB compressed message protocol

Posted by GitBox <gi...@apache.org>.
wardlican commented on a change in pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797#discussion_r820115301



##########
File path: inlong-sdk/sort-sdk/src/main/java/org/apache/inlong/sdk/sort/impl/kafka/InLongKafkaFetcherImpl.java
##########
@@ -0,0 +1,324 @@
[diff context elided: the license header, imports, and class body through close() are identical to the hunk quoted in the first review message above]
+    @Override
+    public boolean isClosed() {
+        return false;
+    }
+
+    @Override
+    public void stopConsume(boolean stopConsume) {
+        this.stopConsume = stopConsume;
+    }
+
+    @Override
+    public boolean isConsumeStop() {
+        return this.stopConsume;
+    }
+
+    @Override
+    public InLongTopic getInLongTopic() {
+        return inLongTopic;
+    }
+
+    @Override
+    public long getConsumedDataSize() {
+        return 0;
+    }
+
+    @Override
+    public long getAckedOffset() {
+        return 0;
+    }
+
+    private void createKafkaConsumer(String bootstrapServers) {
+        Properties properties = new Properties();
+        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.put(ConsumerConfig.CLIENT_ID_CONFIG, context.getConfig().getSortTaskId());
+        properties.put(ConsumerConfig.GROUP_ID_CONFIG, context.getConfig().getSortTaskId());
+        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                ByteArrayDeserializer.class.getName());
+        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                ByteArrayDeserializer.class.getName());
+        properties.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG,
+                context.getConfig().getKafkaSocketRecvBufferSize());
+        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
+        ConsumeStrategy offsetResetStrategy = context.getConfig().getOffsetResetStrategy();
+        if (offsetResetStrategy == ConsumeStrategy.lastest
+                || offsetResetStrategy == ConsumeStrategy.lastest_absolutely) {
+            properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
+        } else if (offsetResetStrategy == ConsumeStrategy.earliest
+                || offsetResetStrategy == ConsumeStrategy.earliest_absolutely) {
+            properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        } else {
+            properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
+        }
+        properties.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG,
+                context.getConfig().getKafkaFetchSizeBytes());
+        properties.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG,
+                context.getConfig().getKafkaFetchWaitMs());
+        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
+        properties.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
+                "org.apache.kafka.clients.consumer.StickyAssignor");
+        properties.put(ConsumerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, 120000L);
+        this.bootstrapServers = bootstrapServers;
+        this.consumer = new KafkaConsumer<>(properties);
+    }
+
+    public class Fetcher implements Runnable {
+
+        private void commitKafkaOffset() {
+            if (consumer != null && commitOffsetMap.size() > 0) {
+                try {
+                    consumer.commitSync(commitOffsetMap);
+                    commitOffsetMap.clear();
+                    //TODO monitor commit succ
+
+                } catch (Exception e) {
+                    //TODO monitor commit fail
+                }
+            }
+        }
+
+        /**
+         * put the received msg to onFinished method
+         *
+         * @param messageRecords {@link List < MessageRecord >}
+         */
+        private void handleAndCallbackMsg(List<MessageRecord> messageRecords) {
+            long start = System.currentTimeMillis();
+            try {
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackTimes(1);
+                context.getConfig().getCallback().onFinishedBatch(messageRecords);
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackTimeCost(System.currentTimeMillis() - start).addCallbackDoneTimes(1);
+            } catch (Exception e) {
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackErrorTimes(1);
+                e.printStackTrace();

Review comment:
       ok







[GitHub] [incubator-inlong] vernedeng commented on a change in pull request #2797: [INLONG-2383][SDK] Support Kafka to consume PB compressed message protocol

Posted by GitBox <gi...@apache.org>.
vernedeng commented on a change in pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797#discussion_r819311186



##########
File path: inlong-sdk/sort-sdk/src/main/java/org/apache/inlong/sdk/sort/impl/kafka/InLongKafkaFetcherImpl.java
##########
@@ -0,0 +1,324 @@
[diff context elided: identical to the hunk quoted in the first review message above; only close() and isClosed() are kept for context]
+    @Override
+    public boolean close() {
+        this.closed = true;
+        try {
+            if (fetchThread != null) {
+                fetchThread.interrupt();
+            }
+            if (consumer != null) {
+                consumer.close();
+            }
+        } catch (Throwable throwable) {
+            throwable.printStackTrace();
+        }
+        logger.info("closed {}", inLongTopic);
+        return true;
+    }
+
+    @Override
+    public boolean isClosed() {
+        return false;

Review comment:
       why return false always?







[GitHub] [incubator-inlong] vernedeng commented on a change in pull request #2797: [INLONG-2383][SDK] Support Kafka to consume PB compressed message protocol

Posted by GitBox <gi...@apache.org>.
vernedeng commented on a change in pull request #2797:
URL: https://github.com/apache/incubator-inlong/pull/2797#discussion_r819311786



##########
File path: inlong-sdk/sort-sdk/src/main/java/org/apache/inlong/sdk/sort/impl/kafka/InLongKafkaFetcherImpl.java
##########
@@ -0,0 +1,324 @@
[diff context elided: identical to the hunks quoted above; only handleAndCallbackMsg() is kept for context]
+        /**
+         * put the received msg to onFinished method
+         *
+         * @param messageRecords {@link List < MessageRecord >}
+         */
+        private void handleAndCallbackMsg(List<MessageRecord> messageRecords) {
+            long start = System.currentTimeMillis();
+            try {
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackTimes(1);
+                context.getConfig().getCallback().onFinishedBatch(messageRecords);
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackTimeCost(System.currentTimeMillis() - start).addCallbackDoneTimes(1);
+            } catch (Exception e) {
+                context.getStatManager()
+                        .getStatistics(context.getConfig().getSortTaskId(),
+                                inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
+                        .addCallbackErrorTimes(1);
+                e.printStackTrace();

Review comment:
       should print log
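       A minimal sketch of the suggested change, reusing the class's existing SLF4J logger field instead of writing the stack trace to stderr:

           } catch (Exception e) {
               context.getStatManager()
                       .getStatistics(context.getConfig().getSortTaskId(),
                               inLongTopic.getInLongCluster().getClusterId(), inLongTopic.getTopic())
                       .addCallbackErrorTimes(1);
               // keep the full stack trace, but route it through the logger
               logger.error("callback failed for topic {}", inLongTopic.getTopic(), e);
           }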



