Posted to commits@seatunnel.apache.org by GitBox <gi...@apache.org> on 2022/10/23 09:17:31 UTC

[GitHub] [incubator-seatunnel] 531651225 opened a new pull request, #3164: [Feature][Connector-V2] Starrocks sink connector

531651225 opened a new pull request, #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164

   <!--
   
   Thank you for contributing to SeaTunnel! Please make sure that your code changes
   are covered with tests. And in case of new features or big changes
   remember to adjust the documentation.
   
   Feel free to ping committers for the review!
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [GITHUB issue](https://github.com/apache/incubator-seatunnel/issues).
   
     - Name the pull request in the form "[Feature] [component] Title of the pull request", where *Feature* can be replaced by `Hotfix`, `Bug`, etc.
   
     - Minor fixes should be named following this pattern: `[hotfix] [docs] Fix typo in README.md doc`.
   
   -->
   
   ## Purpose of this pull request
   Refers to https://github.com/apache/incubator-seatunnel/issues/3018
   Adds a StarRocks sink connector that writes data via StarRocks stream load.
   
   #### Description
   Used to send data to StarRocks. Supports both streaming and batch mode.
   Internally, the StarRocks sink connector buffers rows and imports them into StarRocks in batches via stream load.
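   The batching behaviour described above can be sketched as follows. This is a simplified, hypothetical illustration (class and field names are not from this PR): rows accumulate in a buffer that is flushed once either the row-count or byte-size threshold is reached.

   ```java
   import java.nio.charset.StandardCharsets;
   import java.util.ArrayList;
   import java.util.List;

   public class BatchBuffer {
       private final int maxRows;
       private final long maxBytes;
       private final List<String> buffer = new ArrayList<>();
       private long bufferBytes;
       private int flushCount; // counts flushes, for illustration only

       public BatchBuffer(int maxRows, long maxBytes) {
           this.maxRows = maxRows;
           this.maxBytes = maxBytes;
       }

       public void write(String record) {
           buffer.add(record);
           bufferBytes += record.getBytes(StandardCharsets.UTF_8).length;
           // Flush when either threshold is hit; in the real connector a
           // scheduled task would additionally flush on a time interval.
           if (buffer.size() >= maxRows || bufferBytes >= maxBytes) {
               flush();
           }
       }

       public void flush() {
           if (buffer.isEmpty()) {
               return;
           }
           // In the real connector this is where stream load would be invoked.
           flushCount++;
           buffer.clear();
           bufferBytes = 0;
       }

       public int getFlushCount() {
           return flushCount;
       }
   }
   ```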
   
   <!-- Describe the purpose of this pull request. For example: This pull request adds checkstyle plugin.-->
   
   ## Check list
   
   * [x] Code changes are covered with tests, or do not need tests for the reason:
   * [x] If any new Jar binary packages are added in your PR, please add a License Notice according to the
     [New License Guide](https://github.com/apache/incubator-seatunnel/blob/dev/docs/en/contribution/new-license.md)
   * [x] If necessary, please update the documentation to describe the new feature. https://github.com/apache/incubator-seatunnel/tree/dev/docs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@seatunnel.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [incubator-seatunnel] EricJoy2048 closed pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
EricJoy2048 closed pull request #3164: [Feature][Connector-V2] Starrocks sink connector
URL: https://github.com/apache/incubator-seatunnel/pull/3164




[GitHub] [incubator-seatunnel] TaoZex commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
TaoZex commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1002835501


##########
docs/en/connector-v2/sink/StarRocks.md:
##########
@@ -0,0 +1,120 @@
+# StarRocks
+
+> StarRocks sink connector
+
+## Description
+Used to send data to StarRocks. Both support streaming and batch mode.
+The internal implementation of StarRocks sink connector is cached and imported by stream load in batches.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                        | type                         | required | default value   |
+|-----------------------------|------------------------------|----------|-----------------|
+| node_urls                   | list                         | yes      | -               |
+| username                    | string                       | yes      | -               |
+| password                    | string                       | yes      | -               |
+| database                    | string                       | yes      | -               |
+| table                       | string                       | no       | -               |
+| labelPrefix                 | string                       | no       | -               |
+| batch_max_rows              | long                         | no       | 1024            |
+| batch_max_bytes             | int                          | no       | 5 * 1024 * 1024 |
+| batch_interval_ms           | int                          | no       | -               |
+| max_retries                 | int                          | no       | -               |
+| retry_backoff_multiplier_ms | int                          | no       | -               |
+| max_retry_backoff_ms        | int                          | no       | -               |
+| sink.properties.*           | starrocks stream load config | no       | -               |
+
+### node_urls [list]
+
+`StarRocks` cluster address, the format is `["fe_ip:fe_http_port", ...]`
+
+### username [string]
+
+`StarRocks` user username
+
+### password [string]
+
+`StarRocks` user password
+
+### database [string]
+
+The name of StarRocks database
+
+### table [string]
+
+The name of StarRocks table
+
+### labelPrefix [string]
+
+the prefix of  StarRocks stream load label

Review Comment:
   Add a blank line



##########
docs/en/connector-v2/sink/StarRocks.md:
##########
@@ -0,0 +1,120 @@
+# StarRocks
+
+> StarRocks sink connector
+
+## Description
+Used to send data to StarRocks. Both support streaming and batch mode.
+The internal implementation of StarRocks sink connector is cached and imported by stream load in batches.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                        | type                         | required | default value   |
+|-----------------------------|------------------------------|----------|-----------------|
+| node_urls                   | list                         | yes      | -               |
+| username                    | string                       | yes      | -               |
+| password                    | string                       | yes      | -               |
+| database                    | string                       | yes      | -               |
+| table                       | string                       | no       | -               |
+| labelPrefix                 | string                       | no       | -               |
+| batch_max_rows              | long                         | no       | 1024            |
+| batch_max_bytes             | int                          | no       | 5 * 1024 * 1024 |
+| batch_interval_ms           | int                          | no       | -               |
+| max_retries                 | int                          | no       | -               |
+| retry_backoff_multiplier_ms | int                          | no       | -               |
+| max_retry_backoff_ms        | int                          | no       | -               |
+| sink.properties.*           | starrocks stream load config | no       | -               |
+
+### node_urls [list]
+
+`StarRocks` cluster address, the format is `["fe_ip:fe_http_port", ...]`
+
+### username [string]
+
+`StarRocks` user username
+
+### password [string]
+
+`StarRocks` user password
+
+### database [string]
+
+The name of StarRocks database
+
+### table [string]
+
+The name of StarRocks table
+
+### labelPrefix [string]
+
+the prefix of  StarRocks stream load label
+### batch_max_rows [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### batch_max_bytes [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### batch_interval_ms [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### max_retries [string]
+
+The number of retries to flush failed
+
+### retry_backoff_multiplier_ms [string]
+
+Using as a multiplier for generating the next delay for backoff
+
+### max_retry_backoff_ms [string]
+
+The amount of time to wait before attempting to retry a request to `StarRocks`
+
+### sink.properties.*  [starrocks stream load config]

Review Comment:
   Add a blank line


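The retry options documented in the hunk above (`retry_backoff_multiplier_ms`, `max_retry_backoff_ms`) typically combine into a delay that grows with the attempt number and is capped at a maximum. The following is a sketch under that assumption, not the connector's actual code:

```java
public class BackoffCalculator {
    // Delay before retry attempt `attempt` (1-based): the attempt number is
    // scaled by retry_backoff_multiplier_ms and capped at max_retry_backoff_ms.
    // Illustration only; the real implementation may use a different curve.
    public static long backoffMs(int attempt, long multiplierMs, long maxBackoffMs) {
        long delay = (long) attempt * multiplierMs;
        return Math.min(delay, maxBackoffMs);
    }
}
```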

##########
seatunnel-connectors-v2/connector-starrocks/pom.xml:
##########
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <parent>
+        <artifactId>seatunnel-connectors-v2</artifactId>
+        <groupId>org.apache.seatunnel</groupId>
+        <version>${revision}</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>connector-starrocks</artifactId>
+
+    <properties>
+        <httpclient.version>4.5.13</httpclient.version>
+        <httpcore.version>4.4.4</httpcore.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.seatunnel</groupId>
+            <artifactId>seatunnel-api</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.seatunnel</groupId>
+            <artifactId>connector-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.httpcomponents</groupId>
+            <artifactId>httpclient</artifactId>
+            <version>${httpclient.version}</version>
+        </dependency>
+

Review Comment:
   Delete this line



##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/sink/StarRocksSink.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.sink;
+
+import static org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig.DATABASE;
+import static org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig.NODE_URLS;
+import static org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig.TABLE;
+
+import org.apache.seatunnel.api.common.PrepareFailException;
+import org.apache.seatunnel.api.sink.SeaTunnelSink;
+import org.apache.seatunnel.api.sink.SinkWriter;
+import org.apache.seatunnel.api.table.type.SeaTunnelDataType;
+import org.apache.seatunnel.api.table.type.SeaTunnelRow;
+import org.apache.seatunnel.api.table.type.SeaTunnelRowType;
+import org.apache.seatunnel.common.config.CheckConfigUtil;
+import org.apache.seatunnel.common.config.CheckResult;
+import org.apache.seatunnel.common.constants.PluginType;
+import org.apache.seatunnel.connectors.seatunnel.common.sink.AbstractSimpleSink;
+import org.apache.seatunnel.connectors.seatunnel.common.sink.AbstractSinkWriter;
+
+import org.apache.seatunnel.shade.com.typesafe.config.Config;
+
+import com.google.auto.service.AutoService;
+
+@AutoService(SeaTunnelSink.class)
+public class StarRocksSink extends AbstractSimpleSink<SeaTunnelRow, Void> {
+
+    private Config pluginConfig;
+    private SeaTunnelRowType seaTunnelRowType;
+
+    @Override
+    public String getPluginName() {
+        return "StarRocks";
+    }
+
+    @Override
+    public void prepare(Config pluginConfig) throws PrepareFailException {
+        this.pluginConfig = pluginConfig;
+        CheckResult result = CheckConfigUtil.checkAllExists(pluginConfig, NODE_URLS, DATABASE, TABLE);

Review Comment:
   Do we need to verify username and password?



##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksStreamLoadVisitor.java:
##########
@@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import org.apache.seatunnel.common.utils.JsonUtils;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.serialize.StarRocksDelimiterParser;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpStatus;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.http.util.EntityUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+public class StarRocksStreamLoadVisitor {
+
+    private static final Logger LOG = LoggerFactory.getLogger(StarRocksStreamLoadVisitor.class);
+    private static final int CONNECT_TIMEOUT = 1000000;
+    private static final int MAX_SLEEP_TIME = 5;
+
+    private final SinkConfig sinkConfig;
+    private long pos;
+    private static final String RESULT_FAILED = "Fail";
+    private static final String RESULT_LABEL_EXISTED = "Label Already Exists";
+    private static final String LAEBL_STATE_VISIBLE = "VISIBLE";
+    private static final String LAEBL_STATE_COMMITTED = "COMMITTED";
+    private static final String RESULT_LABEL_PREPARE = "PREPARE";
+    private static final String RESULT_LABEL_ABORTED = "ABORTED";
+    private static final String RESULT_LABEL_UNKNOWN = "UNKNOWN";
+
+    private List<String> fieldNames;
+
+    public StarRocksStreamLoadVisitor(SinkConfig sinkConfig, List<String> fieldNames) {
+        this.sinkConfig = sinkConfig;
+        this.fieldNames = fieldNames;
+    }
+
+    public void doStreamLoad(StarRocksFlushTuple flushData) throws IOException {
+        String host = getAvailableHost();
+        if (null == host) {
+            throw new IOException("None of the host in `load_url` could be connected.");
+        }
+        String loadUrl = new StringBuilder(host)
+                .append("/api/")
+                .append(sinkConfig.getDatabase())
+                .append("/")
+                .append(sinkConfig.getTable())
+                .append("/_stream_load")
+                .toString();
+        if (LOG.isDebugEnabled()) {
+            LOG.debug(String.format("Start to join batch data: rows[%d] bytes[%d] label[%s].", flushData.getRows().size(), flushData.getBytes(), flushData.getLabel()));
+        }
+        Map<String, Object> loadResult = doHttpPut(loadUrl, flushData.getLabel(), joinRows(flushData.getRows(), flushData.getBytes().intValue()));
+        final String keyStatus = "Status";
+        if (null == loadResult || !loadResult.containsKey(keyStatus)) {
+            LOG.error("unknown result status. {}", loadResult);
+            throw new IOException("Unable to flush data to StarRocks: unknown result status. " + loadResult);
+        }
+        if (LOG.isDebugEnabled()) {
+            LOG.debug(new StringBuilder("StreamLoad response:\n").append(JsonUtils.toJsonString(loadResult)).toString());
+        }
+        if (RESULT_FAILED.equals(loadResult.get(keyStatus))) {
+            StringBuilder errorBuilder = new StringBuilder("Failed to flush data to StarRocks.\n");
+            if (loadResult.containsKey("Message")) {
+                errorBuilder.append(loadResult.get("Message"));
+                errorBuilder.append('\n');
+            }
+            if (loadResult.containsKey("ErrorURL")) {
+                LOG.error("StreamLoad response: {}", loadResult);
+                try {
+                    errorBuilder.append(doHttpGet(loadResult.get("ErrorURL").toString()));
+                    errorBuilder.append('\n');
+                } catch (IOException e) {
+                    LOG.warn("Get Error URL failed. {} ", loadResult.get("ErrorURL"), e);
+                }
+            } else {
+                errorBuilder.append(JsonUtils.toJsonString(loadResult));
+                errorBuilder.append('\n');
+            }
+            throw new IOException(errorBuilder.toString());
+        } else if (RESULT_LABEL_EXISTED.equals(loadResult.get(keyStatus))) {
+            LOG.debug(new StringBuilder("StreamLoad response:\n").append(JsonUtils.toJsonString(loadResult)).toString());
+            // has to block-checking the state to get the final result
+            checkLabelState(host, flushData.getLabel());
+        }
+    }
+
+    private String getAvailableHost() {
+        List<String> hostList = sinkConfig.getNodeUrls();
+        long tmp = pos + hostList.size();
+        for (; pos < tmp; pos++) {
+            String host = new StringBuilder("http://").append(hostList.get((int) (pos % hostList.size()))).toString();
+            if (tryHttpConnection(host)) {
+                return host;
+            }
+        }
+        return null;
+    }
+
+    private boolean tryHttpConnection(String host) {
+        try {
+            URL url = new URL(host);
+            HttpURLConnection co = (HttpURLConnection) url.openConnection();
+            co.setConnectTimeout(CONNECT_TIMEOUT);
+            co.connect();
+            co.disconnect();
+            return true;
+        } catch (Exception e1) {
+            LOG.warn("Failed to connect to address:{}", host, e1);
+            return false;
+        }
+    }
+
+    private byte[] joinRows(List<byte[]> rows, int totalBytes) {
+        if (SinkConfig.StreamLoadFormat.CSV.equals(sinkConfig.getLoadFormat())) {
+            Map<String, Object> props = sinkConfig.getStreamLoadProps();
+            byte[] lineDelimiter = StarRocksDelimiterParser.parse((String) props.get("row_delimiter"), "\n").getBytes(StandardCharsets.UTF_8);
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + rows.size() * lineDelimiter.length);
+            for (byte[] row : rows) {
+                bos.put(row);
+                bos.put(lineDelimiter);
+            }
+            return bos.array();
+        }
+
+        if (SinkConfig.StreamLoadFormat.JSON.equals(sinkConfig.getLoadFormat())) {
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + (rows.isEmpty() ? 2 : rows.size() + 1));
+            bos.put("[".getBytes(StandardCharsets.UTF_8));
+            byte[] jsonDelimiter = ",".getBytes(StandardCharsets.UTF_8);
+            boolean isFirstElement = true;
+            for (byte[] row : rows) {
+                if (!isFirstElement) {
+                    bos.put(jsonDelimiter);
+                }
+                bos.put(row);
+                isFirstElement = false;
+            }
+            bos.put("]".getBytes(StandardCharsets.UTF_8));
+            return bos.array();
+        }
+        throw new RuntimeException("Failed to join rows data, unsupported `format` from stream load properties:");
+    }
+
+    @SuppressWarnings("unchecked")
+    private void checkLabelState(String host, String label) throws IOException {
+        int idx = 0;
+        while (true) {
+            try {
+                TimeUnit.SECONDS.sleep(Math.min(++idx, MAX_SLEEP_TIME));
+            } catch (InterruptedException ex) {
+                break;
+            }
+            try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
+                HttpGet httpGet = new HttpGet(new StringBuilder(host).append("/api/").append(sinkConfig.getDatabase()).append("/get_load_state?label=").append(label).toString());
+                httpGet.setHeader("Authorization", getBasicAuthHeader(sinkConfig.getUsername(), sinkConfig.getPassword()));
+                httpGet.setHeader("Connection", "close");
+                try (CloseableHttpResponse resp = httpclient.execute(httpGet)) {
+                    HttpEntity respEntity = getHttpEntity(resp);
+                    if (respEntity == null) {
+                        throw new IOException(String.format("Failed to flush data to StarRocks, Error " +
+                                "could not get the final state of label[%s].\n", label), null);
+                    }
+
+                    Map<String, Object> result = JsonUtils.parseObject(EntityUtils.toString(respEntity), Map.class);
+                    String labelState = (String) result.get("state");
+                    if (null == labelState) {
+                        throw new IOException(String.format("Failed to flush data to StarRocks, Error " +
+                                "could not get the final state of label[%s]. response[%s]\n", label, EntityUtils.toString(respEntity)), null);
+                    }
+                    LOG.info(String.format("Checking label[%s] state[%s]\n", label, labelState));
+                    switch (labelState) {
+                        case LAEBL_STATE_VISIBLE:
+                        case LAEBL_STATE_COMMITTED:
+                            return;
+                        case RESULT_LABEL_PREPARE:
+                            continue;
+                        case RESULT_LABEL_ABORTED:
+                            throw new StarRocksStreamLoadFailedException(String.format("Failed to flush data to StarRocks, Error " +
+                                    "label[%s] state[%s]\n", label, labelState), null, true);
+                        case RESULT_LABEL_UNKNOWN:
+                        default:
+                            throw new StarRocksStreamLoadFailedException(String.format("Failed to flush data to StarRocks, Error " +
+                                    "label[%s] state[%s]\n", label, labelState), null);
+                    }
+                }
+            }
+        }
+    }
+
+    @SuppressWarnings("unchecked")
+    private Map<String, Object> doHttpPut(String loadUrl, String label, byte[] data) throws IOException {

Review Comment:
   How about moving the HTTP-related methods into a separate class?


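The `checkLabelState` code in the hunk above calls `getBasicAuthHeader`, whose body is not shown in this diff. A typical implementation of such a helper (an assumption, not necessarily the PR's code) base64-encodes `username:password`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthHeader {
    // Builds an HTTP Basic Authorization header value from credentials.
    // Hypothetical reconstruction of the getBasicAuthHeader helper referenced
    // above; the actual body is not visible in this hunk.
    public static String getBasicAuthHeader(String username, String password) {
        String auth = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(auth.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }
}
```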

##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/sink/StarRocksSinkWriter.java:
##########
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.sink;
+
+import org.apache.seatunnel.api.table.type.SeaTunnelRow;
+import org.apache.seatunnel.api.table.type.SeaTunnelRowType;
+import org.apache.seatunnel.connectors.seatunnel.common.sink.AbstractSinkWriter;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.client.StarRocksSinkManager;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.serialize.StarRocksCsvSerializer;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.serialize.StarRocksISerializer;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.serialize.StarRocksJsonSerializer;
+
+import org.apache.seatunnel.shade.com.typesafe.config.Config;
+
+import lombok.SneakyThrows;
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+@Slf4j
+public class StarRocksSinkWriter extends AbstractSinkWriter<SeaTunnelRow, Void> {
+
+    private final StarRocksISerializer serializer;
+    private final StarRocksSinkManager manager;
+
+    public StarRocksSinkWriter(Config pluginConfig,
+                               SeaTunnelRowType seaTunnelRowType) {
+        SinkConfig sinkConfig = SinkConfig.loadConfig(pluginConfig);
+        List<String> fieldNames = Arrays.stream(seaTunnelRowType.getFieldNames()).collect(Collectors.toList());
+        this.serializer = createSerializer(sinkConfig, seaTunnelRowType);
+        this.manager = new StarRocksSinkManager(sinkConfig, fieldNames);
+    }
+
+    @Override
+    public void write(SeaTunnelRow element) throws IOException {
+        String record = serializer.serialize(element);
+        manager.write(record);
+    }
+
+    @SneakyThrows
+    @Override
+    public Optional<Void> prepareCommit() {
+        // Flush to storage before snapshot state is performed
+        manager.flush();
+        return super.prepareCommit();
+    }
+
+    @Override
+    public void close() throws IOException {
+        manager.close();

Review Comment:
   How about checking whether manager is null?


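The writer in the hunk above delegates to `createSerializer`, whose body is not shown in this diff. A plausible sketch of format-based serializer selection (hypothetical names; the JSON branch is simplified and quotes every value as a string):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SerializerFactory {
    interface Serializer {
        String serialize(Object[] row);
    }

    enum Format { CSV, JSON }

    // Picks a row serializer by load format, mirroring the CSV/JSON split
    // visible in the imports above (StarRocksCsvSerializer / StarRocksJsonSerializer).
    static Serializer create(Format format, List<String> fieldNames, String columnSeparator) {
        switch (format) {
            case CSV:
                return row -> Arrays.stream(row)
                        .map(String::valueOf)
                        .collect(Collectors.joining(columnSeparator));
            case JSON:
                return row -> {
                    StringBuilder sb = new StringBuilder("{");
                    for (int i = 0; i < fieldNames.size(); i++) {
                        if (i > 0) {
                            sb.append(',');
                        }
                        sb.append('"').append(fieldNames.get(i)).append("\":\"")
                          .append(row[i]).append('"');
                    }
                    return sb.append('}').toString();
                };
            default:
                throw new IllegalArgumentException("Unsupported format: " + format);
        }
    }
}
```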

##########
docs/en/connector-v2/sink/StarRocks.md:
##########
@@ -0,0 +1,120 @@
+# StarRocks
+
+> StarRocks sink connector
+
+## Description
+Used to send data to StarRocks. Both support streaming and batch mode.
+The internal implementation of StarRocks sink connector is cached and imported by stream load in batches.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                        | type                         | required | default value   |
+|-----------------------------|------------------------------|----------|-----------------|
+| node_urls                   | list                         | yes      | -               |
+| username                    | string                       | yes      | -               |
+| password                    | string                       | yes      | -               |
+| database                    | string                       | yes      | -               |
+| table                       | string                       | no       | -               |
+| labelPrefix                 | string                       | no       | -               |
+| batch_max_rows              | long                         | no       | 1024            |
+| batch_max_bytes             | int                          | no       | 5 * 1024 * 1024 |
+| batch_interval_ms           | int                          | no       | -               |
+| max_retries                 | int                          | no       | -               |
+| retry_backoff_multiplier_ms | int                          | no       | -               |
+| max_retry_backoff_ms        | int                          | no       | -               |
+| sink.properties.*           | starrocks stream load config | no       | -               |
+
+### node_urls [list]
+
+`StarRocks` cluster address, the format is `["fe_ip:fe_http_port", ...]`
+
+### username [string]
+
+`StarRocks` user username
+
+### password [string]
+
+`StarRocks` user password
+
+### database [string]
+
+The name of StarRocks database
+
+### table [string]
+
+The name of StarRocks table
+
+### labelPrefix [string]
+
+the prefix of  StarRocks stream load label
+### batch_max_rows [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### batch_max_bytes [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### batch_interval_ms [string]
+
+For batch writing, when the number of buffers reaches the number of `batch_max_rows` or the byte size of `batch_max_bytes` or the time reaches `batch_interval_ms`, the data will be flushed into the StarRocks
+
+### max_retries [string]
+
+The number of retries to flush failed
+
+### retry_backoff_multiplier_ms [string]
+
+Using as a multiplier for generating the next delay for backoff
+
+### max_retry_backoff_ms [string]
+
+The amount of time to wait before attempting to retry a request to `StarRocks`
+
+### sink.properties.* [starrocks stream load config]
+
+The parameters of the stream load `data_desc`.
+To specify a parameter, add the prefix `sink.properties.` to the original stream load parameter name.
+For example, the way to specify `strip_outer_array` is: `sink.properties.strip_outer_array`.
+
+#### Supported import data formats

Review Comment:
   Add a blank line
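
The flush triggers described in the quoted documentation (row count, byte size, or time interval, whichever is reached first) can be sketched roughly as follows; class and method names here are illustrative, not the connector's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the documented flush triggers (batch_max_rows /
// batch_max_bytes); names are hypothetical, not the connector's actual code.
class BatchBuffer {
    private final long batchMaxRows;
    private final long batchMaxBytes;
    private final List<byte[]> rows = new ArrayList<>();
    private long bytes;

    BatchBuffer(long batchMaxRows, long batchMaxBytes) {
        this.batchMaxRows = batchMaxRows;
        this.batchMaxBytes = batchMaxBytes;
    }

    // Returns true once the buffered batch should be flushed to StarRocks.
    boolean add(byte[] row) {
        rows.add(row);
        bytes += row.length;
        return rows.size() >= batchMaxRows || bytes >= batchMaxBytes;
    }
}
```

The time-based trigger (`batch_interval_ms`) is handled separately by a scheduled flush task, as in the `StarRocksSinkManager` code quoted below.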



##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksSinkManager.java:
##########
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig;
+
+import com.google.common.base.Strings;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+@Slf4j
+public class StarRocksSinkManager {
+
+    private final SinkConfig sinkConfig;
+    private final List<byte[]> batchList;
+
+    private StarRocksStreamLoadVisitor starrocksStreamLoadVisitor;
+    private ScheduledExecutorService scheduler;
+    private ScheduledFuture<?> scheduledFuture;
+    private volatile boolean initialize;
+    private volatile Exception flushException;
+    private int batchRowCount = 0;
+    private long batchBytesSize = 0;
+
+    public StarRocksSinkManager(SinkConfig sinkConfig, List<String> fileNames) {
+        this.sinkConfig = sinkConfig;
+        this.batchList = new ArrayList<>();
+        starrocksStreamLoadVisitor = new StarRocksStreamLoadVisitor(sinkConfig, fileNames);
+    }
+
+    private void tryInit() throws IOException {
+        if (initialize) {
+            return;
+        }
+
+        if (sinkConfig.getBatchIntervalMs() != null) {

Review Comment:
   sinkConfig.getBatchIntervalMs() is called repeatedly; it is recommended to store the value in a local variable.
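
The reviewer's point can be illustrated with a minimal sketch (simplified stand-ins, not the actual patch):

```java
// Minimal illustration of the suggested refactor: read the interval once
// into a local variable instead of calling the getter twice. Names are
// simplified stand-ins for the actual sink code.
class IntervalExample {
    static String schedule(Integer batchIntervalMs) {
        if (batchIntervalMs == null) {
            return "no periodic flush";
        }
        long intervalMs = batchIntervalMs; // single read, reused below
        return "flush every " + intervalMs + " ms (initial delay " + intervalMs + " ms)";
    }
}
```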



##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksSinkManager.java:
##########
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig;
+
+import com.google.common.base.Strings;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+@Slf4j
+public class StarRocksSinkManager {
+
+    private final SinkConfig sinkConfig;
+    private final List<byte[]> batchList;
+
+    private StarRocksStreamLoadVisitor starrocksStreamLoadVisitor;
+    private ScheduledExecutorService scheduler;
+    private ScheduledFuture<?> scheduledFuture;
+    private volatile boolean initialize;
+    private volatile Exception flushException;
+    private int batchRowCount = 0;
+    private long batchBytesSize = 0;
+
+    public StarRocksSinkManager(SinkConfig sinkConfig, List<String> fileNames) {
+        this.sinkConfig = sinkConfig;
+        this.batchList = new ArrayList<>();
+        starrocksStreamLoadVisitor = new StarRocksStreamLoadVisitor(sinkConfig, fileNames);
+    }
+
+    private void tryInit() throws IOException {
+        if (initialize) {
+            return;
+        }
+
+        if (sinkConfig.getBatchIntervalMs() != null) {
+            scheduler = Executors.newSingleThreadScheduledExecutor(
+                    new ThreadFactoryBuilder().setNameFormat("StarRocks-sink-output-%s").build());
+            scheduledFuture = scheduler.scheduleAtFixedRate(
+                () -> {
+                    try {
+                        flush();
+                    } catch (IOException e) {
+                        flushException = e;
+                    }
+                },
+                    sinkConfig.getBatchIntervalMs(),
+                    sinkConfig.getBatchIntervalMs(),
+                    TimeUnit.MILLISECONDS);
+        }
+        initialize = true;

Review Comment:
   How about putting this line before line 61?
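
The suggestion concerns where the flag is set inside a guard-style `tryInit()`; a minimal sketch of the pattern (illustrative names, not the connector's code):

```java
// Sketch of the guard-style lazy init under discussion. Setting the flag
// before the setup work means a setup failure will not be retried on the
// next call; class and method names are illustrative.
class LazyInit {
    private boolean initialized;
    private int setupRuns;

    void tryInit() {
        if (initialized) {
            return;
        }
        initialized = true; // flag set first, as the reviewer suggests
        setupRuns++;        // stands in for creating the flush scheduler
    }

    int setupRuns() {
        return setupRuns;
    }
}
```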



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@seatunnel.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [incubator-seatunnel] EricJoy2048 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
EricJoy2048 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1007934366


##########
docs/en/connector-v2/sink/StarRocks.md:
##########
@@ -0,0 +1,122 @@
+# StarRocks
+
+> StarRocks sink connector
+
+## Description
+Used to send data to StarRocks. Both streaming and batch mode are supported.
+Internally, the StarRocks sink connector caches data and imports it in batches via stream load.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                        | type                         | required | default value   |
+|-----------------------------|------------------------------|----------|-----------------|
+| node_urls                   | list                         | yes      | -               |
+| username                    | string                       | yes      | -               |
+| password                    | string                       | yes      | -               |
+| database                    | string                       | yes      | -               |
+| table                       | string                       | no       | -               |
+| labelPrefix                 | string                       | no       | -               |
+| batch_max_rows              | long                         | no       | 1024            |
+| batch_max_bytes             | int                          | no       | 5 * 1024 * 1024 |
+| batch_interval_ms           | int                          | no       | -               |
+| max_retries                 | int                          | no       | -               |
+| retry_backoff_multiplier_ms | int                          | no       | -               |
+| max_retry_backoff_ms        | int                          | no       | -               |
+| sink.properties.*           | starrocks stream load config | no       | -               |
+
+### node_urls [list]
+
+`StarRocks` cluster address, the format is `["fe_ip:fe_http_port", ...]`
+
+### username [string]
+
+`StarRocks` username
+
+### password [string]
+
+`StarRocks` user password
+
+### database [string]
+
+The name of StarRocks database
+
+### table [string]
+
+The name of StarRocks table
+
+### labelPrefix [string]
+
+The prefix of the StarRocks stream load label
+
+### batch_max_rows [long]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### batch_max_bytes [int]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### batch_interval_ms [int]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### max_retries [int]
+
+The number of retries when a flush fails
+
+### retry_backoff_multiplier_ms [int]
+
+Used as a multiplier for generating the next backoff delay
+
+### max_retry_backoff_ms [int]
+
+The maximum amount of time to wait before retrying a request to `StarRocks`
+
+### sink.properties.* [starrocks stream load config]
+
+The parameters of the stream load `data_desc`.
+To specify a parameter, add the prefix `sink.properties.` to the original stream load parameter name.
+For example, the way to specify `strip_outer_array` is: `sink.properties.strip_outer_array`.
+
+#### Supported import data formats
+
+The supported formats include CSV and JSON. Default value: CSV
+
+## Example
+Use JSON format to import data
+```
+sink {
+    StarRocks {
+        nodeUrls = ["e2e_starRocksdb:8030"]
+        username = root
+        password = ""
+        database = "test"
+        table = "e2e_table_sink"
+        batch_max_rows = 10
+        sink.properties.format = "JSON"
+        sink.properties.strip_outer_array = true
+    }
+}
+
+```
+
+Use CSV format to import data
+```
+sink {
+    StarRocks {
+        nodeUrls = ["e2e_starRocksdb:8030"]
+        username = root
+        password = ""
+        database = "test"
+        table = "e2e_table_sink"
+        batch_max_rows = 10
+        sink.properties.format = "CSV"
+        sink.properties.column_separator = "\\x01",
+        sink.properties.row_delimiter = "\\x02"
+    }
+}
+```

Review Comment:
   Please add the `Change log` reference https://github.com/apache/incubator-seatunnel/blob/dev/docs/en/connector-v2/source/SftpFile.md
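
Beyond the change log, the retry options in the quoted documentation (`max_retries`, `retry_backoff_multiplier_ms`, `max_retry_backoff_ms`) imply a capped exponential backoff; a hedged sketch of that idea (the connector's actual formula may differ):

```java
// Hedged sketch of an exponential backoff implied by the quoted options;
// not the connector's actual implementation.
class BackoffSketch {
    static long nextDelayMs(int attempt, long multiplierMs, long maxBackoffMs) {
        long delay = multiplierMs * (1L << attempt); // doubles per attempt
        return Math.min(delay, maxBackoffMs);        // capped by max_retry_backoff_ms
    }
}
```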





[GitHub] [incubator-seatunnel] hailin0 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
hailin0 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1002842786


##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksFlushTuple.java:
##########
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import java.util.List;
+
+public class StarRocksFlushTuple {

Review Comment:
   ```suggestion
   @AllArgsConstructor
   @Getter
   @Setter
   public class StarRocksFlushTuple {
   ```



##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksFlushTuple.java:
##########
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import java.util.List;
+
+public class StarRocksFlushTuple {
+    private String label;
+    private Long bytes;
+    private List<byte[]> rows;
+
+    public StarRocksFlushTuple(String label, Long bytes, List<byte[]> rows) {
+        this.label = label;
+        this.bytes = bytes;
+        this.rows = rows;
+    }
+
+    public String getLabel() {
+        return label;
+    }
+
+    public void setLabel(String label) {
+        this.label = label;
+    }
+
+    public Long getBytes() {
+        return bytes;
+    }
+
+    public List<byte[]> getRows() {
+        return rows;
+    }

Review Comment:
   remove





[GitHub] [incubator-seatunnel] 531651225 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
531651225 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1013962165


##########
seatunnel-e2e/seatunnel-connector-v2-e2e/connector-starrocks-e2e/src/test/resources/log4j.properties:
##########
@@ -0,0 +1,22 @@
+#

Review Comment:
   > Remove this file.
   > 
   > This is the common configuration of e2e https://github.com/apache/incubator-seatunnel/blob/dev/seatunnel-e2e/seatunnel-e2e-common/src/test/resources/log4j2.properties
   
   thanks, I fixed it.





[GitHub] [incubator-seatunnel] 531651225 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
531651225 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1007986763


##########
docs/en/connector-v2/sink/StarRocks.md:
##########
@@ -0,0 +1,122 @@
+# StarRocks
+
+> StarRocks sink connector
+
+## Description
+Used to send data to StarRocks. Both streaming and batch mode are supported.
+Internally, the StarRocks sink connector caches data and imports it in batches via stream load.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                        | type                         | required | default value   |
+|-----------------------------|------------------------------|----------|-----------------|
+| node_urls                   | list                         | yes      | -               |
+| username                    | string                       | yes      | -               |
+| password                    | string                       | yes      | -               |
+| database                    | string                       | yes      | -               |
+| table                       | string                       | no       | -               |
+| labelPrefix                 | string                       | no       | -               |
+| batch_max_rows              | long                         | no       | 1024            |
+| batch_max_bytes             | int                          | no       | 5 * 1024 * 1024 |
+| batch_interval_ms           | int                          | no       | -               |
+| max_retries                 | int                          | no       | -               |
+| retry_backoff_multiplier_ms | int                          | no       | -               |
+| max_retry_backoff_ms        | int                          | no       | -               |
+| sink.properties.*           | starrocks stream load config | no       | -               |
+
+### node_urls [list]
+
+`StarRocks` cluster address, the format is `["fe_ip:fe_http_port", ...]`
+
+### username [string]
+
+`StarRocks` username
+
+### password [string]
+
+`StarRocks` user password
+
+### database [string]
+
+The name of StarRocks database
+
+### table [string]
+
+The name of StarRocks table
+
+### labelPrefix [string]
+
+The prefix of the StarRocks stream load label
+
+### batch_max_rows [long]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### batch_max_bytes [int]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### batch_interval_ms [int]
+
+For batch writing, when the number of buffered rows reaches `batch_max_rows`, or the buffered byte size reaches `batch_max_bytes`, or the time since the last flush reaches `batch_interval_ms`, the data will be flushed into StarRocks
+
+### max_retries [int]
+
+The number of retries when a flush fails
+
+### retry_backoff_multiplier_ms [int]
+
+Used as a multiplier for generating the next backoff delay
+
+### max_retry_backoff_ms [int]
+
+The maximum amount of time to wait before retrying a request to `StarRocks`
+
+### sink.properties.* [starrocks stream load config]
+
+The parameters of the stream load `data_desc`.
+To specify a parameter, add the prefix `sink.properties.` to the original stream load parameter name.
+For example, the way to specify `strip_outer_array` is: `sink.properties.strip_outer_array`.
+
+#### Supported import data formats
+
+The supported formats include CSV and JSON. Default value: CSV
+
+## Example
+Use JSON format to import data
+```
+sink {
+    StarRocks {
+        nodeUrls = ["e2e_starRocksdb:8030"]
+        username = root
+        password = ""
+        database = "test"
+        table = "e2e_table_sink"
+        batch_max_rows = 10
+        sink.properties.format = "JSON"
+        sink.properties.strip_outer_array = true
+    }
+}
+
+```
+
+Use CSV format to import data
+```
+sink {
+    StarRocks {
+        nodeUrls = ["e2e_starRocksdb:8030"]
+        username = root
+        password = ""
+        database = "test"
+        table = "e2e_table_sink"
+        batch_max_rows = 10
+        sink.properties.format = "CSV"
+        sink.properties.column_separator = "\\x01",
+        sink.properties.row_delimiter = "\\x02"
+    }
+}
+```

Review Comment:
   > Please add the `Change log` reference https://github.com/apache/incubator-seatunnel/blob/dev/docs/en/connector-v2/source/SftpFile.md
   
   OK, I have fixed it. Is this what you meant?





[GitHub] [incubator-seatunnel] 531651225 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
531651225 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1006406891


##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksFlushTuple.java:
##########
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import java.util.List;
+
+public class StarRocksFlushTuple {
+    private String label;
+    private Long bytes;
+    private List<byte[]> rows;
+
+    public StarRocksFlushTuple(String label, Long bytes, List<byte[]> rows) {
+        this.label = label;
+        this.bytes = bytes;
+        this.rows = rows;
+    }
+
+    public String getLabel() {
+        return label;
+    }
+
+    public void setLabel(String label) {
+        this.label = label;
+    }
+
+    public Long getBytes() {
+        return bytes;
+    }
+
+    public List<byte[]> getRows() {
+        return rows;
+    }

Review Comment:
   > remove
   
   thanks, I have fixed it above. PTAL





[GitHub] [incubator-seatunnel] ic4y merged pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
ic4y merged PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164




[GitHub] [incubator-seatunnel] EricJoy2048 commented on pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
EricJoy2048 commented on PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#issuecomment-1302923092

   Please resolve conflicts and fix the e2e error. @531651225 




[GitHub] [incubator-seatunnel] hailin0 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
hailin0 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1013918786


##########
seatunnel-e2e/seatunnel-connector-v2-e2e/connector-starrocks-e2e/src/test/resources/log4j.properties:
##########
@@ -0,0 +1,22 @@
+#

Review Comment:
   Remove this file.
   
   This is the common configuration of e2e
   https://github.com/apache/incubator-seatunnel/blob/dev/seatunnel-e2e/seatunnel-e2e-common/src/test/resources/log4j2.properties





[GitHub] [incubator-seatunnel] 531651225 commented on a diff in pull request #3164: [Feature][Connector-V2] Starrocks sink connector

Posted by GitBox <gi...@apache.org>.
531651225 commented on code in PR #3164:
URL: https://github.com/apache/incubator-seatunnel/pull/3164#discussion_r1006406674


##########
seatunnel-connectors-v2/connector-starrocks/src/main/java/org/apache/seatunnel/connectors/seatunnel/starrocks/client/StarRocksStreamLoadVisitor.java:
##########
@@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.connectors.seatunnel.starrocks.client;
+
+import org.apache.seatunnel.common.utils.JsonUtils;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.config.SinkConfig;
+import org.apache.seatunnel.connectors.seatunnel.starrocks.serialize.StarRocksDelimiterParser;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpStatus;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.http.util.EntityUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+public class StarRocksStreamLoadVisitor {
+
+    private static final Logger LOG = LoggerFactory.getLogger(StarRocksStreamLoadVisitor.class);
+    private static final int CONNECT_TIMEOUT = 1000000;
+    private static final int MAX_SLEEP_TIME = 5;
+
+    private final SinkConfig sinkConfig;
+    private long pos;
+    private static final String RESULT_FAILED = "Fail";
+    private static final String RESULT_LABEL_EXISTED = "Label Already Exists";
+    private static final String LAEBL_STATE_VISIBLE = "VISIBLE";
+    private static final String LAEBL_STATE_COMMITTED = "COMMITTED";
+    private static final String RESULT_LABEL_PREPARE = "PREPARE";
+    private static final String RESULT_LABEL_ABORTED = "ABORTED";
+    private static final String RESULT_LABEL_UNKNOWN = "UNKNOWN";
+
+    private List<String> fieldNames;
+
+    public StarRocksStreamLoadVisitor(SinkConfig sinkConfig, List<String> fieldNames) {
+        this.sinkConfig = sinkConfig;
+        this.fieldNames = fieldNames;
+    }
+
+    public void doStreamLoad(StarRocksFlushTuple flushData) throws IOException {
+        String host = getAvailableHost();
+        if (null == host) {
+            throw new IOException("None of the host in `load_url` could be connected.");
+        }
+        String loadUrl = new StringBuilder(host)
+                .append("/api/")
+                .append(sinkConfig.getDatabase())
+                .append("/")
+                .append(sinkConfig.getTable())
+                .append("/_stream_load")
+                .toString();
+        if (LOG.isDebugEnabled()) {
+            LOG.debug(String.format("Start to join batch data: rows[%d] bytes[%d] label[%s].", flushData.getRows().size(), flushData.getBytes(), flushData.getLabel()));
+        }
+        Map<String, Object> loadResult = doHttpPut(loadUrl, flushData.getLabel(), joinRows(flushData.getRows(), flushData.getBytes().intValue()));
+        final String keyStatus = "Status";
+        if (null == loadResult || !loadResult.containsKey(keyStatus)) {
+            LOG.error("unknown result status. {}", loadResult);
+            throw new IOException("Unable to flush data to StarRocks: unknown result status. " + loadResult);
+        }
+        if (LOG.isDebugEnabled()) {
+            LOG.debug(new StringBuilder("StreamLoad response:\n").append(JsonUtils.toJsonString(loadResult)).toString());
+        }
+        if (RESULT_FAILED.equals(loadResult.get(keyStatus))) {
+            StringBuilder errorBuilder = new StringBuilder("Failed to flush data to StarRocks.\n");
+            if (loadResult.containsKey("Message")) {
+                errorBuilder.append(loadResult.get("Message"));
+                errorBuilder.append('\n');
+            }
+            if (loadResult.containsKey("ErrorURL")) {
+                LOG.error("StreamLoad response: {}", loadResult);
+                try {
+                    errorBuilder.append(doHttpGet(loadResult.get("ErrorURL").toString()));
+                    errorBuilder.append('\n');
+                } catch (IOException e) {
+                    LOG.warn("Get Error URL failed. {} ", loadResult.get("ErrorURL"), e);
+                }
+            } else {
+                errorBuilder.append(JsonUtils.toJsonString(loadResult));
+                errorBuilder.append('\n');
+            }
+            throw new IOException(errorBuilder.toString());
+        } else if (RESULT_LABEL_EXISTED.equals(loadResult.get(keyStatus))) {
+            LOG.debug(new StringBuilder("StreamLoad response:\n").append(JsonUtils.toJsonString(loadResult)).toString());
+            // has to block-checking the state to get the final result
+            checkLabelState(host, flushData.getLabel());
+        }
+    }
+
+    private String getAvailableHost() {
+        List<String> hostList = sinkConfig.getNodeUrls();
+        long tmp = pos + hostList.size();
+        for (; pos < tmp; pos++) {
+            String host = new StringBuilder("http://").append(hostList.get((int) (pos % hostList.size()))).toString();
+            if (tryHttpConnection(host)) {
+                return host;
+            }
+        }
+        return null;
+    }
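The `getAvailableHost()` method above keeps a persistent cursor (`pos`) so that successive calls rotate through the configured FE nodes instead of always probing the first one. A minimal standalone sketch of that round-robin failover, with a hypothetical `RoundRobinPicker` class and a predicate standing in for the real `tryHttpConnection()` probe:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the round-robin host selection used above:
// starting from a persistent cursor, try each configured node once and
// return the first reachable one, or null if none responds.
public class RoundRobinPicker {
    private long pos = 0;
    private final List<String> hosts;

    public RoundRobinPicker(List<String> hosts) {
        this.hosts = hosts;
    }

    // 'reachable' stands in for the real tryHttpConnection() check.
    public String pick(Predicate<String> reachable) {
        long end = pos + hosts.size();
        for (; pos < end; pos++) {
            String host = "http://" + hosts.get((int) (pos % hosts.size()));
            if (reachable.test(host)) {
                return host;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        RoundRobinPicker picker = new RoundRobinPicker(Arrays.asList("fe1:8030", "fe2:8030"));
        // Simulate fe1 being down: only fe2 is "reachable".
        System.out.println(picker.pick(h -> h.contains("fe2")));
        // prints: http://fe2:8030
    }
}
```

Because `pos` is not reset between calls, a node that failed once is not re-probed first on the next flush, which spreads load and speeds up failover.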
+
+    private boolean tryHttpConnection(String host) {
+        try {
+            URL url = new URL(host);
+            HttpURLConnection co = (HttpURLConnection) url.openConnection();
+            co.setConnectTimeout(CONNECT_TIMEOUT);
+            co.connect();
+            co.disconnect();
+            return true;
+        } catch (Exception e1) {
+            LOG.warn("Failed to connect to address:{}", host, e1);
+            return false;
+        }
+    }
+
+    private byte[] joinRows(List<byte[]> rows, int totalBytes) {
+        if (SinkConfig.StreamLoadFormat.CSV.equals(sinkConfig.getLoadFormat())) {
+            Map<String, Object> props = sinkConfig.getStreamLoadProps();
+            byte[] lineDelimiter = StarRocksDelimiterParser.parse((String) props.get("row_delimiter"), "\n").getBytes(StandardCharsets.UTF_8);
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + rows.size() * lineDelimiter.length);
+            for (byte[] row : rows) {
+                bos.put(row);
+                bos.put(lineDelimiter);
+            }
+            return bos.array();
+        }
+
+        if (SinkConfig.StreamLoadFormat.JSON.equals(sinkConfig.getLoadFormat())) {
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + (rows.isEmpty() ? 2 : rows.size() + 1));
+            bos.put("[".getBytes(StandardCharsets.UTF_8));
+            byte[] jsonDelimiter = ",".getBytes(StandardCharsets.UTF_8);
+            boolean isFirstElement = true;
+            for (byte[] row : rows) {
+                if (!isFirstElement) {
+                    bos.put(jsonDelimiter);
+                }
+                bos.put(row);
+                isFirstElement = false;
+            }
+            bos.put("]".getBytes(StandardCharsets.UTF_8));
+            return bos.array();
+        }
+        throw new RuntimeException("Failed to join rows data, unsupported `format` from stream load properties: " + sinkConfig.getLoadFormat());
+    }
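For the JSON branch of `joinRows()` above, each cached row is already a serialized JSON object, so joining them with commas and wrapping the result in brackets yields one JSON array per stream-load request; the `rows.size() + 1` term covers the two brackets plus the `rows.size() - 1` comma delimiters. A standalone sketch (the `JsonRowJoiner` class name is illustrative, not from the PR):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class JsonRowJoiner {
    public static byte[] joinJsonRows(List<byte[]> rows, int totalBytes) {
        // Extra capacity: 2 bytes for "[]" when empty; otherwise 2 brackets
        // plus (rows.size() - 1) commas, i.e. rows.size() + 1 bytes.
        ByteBuffer buf = ByteBuffer.allocate(totalBytes + (rows.isEmpty() ? 2 : rows.size() + 1));
        buf.put((byte) '[');
        boolean first = true;
        for (byte[] row : rows) {
            if (!first) {
                buf.put((byte) ',');
            }
            buf.put(row);
            first = false;
        }
        buf.put((byte) ']');
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] a = "{\"id\":1}".getBytes(StandardCharsets.UTF_8);
        byte[] b = "{\"id\":2}".getBytes(StandardCharsets.UTF_8);
        byte[] joined = joinJsonRows(Arrays.asList(a, b), a.length + b.length);
        System.out.println(new String(joined, StandardCharsets.UTF_8));
        // prints: [{"id":1},{"id":2}]
    }
}
```

Sizing the buffer exactly means a single allocation per flush with no resizing, at the cost of trusting that `totalBytes` matches the accumulated row lengths.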
+
+    @SuppressWarnings("unchecked")
+    private void checkLabelState(String host, String label) throws IOException {
+        int idx = 0;
+        while (true) {
+            try {
+                TimeUnit.SECONDS.sleep(Math.min(++idx, MAX_SLEEP_TIME));
+            } catch (InterruptedException ex) {
+                break;
+            }
+            try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
+                HttpGet httpGet = new HttpGet(new StringBuilder(host).append("/api/").append(sinkConfig.getDatabase()).append("/get_load_state?label=").append(label).toString());
+                httpGet.setHeader("Authorization", getBasicAuthHeader(sinkConfig.getUsername(), sinkConfig.getPassword()));
+                httpGet.setHeader("Connection", "close");
+                try (CloseableHttpResponse resp = httpclient.execute(httpGet)) {
+                    HttpEntity respEntity = getHttpEntity(resp);
+                    if (respEntity == null) {
+                        throw new IOException(String.format("Failed to flush data to StarRocks, Error " +
+                                "could not get the final state of label[%s].\n", label), null);
+                    }
+
+                    String entityContent = EntityUtils.toString(respEntity);
+                    Map<String, Object> result = JsonUtils.parseObject(entityContent, Map.class);
+                    String labelState = (String) result.get("state");
+                    if (null == labelState) {
+                        throw new IOException(String.format("Failed to flush data to StarRocks, Error " +
+                                "could not get the final state of label[%s]. response[%s]\n", label, entityContent), null);
+                    }
+                    LOG.info(String.format("Checking label[%s] state[%s]\n", label, labelState));
+                    switch (labelState) {
+                        case LAEBL_STATE_VISIBLE:
+                        case LAEBL_STATE_COMMITTED:
+                            return;
+                        case RESULT_LABEL_PREPARE:
+                            continue;
+                        case RESULT_LABEL_ABORTED:
+                            throw new StarRocksStreamLoadFailedException(String.format("Failed to flush data to StarRocks, Error " +
+                                    "label[%s] state[%s]\n", label, labelState), null, true);
+                        case RESULT_LABEL_UNKNOWN:
+                        default:
+                            throw new StarRocksStreamLoadFailedException(String.format("Failed to flush data to StarRocks, Error " +
+                                    "label[%s] state[%s]\n", label, labelState), null);
+                    }
+                }
+            }
+        }
+    }
+
+    @SuppressWarnings("unchecked")
+    private Map<String, Object> doHttpPut(String loadUrl, String label, byte[] data) throws IOException {

Review Comment:
   > How about Http related methods written in a separate class?
   
   Thanks, I have fixed it as above. PTAL


