Posted to dev@seatunnel.apache.org by 李洪军 <li...@163.com> on 2022/01/23 03:14:13 UTC

about data transform problem

Hello, I have a problem. How can I use SeaTunnel with a JDBC source and a Kafka sink?
Thanks very much.

Re:Re:about data transform problem

Posted by 李洪军 <li...@163.com>.
With the dev branch, can I use Flink? How should I change my config?
My config currently looks like this:
env {
  execution.parallelism = 1
}

source {
  JdbcSource {
    driver = com.mysql.cj.jdbc.Driver
    url = "jdbc:mysql://127.0.0.1:3306/damp_v2_xxl_job?serverTimezone=Asia/Shanghai&characterEncoding=utf8&useSSL=false"
    username = root
    password = root
    query = "select id,job_group,job_id,executor_address,executor_handler,executor_param,executor_sharding_param,executor_fail_retry_count,trigger_time,trigger_code,trigger_msg,handle_time,handle_code,handle_msg,alarm_status from xxl_job_log"
    result_table_name = "xxl_job_log"
    field_name = "id,job_group,job_id,executor_address,executor_handler,executor_param,executor_sharding_param,executor_fail_retry_count,trigger_time,trigger_code,trigger_msg,handle_time,handle_code,handle_msg,alarm_status"
  }
}

transform {
}

sink {
  KafkaTable {
    producer.bootstrap.servers = "127.0.0.1:9092"
    topics = "mysql_test"
  }
}
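
A lightly annotated variant of the sink block may help. This is only a sketch: source_table_name is the common sink option the v2 docs describe for binding a sink to the table a source registered, and quoting the topic is just HOCON hygiene; verify both against your dev-branch build.

sink {
  KafkaTable {
    # Consume the table registered above via result_table_name = "xxl_job_log"
    # (source_table_name is the documented common option; confirm it on your branch)
    source_table_name = "xxl_job_log"
    producer.bootstrap.servers = "127.0.0.1:9092"
    topics = "mysql_test"
  }
}

A job like this is normally submitted with the Flink starter script shipped in the distribution, e.g. bin/start-seatunnel-flink.sh --config ./config/jdbc-to-kafka.conf (script name and path assumed; adjust to your checkout).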




Thanks.

At 2022-01-23 12:13:58, "rickyhuo" <hu...@163.com> wrote:
>With the dev branch. The config looks like this:
>
>spark {
>  spark.app.name = "seatunnel"
>  spark.executor.instances = 2
>  spark.executor.cores = 1
>  spark.executor.memory = "1g"
>}
>
>input {
>  jdbc {
>  }
>}
>
>filter {
>}
>
>output {
>  kafka {
>    streaming_output_mode = "Append"
>  }
>}
>
>For detailed usage, see https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/configuration/input-plugin.
>
>At 2022-01-23 11:14:13, "李洪军" <li...@163.com> wrote:
>>Hello, I have a problem. How can I use SeaTunnel with a JDBC source and a Kafka sink?
>>Thanks very much.

Re:about data transform problem

Posted by rickyhuo <hu...@163.com>.
With the dev branch. The config looks like this:

spark {
  spark.app.name = "seatunnel"
  spark.executor.instances = 2
  spark.executor.cores = 1
  spark.executor.memory = "1g"
}

input {
  jdbc {
  }
}

filter {
}

output {
  kafka {
    streaming_output_mode = "Append"
  }
}


For detailed usage, see https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/configuration/input-plugin.
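
For reference, a filled-in sketch of the skeleton above for a MySQL-to-Kafka job, using the values from the question in this thread. The option names (driver, url, table, user, password for the jdbc input; topic and producer.bootstrap.servers for the kafka output) follow the v1 plugin docs linked above; treat them as assumptions and verify against your version.

spark {
  spark.app.name = "seatunnel"
  spark.executor.instances = 2
  spark.executor.cores = 1
  spark.executor.memory = "1g"
}

input {
  jdbc {
    # v1 jdbc input options per the docs linked above; verify on your version
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/damp_v2_xxl_job"
    table = "xxl_job_log"
    result_table_name = "xxl_job_log"
    user = "root"
    password = "root"
  }
}

filter {
  # left empty so rows pass through unchanged
}

output {
  kafka {
    # v1 kafka output options per the docs; verify on your version
    topic = "mysql_test"
    producer.bootstrap.servers = "127.0.0.1:9092"
    streaming_output_mode = "Append"
  }
}
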
At 2022-01-23 11:14:13, "李洪军" <li...@163.com> wrote:
>Hello, I have a problem. How can I use SeaTunnel with a JDBC source and a Kafka sink?
>Thanks very much.