Posted to dev@flink.apache.org by "TongMeng (Jira)" <ji...@apache.org> on 2022/05/29 16:41:00 UTC

[jira] [Created] (FLINK-27830) My Pyflink job could not submit to Flink cluster

TongMeng created FLINK-27830:
--------------------------------

             Summary: My Pyflink job could not submit to Flink cluster
                 Key: FLINK-27830
                 URL: https://issues.apache.org/jira/browse/FLINK-27830
             Project: Flink
          Issue Type: Bug
          Components: API / Python
    Affects Versions: 1.13.0
            Reporter: TongMeng
         Attachments: error.txt

I use the command
{code:java}
./flink run --python /home/ubuntu/pyflink/main.py /home/ubuntu/pyflink/xmltest.xml /home/ubuntu/pyflink/task.xml --pyFliles /home/ubuntu/pyflink/logtest.py /home/ubuntu/pyflink/KafkaSource.py /home/ubuntu/pyflink/KafkaSink.py /home/ubuntu/pyflink/pyFlinkState.py /home/ubuntu/pyflink/projectInit.py /home/ubuntu/pyflink/taskInit.py /home/ubuntu/pyflink/UDF1.py {code}
to submit my PyFlink job.
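For comparison, the Flink 1.13 CLI spells the option {{--pyFiles}} (not {{--pyFliles}}) and expects a single comma-separated list of files, with program arguments placed after the options. A sketch of the invocation under that assumption, reusing the reporter's paths:
{code:bash}
# Sketch of a corrected submission, assuming the intent was to ship the
# listed modules as Python dependencies via --pyFiles:
./flink run \
  --python /home/ubuntu/pyflink/main.py \
  --pyFiles /home/ubuntu/pyflink/logtest.py,/home/ubuntu/pyflink/KafkaSource.py,/home/ubuntu/pyflink/KafkaSink.py,/home/ubuntu/pyflink/pyFlinkState.py,/home/ubuntu/pyflink/projectInit.py,/home/ubuntu/pyflink/taskInit.py,/home/ubuntu/pyflink/UDF1.py \
  /home/ubuntu/pyflink/xmltest.xml /home/ubuntu/pyflink/task.xml
{code}
With the original spacing and ordering, the CLI would likely treat the XML paths and the misspelled flag as program arguments rather than options.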

The error occurs at:
{code:java}
st_env.create_statement_set().add_insert_sql(f"insert into algorithmsink select {taskInfo}(line, calculationStatusMap, gathertime, storetime, rawData, terminal, deviceID, recievetime, car, sendtime, baseInfoMap, workStatusMap, `timestamp`) from mysource").execute().wait()
{code}
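For context, the statement is assembled with an f-string, so {{taskInfo}} must hold the name of a UDF already registered on the table environment. A minimal illustration of the expansion (the UDF name here is hypothetical):

```python
# Hypothetical name of a UDF previously registered via create_temporary_function
taskInfo = "my_udf"

# The f-string substitutes the UDF name directly into the INSERT statement
sql = (
    f"insert into algorithmsink select {taskInfo}(line, calculationStatusMap, "
    f"gathertime, storetime, rawData, terminal, deviceID, recievetime, car, "
    f"sendtime, baseInfoMap, workStatusMap, `timestamp`) from mysource"
)
print(sql)
```

On the cluster, executing this statement is what triggers the Beam-related exceptions in the attachment.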
The attached error.txt contains the exceptions; they appear to be related to Apache Beam.

When I run the job with the python command (in standalone mode instead of submitting it to the Flink cluster), it works well.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)