Posted to issues@flink.apache.org by "chun111111 (Jira)" <ji...@apache.org> on 2019/11/08 06:16:00 UTC

[jira] [Comment Edited] (FLINK-14667) Flink 1.9 / 1.8.2: running a Flink SQL fat jar can't load the right TableFactory (TableSourceFactory) for Kafka

    [ https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969870#comment-16969870 ] 

chun111111 edited comment on FLINK-14667 at 11/8/19 6:15 AM:
-------------------------------------------------------------

1. Flink version: 1.8.2, 1.9.1

2. My program is a fat jar that bundles all the Flink libraries.

The Kafka011TableSourceSinkFactory is packaged in the fat jar, but it doesn't work. :)
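
For reference, the factories shown later under "The following factories have been considered" are exactly the ones Flink discovers through Java's ServiceLoader. Below is a minimal diagnostic sketch (the class name ListTableFactories is made up here, assuming flink-table-common is on the classpath) that prints what the fat jar really exposes via SPI:

{code:java}
import java.util.ServiceLoader;

import org.apache.flink.table.factories.TableFactory;

// Diagnostic sketch (hypothetical class): print every TableFactory that the
// Service Provider Interface can see on the current classpath. Run it with
// the same fat jar; if Kafka011TableSourceSinkFactory is not printed even
// though its class file is inside the jar, the SPI entry is missing.
public class ListTableFactories {
    public static void main(String[] args) {
        for (TableFactory factory : ServiceLoader.load(TableFactory.class)) {
            System.out.println(factory.getClass().getName());
        }
    }
}
{code}

If Kafka011TableSourceSinkFactory does not show up even though the class is in the jar, a common cause is that the fat-jar build kept only one of the several META-INF/services/org.apache.flink.table.factories.TableFactory files instead of merging them (for maven-shade-plugin, the ServicesResourceTransformer performs that merge).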

 


was (Author: chun111111):
The Kafka011TableSourceSinkFactory is packaged in the fat jar.

> Flink 1.9 / 1.8.2: running a Flink SQL fat jar can't load the right TableFactory (TableSourceFactory) for Kafka
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-14667
>                 URL: https://issues.apache.org/jira/browse/FLINK-14667
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.8.2, 1.9.1
>            Reporter: chun111111
>            Priority: Major
>
> [root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> 2019-11-07 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy                         - Connecting to ResourceManager at /0.0.0.0:8032
> 2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                 - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                 - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2019-11-07 16:48:58,657 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - The configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.
> 2019-11-07 16:49:00,954 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Submitting application master application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl         - Submitted application application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Waiting for the cluster to be allocated
> 2019-11-07 16:49:00,988 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Deploying cluster, current state ACCEPTED
> 2019-11-07 16:49:06,534 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - YARN application has been deployed successfully.
> Starting execution of program
> ------------------------------------------------------------
>  The program finished with the following exception:
>
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>     at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>     at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>     at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>     at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>     at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>     at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>     at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>     at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
>     at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
>     at com.streaming.activity.task.PaymentViewIndex.registerKafkaTable(PaymentViewIndex.java:205)
>     at com.streaming.activity.task.PaymentViewIndex.registerSourceTable(PaymentViewIndex.java:106)
>     at com.streaming.activity.task.PaymentViewIndex.main(PaymentViewIndex.java:68)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>     ... 12 more
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.properties.0.key=key.deserializer
> connector.properties.0.value=org.apache.kafka.common.serialization.StringDeserializer
> connector.properties.1.key=value.deserializer
> connector.properties.1.value=org.apache.kafka.common.serialization.StringDeserializer
> connector.properties.2.key=group.id
> connector.properties.2.value=stream_01
> connector.properties.3.key=bootstrap.servers
> connector.properties.3.value=192.168.163.129:9092
> connector.property-version=1
> connector.startup-mode=latest-offset
> connector.topic=ac
> connector.type=kafka
> connector.version=0.11
> format.derive-schema=true
> format.fail-on-missing-field=false
> format.property-version=1
> format.type=json
> schema.0.name=rowtime
> schema.0.rowtime.timestamps.from=timestamp
> schema.0.rowtime.timestamps.type=from-field
> schema.0.rowtime.watermarks.delay=60000
> schema.0.rowtime.watermarks.type=periodic-bounded
> schema.0.type=TIMESTAMP
> schema.1.name=proctime
> schema.1.proctime=true
> schema.1.type=TIMESTAMP
> schema.2.name=tradeNo
> schema.2.type=VARCHAR
> update-mode=append
> The following factories have been considered:
> org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.table.planner.StreamPlannerFactory
> org.apache.flink.table.executor.StreamExecutorFactory
> org.apache.flink.table.planner.delegation.BlinkPlannerFactory
> org.apache.flink.table.planner.delegation.BlinkExecutorFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
>     at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
>     at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
>     at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
>     at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
>     ... 22 more
> [root@mj flink-1.9.1]#
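
For anyone trying to reproduce this, below is a descriptor-API sketch (Flink 1.9 style) that would produce roughly the requested properties listed above. The table name "payment_view" and the environment setup are assumptions, not taken from the report:

{code:java}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Json;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Rowtime;
import org.apache.flink.table.descriptors.Schema;

public class RegisterKafkaTableSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        tableEnv
            .connect(new Kafka()
                .version("0.11")                            // connector.version=0.11
                .topic("ac")                                // connector.topic=ac
                .startFromLatest()                          // connector.startup-mode=latest-offset
                .property("bootstrap.servers", "192.168.163.129:9092")
                .property("group.id", "stream_01")
                .property("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
                .property("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"))
            .withFormat(new Json()
                .failOnMissingField(false)                  // format.fail-on-missing-field=false
                .deriveSchema())                            // format.derive-schema=true
            .withSchema(new Schema()
                .field("rowtime", Types.SQL_TIMESTAMP)
                    .rowtime(new Rowtime()
                        .timestampsFromField("timestamp")   // schema.0.rowtime.timestamps.from=timestamp
                        .watermarksPeriodicBounded(60000))  // schema.0.rowtime.watermarks.delay=60000
                .field("proctime", Types.SQL_TIMESTAMP).proctime()
                .field("tradeNo", Types.STRING))
            .inAppendMode()                                 // update-mode=append
            .registerTableSource("payment_view");           // table name is an assumption
    }
}
{code}

The registerTableSource call is the ConnectTableDescriptor.registerTableSource(...) frame visible in the stack trace above; it is the point where TableFactoryService goes looking for a matching TableSourceFactory and throws NoMatchingTableFactoryException when the Kafka factory is not discoverable.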



--
This message was sent by Atlassian Jira
(v8.3.4#803005)