Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2015/11/23 17:58:11 UTC
[jira] [Commented] (FLINK-3061) Kafka Consumer is not failing if broker is not available
[ https://issues.apache.org/jira/browse/FLINK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022452#comment-15022452 ]
ASF GitHub Bot commented on FLINK-3061:
---------------------------------------
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/1395
[FLINK-3061] Properly fail Kafka Consumer if broker is not available
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rmetzger/flink flink3061
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/1395.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1395
----
commit fa6fbb9471132e88f271b4ff5ac6496831ba5939
Author: Robert Metzger <rm...@apache.org>
Date: 2015-11-23T16:57:26Z
[FLINK-3061] Properly fail Kafka Consumer if broker is not available
----
> Kafka Consumer is not failing if broker is not available
> --------------------------------------------------------
>
> Key: FLINK-3061
> URL: https://issues.apache.org/jira/browse/FLINK-3061
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector
> Reporter: Robert Metzger
> Assignee: Robert Metzger
> Fix For: 1.0.0
>
>
> It seems that the FlinkKafkaConsumer only logs the errors that occur while fetching the initial list of partitions for the topic, but it does not fail.
> The following code ALWAYS runs, even if no broker or ZooKeeper is running.
> {code}
> import java.util.Properties
>
> import org.apache.flink.streaming.api.scala._
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
> import org.apache.flink.streaming.util.serialization.SimpleStringSchema
>
> def main(args: Array[String]) {
>   val env = StreamExecutionEnvironment.getExecutionEnvironment
>   val properties = new Properties()
>   properties.setProperty("bootstrap.servers", "localhost:9092")
>   properties.setProperty("zookeeper.connect", "localhost:2181")
>   properties.setProperty("group.id", "test")
>   val stream = env
>     .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
>     .print
>   env.execute("Flink Kafka Example")
> }
> {code}
> The runtime consumers are designed to idle when they have no partitions assigned, but there is no check that any partitions exist at all.
--
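The gist of the fix is to fail fast when the initial partition lookup comes back empty, instead of only logging and letting the job run with nothing assigned. Below is a minimal, self-contained Java sketch of that pattern; the class and method names are hypothetical (not Flink's actual API), and the empty-list return stands in for an unreachable broker:

```java
import java.util.Collections;
import java.util.List;

public class PartitionCheck {

    // Hypothetical stand-in for the broker metadata lookup; the real
    // consumer would query the Kafka brokers for the topic's partitions.
    static List<Integer> fetchPartitions(String topic) {
        return Collections.emptyList(); // simulates an unreachable broker
    }

    // Validate the partition list eagerly and throw instead of only
    // logging, so the failure surfaces at job submission time.
    static List<Integer> getPartitionsOrFail(String topic) {
        List<Integer> partitions = fetchPartitions(topic);
        if (partitions.isEmpty()) {
            throw new RuntimeException(
                "Unable to retrieve any partitions for topic '" + topic + "'. "
                + "Please check that the brokers are reachable.");
        }
        return partitions;
    }

    public static void main(String[] args) {
        try {
            getPartitionsOrFail("topic");
        } catch (RuntimeException e) {
            System.out.println("Failed as expected: " + e.getMessage());
        }
    }
}
```

With this check in the consumer's setup path, the example program above would abort with a clear error when no broker is running, rather than silently idling.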
This message was sent by Atlassian JIRA
(v6.3.4#6332)