Posted to user@flink.apache.org by Janardhan Reddy <ja...@olacabs.com> on 2016/08/02 11:26:39 UTC

Flink Kafka Consumer Behaviour

Hi,

When I run the following command, I am getting that no topic is available
for that consumer group. I am using
flink-connector-kafka-0.8_${scala.version} (2.11).

./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe

No topic available for consumer group provided


Does the Kafka consumer always commit offsets to the broker? Do we need to
enable checkpointing for the offsets to be committed?


Thanks
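
For reference, a minimal sketch of how such a job is usually wired up with the
0.8 connector (the topic name, connection strings, group id and checkpoint
interval below are placeholders, not values from this thread):

// Sketch only: a Flink job using flink-connector-kafka-0.8.
// All connection settings and names are placeholders.
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class Kafka08OffsetExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // With checkpointing enabled, offsets are committed to ZooKeeper on
        // each completed checkpoint.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");    // placeholder
        props.setProperty("zookeeper.connect", "zookeeper:2181");  // placeholder
        props.setProperty("group.id", "my-flink-group");           // placeholder

        env.addSource(new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("kafka-08-offset-example");
    }
}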

Re: Flink Kafka Consumer Behaviour

Posted by Stephan Ewen <se...@apache.org>.
The latest Flink has consumers for Kafka 0.8, 0.9, 0.10 - which one are you
using?

I would assume you use Flink with Kafka 0.8.x, because as far as I know,
starting from Kafka 0.9, offsets are not handled by ZooKeeper any more...

On Mon, Apr 24, 2017 at 12:18 AM, Meghashyam Sandeep V <
vr1meghashyam@gmail.com> wrote:

> Hi All,
>
> Sorry for the miscommunication. I'm not using 0.8. I'm using the latest
> available flink-kafka client. I don't see my app registered as a consumer
> group. I wanted to know if there is a way to monitor Kafka offsets.
>
> Thanks,
> Sandeep
>
> On Apr 23, 2017 9:38 AM, "Stephan Ewen" <se...@apache.org> wrote:
>
>> Since it is something special to Kafka 0.8, it could be implemented in a
>> simple addition to the ZooKeeperOffsetHandler used by the
>> FlinkKafkaConsumer08.
>>
>> Would you be willing to contribute this? That would certainly help
>> speeding up the resolution of the issue...
>>
>>
>> On Fri, Apr 21, 2017 at 2:33 AM, Tzu-Li (Gordon) Tai <tzulitai@apache.org
>> > wrote:
>>
>>> One additional note:
>>>
>>> In FlinkKafkaConsumer 0.9+, the current read offset should already exist
>>> in Flink metrics.
>>> See https://issues.apache.org/jira/browse/FLINK-4186.
>>>
>>> But yes, this is still missing for 0.8, so you need to directly query ZK
>>> for this.
>>>
>>> Cheers,
>>> Gordon
>>>
>>>
>>> On 21 April 2017 at 8:28:09 AM, Tzu-Li (Gordon) Tai (tzulitai@apache.org)
>>> wrote:
>>>
>>> Hi Sandeep,
>>>
>>> It isn’t fixed yet, so I think external tools like the Kafka offset
>>> checker still won’t work.
>>> If you’re using 0.8 and are currently stuck with this issue, you can still
>>> directly query ZK to get the offsets.
>>>
>>> I think for FlinkKafkaConsumer09 the offset is exposed to Flink's metric
>>> system using Kafka’s own returned metrics, but for 0.8 this is still missing.
>>>
>>> There is this JIRA [1] that aims at exposing consumer lag across all
>>> Kafka consumer versions to Flink metrics. Perhaps it would make sense to
>>> also generally expose the offset for all Kafka consumer versions to Flink
>>> metrics as well.
>>>
>>> - Gordon
>>>
>>> [1] https://issues.apache.org/jira/browse/FLINK-6109
>>>
>>>
>>> On 19 April 2017 at 5:11:11 AM, sandeep6 (vr1meghashyam@gmail.com)
>>> wrote:
>>>
>>> Is this fixed now? If not, is there any way to monitor kafka offset that
>>> is
>>> being processed by Flink? This should be a use case for everyone who uses
>>> Flink with Kafka.
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html
>>> Sent from the Apache Flink User Mailing List archive. mailing list
>>> archive at Nabble.com.
>>>
>>>
>>

Re: Flink Kafka Consumer Behaviour

Posted by Meghashyam Sandeep V <vr...@gmail.com>.
Hi All,

Sorry for the miscommunication. I'm not using 0.8. I'm using the latest
available flink-kafka client. I don't see my app registered as a consumer
group. I wanted to know if there is a way to monitor Kafka offsets.

Thanks,
Sandeep

On Apr 23, 2017 9:38 AM, "Stephan Ewen" <se...@apache.org> wrote:

> Since it is something special to Kafka 0.8, it could be implemented in a
> simple addition to the ZooKeeperOffsetHandler used by the
> FlinkKafkaConsumer08.
>
> Would you be willing to contribute this? That would certainly help
> speeding up the resolution of the issue...
>
>
> On Fri, Apr 21, 2017 at 2:33 AM, Tzu-Li (Gordon) Tai <tz...@apache.org>
> wrote:
>
>> One additional note:
>>
>> In FlinkKafkaConsumer 0.9+, the current read offset should already exist
>> in Flink metrics.
>> See https://issues.apache.org/jira/browse/FLINK-4186.
>>
>> But yes, this is still missing for 0.8, so you need to directly query ZK
>> for this.
>>
>> Cheers,
>> Gordon
>>
>>
>> On 21 April 2017 at 8:28:09 AM, Tzu-Li (Gordon) Tai (tzulitai@apache.org)
>> wrote:
>>
>> Hi Sandeep,
>>
>> It isn’t fixed yet, so I think external tools like the Kafka offset
>> checker still won’t work.
>> If you’re using 0.8 and are currently stuck with this issue, you can still
>> directly query ZK to get the offsets.
>>
>> I think for FlinkKafkaConsumer09 the offset is exposed to Flink's metric
>> system using Kafka’s own returned metrics, but for 0.8 this is still missing.
>>
>> There is this JIRA [1] that aims at exposing consumer lag across all
>> Kafka consumer versions to Flink metrics. Perhaps it would make sense to
>> also generally expose the offset for all Kafka consumer versions to Flink
>> metrics as well.
>>
>> - Gordon
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-6109
>>
>>
>> On 19 April 2017 at 5:11:11 AM, sandeep6 (vr1meghashyam@gmail.com) wrote:
>>
>> Is this fixed now? If not, is there any way to monitor kafka offset that
>> is
>> being processed by Flink? This should be a use case for everyone who uses
>> Flink with Kafka.
>>
>>
>>
>> --
>> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html
>> Sent from the Apache Flink User Mailing List archive. mailing list
>> archive at Nabble.com.
>>
>>
>

Re: Flink Kafka Consumer Behaviour

Posted by Stephan Ewen <se...@apache.org>.
Since it is something special to Kafka 0.8, it could be implemented in a
simple addition to the ZooKeeperOffsetHandler used by the
FlinkKafkaConsumer08.

Would you be willing to contribute this? That would certainly help speeding
up the resolution of the issue...


On Fri, Apr 21, 2017 at 2:33 AM, Tzu-Li (Gordon) Tai <tz...@apache.org>
wrote:

> One additional note:
>
> In FlinkKafkaConsumer 0.9+, the current read offset should already exist
> in Flink metrics.
> See https://issues.apache.org/jira/browse/FLINK-4186.
>
> But yes, this is still missing for 0.8, so you need to directly query ZK
> for this.
>
> Cheers,
> Gordon
>
>
> On 21 April 2017 at 8:28:09 AM, Tzu-Li (Gordon) Tai (tzulitai@apache.org)
> wrote:
>
> Hi Sandeep,
>
> It isn’t fixed yet, so I think external tools like the Kafka offset
> checker still won’t work.
> If you’re using 0.8 and are currently stuck with this issue, you can still
> directly query ZK to get the offsets.
>
> I think for FlinkKafkaConsumer09 the offset is exposed to Flink's metric
> system using Kafka’s own returned metrics, but for 0.8 this is still missing.
>
> There is this JIRA [1] that aims at exposing consumer lag across all Kafka
> consumer versions to Flink metrics. Perhaps it would make sense to also
> generally expose the offset for all Kafka consumer versions to Flink
> metrics as well.
>
> - Gordon
>
> [1] https://issues.apache.org/jira/browse/FLINK-6109
>
>
> On 19 April 2017 at 5:11:11 AM, sandeep6 (vr1meghashyam@gmail.com) wrote:
>
> Is this fixed now? If not, is there any way to monitor kafka offset that is
> being processed by Flink? This should be a use case for everyone who uses
> Flink with Kafka.
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>
>

Re: Flink Kafka Consumer Behaviour

Posted by "Tzu-Li (Gordon) Tai" <tz...@apache.org>.
One additional note:

In FlinkKafkaConsumer 0.9+, the current read offset should already exist in Flink metrics.
See https://issues.apache.org/jira/browse/FLINK-4186.

But yes, this is still missing for 0.8, so you need to directly query ZK for this.

Cheers,
Gordon

On 21 April 2017 at 8:28:09 AM, Tzu-Li (Gordon) Tai (tzulitai@apache.org) wrote:

Hi Sandeep,

It isn’t fixed yet, so I think external tools like the Kafka offset checker still won’t work.
If you’re using 0.8 and are currently stuck with this issue, you can still directly query ZK to get the offsets.

I think for FlinkKafkaConsumer09 the offset is exposed to Flink's metric system using Kafka’s own returned metrics, but for 0.8 this is still missing.

There is this JIRA [1] that aims at exposing consumer lag across all Kafka consumer versions to Flink metrics. Perhaps it would make sense to also generally expose the offset for all Kafka consumer versions to Flink metrics as well.

- Gordon

[1] https://issues.apache.org/jira/browse/FLINK-6109


On 19 April 2017 at 5:11:11 AM, sandeep6 (vr1meghashyam@gmail.com) wrote:

Is this fixed now? If not, is there any way to monitor kafka offset that is
being processed by Flink? This should be a use case for everyone who uses
Flink with Kafka.



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

Re: Flink Kafka Consumer Behaviour

Posted by "Tzu-Li (Gordon) Tai" <tz...@apache.org>.
Hi Sandeep,

It isn’t fixed yet, so I think external tools like the Kafka offset checker still won’t work.
If you’re using 0.8 and are currently stuck with this issue, you can still directly query ZK to get the offsets.

I think for FlinkKafkaConsumer09 the offset is exposed to Flink's metric system using Kafka’s own returned metrics, but for 0.8 this is still missing.

There is this JIRA [1] that aims at exposing consumer lag across all Kafka consumer versions to Flink metrics. Perhaps it would make sense to also generally expose the offset for all Kafka consumer versions to Flink metrics as well.

- Gordon

[1] https://issues.apache.org/jira/browse/FLINK-6109
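
For anyone on 0.8 who wants to query ZooKeeper directly, here is a minimal
Java sketch (the connect string, group id, topic and partition are
placeholders; it assumes the offsets live under the standard
/consumers/<group-id>/offsets/<topic>/<partition> path mentioned elsewhere in
this thread):

// Sketch only: read the committed offset for one partition from ZooKeeper.
import org.apache.zookeeper.ZooKeeper;

public class ZkOffsetReader {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 10_000, event -> { });
        String path = "/consumers/my-flink-group/offsets/my-topic/0"; // placeholders
        byte[] data = zk.getData(path, false, null);
        System.out.println("committed offset: " + new String(data, "UTF-8"));
        zk.close();
    }
}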


On 19 April 2017 at 5:11:11 AM, sandeep6 (vr1meghashyam@gmail.com) wrote:

Is this fixed now? If not, is there any way to monitor kafka offset that is  
being processed by Flink? This should be a use case for everyone who uses  
Flink with Kafka.  



--  
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html  
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.  

Re: Flink Kafka Consumer Behaviour

Posted by sandeep6 <vr...@gmail.com>.
Is this fixed now? If not, is there any way to monitor the Kafka offset that is
being processed by Flink? This should be a use case for everyone who uses
Flink with Kafka.



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p12663.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

Re: Flink Kafka Consumer Behaviour

Posted by Robert Metzger <rm...@apache.org>.
Thank you for investigating the issue. I've filed a JIRA:
https://issues.apache.org/jira/browse/FLINK-4822

On Wed, Oct 12, 2016 at 8:12 PM, Anchit Jatana <development.anchit@gmail.com
> wrote:

> Hi Janardhan/Stephan,
>
> I just figured out what the issue is (Talking about Flink KafkaConnector08,
> don't know about Flink KafkaConnector09)
>
> The reason why bin/kafka-consumer-groups.sh --zookeeper
> <zookeeper-url:port> --describe --group <group-id> is not showing any
> result is the absence of
>
> /consumers/<group-id>/owners/<topic> in ZooKeeper.
>
> The Flink application is creating and updating
> /consumers/<group-id>/offsets/<topic>/<partition> but not creating the
> "owners" znode.
>
> If I manually create the Znode using the following:
>
> create /consumers/<group-id>/owners "firstchildren"
>
> create /consumers/<group-id>/owners/<topic> null
>
> It works fine, bin/kafka-consumer-groups.sh --zookeeper
> <zookeeper-url:port>
> --describe --group <group-id> starts pulling offset results for me.
>
> I think this needs to be corrected in the application: to check and create
> "/consumers/<group-id>/owners/<topic>" if it does not exist.
>
> Regards,
> Anchit
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p9499.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>

Re: Flink Kafka Consumer Behaviour

Posted by Anchit Jatana <de...@gmail.com>.
Hi Janardhan/Stephan,

I just figured out what the issue is (talking about the Flink KafkaConnector08;
I don't know about the Flink KafkaConnector09).

The reason why bin/kafka-consumer-groups.sh --zookeeper
<zookeeper-url:port> --describe --group <group-id> is not showing any result
is the absence of

/consumers/<group-id>/owners/<topic> in ZooKeeper.

The Flink application is creating and updating
/consumers/<group-id>/offsets/<topic>/<partition> but not creating the "owners"
znode.

If I manually create the znodes using the following:

create /consumers/<group-id>/owners "firstchildren"

create /consumers/<group-id>/owners/<topic> null

it works fine: bin/kafka-consumer-groups.sh --zookeeper <zookeeper-url:port>
--describe --group <group-id> starts pulling offset results for me.

I think this needs to be corrected in the application: to check and create
"/consumers/<group-id>/owners/<topic>" if it does not exist.

Regards,
Anchit



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p9499.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

Re: Flink Kafka Consumer Behaviour

Posted by Stephan Ewen <se...@apache.org>.
Hi!

There was an issue in the Kafka 0.9 consumer in Flink concerning
checkpoints. It was relevant mostly for lower-throughput topics /
partitions.

It is fixed in the 1.1.3 release. Can you try out the release candidate and
see if that solves your problem?
See here for details on the release candidate:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Release-Apache-Flink-1-1-3-RC1-td13860.html

To test this, set the dependency for the flink-connector-kafka-09 to
"1.1.3" and add the staging repository described in the above link to your
pom.xml.

Thanks,
Stephan


On Tue, Oct 4, 2016 at 5:51 AM, ankitcha <an...@gmail.com>
wrote:

> Hi Prabhu, cc Stephan, Robert,
>
> I was having similar issues where flink Kafka 09 consumer was not
> committing
> offsets to kafka. After digging into JobManager logs, I found that
> checkpoints were getting expired before getting completed and hence
> "checkpoint completed" message was being ignored.
>
> I increased the checkpoint timeout from the default 10 mins to 30 mins to
> verify, and then checkpoints were finishing well before the timeout (~12
> mins), and the consumer was correctly updating offsets in Kafka.
>
> This seems to be working for us at the moment. Also note that this scenario
> normally happens at the start of the job, when the consumer group already has
> some decent lag.
>
> So, you might wanna try increasing checkpoint timeouts and check if that
> solves the issue. You should look for following in the jobmanager logs
>
> [Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> - Checkpoint 37 expired before completing.
> [Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> - Triggering checkpoint 38 @ 1474313373634
> [Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> - Checkpoint 38 expired before completing.
> [Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> - Triggering checkpoint 39 @ 1474313973640
>
> --
> Ankit
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p9300.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>

Re: Flink Kafka Consumer Behaviour

Posted by ankitcha <an...@gmail.com>.
Hi Prabhu, cc Stephan, Robert, 

I was having similar issues where the Flink Kafka 0.9 consumer was not committing
offsets to Kafka. After digging into the JobManager logs, I found that
checkpoints were expiring before completing, and hence the
"checkpoint completed" message was being ignored.

I increased the checkpoint timeout from the default 10 mins to 30 mins to
verify, and then checkpoints were finishing well before the timeout (~12
mins), and the consumer was correctly updating offsets in Kafka.

This seems to be working for us at the moment. Also note that this scenario
normally happens at the start of the job, when the consumer group already has
some decent lag.

So, you might want to try increasing the checkpoint timeout and check whether
that solves the issue. You should look for the following in the JobManager logs:

[Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
- Checkpoint 37 expired before completing. 
[Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
- Triggering checkpoint 38 @ 1474313373634 
[Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
- Checkpoint 38 expired before completing. 
[Checkpoint Timer] org.apache.flink.runtime.checkpoint.CheckpointCoordinator
- Triggering checkpoint 39 @ 1474313973640 

-- 
Ankit 
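
For illustration, a minimal sketch of the checkpoint timeout tuning described
above, assuming a Flink streaming job (the interval and timeout values are
placeholders, not recommendations):

// Sketch only: raise the checkpoint timeout so checkpoints (and the offset
// commits tied to them) can complete.
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointTimeoutConfig {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(TimeUnit.MINUTES.toMillis(5));                         // checkpoint interval (placeholder)
        env.getCheckpointConfig().setCheckpointTimeout(TimeUnit.MINUTES.toMillis(30)); // default timeout is 10 minutes
    }
}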



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p9300.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

Re: Flink Kafka Consumer Behaviour

Posted by Robert Metzger <rm...@apache.org>.
Hi Prabhu,

I'm pretty sure that the Kafka 0.9 consumer commits offsets to Kafka when
checkpointing is turned on.

In the FlinkKafkaConsumerBase.notifyCheckpointComplete(), we call
fetcher.commitSpecificOffsetsToKafka(checkpointOffsets);, which calls
this.consumer.commitSync(offsetsToCommit); in
Kafka09Fetcher.commitSpecificOffsetsToKafka().
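
As an illustration only (this is not the Flink source), a self-contained
sketch of what that call chain boils down to at the Kafka 0.9 client level;
the broker address, group id, topic, partition and offset are placeholders:

// Sketch only: a synchronous commit of specific per-partition offsets, the
// kind of call made once a checkpoint completes.
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitSyncSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");  // placeholder
        props.setProperty("group.id", "my-flink-group");         // placeholder
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0);   // placeholder
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            Map<TopicPartition, OffsetAndMetadata> offsetsToCommit =
                    Collections.singletonMap(tp, new OffsetAndMetadata(42L)); // placeholder offset
            consumer.commitSync(offsetsToCommit);
        }
    }
}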


On Mon, Aug 8, 2016 at 8:24 PM, vprabhu@gmail.com <vp...@gmail.com> wrote:

> Hi Stephan,
>
> The flink kafka 09 connector does not do offset commits to kafka when
> checkpointing is turned on. Is there a way to monitor the offset lag in
> this
> case,
>
> I am turning on a flink job that reads data from kafka (has about a week
> data - around 7 TB) , currently the approximate way that I use is the
> number
> of records read shown in the flink-UI and the last offset in kafka.
>
> Thanks,
> Prabhu
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p8375.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>

Re: Flink Kafka Consumer Behaviour

Posted by "vprabhu@gmail.com" <vp...@gmail.com>.
Hi Stephan,

The Flink Kafka 0.9 connector does not do offset commits to Kafka when
checkpointing is turned on. Is there a way to monitor the offset lag in this
case?

I am starting a Flink job that reads data from Kafka (it has about a week of
data, around 7 TB). Currently, the approximate way that I track this is the number
of records read shown in the Flink UI and the last offset in Kafka.

Thanks,
Prabhu
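
One possible external check, sketched with a plain Kafka 0.9 consumer (all
names are placeholders, and it assumes the job's group has actually committed
offsets to Kafka):

// Sketch only: "log end offset minus committed offset" for one partition,
// using the same group.id as the Flink job.
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");  // placeholder
        props.setProperty("group.id", "my-flink-group");         // same group as the Flink job
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0);   // placeholder
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            OffsetAndMetadata committed = consumer.committed(tp); // last committed offset, or null

            consumer.assign(Collections.singletonList(tp));
            consumer.seekToEnd(tp);                               // move to the log end
            long logEnd = consumer.position(tp);

            long lag = committed == null ? logEnd : logEnd - committed.offset();
            System.out.println("partition 0 lag: " + lag);
        }
    }
}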



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Kafka-Consumer-Behaviour-tp8257p8375.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

Re: Flink Kafka Consumer Behaviour

Posted by Stephan Ewen <se...@apache.org>.
Hi!

I have not used the Kafka Offset Checker before, maybe someone who worked
with that can chime in.

Greetings,
Stephan


On Wed, Aug 3, 2016 at 4:59 PM, Janardhan Reddy <janardhan.reddy@olacabs.com
> wrote:

> I can see that offsets are stored in ZooKeeper but are not returned when I
> query through the Kafka offset checker.
>
> Can you please tell me how to monitor the Kafka consumer lag for the 0.8
> Flink Kafka consumer.
>
> On Wed, Aug 3, 2016 at 3:29 PM, Stephan Ewen <se...@apache.org> wrote:
>
>> Hi!
>>
>> Just checked the code. The 0.8 FlinkKafkaConsumer should always commit
>> offsets, regardless of whether checkpointing is enabled. The 0.9
>> FlinkKafkaConsumer actually does not do any periodic offset committing when
>> checkpointing is disabled.
>>
>> Greetings,
>> Stephan
>>
>> On Wed, Aug 3, 2016 at 7:36 AM, Janardhan Reddy <
>> janardhan.reddy@olacabs.com> wrote:
>>
>>> Checkpointing wasn't enabled in the streaming job, but the offsets
>>> should have been committed to zookeeper.
>>>
>>> But we don't see the offsets being written to zookeeper.
>>>
>>> On Tue, Aug 2, 2016 at 7:41 PM, Till Rohrmann <tr...@apache.org>
>>> wrote:
>>>
>>>> Hi Janardhan,
>>>>
>>>> Flink should commit the current offsets to Zookeeper whenever a
>>>> checkpoint has been completed. In case that you disabled checkpointing,
>>>> then the offsets will be periodically committed to ZooKeeper. The default
>>>> value is 60s.
>>>>
>>>> Could it be that there wasn't yet a completed checkpoint? Which version
>>>> of Flink are you using?
>>>>
>>>> Cheers,
>>>> Till
>>>>
>>>> On Tue, Aug 2, 2016 at 7:26 PM, Janardhan Reddy <
>>>> janardhan.reddy@olacabs.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> When I run the following command, I am getting that no topic is
>>>>> available for that consumer group. I am using
>>>>> flink-connector-kafka-0.8_${scala.version} (2.11).
>>>>>
>>>>> ./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe
>>>>>
>>>>> No topic available for consumer group provided
>>>>>
>>>>>
>>>>> Does the Kafka consumer always commit offsets to the broker? Do we
>>>>> need to enable checkpointing for the offsets to be committed?
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>
>>>>
>>>
>>
>

Re: Flink Kafka Consumer Behaviour

Posted by Janardhan Reddy <ja...@olacabs.com>.
I can see that offsets are stored in ZooKeeper but are not returned when I
query through the Kafka offset checker.

Can you please tell me how to monitor the Kafka consumer lag for the 0.8
Flink Kafka consumer?

On Wed, Aug 3, 2016 at 3:29 PM, Stephan Ewen <se...@apache.org> wrote:

> Hi!
>
> Just checked the code. The 0.8 FlinkKafkaConsumer should always commit
> offsets, regardless of whether checkpointing is enabled. The 0.9
> FlinkKafkaConsumer actually does not do any periodic offset committing when
> checkpointing is disabled.
>
> Greetings,
> Stephan
>
> On Wed, Aug 3, 2016 at 7:36 AM, Janardhan Reddy <
> janardhan.reddy@olacabs.com> wrote:
>
>> Checkpointing wasn't enabled in the streaming job, but the offsets should
>> have been committed to zookeeper.
>>
>> But we don't see the offsets being written to zookeeper.
>>
>> On Tue, Aug 2, 2016 at 7:41 PM, Till Rohrmann <tr...@apache.org>
>> wrote:
>>
>>> Hi Janardhan,
>>>
>>> Flink should commit the current offsets to Zookeeper whenever a
>>> checkpoint has been completed. In case that you disabled checkpointing,
>>> then the offsets will be periodically committed to ZooKeeper. The default
>>> value is 60s.
>>>
>>> Could it be that there wasn't yet a completed checkpoint? Which version
>>> of Flink are you using?
>>>
>>> Cheers,
>>> Till
>>>
>>> On Tue, Aug 2, 2016 at 7:26 PM, Janardhan Reddy <
>>> janardhan.reddy@olacabs.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> When I run the following command, I am getting that no topic is
>>>> available for that consumer group. I am using
>>>> flink-connector-kafka-0.8_${scala.version} (2.11).
>>>>
>>>> ./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe
>>>>
>>>> No topic available for consumer group provided
>>>>
>>>>
>>>> Does the Kafka consumer always commit offsets to the broker? Do we need
>>>> to enable checkpointing for the offsets to be committed?
>>>>
>>>>
>>>> Thanks
>>>>
>>>
>>>
>>
>

Re: Flink Kafka Consumer Behaviour

Posted by Stephan Ewen <se...@apache.org>.
Hi!

Just checked the code. The 0.8 FlinkKafkaConsumer should always commit
offsets, regardless of whether checkpointing is enabled. The 0.9
FlinkKafkaConsumer actually does not do any periodic offset committing when
checkpointing is disabled.

Greetings,
Stephan
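
To make that concrete, a minimal sketch of a 0.9 job with checkpointing
enabled so that offsets are committed back to Kafka (the topic, group id,
connection string and interval are placeholders):

// Sketch only: with the 0.9 consumer, offsets are only committed as part of
// completed checkpoints, so checkpointing has to be enabled.
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class Kafka09CheckpointedJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // without this, the 0.9 consumer commits no offsets

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "my-flink-group");        // placeholder

        env.addSource(new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("kafka-09-checkpointed-job");
    }
}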

On Wed, Aug 3, 2016 at 7:36 AM, Janardhan Reddy <janardhan.reddy@olacabs.com
> wrote:

> Checkpointing wasn't enabled in the streaming job, but the offsets should
> have been committed to zookeeper.
>
> But we don't see the offsets being written to zookeeper.
>
> On Tue, Aug 2, 2016 at 7:41 PM, Till Rohrmann <tr...@apache.org>
> wrote:
>
>> Hi Janardhan,
>>
>> Flink should commit the current offsets to Zookeeper whenever a
>> checkpoint has been completed. In case that you disabled checkpointing,
>> then the offsets will be periodically committed to ZooKeeper. The default
>> value is 60s.
>>
>> Could it be that there wasn't yet a completed checkpoint? Which version
>> of Flink are you using?
>>
>> Cheers,
>> Till
>>
>> On Tue, Aug 2, 2016 at 7:26 PM, Janardhan Reddy <
>> janardhan.reddy@olacabs.com> wrote:
>>
>>> Hi,
>>>
>>> When I run the following command, I am getting that no topic is
>>> available for that consumer group. I am using
>>> flink-connector-kafka-0.8_${scala.version} (2.11).
>>>
>>> ./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe
>>>
>>> No topic available for consumer group provided
>>>
>>>
>>> Does the Kafka consumer always commit offsets to the broker? Do we need
>>> to enable checkpointing for the offsets to be committed?
>>>
>>>
>>> Thanks
>>>
>>
>>
>

Re: Flink Kafka Consumer Behaviour

Posted by Janardhan Reddy <ja...@olacabs.com>.
Checkpointing wasn't enabled in the streaming job, but the offsets should
have been committed to zookeeper.

But we don't see the offsets being written to zookeeper.

On Tue, Aug 2, 2016 at 7:41 PM, Till Rohrmann <tr...@apache.org> wrote:

> Hi Janardhan,
>
> Flink should commit the current offsets to Zookeeper whenever a checkpoint
> has been completed. In case that you disabled checkpointing, then the
> offsets will be periodically committed to ZooKeeper. The default value is
> 60s.
>
> Could it be that there wasn't yet a completed checkpoint? Which version of
> Flink are you using?
>
> Cheers,
> Till
>
> On Tue, Aug 2, 2016 at 7:26 PM, Janardhan Reddy <
> janardhan.reddy@olacabs.com> wrote:
>
>> Hi,
>>
>> When I run the following command, I am getting that no topic is
>> available for that consumer group. I am using
>> flink-connector-kafka-0.8_${scala.version} (2.11).
>>
>> ./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe
>>
>> No topic available for consumer group provided
>>
>>
>> Does the Kafka consumer always commit offsets to the broker? Do we need
>> to enable checkpointing for the offsets to be committed?
>>
>>
>> Thanks
>>
>
>

Re: Flink Kafka Consumer Behaviour

Posted by Till Rohrmann <tr...@apache.org>.
Hi Janardhan,

Flink should commit the current offsets to ZooKeeper whenever a checkpoint
has been completed. If you have disabled checkpointing, then the
offsets will be periodically committed to ZooKeeper. The default interval is
60s.

Could it be that there wasn't yet a completed checkpoint? Which version of
Flink are you using?

Cheers,
Till
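
For the 0.8 consumer without checkpointing, the periodic commit can be tuned
through the Kafka consumer properties passed to the FlinkKafkaConsumer08; a
sketch with placeholder values, assuming the connector honours the standard
Kafka 0.8 auto-commit settings:

// Sketch only: properties handed to FlinkKafkaConsumer08. The auto-commit
// settings below are standard Kafka 0.8 consumer options, not Flink-specific.
import java.util.Properties;

public class Kafka08CommitProps {
    public static Properties consumerProperties() {
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zookeeper:2181"); // placeholder
        props.setProperty("bootstrap.servers", "broker:9092");     // placeholder
        props.setProperty("group.id", "my-flink-group");            // placeholder
        props.setProperty("auto.commit.enable", "true");            // periodic ZooKeeper commits when checkpointing is off
        props.setProperty("auto.commit.interval.ms", "60000");      // commit every 60s (the default mentioned above)
        return props;
    }
}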

On Tue, Aug 2, 2016 at 7:26 PM, Janardhan Reddy <janardhan.reddy@olacabs.com
> wrote:

> Hi,
>
> When I run the following command, I am getting that no topic is available
> for that consumer group. I am using
> flink-connector-kafka-0.8_${scala.version} (2.11).
>
> ./bin/kafka-consumer-groups.sh --zookeeper <> --group <> --describe
>
> No topic available for consumer group provided
>
>
> Does the Kafka consumer always commit offsets to the broker? Do we need to
> enable checkpointing for the offsets to be committed?
>
>
> Thanks
>