Posted to user@storm.apache.org by Milind Vaidya <ka...@gmail.com> on 2017/05/16 05:06:01 UTC

Too many tuples failing in KafkaSpout

I have a Kafka - Kafka Spout - Storm Bolts setup.

It processes heavy data (well, it is supposed to). I accumulate the tuples
in files and eventually move them to an "uploading" directory. Another bolt
then uploads those files to S3.
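
For reference, the topology is wired roughly like this (a condensed sketch:
the bolt class names, hosts, topic and parallelism numbers below are
placeholders, not my real ones):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

// Rough wiring only: FileAccumulatorBolt and S3UploaderBolt are placeholder
// names standing in for my actual bolts.
public class UploadTopology {
    public static void main(String[] args) throws Exception {
        BrokerHosts hosts = new ZkHosts("zk1:2181,zk2:2181");   // placeholder ZK quorum
        SpoutConfig spoutConfig =
                new SpoutConfig(hosts, "events", "/kafka-spout", "event-consumer");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 4);
        builder.setBolt("file-accumulator", new FileAccumulatorBolt(), 4)
               .shuffleGrouping("kafka-spout");
        builder.setBolt("s3-uploader", new S3UploaderBolt(), 2)
               .shuffleGrouping("file-accumulator");

        Config conf = new Config();
        conf.setNumWorkers(2);
        StormSubmitter.submitTopology("upload-topology", conf, builder.createTopology());
    }
}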

If anything goes wrong with a file (say an IO error while opening, writing,
or closing it, or a transfer error), I fail all the tuples belonging to that
file. Similar logic is in place if the S3 upload fails.
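
The failure handling looks roughly like this (a simplified sketch, not the
actual bolt: the paths, the roll threshold of 10000 and the single-string
payload are placeholders, and the real bolt also emits the finished file to
the uploader bolt):

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

// Condensed sketch of the accumulator bolt: buffer tuples until the file is
// rolled, ack them all on success, fail them all if anything goes wrong with
// the file (open/write/close/move). Paths and threshold are placeholders.
public class FileAccumulatorBolt extends BaseRichBolt {
    private static final Path SPOOL_DIR = Paths.get("/data/spool");
    private static final Path UPLOADING_DIR = Paths.get("/data/uploading");

    private OutputCollector collector;
    private List<Tuple> pending;      // tuples written to the current file
    private BufferedWriter writer;
    private Path currentFile;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.pending = new ArrayList<>();
        openNextFile();
    }

    @Override
    public void execute(Tuple tuple) {
        pending.add(tuple);                            // tie the tuple to the current file
        try {
            writer.write(tuple.getString(0));
            writer.newLine();
            if (pending.size() >= 10000) {             // roll the file
                writer.close();
                Files.move(currentFile, UPLOADING_DIR.resolve(currentFile.getFileName()));
                pending.forEach(collector::ack);       // everything in this file made it out
                pending.clear();
                openNextFile();
            }
        } catch (IOException e) {
            pending.forEach(collector::fail);          // fail every tuple tied to this file
            pending.clear();
            openNextFile();
        }
    }

    private void openNextFile() {
        try {
            currentFile = Files.createTempFile(SPOOL_DIR, "batch-", ".log");
            writer = Files.newBufferedWriter(currentFile);
        } catch (IOException e) {
            throw new RuntimeException("cannot open spool file", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // the real bolt declares and emits the finished file path for the uploader bolt
    }
}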

The topology was running fine. Now I want to make it resilient to restarts
of individual workers or of the topology as a whole.

When I restarted the topology, the Kafka spout reported a "Too many tuples
failing" error. The number of failed tuples then gradually went down. I
suspect the restart broke the files that were in progress, which in turn
failed all the tuples tied to them, and those were large in number. I am
acking and failing at the appropriate places.

1. When the Kafka spout fails tuples, are they replayed or discarded?
2. When the "failed" field in the UI shows some number, does that mean those
tuples will be replayed as per Storm's "guaranteed processing" paradigm?
3. Is it a good idea to set *maxOffsetBehind* to a larger value? It is
currently *100000*. I would like at least *1000000* to cover a scenario like
the above (see the snippet below for how I would change it).
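
If I read the storm-kafka config right, the change would just be this (using
the same spoutConfig as in the wiring sketch above):

// maxOffsetBehind lives on KafkaConfig, which SpoutConfig extends.
spoutConfig.maxOffsetBehind = 1000000L;   // currently 100000 in my config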