Posted to issues@spark.apache.org by "Mariusz Galus (JIRA)" <ji...@apache.org> on 2017/10/11 18:46:00 UTC
[jira] [Commented] (SPARK-22255) SPARK-22255 FileAppender InputStream Read timeout and blocking state
[ https://issues.apache.org/jira/browse/SPARK-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200749#comment-16200749 ]
Mariusz Galus commented on SPARK-22255:
---------------------------------------
I would like to understand why the read needs to block. I am using the RollingFileAppender with the Java piped I/O stream classes: I send Kafka records to a PipedOutputStream that is connected to the FileAppender via a PipedInputStream.
If I allow it to block, then in my case, when my FileOutputStream closes, the InputStream blocked in read() throws an IOException for a broken pipe, and I then get flooded with "Read end dead" IOExceptions.
I may be overlooking something critical to the overall implementation, as I am just using this small piece in a custom solution.
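
For reference, here is a minimal, self-contained sketch of that wiring. Names like PipedAppenderSketch and the single "record-1" write are placeholders, and the reader thread only mimics the appender's blocking read loop, since the appender class itself is internal to Spark:

    import java.io.{IOException, PipedInputStream, PipedOutputStream}

    object PipedAppenderSketch {
      def main(args: Array[String]): Unit = {
        val out = new PipedOutputStream()
        val in  = new PipedInputStream(out)

        // Reader thread: mimics the appender's loop, blocking in read()
        // until bytes arrive or the pipe is torn down.
        val reader = new Thread(new Runnable {
          def run(): Unit = {
            val buf = new Array[Byte](8192)
            try {
              var n = in.read(buf)          // blocks while the pipe is empty
              while (n != -1) {
                // ... append buf(0 until n) to the log file here ...
                n = in.read(buf)
              }
            } catch {
              case e: IOException =>
                // "Pipe broken" is thrown here if the writer thread dies
                // without closing `out`.
                println("reader failed: " + e.getMessage)
            }
          }
        })
        reader.start()

        // Writer side: stand-in for the loop pushing Kafka records.
        out.write("record-1\n".getBytes("UTF-8"))
        out.close()                          // clean close -> reader sees EOF (-1)
        reader.join()
      }
    }

With a clean close of the write end, the blocked read() returns -1; the failure mode above appears when one end of the pipe goes away while the other is still blocked or still writing, which matches the "Read end dead" flood.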
> SPARK-22255 FileAppender InputStream Read timeout and blocking state
> --------------------------------------------------------------------
>
> Key: SPARK-22255
> URL: https://issues.apache.org/jira/browse/SPARK-22255
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Mariusz Galus
> Priority: Minor
>
> The FileAppender logic blocks when reading from its InputStream. This can be avoided with an InputStream.available() check prior to reading.
> If this is done, a variable holding the estimated number of available bytes needs to be introduced and used in two conditionals: the one guarding the read from the InputStream and the one guarding the append to the file (a sketch follows this quoted description).
> See: https://github.com/Galus/spark/pull/1/commits/8ee5133c40e3f627ed0ebfb3aa63d5749b5bfdcb
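
Below is a minimal sketch of the available() check described in the quoted description. The method and its parameters (appendLoop, isMarkedForStop, appendToFile) are hypothetical stand-ins for FileAppender's internal loop, and the 100 ms backoff is an added assumption to avoid busy-spinning; see the linked commit for the actual change:

    import java.io.InputStream

    object AvailableCheckSketch {
      // The stop flag and append callback stand in for FileAppender
      // internals; they are not Spark's actual API.
      def appendLoop(inputStream: InputStream,
                     bufferSize: Int,
                     isMarkedForStop: () => Boolean,
                     appendToFile: (Array[Byte], Int) => Unit): Unit = {
        val buf = new Array[Byte](bufferSize)
        var n = 0
        while (!isMarkedForStop() && n != -1) {
          // available() only estimates the bytes readable without blocking,
          // and it also returns 0 at end-of-stream.
          if (inputStream.available() > 0) {
            n = inputStream.read(buf)       // should not block: bytes are buffered
            if (n > 0) appendToFile(buf, n) // only append when something was read
          } else {
            Thread.sleep(100)               // hypothetical backoff instead of spinning
          }
        }
      }
    }

One caveat with this shape: because available() also returns 0 at end-of-stream, the loop never observes read() returning -1 on its own and relies on the stop flag (or an equivalent signal) to terminate.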
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org