Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:03:16 UTC

[jira] [Updated] (SPARK-20261) EventLoggingListener may not truly flush the logger when a compression codec is used

     [ https://issues.apache.org/jira/browse/SPARK-20261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-20261:
---------------------------------
    Labels: bulk-closed  (was: )

> EventLoggingListener may not truly flush the logger when a compression codec is used
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-20261
>                 URL: https://issues.apache.org/jira/browse/SPARK-20261
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Brian Cho
>            Priority: Minor
>              Labels: bulk-closed
>
> Log events posted with flushLogger set to true are supposed to be flushed immediately so that the event history stays up to date. However, this does not happen with some compression codecs, e.g. LZ4BlockOutputStream, because the compressed stream can hold on to the data until its compression block is filled.
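>
> A minimal sketch of the buffering behaviour (not the EventLoggingListener code itself), assuming lz4-java, which Spark bundles, is on the classpath; the exact flush() behaviour depends on the lz4-java version:
> {code:scala}
> import java.io.ByteArrayOutputStream
> import net.jpountz.lz4.LZ4BlockOutputStream
>
> object FlushSketch {
>   def main(args: Array[String]): Unit = {
>     val underlying = new ByteArrayOutputStream()
>     // 64 KB block: a single small event line will not fill it on its own.
>     val compressed = new LZ4BlockOutputStream(underlying, 64 * 1024)
>
>     compressed.write("""{"Event":"SparkListenerJobEnd","Job ID":0}""".getBytes("UTF-8"))
>     compressed.flush()
>     // If the partially filled block is not forced out to the underlying stream,
>     // a reader of that stream (e.g. the history server) may still see little or nothing.
>     println(s"bytes visible after flush: ${underlying.size()}")
>
>     compressed.close()
>     println(s"bytes visible after close: ${underlying.size()}")
>   }
> }
> {code}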



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org