Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/09/03 00:01:43 UTC

[jira] [Assigned] (SPARK-17386) Default trigger interval causes excessive RPC calls

     [ https://issues.apache.org/jira/browse/SPARK-17386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17386:
------------------------------------

    Assignee:     (was: Apache Spark)

> Default trigger interval causes excessive RPC calls
> ---------------------------------------------------
>
>                 Key: SPARK-17386
>                 URL: https://issues.apache.org/jira/browse/SPARK-17386
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>            Reporter: Frederick Reiss
>
> The default trigger interval for a Structured Streaming query is {{ProcessingTime(0)}}, i.e. "trigger new microbatches as fast as possible". With this default, the scheduler in {{StreamExecution}} spins in a tight loop, calling {{getOffset()}} on every {{Source}} until new data arrives (see the simplified sketch of the loop below).
> In test cases, where most of the sources are {{MemoryStream}} or {{TextSocketSource}}, this spinning leads to excessive CPU usage.
> In a production environment, this spinning could take down critical infrastructure. Most sources in production Spark clusters will be {{FileStreamSource}} or the not-yet-written Kafka 0.10 source. The {{getOffset()}} method of {{FileStreamSource}} performs a directory listing on an HDFS directory, so if the scheduler calls it in a tight loop, Spark will issue hundreds of RPC calls per second to the HDFS NameNode. That load could disrupt service to other systems using HDFS, including Spark itself. A similar situation will exist with the Kafka source, whose {{getOffset()}} method will presumably call Kafka's {{Consumer.poll()}}.
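> Below is a minimal, self-contained model of the polling behavior described above. It is not the actual {{StreamExecution}} code; apart from {{getOffset()}}, the names here are illustrative only.
> {code:scala}
> object SpinLoopSketch {
>   // Illustrative stand-in for org.apache.spark.sql.execution.streaming.Source.
>   trait Source { def getOffset: Option[Long] }
>
>   // intervalMs == 0 models the default ProcessingTime(0) trigger.
>   def run(sources: Seq[Source], intervalMs: Long): Unit = {
>     while (true) {
>       // One getOffset() call per source per iteration. For FileStreamSource,
>       // each call is an HDFS directory listing, i.e. a NameNode RPC.
>       val hasNewData = sources.exists(_.getOffset.isDefined)
>       if (hasNewData) {
>         () // a real scheduler would run a microbatch here
>       } else if (intervalMs > 0) {
>         Thread.sleep(intervalMs) // a positive interval throttles the polling
>       }
>       // With intervalMs == 0 the loop falls through and polls again immediately.
>     }
>   }
> }
> {code}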
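> For reference, a query author can avoid the spin today by setting an explicit trigger interval. A minimal sketch against the Spark 2.0 API; the socket source and console sink are placeholders, and any streaming DataFrame exhibits the same scheduling:
> {code:scala}
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.streaming.ProcessingTime
>
> val spark = SparkSession.builder().appName("TriggerIntervalExample").getOrCreate()
>
> // Placeholder source for illustration.
> val lines = spark.readStream
>   .format("socket")
>   .option("host", "localhost")
>   .option("port", 9999)
>   .load()
>
> // Without .trigger(...), the default is ProcessingTime(0) and the scheduler
> // polls getOffset() continuously. An explicit interval bounds the polling
> // to one getOffset() call per source per batch.
> val query = lines.writeStream
>   .format("console")
>   .trigger(ProcessingTime("10 seconds"))
>   .start()
>
> query.awaitTermination()
> {code}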


