Posted to user@flume.apache.org by Sutanu Das <sd...@att.com> on 2016/02/25 03:22:08 UTC

How to increase NUMBER of Spark Executors ?

Community,

How can I increase the NUMBER OF EXECUTORS for my Streaming job running in local mode?

We have tried spark.master = local[4] but it is not starting 4 executors and our job keeps getting queued - do we need to make a code change to increase the number of executors?

This job (a jar file) reads from a Kafka stream with 2 partitions and sends the data to Cassandra.

Please help advise - thanks again community

Here is how we start the job:

nohup spark-submit --properties-file /hadoop_common/airwaveApList.properties --class airwaveApList /hadoop_common/airwaveApList-1.0.jar

Properties file for the Streaming Job:

spark.cassandra.connection.host       cass_host
spark.cassandra.auth.username         cass_app
spark.cassandra.auth.password         xxxxx
spark.topic                           ap_list_spark_streaming
spark.app.name                        ap-status
spark.metadata.broker.list            server.corp.net:6667
spark.zookeeper.connect               server.net:2181
spark.group.id                        airwave_activation_status
spark.zookeeper.connection.timeout.ms 1000
spark.cassandra.sql.keyspace          enterprise
spark.master                          local[4]
spark.batch.size.seconds              120
spark.driver.memory                   12G
spark.executor.memory                 12G
spark.akka.frameSize                  512
spark.local.dir                       /prod/hadoop/spark/airwaveApList_temp
spark.history.kerberos.keytab none
spark.history.kerberos.principal none
spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port 18080
spark.yarn.historyServer.address has-dal-0001.corp.wayport.net:18080
spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
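
With spark.master set to local[4], everything above runs in a single JVM, so no setting in this file will produce more than one executor. As a hedged sketch, a properties file targeting a YARN cluster could request multiple executors like this (the instance, core, and memory values below are illustrative assumptions, not tested for this workload):

    spark.master                          yarn
    spark.submit.deployMode               client
    spark.executor.instances              4
    spark.executor.cores                  2
    spark.executor.memory                 4G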


Re: How to increase NUMBER of Spark Executors ?

Posted by Gonzalo Herreros <gh...@gmail.com>.
Local mode will only run with one executor; local[4] specifies 4 cores to be
used by that executor.
That affects the number of tasks the executor can run concurrently.
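
To actually get multiple executors you need a cluster manager such as YARN. A hedged sketch of an equivalent spark-submit invocation, keeping the original properties file and jar (the executor counts and sizes are illustrative assumptions; command-line flags take precedence over values in the properties file):

    spark-submit --master yarn \
      --num-executors 4 \
      --executor-cores 2 \
      --executor-memory 4G \
      --properties-file /hadoop_common/airwaveApList.properties \
      --class airwaveApList \
      /hadoop_common/airwaveApList-1.0.jar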

Please note this is the Flume distribution list, not the Spark one

Gonzalo

