Posted to user@spark.apache.org by Adrian Tanase <at...@adobe.com> on 2015/10/17 21:58:18 UTC

Spark Streaming scheduler delay VS driver.cores

Hi,

I’ve recently bumped up the resources for a Spark Streaming job, and the performance started to degrade over time.
It was running fine on 7 nodes with 14 executor cores each (via YARN) until I bumped executor.cores to 22 cores/node (out of 32 on AWS c3.xlarge, 24 of which are available to YARN).

The driver has 2 cores and 2 GB of RAM (usage is essentially zero).

At really low data volumes it goes from 1-2 seconds per batch to 4-5 seconds per batch after about 6 hours, while doing almost nothing. I’ve noticed that the scheduler delay is 3-4 seconds, even 5-6 seconds for some tasks, when it should be in the low tens of milliseconds. What’s weirder is that under moderate load (thousands of events per second) the delay is not as noticeable anymore.

After this I reduced executor.cores to 20 and bumped driver.cores to 4, and it seems to be OK now.
However, this is purely empirical; I have not found any documentation, code samples, or mailing list discussions on how to properly set driver.cores.
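
For reference, here is a sketch of the kind of spark-submit invocation I’m describing (the class name, jar, and memory values are placeholders rather than recommendations):

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --num-executors 7 \
    --executor-cores 20 \
    --executor-memory 8g \
    --driver-cores 4 \
    --driver-memory 2g \
    --class com.example.StreamingJob \
    streaming-job.jar

As far as I can tell, --driver-cores maps to spark.driver.cores and --executor-cores to spark.executor.cores, but the docs say very little about what the driver actually does with extra cores.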

Does anyone know:

  *   If I assign more cores to the driver/application master, will it actually use them?
     *   Looking at the process list with htop, only one of the JVMs on the driver node was really taking up CPU time.
  *   What is a decent parallelism factor for a streaming app with a 10-20 second batch interval? I found it odd that at 7 x 22 = 154 total cores the driver is becoming a bottleneck (see the sketch after this list).
     *   I’ve seen people recommend 3-4 tasks/core, or ~1000 total parallelism for clusters in the tens of nodes.
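
To make the parallelism question concrete, here is a minimal sketch of the shape of job I have in mind (the socket source, the word-count logic, and the partition count are made up for illustration, not taken from the real job):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object ParallelismSketch {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("parallelism-sketch")
      // 10-second batches, the low end of the 10-20s interval mentioned above
      val ssc = new StreamingContext(conf, Seconds(10))

      // Stand-in source, only to keep the sketch self-contained
      val lines = ssc.socketTextStream("localhost", 9999)

      // 7 nodes x 22 cores = 154 cores; the "3-4 tasks per core" rule of thumb
      // would put shuffle parallelism somewhere around 450-600 partitions
      val numPartitions = 154 * 3

      lines
        .flatMap(_.split(" "))
        .map(word => (word, 1L))
        .reduceByKey(_ + _, numPartitions) // explicit parallelism for the shuffle stage
        .print()

      ssc.start()
      ssc.awaitTermination()
    }
  }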

Thanks in advance,
-adrian

FW: Spark Streaming scheduler delay VS driver.cores

Posted by Adrian Tanase <at...@adobe.com>.
Apologies for reposting this to the dev list, but I’ve had no luck getting information about spark.driver.cores on the user list.

Happy to create a PR with documentation improvements for the spark.driver.cores config setting after I get some more details.

Thanks!
-adrian

From: Adrian Tanase
Date: Monday, October 19, 2015 at 10:09 PM
To: "user@spark.apache.org<ma...@spark.apache.org>"
Subject: Re: Spark Streaming scheduler delay VS driver.cores

Bump on this question: does anyone know what effect spark.driver.cores has on the driver's ability to manage larger clusters?

Any tips on setting a correct value? I’m running Spark Streaming on YARN / Hadoop 2.6 / Spark 1.5.1.

Thanks,
-adrian

Re: Spark Streaming scheduler delay VS driver.cores

Posted by Adrian Tanase <at...@adobe.com>.
Bump on this question: does anyone know what effect spark.driver.cores has on the driver's ability to manage larger clusters?

Any tips on setting a correct value? I’m running Spark Streaming on YARN / Hadoop 2.6 / Spark 1.5.1.

Thanks,
-adrian
