Posted to user@spark.apache.org by onpoq <on...@gmail.com> on 2014/06/09 07:02:43 UTC

How to achieve a reasonable performance on Spark Streaming

Dear All,

I recently installed Spark 1.0.0 on a dedicated 10-slave cluster. However,
the maximum input rate that the system can sustain with stable latency seems
very low. I use a simple word-counting workload over tweets:

theDStream.flatMap(extractWordOnePairs).reduceByKey(sumFunc).count.print
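
(For context, here is a minimal, self-contained sketch of the job. The
socket source, host/port, and the bodies of the helper functions are
placeholders written out only for illustration; the real ingestion path
differs.)

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._   // pair-DStream ops such as reduceByKey

object TweetWordCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("TweetWordCount")
    val ssc = new StreamingContext(conf, Seconds(2))          // 2s batch interval

    // Placeholder source; the real job consumes a tweet stream.
    val theDStream = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_ONLY_SER)

    // (word, 1) per word in each tweet, summed per batch, then the number of
    // distinct words in the batch is printed.
    val extractWordOnePairs: String => Seq[(String, Int)] =
      tweet => tweet.split(" ").map(word => (word, 1))
    val sumFunc = (a: Int, b: Int) => a + b
    theDStream.flatMap(extractWordOnePairs).reduceByKey(sumFunc).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}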

With a 2s batch interval, the 10-slave cluster can only handle ~30,000
tweets/s (which translates to ~300,000 words/s). To give you a sense of the
speed of a slave machine, a single machine can handle ~100,000 tweets/s with
a stream-processing program written in plain Java.

I've tuned the following parameters without seeing obvious improvement (a
rough sketch of how they map onto code follows the list):
1. Batch interval: 1s, 2s, 5s, 10s
2. Parallelism: 1 x total num of cores, 2x, 3x 
3. StorageLevel: MEMORY_ONLY, MEMORY_ONLY_SER
4. Run type: yarn-client, standalone cluster
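
(For reference, a rough sketch of how those knobs map onto driver code and
submit flags; the core count, host, and port are made-up placeholders, and
the exact values varied across runs.)

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Made-up cluster size for illustration: 10 slaves x 8 cores each.
val totalCores = 10 * 8

val conf = new SparkConf()
  .setAppName("TweetWordCount")
  .set("spark.default.parallelism", (totalCores * 2).toString)  // tried 1x, 2x, 3x total cores
val ssc = new StreamingContext(conf, Seconds(2))                // tried 1s, 2s, 5s, 10s batches
val theDStream = ssc.socketTextStream("localhost", 9999,
  StorageLevel.MEMORY_ONLY_SER)                                 // tried MEMORY_ONLY as well

// Parallelism can also be passed straight to the shuffle:
//   theDStream.flatMap(extractWordOnePairs).reduceByKey(sumFunc, totalCores * 2)

// Run type is chosen at submit time rather than in code, e.g.:
//   spark-submit --master yarn-client ...
//   spark-submit --master spark://<master-host>:7077 ...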

* My first question is: what are the maximum input rates you have observed
with Spark Streaming? I know this depends on the workload and the hardware,
but I just want to get a sense of what numbers are reasonable.

* My second question is: any suggestions on what I can tune to improve
performance? I've found unexpected delays in "reduce" that I can't explain,
and they may be related to the poor performance. Details are shown below.

============= DETAILS =============

Below is the CPU utilization plot with a 2s batch interval and 40,000
tweets/s. The latency keeps increasing while the CPU, network, disk, and
memory are all underutilized.

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/cpu.png> 

I tried to find out which stage is the bottleneck. It seems that the
"reduce" phase of each batch usually finishes in less than 0.5s, but
sometimes (70 out of 545 batches) it takes 5s. Below is a snapshot of the
web UI showing the time taken by "reduce" in some batches, where the normal
cases are marked in green and the abnormal case is marked in red:

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/reduce.png> 
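
(As a cross-check on the web UI numbers, the per-batch delays can also be
logged programmatically. A rough sketch using the StreamingListener API:)

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Prints scheduling/processing delay for every completed batch, so the ~5s
// outlier batches can be spotted without clicking through the web UI.
class BatchDelayLogger extends StreamingListener {
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted) {
    val info = batchCompleted.batchInfo
    println("batch " + info.batchTime +
      ": scheduling delay = " + info.schedulingDelay.getOrElse(-1L) + " ms" +
      ", processing delay = " + info.processingDelay.getOrElse(-1L) + " ms")
  }
}

// Register before ssc.start():
//   ssc.addStreamingListener(new BatchDelayLogger())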

I further looked into all the tasks of a slow "reduce" stage. As shown in
the snapshot below, a small portion of the tasks are stragglers:

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/reduce2.png> 
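
(I have not tried speculative execution yet; if the stragglers are the
cause, it is the standard knob aimed at them. A sketch using the stock
spark.speculation properties, with the usual defaults written out
explicitly:)

import org.apache.spark.SparkConf

// Relaunches a copy of any task that runs much slower than its peers.
val conf = new SparkConf()
  .set("spark.speculation", "true")
  .set("spark.speculation.quantile", "0.75")   // fraction of tasks that must finish first
  .set("spark.speculation.multiplier", "1.5")  // how much slower than the median counts as slow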

Here is the log of some slow "reduce" tasks on an executor, where the start
and end of the tasks are marked in red. They started at 21:55:43 and
completed at 21:55:48. During those 5 seconds, I can only see shuffling at
the beginning and input block activity.

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/log1.png> 
...
<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/log2.png> 

For comparison, here is the log of the normal "reduce" tasks on the same
executor:

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n7221/log3.png> 

Does anybody have any idea what could cause this 5s delay?

Thanks.


