Posted to user@spark.apache.org by davidkl <da...@hotmail.com> on 2015/03/05 10:39:03 UTC

Re: Identify the performance bottleneck from hardware perspective

Hello Julaiti,

Maybe I am just asking the obvious :-) but did you check disk IO? Depending
on what you are doing, that could be the bottleneck.

In my case none of the HW resources was a bottleneck; the problem was some
distributed features that were blocking execution (e.g. Hazelcast). Could
that be your case as well?

Regards





Re: Identify the performance bottleneck from hardware perspective

Posted by jalafate <ja...@eng.ucsd.edu>.
Hi David,

That is a great point. It was actually one of the reasons my program was
slow. I found that the major cause of the slowness was the huge garbage
collection time: I was creating too many small objects in the map function,
which triggered GC frequently. After I changed the program to create fewer
objects, the performance was much better.
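
To make this concrete, here is a minimal sketch of the kind of change I mean
(the RDD, the record type and the input path below are made up for
illustration, not taken from my actual job). The first version builds a
short-lived wrapper object for every record inside map; the second emits the
pair directly, so each record creates far fewer temporary objects for the GC
to collect.

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical setup: a text file of "id,value" lines.
val sc = new SparkContext(new SparkConf().setAppName("gc-example"))
val lines = sc.textFile("hdfs:///path/to/data.csv")  // made-up path

// Allocation-heavy version: one extra short-lived object per record.
case class Reading(id: Int, value: Double)
val sums = lines.map { line =>
  val parts = line.split(",")
  val r = Reading(parts(0).toInt, parts(1).toDouble)
  (r.id, r.value)
}.reduceByKey(_ + _)

// Leaner version: same result, but without the per-record wrapper object.
val sums2 = lines.map { line =>
  val parts = line.split(",")
  (parts(0).toInt, parts(1).toDouble)
}.reduceByKey(_ + _)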

Here are two videos that may help other people who are also struggling to
find the bottleneck in their Spark applications.

1. A Deeper Understanding of Spark Internals - Aaron Davidson (Databricks)
http://youtu.be/dmL0N3qfSc8

2. Spark Summit 2014 - Advanced Spark Training - Advanced Spark Internals
and Tuning
http://youtu.be/HG2Yd-3r4-M

I personally learned a lot from the points mentioned in the two videos
above.

In practice, I monitor CPU user time, CPU idle time (if disk IO is the
bottleneck, CPU idle time should be significant), memory usage, network IO
and garbage collection time per task (which can be found in the Spark web
UI). Ganglia is helpful for monitoring CPU, memory and network IO.
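
If it helps, one way I look more closely at GC behaviour (besides the
per-task GC time column in the web UI) is to turn on GC logging on the
executors. A small sketch, assuming the conf is set before the context is
created; the app name is made up, and the same flags can be passed with
--conf on spark-submit instead:

import org.apache.spark.SparkConf

// Print GC activity in the executor stdout logs, so you can see how often
// collections happen and how long each one takes.
val conf = new SparkConf()
  .setAppName("gc-diagnosis")  // hypothetical app name
  .set("spark.executor.extraJavaOptions",
       "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")

// Equivalent on the command line:
//   spark-submit --conf "spark.executor.extraJavaOptions=-verbose:gc \
//     -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" ...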

Best,
Julaiti



On Thu, Mar 5, 2015 at 1:39 AM, davidkl [via Apache Spark User List] <
ml-node+s1001560n21927h16@n3.nabble.com> wrote:

> Hello Julaiti,
>
> Maybe I am just asking the obvious :-) but did you check disk IO?
> Depending on what you are doing, that could be the bottleneck.
>
> In my case none of the HW resources was a bottleneck; the problem was some
> distributed features that were blocking execution (e.g. Hazelcast). Could
> that be your case as well?
>
> Regards
>



