Posted to user@spark.apache.org by VJ Shalish <vj...@gmail.com> on 2014/09/18 00:59:48 UTC

SPARK BENCHMARK TEST

I am trying to benchmark Spark on a Hadoop cluster. I need to design
sample Spark jobs to measure CPU utilization, RAM usage, input throughput,
output throughput, and duration of execution in the cluster.

I need to test the state of the cluster under:

A Spark job which uses high CPU
A Spark job which uses high RAM
A Spark job which uses high input throughput
A Spark job which uses high output throughput
A Spark job which takes a long time.

These have to be tested individually, and combinations of these scenarios
will also be used; rough sketches of what I have in mind follow below.
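To make this concrete, here is the kind of skeleton I have in mind for the
CPU and RAM cases. It is only a minimal sketch against the RDD API; the
loop bound, partition counts, and allocation sizes are placeholders I would
tune to the cluster, not measured values.

import org.apache.spark.{SparkConf, SparkContext}

object CpuAndRamHeavyJobs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cpu-ram-benchmark"))

    // CPU-heavy: tiny input, expensive per-element work, so the job is
    // compute-bound. The inner loop bound (1000) is the tuning knob.
    val checksum = sc.parallelize(1L to 10000000L, 200)
      .map { i =>
        var acc = i.toDouble
        var k = 0
        while (k < 1000) { acc = math.sqrt(acc + k); k += 1 }
        acc
      }
      .reduce(_ + _)
    println(s"checksum = $checksum")

    // RAM-heavy: materialize a large deserialized RDD in executor memory.
    // cache() defaults to MEMORY_ONLY, so roughly 200 partitions x 50000
    // elements x 1 KB (~10 GB) sits on the heap; sizes would be scaled
    // to the cluster under test.
    val cached = sc.parallelize(1 to 200, 200)
      .flatMap(_ => Iterator.fill(50000)(new Array[Byte](1024)))
      .cache()
    println(s"cached records = ${cached.count()}")

    sc.stop()
  }
}

The intent is that the first job saturates cores while doing almost no I/O,
and the second keeps executors holding live heap objects for the lifetime
of the cached RDD.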

Please help me understand the factors of a Spark job that drive CPU
utilization, RAM usage, input throughput, output throughput, and duration
of execution in the cluster, so that I can design Spark jobs to test each
one.
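For the two throughput cases in particular, my understanding is that the
dominant factor is how much data the job moves through HDFS rather than
what it computes, so the jobs should do almost no per-record work. A sketch
along those lines; the paths hdfs:///benchmarks/input and
hdfs:///benchmarks/output are placeholders, and the input would be
pre-generated (e.g. with teragen) to the size being tested:

import org.apache.spark.{SparkConf, SparkContext}

object IoHeavyJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("io-benchmark"))

    // Input throughput: full scan of a large file with near-zero CPU work
    // per record, so the read path is the bottleneck.
    val lines = sc.textFile("hdfs:///benchmarks/input")
    println(s"lines read: ${lines.count()}")

    // Output throughput: write roughly the same volume back out. Reversing
    // each line preserves record size, so write volume stays close to
    // read volume. (The input is scanned a second time here, which is
    // acceptable for a benchmark.)
    lines.map(_.reverse).saveAsTextFile("hdfs:///benchmarks/output")

    sc.stop()
  }
}

Duration then seems to fall out of combining these knobs: more data, a
higher per-record loop bound, or repeated passes over the same cached RDD
should stretch a job to any target runtime.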



Thanks
Shalish.

Re: SPARK BENCHMARK TEST

Posted by VJ Shalish <vj...@gmail.com>.
Hi, can someone please advise on this?
