Posted to user@spark.apache.org by anshu shukla <an...@gmail.com> on 2015/08/26 15:56:54 UTC

Setting number of CORES from inside the Topology (Java code)

Hey,

I need to set the number of cores from inside the topology. It's working
fine when I set it in spark-env.sh, but I am unable to do it by setting a
key/value on the conf.

SparkConf sparkConf = new SparkConf()
        .setAppName("JavaCustomReceiver")
        .setMaster("local[4]");

if (toponame.equals("IdentityTopology")) {
    // setExecutorEnv only exports an environment variable to the executors;
    // SPARK_WORKER_CORES is read by the standalone Worker daemon at startup,
    // so setting it here has no effect on core allocation.
    sparkConf.setExecutorEnv("SPARK_WORKER_CORES", "1");
}




-- 
Thanks & Regards,
Anshu Shukla

Re: Setting number of CORES from inside the Topology (Java code)

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
When you set .setMaster to local[4], it means that you are allocating 4
threads on your local machine. You can change it to local[1] to run it on a
single thread.
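
For reference, a minimal sketch of the local-mode variants (the app name
is just a placeholder):

import org.apache.spark.SparkConf;

// local[1] -> one worker thread
// local[4] -> four worker threads
// local[*] -> one thread per logical core on the machine
SparkConf conf = new SparkConf()
        .setAppName("JavaCustomReceiver")
        .setMaster("local[1]");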

If you are submitting the job to a standalone Spark cluster and you want
to limit the number of cores for your job, you can do it like this:

sparkConf.set("spark.cores.max", "224")

Thanks
Best Regards

On Wed, Aug 26, 2015 at 7:26 PM, anshu shukla <an...@gmail.com>
wrote:

> Hey,
>
> I need to set the number of cores from inside the topology. It's working
> fine when I set it in spark-env.sh, but I am unable to do it by setting a
> key/value on the conf.
>
> SparkConf sparkConf = new SparkConf()
>         .setAppName("JavaCustomReceiver")
>         .setMaster("local[4]");
>
> if (toponame.equals("IdentityTopology")) {
>     sparkConf.setExecutorEnv("SPARK_WORKER_CORES", "1");
> }
>
>
>
>
> --
> Thanks & Regards,
> Anshu Shukla
>