Posted to dev@spark.apache.org by 李宜芳 <xu...@gmail.com> on 2014/08/10 17:44:10 UTC

fair scheduler

Hi

I am trying to switch from FIFO to FAIR with standalone mode.

my environment:
hadoop 1.2.1
spark 0.8.0 using standalone mode

and I modified the code as follows:

ClusterScheduler.scala ->
System.getProperty("spark.scheduler.mode", "FAIR")
SchedulerBuilder.scala  ->
val DEFAULT_SCHEDULING_MODE = SchedulingMode.FAIR

LocalScheduler.scala ->
System.getProperty("spark.scheduler.mode", "FAIR")

spark-env.sh ->
export SPARK_JAVA_OPTS="-Dspark.scheduler.mode=FAIR"

and then ran:
SPARK_JAVA_OPTS="-Dspark.scheduler.mode=FAIR" ./run-example \
  org.apache.spark.examples.SparkPi spark://streaming1:7077


but it is not working.
I want to switch from FIFO to FAIR.
How can I do this?
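
For reference, the property can also be set without modifying Spark's source, by setting it before the SparkContext is created. A minimal sketch against the 0.8-era API (the application name and pool name are illustrative; pools would be defined in conf/fairscheduler.xml):

```scala
import org.apache.spark.SparkContext

// Sketch: request FAIR scheduling via a system property before the
// SparkContext is created, instead of patching the scheduler sources.
System.setProperty("spark.scheduler.mode", "FAIR")

val sc = new SparkContext("spark://streaming1:7077", "FairModeExample")

// Within one application, jobs submitted from this thread can be
// routed to a named pool ("pool1" is illustrative).
sc.setLocalProperty("spark.scheduler.pool", "pool1")
```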

Regards
Crystal Lee

Re: fair scheduler

Posted by fireflyc <fi...@163.com>.
@Crystal
You can use Spark on YARN. YARN has a fair scheduler; modify yarn-site.xml to enable it.
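
The yarn-site.xml change referred to here selects the Fair Scheduler as the ResourceManager's scheduler class; a minimal sketch:

```xml
<!-- yarn-site.xml: select the Fair Scheduler for the ResourceManager -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```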

Sent from my iPad


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org


Re: fair scheduler

Posted by Matei Zaharia <ma...@gmail.com>.
Hi Crystal,

The fair scheduler is only for jobs running concurrently within the same SparkContext (i.e. within an application), not for separate applications on the standalone cluster manager. It has no effect there. To run more of those concurrently, you need to set a cap on how many cores they each grab with spark.cores.max.

Matei
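
In the standalone setup from the original message, that cap could be sketched in spark-env.sh (the value 4 is illustrative):

```shell
# spark-env.sh sketch: cap the cores each application grabs so that
# several applications can run concurrently on the standalone master.
export SPARK_JAVA_OPTS="-Dspark.cores.max=4"
```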
