Posted to user@spark.apache.org by Yuan Fang <yf...@advisorsoftware.com> on 2017/04/28 22:35:55 UTC

Could anyone please tell me why this takes forever to finish?

import org.apache.spark.{SparkConf, SparkContext}
import com.typesafe.scalalogging.Logger  // assuming scala-logging for Logger(this.getClass)

object SparkPi {
  private val logger = Logger(this.getClass)

  val sparkConf = new SparkConf()
    .setAppName("Spark Pi")
    .setMaster("spark://10.100.103.192:7077")

  lazy val sc = new SparkContext(sparkConf)

  sc.addJar("/Users/yfang/workspace/mcs/target/scala-2.11/root-assembly-0.1.0.jar")

  def main(args: Array[String]): Unit = {
    val x = 1 to 4
    val a = sc.parallelize(x)
    val mean = a.mean()
    println(mean)
  }
}


spark://10.100.103.192:7077 is a remote standalone cluster I created on
AWS.
I ran the program locally from IntelliJ.
I can see that the job is submitted, but the calculation never finishes.

The log shows:
15:34:21.674 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources

Any help would be highly appreciated!

Thanks!

Yuan


Re: Could anyone please tell me why this takes forever to finish?

Posted by "颜发才 (Yan Facai)" <fa...@gmail.com>.
Hi,
10.x.x.x addresses belong to a private network range; see
https://en.wikipedia.org/wiki/IP_address.
You should use the public IP of your AWS instance instead.
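
For example, here is a minimal sketch of the corrected setup. The
addresses below are placeholders, not real values: substitute the
master's actual public IP or DNS name from the AWS console. The
spark.driver.host setting only matters when the driver runs outside the
cluster's network, as it does here.

import org.apache.spark.{SparkConf, SparkContext}

object SparkPi {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Spark Pi")
      // Placeholder: the master's public IP or DNS name,
      // not the private 10.x.x.x address.
      .setMaster("spark://<master-public-ip>:7077")
      // Executors must also connect back to the driver; when the driver
      // runs outside the cluster, advertise an address they can reach.
      .set("spark.driver.host", "<driver-reachable-ip>")

    val sc = new SparkContext(conf)
    sc.addJar("/Users/yfang/workspace/mcs/target/scala-2.11/root-assembly-0.1.0.jar")

    println(sc.parallelize(1 to 4).mean())
    sc.stop()
  }
}

Also check that the AWS security group allows traffic on port 7077 and
between the workers and the driver; a blocked or unreachable driver is a
common cause of the "Initial job has not accepted any resources"
warning.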
