Posted to user@spark.apache.org by cjwang <cj...@cjwang.us> on 2014/09/02 23:14:48 UTC
Creating an RDD in another RDD causes deadlock
My code seemed to deadlock when I tried to do this:
object MoreRdd extends Serializable {
  def apply(i: Int) = {
    val rdd2 = sc.parallelize(0 to 10)
    rdd2.map(j => i*10 + j).collect
  }
}
val rdd1 = sc.parallelize(0 to 10)
val y = rdd1.map(i => MoreRdd(i)).collect
y.toString()
It never reached the last line. The code seemed to have deadlocked somewhere,
since my CPU load was quite low.
Is there a restriction against creating an RDD while another one is still
active? Is it because one worker can only handle one task? How do I work
around this?
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Creating-an-RDD-in-another-RDD-causes-deadlock-tp13302.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org
Re: Creating an RDD in another RDD causes deadlock
Posted by cjwang <cj...@cjwang.us>.
I didn't know this restriction. Thank you.
Re: Creating an RDD in another RDD causes deadlock
Posted by Sean Owen <so...@cloudera.com>.
Yes, you can't use RDDs inside RDDs. But of course you can do this:
val nums = (0 to 10)
val y = nums.map(i => MoreRdd(i))
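
In the same spirit, here is a minimal sketch of an equivalent rewrite that
uses a single RDD instead of nesting them (it assumes a SparkContext named
`sc` is in scope, as in the original post): generate all the (i, j) pairs on
the driver, then hand the whole flattened collection to Spark at once.

// Sketch only: one flat RDD instead of an RDD created inside a task.
// Assumes `sc: SparkContext` is in scope, as in the original code.
val pairs = for (i <- 0 to 10; j <- 0 to 10) yield (i, j)
val rdd = sc.parallelize(pairs)                        // single driver-side RDD
val y = rdd.map { case (i, j) => i * 10 + j }.collect  // all work runs in tasks

Because no task ever touches the SparkContext, this version avoids the
nested-RDD restriction entirely.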
On Tue, Sep 2, 2014 at 10:14 PM, cjwang <cj...@cjwang.us> wrote:
> My code seemed to deadlock when I tried to do this:
>
> object MoreRdd extends Serializable {
>   def apply(i: Int) = {
>     val rdd2 = sc.parallelize(0 to 10)
>     rdd2.map(j => i*10 + j).collect
>   }
> }
>
> val rdd1 = sc.parallelize(0 to 10)
> val y = rdd1.map(i => MoreRdd(i)).collect
>
> y.toString()
>
>
> It never reached the last line. The code seemed to have deadlocked somewhere,
> since my CPU load was quite low.
>
> Is there a restriction against creating an RDD while another one is still
> active? Is it because one worker can only handle one task? How do I work
> around this?