Posted to user@spark.apache.org by "jw.cmu" <ji...@gmail.com> on 2014/09/19 23:35:01 UTC

Failed running Spark ALS

I'm trying to run Spark ALS on the Netflix dataset, but the job failed with a "No
space left on device" exception. The exception seems to be thrown after the
training phase. It's not clear to me what is being written or where the
output directory is.

I was able to run the same code on the provided test.data dataset.

I'm new to Spark and I'd like to get some hints for resolving this problem.

The code I ran was taken from
https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html (the
Java version).

Relevant info:

Spark version: 1.0.2 (Standalone deployment)
# slaves/workers/executors: 8
Cores per worker: 64
Memory per executor: 100g

Application parameters are left as default.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Failed-running-Spark-ALS-tp14704.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Failed running Spark ALS

Posted by Nick Pentreath <ni...@gmail.com>.
Have you set spark.local.dir (I think that's the config setting)?

It needs to point to a volume with plenty of space.
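For example, a sketch of the two usual ways to set it (the path /data/spark-tmp and the application name/jar below are placeholders, not from the original thread):

```shell
# Option 1: in conf/spark-defaults.conf on each worker (placeholder path):
#   spark.local.dir  /data/spark-tmp
#
# Option 2: per application at submit time (MyAlsApp/myals.jar are
# hypothetical names for illustration):
spark-submit \
  --conf spark.local.dir=/data/spark-tmp \
  --class MyAlsApp myals.jar
```

The property accepts a comma-separated list of directories, so pointing it at several disks can also spread shuffle I/O.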

By default, if I recall correctly, it points to /tmp.
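To confirm it's the scratch volume filling up, watch free space on /tmp on the workers while the job runs, e.g.:

```shell
# Show free space on the filesystem holding /tmp, where Spark writes
# shuffle and spill files unless spark.local.dir says otherwise.
df -h /tmp
```

If the "Avail" column shrinks toward zero during the shuffle-heavy stages, that volume is the culprit.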

Sent from my iPhone

> On 19 Sep 2014, at 23:35, "jw.cmu" <ji...@gmail.com> wrote:
> 
> I'm trying to run Spark ALS on the Netflix dataset, but the job failed with a "No
> space left on device" exception. The exception seems to be thrown after the
> training phase. It's not clear to me what is being written or where the
> output directory is.
