Posted to user@spark.apache.org by Jia Zou <ja...@gmail.com> on 2016/01/07 05:41:14 UTC

org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

Dear all,

I am using Spark 1.5.2 and Tachyon 0.7.1 to run KMeans with
inputRDD.persist(StorageLevel.OFF_HEAP()).
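
For reference, here is a minimal Java sketch of that setup (the input
path, Tachyon master URL, and the k/iteration values are illustrative
placeholders, not from my actual job):

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;
  import org.apache.spark.mllib.clustering.KMeans;
  import org.apache.spark.mllib.clustering.KMeansModel;
  import org.apache.spark.mllib.linalg.Vector;
  import org.apache.spark.mllib.linalg.Vectors;
  import org.apache.spark.storage.StorageLevel;

  public class KMeansOffHeap {
    public static void main(String[] args) {
      SparkConf conf = new SparkConf()
          .setAppName("KMeansOffHeap")
          // In Spark 1.5, OFF_HEAP blocks go to the external block store
          // (Tachyon by default); the master URL below is a placeholder.
          .set("spark.externalBlockStore.url",
               "tachyon://tachyon-master:19998");
      JavaSparkContext sc = new JavaSparkContext(conf);

      // Parse one space-separated vector per input line.
      JavaRDD<Vector> inputRDD = sc.textFile("hdfs:///data/points.txt")
          .map(line -> {
            String[] parts = line.split(" ");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
              values[i] = Double.parseDouble(parts[i]);
            }
            return Vectors.dense(values);
          });

      // Cache the input off-heap, i.e. in Tachyon rather than on the
      // executor heap.
      inputRDD.persist(StorageLevel.OFF_HEAP());

      KMeansModel model = KMeans.train(inputRDD.rdd(), 10, 20);
      System.out.println("Cost: " + model.computeCost(inputRDD.rdd()));
      sc.stop();
    }
  }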

I've set up tiered storage for Tachyon. Everything works when the working
set is smaller than the available memory. However, when the working set
exceeds the available memory, I keep getting errors like the ones below:

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.1 in stage
0.0 (TID 206) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 191.1 in stage
0.0 (TID 207) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_191 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.2 in stage
0.0 (TID 208) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 191.2 in stage
0.0 (TID 209) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_191 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.3 in stage
0.0 (TID 210) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found


Can anyone give me some suggestions? Thanks a lot!


Best Regards,
Jia

Re: org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

Posted by Gene Pang <ge...@gmail.com>.
Yes, the tiered storage feature in Tachyon can address this issue. Here is
a link to more information:
http://tachyon-project.org/documentation/Tiered-Storage-on-Tachyon.html
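
As a rough sketch of what that page covers, a two-tier worker
configuration in conf/tachyon-site.properties looks something like this
(the paths and quotas below are just placeholders; please verify the
exact property names for your version against the page above):

  tachyon.worker.tieredstore.level.max=2
  tachyon.worker.tieredstore.level0.alias=MEM
  tachyon.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
  tachyon.worker.tieredstore.level0.dirs.quota=16GB
  tachyon.worker.tieredstore.level1.alias=HDD
  tachyon.worker.tieredstore.level1.dirs.path=/mnt/disk1,/mnt/disk2
  tachyon.worker.tieredstore.level1.dirs.quota=100GB,100GB

With a setup like this, blocks that no longer fit in the MEM tier can be
evicted down to the HDD tier instead of being dropped.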

Thanks,
Gene

On Wed, Jan 6, 2016 at 8:44 PM, Ted Yu <yu...@gmail.com> wrote:

> Have you seen this thread?
>
> http://search-hadoop.com/m/q3RTtAiQta22XrCI
>
> On Wed, Jan 6, 2016 at 8:41 PM, Jia Zou <ja...@gmail.com> wrote:
>
>> [...]
>

Re: org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

Posted by Ted Yu <yu...@gmail.com>.
Have you seen this thread?

http://search-hadoop.com/m/q3RTtAiQta22XrCI

On Wed, Jan 6, 2016 at 8:41 PM, Jia Zou <ja...@gmail.com> wrote:

> [...]