Posted to user@spark.apache.org by Morbious <kn...@gmail.com> on 2014/12/12 21:37:18 UTC

java.lang.IllegalStateException: unread block data

Hi,

Recently I installed Cloudera Hadoop (CDH) 5.1.1 with Spark.
I shut down the slave servers and then brought them back up.
After this operation, every task on a file bigger than a few megabytes failed
with errors:

14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 61 (task 1.0:61)
14/12/12 20:25:02 WARN scheduler.TaskSetManager: Loss was due to java.lang.IllegalStateException
java.lang.IllegalStateException: unread block data
        at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1382)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
        at org.apache.spark.scheduler.ShuffleMapTask.readExternal(ShuffleMapTask.scala:140)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:169)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 62 (task 1.0:62)
14/12/12 20:25:02 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalStateException: unread block data [duplicate 1]
14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 63 (task 1.0:63)
14/12/12 20:25:02 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalStateException: unread block data [duplicate 2]
14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 64 (task 1.0:64)
14/12/12 20:25:02 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalStateException: unread block data [duplicate 3]
14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 60 (task 1.0:60)

I checked the security limits (ulimit settings) but everything seems to be OK.
Before the restart I could run a word count on a 100 GB file; now it only
works on files of a few MB.
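
For the record, the failing job is nothing exotic; it is roughly the word
count below (run from spark-shell, with placeholder paths):

        // Sketch of the word count in question; input/output paths are placeholders.
        sc.textFile("hdfs:///path/to/input")
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1L))
          .reduceByKey(_ + _)
          .saveAsTextFile("hdfs:///path/to/output")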

Best regards,

Morbious





Re: java.lang.IllegalStateException: unread block data

Posted by Peng Cheng <pc...@uow.edu.au>.
I got the same problem; maybe the Java serializer is unstable.





Re: java.lang.IllegalStateException: unread block data

Posted by sivarani <wh...@gmail.com>.
Same issue here; can anyone help, please?





Re: java.lang.IllegalStateException: unread block data

Posted by Morbious <kn...@gmail.com>.
I found the solution.
I had HADOOP_MAPRED_HOME set in my environment, which clashes with Spark.
After I set HADOOP_MAPRED_HOME to an empty value, Spark started working.
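
In case anyone wants to check for the same clash, here is a sketch (run from
spark-shell, assuming an active SparkContext `sc`) that prints the
HADOOP_MAPRED_HOME value each executor actually sees:

        // Sketch: "<unset>" means the variable is absent from that executor's environment.
        sc.parallelize(1 to 1000, 20)
          .map(_ => sys.env.getOrElse("HADOOP_MAPRED_HOME", "<unset>"))
          .distinct()
          .collect()
          .foreach(println)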






Re: java.lang.IllegalStateException: unread block data

Posted by Morbious <kn...@gmail.com>.
"Restored" ment reboot slave node with unchanged IP.
"Funny" thing is that for small files spark works fine.
I checked hadoop with hdfs also and I'm able to run wordcount on it without
any problems (i.e. file about 50GB size).





Re: java.lang.IllegalStateException: unread block data

Posted by Akhil <ak...@sigmoidanalytics.com>.
When you say restored, did the internal/public IPs remain unchanged, or did
you change them accordingly? (I'm assuming you are using a cloud service like
AWS, GCE or Azure.)

Which serializer are you using? Try setting the following before creating the
SparkContext; it might help with serialization (note the fully qualified class
name, which Spark 1.x expects; MyRegistrator is our own registrator class):

        System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        System.setProperty("spark.kryo.registrator", "com.sigmoidanalytics.MyRegistrator")








Re: java.lang.IllegalStateException: unread block data

Posted by Marcelo Vanzin <va...@cloudera.com>.
Hi,

This is a question more suited for cdh-users@cloudera.org, since it's
probably CDH-specific. In the meantime, check the following:

- if you're using YARN, check that you've also updated the copy of the
Spark assembly in HDFS (especially if you're using CM to manage things)
- make sure all JDKs on all nodes are the same version; in particular,
check that an OpenJDK isn't installed somewhere (a quick check for this is
sketched after this list)
- make sure you're not adding any custom Hadoop jars to the driver or
executor classpaths.
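
A sketch of that JDK check from spark-shell, assuming an active SparkContext
`sc` and enough partitions to reach every node:

        // Sketch: report (hostname, JVM version) for the executors running these tasks.
        sc.parallelize(1 to 1000, 20)
          .map(_ => (java.net.InetAddress.getLocalHost.getHostName, sys.props("java.version")))
          .distinct()
          .collect()
          .foreach(println)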






-- 
Marcelo
