Posted to user@spark.apache.org by 李铖 <li...@gmail.com> on 2015/03/25 11:26:04 UTC

Spark-sql query got an exception. Help

Querying data from a small HDFS file works fine, but when the HDFS file is
152 MB I get the exception below. I tried
`sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but the
error persists.

```
com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
required: 39135
at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at


```
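
For context, the attempted workaround corresponds to something like the
following PySpark sketch (the app name and HDFS path are hypothetical; the
property call is the one quoted above):

```python
from pyspark import SparkContext

sc = SparkContext(appName="kryo-buffer-test")  # hypothetical app name
# This sets a JVM system property on the driver; as the replies below
# explain, the Kryo buffer sizes have to go through SparkConf or
# spark-defaults.conf to take effect.
sc.setSystemProperty("spark.kryoserializer.buffer.mb", "256")
lines = sc.textFile("hdfs:///path/to/152mb-file")  # hypothetical path
print(lines.count())
```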

Re: Spark-sql query got an exception. Help

Posted by 李铖 <li...@gmail.com>.
Yes, the exception occurred intermittently, but the final result was eventually produced.

2015-03-26 11:08 GMT+08:00 Saisai Shao <sa...@gmail.com>:

> Would you mind running it again to see whether this exception can be
> reproduced? Exceptions in MapOutputTracker seldom occur; some other
> exception may be the root cause of this error.
>
> Thanks
> Jerry
>
> 2015-03-26 10:55 GMT+08:00 李铖 <li...@gmail.com>:
>
>> One more exception. How can I fix it? Could anybody help me, please?
>>
>>
>> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
>> location for shuffle 0
>> at
>> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
>> at
>> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at
>> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>> at
>> org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
>> at
>> org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
>> at
>> org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
>> at
>> org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
>> at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>> at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>> at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
>>
>>
>> 2015-03-26 10:39 GMT+08:00 李铖 <li...@gmail.com>:
>>
>>> Yes, it works after I appended the two properties to spark-defaults.conf.
>>>
>>> I program against Spark in Python, and the Python API does not have a
>>> SparkConf API.
>>>
>>> Thanks.
>>>
>>> 2015-03-25 21:07 GMT+08:00 Cheng Lian <li...@gmail.com>:
>>>
>>>>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
>>>> you need to set this property in SparkConf or in spark-defaults.conf. And
>>>> there are two configurations related to Kryo buffer size,
>>>>
>>>>    - spark.kryoserializer.buffer.mb, which is the initial size, and
>>>>    - spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>>>>
>>>> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for
>>>> it).
>>>>
>>>> Cheng
>>>>
>>>> On 3/25/15 7:31 PM, 李铖 wrote:
>>>>
>>>>   Here is the full stack trace
>>>>
>>>>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0
>>>> (TID 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
>>>> Available: 0, required: 39135
>>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>>  at
>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>>  at
>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>>>>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>>>>  at
>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>>>>  at
>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>>>>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>>>>  at
>>>> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>>>>  at
>>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>>>>  at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>  at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>  at java.lang.Thread.run(Thread.java:745)
>>>>
>>>> 2015-03-25 19:05 GMT+08:00 Cheng Lian <li...@gmail.com>:
>>>>
>>>>>  Could you please provide the full stack trace?
>>>>>
>>>>>
>>>>> On 3/25/15 6:26 PM, 李铖 wrote:
>>>>>
>>>>> Querying data from a small HDFS file works fine, but when the HDFS
>>>>> file is 152 MB I get the exception below. I tried
>>>>> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but
>>>>> the error persists.
>>>>>
>>>>>  ```
>>>>> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available:
>>>>> 0, required: 39135
>>>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>>>  at
>>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>>>  at
>>>>>
>>>>>
>>>>> ```
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>

Re: Spark-sql query got an exception. Help

Posted by Saisai Shao <sa...@gmail.com>.
Would you mind running it again to see whether this exception can be
reproduced? Exceptions in MapOutputTracker seldom occur; some other
exception may be the root cause of this error.

Thanks
Jerry

2015-03-26 10:55 GMT+08:00 李铖 <li...@gmail.com>:

> One more exception. How can I fix it? Could anybody help me, please?
>
>
> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
> location for shuffle 0
> at
> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
> at
> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
> at
> org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
> at
> org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
> at
> org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
> at
> org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
> at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
> at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
>
>
> 2015-03-26 10:39 GMT+08:00 李铖 <li...@gmail.com>:
>
>> Yes, it works after I appended the two properties to spark-defaults.conf.
>>
>> I program against Spark in Python, and the Python API does not have a
>> SparkConf API.
>>
>> Thanks.
>>
>> 2015-03-25 21:07 GMT+08:00 Cheng Lian <li...@gmail.com>:
>>
>>>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
>>> you need to set this property in SparkConf or in spark-defaults.conf. And
>>> there are two configurations related to Kryo buffer size,
>>>
>>>    - spark.kryoserializer.buffer.mb, which is the initial size, and
>>>    - spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>>>
>>> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for
>>> it).
>>>
>>> Cheng
>>>
>>> On 3/25/15 7:31 PM, 李铖 wrote:
>>>
>>>   Here is the full stack trace
>>>
>>>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
>>> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
>>> Available: 0, required: 39135
>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>>>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>>>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>>>  at
>>> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>>>  at
>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>  at java.lang.Thread.run(Thread.java:745)
>>>
>>> 2015-03-25 19:05 GMT+08:00 Cheng Lian <li...@gmail.com>:
>>>
>>>>  Could you please provide the full stack trace?
>>>>
>>>>
>>>> On 3/25/15 6:26 PM, 李铖 wrote:
>>>>
>>>> Querying data from a small HDFS file works fine, but when the HDFS
>>>> file is 152 MB I get the exception below. I tried
>>>> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but
>>>> the error persists.
>>>>
>>>>  ```
>>>> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
>>>> required: 39135
>>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>>  at
>>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>>  at
>>>>
>>>>
>>>> ```
>>>>
>>>>
>>>>
>>>
>>
>>
>

Re: Spark-sql query got an exception. Help

Posted by 李铖 <li...@gmail.com>.
One more exception. How can I fix it? Could anybody help me, please?


org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
location for shuffle 0
at
org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
at
org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at
org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
at
org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
at
org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
at
org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
at
org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)


2015-03-26 10:39 GMT+08:00 李铖 <li...@gmail.com>:

> Yes, it works after I appended the two properties to spark-defaults.conf.
>
> I program against Spark in Python, and the Python API does not have a
> SparkConf API.
>
> Thanks.
>
> 2015-03-25 21:07 GMT+08:00 Cheng Lian <li...@gmail.com>:
>
>>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
>> you need to set this property in SparkConf or in spark-defaults.conf. And
>> there are two configurations related to Kryo buffer size,
>>
>>    - spark.kryoserializer.buffer.mb, which is the initial size, and
>>    - spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>>
>> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
>>
>> Cheng
>>
>> On 3/25/15 7:31 PM, 李铖 wrote:
>>
>>   Here is the full stack trace
>>
>>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
>> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
>> Available: 0, required: 39135
>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>>  at
>> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>>  at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>  at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>  at java.lang.Thread.run(Thread.java:745)
>>
>> 2015-03-25 19:05 GMT+08:00 Cheng Lian <li...@gmail.com>:
>>
>>>  Could you please provide the full stack trace?
>>>
>>>
>>> On 3/25/15 6:26 PM, 李铖 wrote:
>>>
>>> Querying data from a small HDFS file works fine, but when the HDFS
>>> file is 152 MB I get the exception below. I tried
>>> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but
>>> the error persists.
>>>
>>>  ```
>>> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
>>> required: 39135
>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>  at
>>>
>>>
>>> ```
>>>
>>>
>>>
>>
>
>

Re: Spark-sql query got an exception. Help

Posted by 李铖 <li...@gmail.com>.
Yes, it works after I appended the two properties to spark-defaults.conf.

I program against Spark in Python, and the Python API does not have a
SparkConf API.

Thanks.
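
For reference, the two properties appended to spark-defaults.conf would
look roughly like this (the values here are illustrative, not the poster's
actual settings):

```
spark.kryoserializer.buffer.mb      256
spark.kryoserializer.buffer.max.mb  512
```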

2015-03-25 21:07 GMT+08:00 Cheng Lian <li...@gmail.com>:

>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
> you need to set this property in SparkConf or in spark-defaults.conf. And
> there are two configurations related to Kryo buffer size,
>
>    - spark.kryoserializer.buffer.mb, which is the initial size, and
>    - spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>
> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
>
> Cheng
>
> On 3/25/15 7:31 PM, 李铖 wrote:
>
>   Here is the full stack trace
>
>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
> Available: 0, required: 39135
>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>  at
> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745)
>
> 2015-03-25 19:05 GMT+08:00 Cheng Lian <li...@gmail.com>:
>
>>  Could you please provide the full stack trace?
>>
>>
>> On 3/25/15 6:26 PM, 李铖 wrote:
>>
>> Querying data from a small HDFS file works fine, but when the HDFS
>> file is 152 MB I get the exception below. I tried
>> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but
>> the error persists.
>>
>>  ```
>> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
>> required: 39135
>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>  at
>>
>>
>> ```
>>
>>
>>
>

Re: Spark-sql query got an exception. Help

Posted by Cheng Lian <li...@gmail.com>.
Oh, just noticed that you were calling sc.setSystemProperty. Actually
you need to set this property in SparkConf or in spark-defaults.conf. 
And there are two configurations related to Kryo buffer size,

  * spark.kryoserializer.buffer.mb, which is the initial size, and
  * spark.kryoserializer.buffer.max.mb, which is the max buffer size.

Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
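
A minimal PySpark sketch of setting both (assuming a PySpark version that
exposes SparkConf; the 256/512 values are illustrative):

```python
from pyspark import SparkConf, SparkContext

# Both properties must be set before the SparkContext is created;
# keep the max buffer at least as large as the initial size.
conf = (SparkConf()
        .set("spark.kryoserializer.buffer.mb", "256")
        .set("spark.kryoserializer.buffer.max.mb", "512"))
sc = SparkContext(conf=conf)
```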

Cheng

On 3/25/15 7:31 PM, 李铖 wrote:

> Here is the full stack trace
>
> 15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 
> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow. 
> Available: 0, required: 39135
> at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
> at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
> at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
> at 
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
> at 
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
> at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
> at 
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
> at 
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
> at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
> at 
> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> 2015-03-25 19:05 GMT+08:00 Cheng Lian <lian.cs.zju@gmail.com>:
>
>     Could you please provide the full stack trace?
>
>
>     On 3/25/15 6:26 PM, 李铖 wrote:
>>     Querying data from a small HDFS file works fine, but when the HDFS
>>     file is 152 MB I get the exception below. I tried
>>     `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`,
>>     but the error persists.
>>
>>     ```
>>     com.esotericsoftware.kryo.KryoException: Buffer overflow.
>>     Available: 0, required: 39135
>>     at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>     at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>     at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>     at
>>     com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>     at
>>
>>
>>     ```
>
>

Re: Spark-sql query got an exception. Help

Posted by 李铖 <li...@gmail.com>.
Here is the full stack trace

15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1,
cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
Available: 0, required: 39135
at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at
org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

2015-03-25 19:05 GMT+08:00 Cheng Lian <li...@gmail.com>:

>  Could you please provide the full stack trace?
>
>
> On 3/25/15 6:26 PM, 李铖 wrote:
>
> Querying data from a small HDFS file works fine, but when the HDFS file
> is 152 MB I get the exception below. I tried
> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but the
> error persists.
>
>  ```
> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
> required: 39135
>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>  at
>
>
> ```
>
>
>

Re: Spark-sql query got an exception. Help

Posted by Cheng Lian <li...@gmail.com>.
Could you please provide the full stack trace?

On 3/25/15 6:26 PM, 李铖 wrote:
> Querying data from a small HDFS file works fine, but when the HDFS file
> is 152 MB I get the exception below. I tried
> `sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')`, but the
> error persists.
>
> ```
> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 
> 0, required: 39135
> at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
> at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
> at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
> at 
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
> at
>
>
> ```