Posted to user@spark.apache.org by Peter Vandenabeele <pe...@vandenabeele.com> on 2014/11/30 13:10:06 UTC

Loading JSON Dataset fails with com.fasterxml.jackson.databind.JsonMappingException

Hi,

On spark 1.1.0 in Standalone mode, I am following


https://spark.apache.org/docs/1.1.0/sql-programming-guide.html#json-datasets

to try to load a simple test JSON file (on my local filesystem, not in
hdfs).
The file is below and was validated with jsonlint.com:

➜  tmp  cat test_4.json
{"foo":
    [{
        "bar": {
            "id": 31,
            "name": "bar"
        }
    },{
        "tux": {
            "id": 42,
            "name": "tux"
        }
    }]
}
➜  tmp  wc test_4.json
      13      19     182 test_4.json


Reading the file as text works correctly (reporting a line count of 13).

However, trying to read the file with:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
sqlContext: org.apache.spark.sql.SQLContext =
org.apache.spark.sql.SQLContext@2c4eae94

scala> val test_as_json = sqlContext.jsonFile(test_path)
...
results in this exception:

14/11/30 12:37:10 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:0+91
14/11/30 12:37:10 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:91+91
14/11/30 12:37:10 ERROR executor.Executor: Exception in task 1.0 in stage
1.0 (TID 3)
com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate
value of type [map type; class java.util.LinkedHashMap, [simple type, class
java.lang.Object] -> [simple type, class java.lang.Object]] from String
value; no single-String constructor/factory method
...

What looks strange to me is that the file of 182 characters seems to be split
over 2 workers that take chars 0+91 and 91+91. (Is that interpretation
correct? That would yield 2 half JSON files that would each be incomplete?)
I presume I am wrong here and something else is at play.
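
For what it is worth, here is a minimal sketch of how one could check which
lines end up in which split (textFile splits on byte ranges, but the
underlying Hadoop TextInputFormat should not cut a line in half, so every
line should land intact in exactly one partition). Only standard RDD methods
are used:

val withPartition = sc.textFile(test_path, 2).mapPartitionsWithIndex {
  (idx, lines) =>
    // tag every line with the index of the partition it came from
    lines.map(line => s"partition $idx: $line")
}
withPartition.collect().foreach(println)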

Full log of the experiment below.

Also, I did see this thread (regarding blank lines that trigger a similar
problem):

  http://find.searchhub.org/document/9aaf462d6bca027c#f294a1dd16169ba4

I validated that I have no blank lines in the input (line count => 13) and I
also tried the filter function suggested there (shown just below), but I
still get (presumably) the same error condition.
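
For completeness, this is the filter I used before retrying (the full
transcript is in the log at the bottom):

val filtered = test_as_textFile.filter(r => r.trim != "")
val test_as_json = sqlContext.jsonRDD(filtered)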

I also did not find immediate hints that this was a resolved issue in Spark
1.1.1

  http://spark.apache.org/releases/spark-release-1-1-1.html

Thanks for any hints on how to resolve this,

Peter

+++++++++++++++++++++++++++++++++++


$ bin/spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=128m; support was removed in 8.0
14/11/30 12:34:34 INFO spark.SecurityManager: Changing view acls to:
peter_v,
14/11/30 12:34:34 INFO spark.SecurityManager: Changing modify acls to:
peter_v,
14/11/30 12:34:34 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(peter_v, ); users with modify permissions: Set(peter_v, )
14/11/30 12:34:34 INFO spark.HttpServer: Starting HTTP Server
14/11/30 12:34:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/11/30 12:34:34 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:63500
14/11/30 12:34:34 INFO util.Utils: Successfully started service 'HTTP class
server' on port 63500.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.1.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
14/11/30 12:34:37 INFO spark.SecurityManager: Changing view acls to:
peter_v,
14/11/30 12:34:37 INFO spark.SecurityManager: Changing modify acls to:
peter_v,
14/11/30 12:34:37 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(peter_v, ); users with modify permissions: Set(peter_v, )
14/11/30 12:34:37 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/11/30 12:34:37 INFO Remoting: Starting remoting
14/11/30 12:34:37 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@192.168.0.191:63503]
14/11/30 12:34:37 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://sparkDriver@192.168.0.191:63503]
14/11/30 12:34:37 INFO util.Utils: Successfully started service
'sparkDriver' on port 63503.
14/11/30 12:34:37 INFO spark.SparkEnv: Registering MapOutputTracker
14/11/30 12:34:37 INFO spark.SparkEnv: Registering BlockManagerMaster
14/11/30 12:34:37 INFO storage.DiskBlockManager: Created local directory at
/var/folders/1q/3_rsfwqd4b93sj7m6rnbzj8h0000gn/T/spark-local-20141130123437-43b2
14/11/30 12:34:37 INFO util.Utils: Successfully started service 'Connection
manager for block manager' on port 63504.
14/11/30 12:34:37 INFO network.ConnectionManager: Bound socket to port
63504 with id = ConnectionManagerId(192.168.0.191,63504)
14/11/30 12:34:37 INFO storage.MemoryStore: MemoryStore started with
capacity 265.1 MB
14/11/30 12:34:37 INFO storage.BlockManagerMaster: Trying to register
BlockManager
14/11/30 12:34:37 INFO storage.BlockManagerMasterActor: Registering block
manager 192.168.0.191:63504 with 265.1 MB RAM
14/11/30 12:34:37 INFO storage.BlockManagerMaster: Registered BlockManager
14/11/30 12:34:37 INFO spark.HttpFileServer: HTTP File server directory is
/var/folders/1q/3_rsfwqd4b93sj7m6rnbzj8h0000gn/T/spark-80468993-950e-406c-b78a-0fb2181cc011
14/11/30 12:34:37 INFO spark.HttpServer: Starting HTTP Server
14/11/30 12:34:37 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/11/30 12:34:37 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:63505
14/11/30 12:34:37 INFO util.Utils: Successfully started service 'HTTP file
server' on port 63505.
14/11/30 12:34:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/11/30 12:34:38 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
14/11/30 12:34:38 INFO util.Utils: Successfully started service 'SparkUI'
on port 4040.
14/11/30 12:34:38 INFO ui.SparkUI: Started SparkUI at
http://192.168.0.191:4040
14/11/30 12:34:38 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
14/11/30 12:34:38 INFO executor.Executor: Using REPL class URI:
http://192.168.0.191:63500
14/11/30 12:34:38 INFO util.AkkaUtils: Connecting to HeartbeatReceiver:
akka.tcp://sparkDriver@192.168.0.191:63503/user/HeartbeatReceiver
14/11/30 12:34:38 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

scala> val test_path = "file:///Users/peter_v/tmp/test_4.json"
test_path: String = file:///Users/peter_v/tmp/test_4.json

scala> val test_as_textFile = sc.textFile(test_path)
14/11/30 12:35:24 INFO storage.MemoryStore: ensureFreeSpace(169677) called
with curMem=0, maxMem=278019440
14/11/30 12:35:24 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 165.7 KB, free 265.0 MB)
test_as_textFile: org.apache.spark.rdd.RDD[String] =
file:///Users/peter_v/tmp/test_4.json MappedRDD[1] at textFile at
<console>:14

scala> test_as_textFile.count()
14/11/30 12:35:28 INFO mapred.FileInputFormat: Total input paths to process
: 1
14/11/30 12:35:28 INFO spark.SparkContext: Starting job: count at
<console>:17
14/11/30 12:35:28 INFO scheduler.DAGScheduler: Got job 0 (count at
<console>:17) with 2 output partitions (allowLocal=false)
14/11/30 12:35:28 INFO scheduler.DAGScheduler: Final stage: Stage 0(count
at <console>:17)
14/11/30 12:35:28 INFO scheduler.DAGScheduler: Parents of final stage:
List()
14/11/30 12:35:28 INFO scheduler.DAGScheduler: Missing parents: List()
14/11/30 12:35:28 INFO scheduler.DAGScheduler: Submitting Stage 0
(file:///Users/peter_v/tmp/test_4.json MappedRDD[1] at textFile at
<console>:14), which has no missing parents
14/11/30 12:35:29 INFO storage.MemoryStore: ensureFreeSpace(2408) called
with curMem=169677, maxMem=278019440
14/11/30 12:35:29 INFO storage.MemoryStore: Block broadcast_1 stored as
values in memory (estimated size 2.4 KB, free 265.0 MB)
14/11/30 12:35:29 INFO scheduler.DAGScheduler: Submitting 2 missing tasks
from Stage 0 (file:///Users/peter_v/tmp/test_4.json MappedRDD[1] at
textFile at <console>:14)
14/11/30 12:35:29 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0
with 2 tasks
14/11/30 12:35:29 INFO scheduler.TaskSetManager: Starting task 0.0 in stage
0.0 (TID 0, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:35:29 INFO scheduler.TaskSetManager: Starting task 1.0 in stage
0.0 (TID 1, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:35:29 INFO executor.Executor: Running task 1.0 in stage 0.0
(TID 1)
14/11/30 12:35:29 INFO executor.Executor: Running task 0.0 in stage 0.0
(TID 0)
14/11/30 12:35:29 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:0+91
14/11/30 12:35:29 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:91+91
14/11/30 12:35:29 INFO Configuration.deprecation: mapred.tip.id is
deprecated. Instead, use mapreduce.task.id
14/11/30 12:35:29 INFO Configuration.deprecation: mapred.task.id is
deprecated. Instead, use mapreduce.task.attempt.id
14/11/30 12:35:29 INFO Configuration.deprecation: mapred.task.is.map is
deprecated. Instead, use mapreduce.task.ismap
14/11/30 12:35:29 INFO Configuration.deprecation: mapred.task.partition is
deprecated. Instead, use mapreduce.task.partition
14/11/30 12:35:29 INFO Configuration.deprecation: mapred.job.id is
deprecated. Instead, use mapreduce.job.id
14/11/30 12:35:29 INFO executor.Executor: Finished task 1.0 in stage 0.0
(TID 1). 1731 bytes result sent to driver
14/11/30 12:35:29 INFO executor.Executor: Finished task 0.0 in stage 0.0
(TID 0). 1731 bytes result sent to driver
14/11/30 12:35:29 INFO scheduler.TaskSetManager: Finished task 1.0 in stage
0.0 (TID 1) in 113 ms on localhost (1/2)
14/11/30 12:35:29 INFO scheduler.TaskSetManager: Finished task 0.0 in stage
0.0 (TID 0) in 122 ms on localhost (2/2)
14/11/30 12:35:29 INFO scheduler.DAGScheduler: Stage 0 (count at
<console>:17) finished in 0.129 s
14/11/30 12:35:29 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0,
whose tasks have all completed, from pool
14/11/30 12:35:29 INFO spark.SparkContext: Job finished: count at
<console>:17, took 0.218587372 s
res0: Long = 13

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
sqlContext: org.apache.spark.sql.SQLContext =
org.apache.spark.sql.SQLContext@2c4eae94

scala> val test_as_json = sqlContext.jsonFile(test_path)
14/11/30 12:37:10 INFO storage.MemoryStore: ensureFreeSpace(77323) called
with curMem=172085, maxMem=278019440
14/11/30 12:37:10 INFO storage.MemoryStore: Block broadcast_2 stored as
values in memory (estimated size 75.5 KB, free 264.9 MB)
14/11/30 12:37:10 INFO mapred.FileInputFormat: Total input paths to process
: 1
14/11/30 12:37:10 INFO spark.SparkContext: Starting job: reduce at
JsonRDD.scala:46
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Got job 1 (reduce at
JsonRDD.scala:46) with 2 output partitions (allowLocal=false)
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Final stage: Stage 1(reduce
at JsonRDD.scala:46)
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Parents of final stage:
List()
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Missing parents: List()
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Submitting Stage 1
(MappedRDD[5] at map at JsonRDD.scala:46), which has no missing parents
14/11/30 12:37:10 INFO storage.MemoryStore: ensureFreeSpace(3000) called
with curMem=249408, maxMem=278019440
14/11/30 12:37:10 INFO storage.MemoryStore: Block broadcast_3 stored as
values in memory (estimated size 2.9 KB, free 264.9 MB)
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Submitting 2 missing tasks
from Stage 1 (MappedRDD[5] at map at JsonRDD.scala:46)
14/11/30 12:37:10 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0
with 2 tasks
14/11/30 12:37:10 INFO scheduler.TaskSetManager: Starting task 0.0 in stage
1.0 (TID 2, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:37:10 INFO scheduler.TaskSetManager: Starting task 1.0 in stage
1.0 (TID 3, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:37:10 INFO executor.Executor: Running task 0.0 in stage 1.0
(TID 2)
14/11/30 12:37:10 INFO executor.Executor: Running task 1.0 in stage 1.0
(TID 3)
14/11/30 12:37:10 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:0+91
14/11/30 12:37:10 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:91+91
14/11/30 12:37:10 ERROR executor.Executor: Exception in task 1.0 in stage
1.0 (TID 3)
com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate
value of type [map type; class java.util.LinkedHashMap, [simple type, class
java.lang.Object] -> [simple type, class java.lang.Object]] from String
value; no single-String constructor/factory method
    at
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)
    at
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)
    at
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)
    at
com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at
scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
    at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
14/11/30 12:37:10 ERROR executor.Executor: Exception in task 0.0 in stage
1.0 (TID 2)
com.fasterxml.jackson.core.JsonParseException: Unexpected end-of-input
within/between OBJECT entries
 at [Source: java.io.StringReader@4e6d0c95; line: 1, column: 15]
    at
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524)
    at
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWS(ReaderBasedJsonParser.java:1682)
    at
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:619)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(MapDeserializer.java:412)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:312)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)
    at
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)
    at
com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at
scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
    at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
14/11/30 12:37:10 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0
(TID 3, localhost): com.fasterxml.jackson.databind.JsonMappingException:
Can not instantiate value of type [map type; class java.util.LinkedHashMap,
[simple type, class java.lang.Object] -> [simple type, class
java.lang.Object]] from String value; no single-String constructor/factory
method

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
14/11/30 12:37:10 ERROR scheduler.TaskSetManager: Task 1 in stage 1.0
failed 1 times; aborting job
14/11/30 12:37:10 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0,
whose tasks have all completed, from pool
14/11/30 12:37:10 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0
(TID 2, localhost): com.fasterxml.jackson.core.JsonParseException:
Unexpected end-of-input within/between OBJECT entries
 at [Source: java.io.StringReader@4e6d0c95; line: 1, column: 15]

com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524)

com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWS(ReaderBasedJsonParser.java:1682)

com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:619)

com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(MapDeserializer.java:412)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:312)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
14/11/30 12:37:10 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0,
whose tasks have all completed, from pool
14/11/30 12:37:10 INFO scheduler.TaskSchedulerImpl: Cancelling stage 1
14/11/30 12:37:10 INFO scheduler.DAGScheduler: Failed to run reduce at
JsonRDD.scala:46
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1
in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage
1.0 (TID 3, localhost):
com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate
value of type [map type; class java.util.LinkedHashMap, [simple type, class
java.lang.Object] -> [simple type, class java.lang.Object]] from String
value; no single-String constructor/factory method

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org
$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


scala>
...
[testing with the filter for blank lines, as suggested in
http://find.searchhub.org/document/9aaf462d6bca027c#f294a1dd16169ba4]

scala> val filtered = test_as_textFile.filter(r => r.trim != "")
filtered: org.apache.spark.rdd.RDD[String] = FilteredRDD[18] at filter at
<console>:16

scala> filtered
res4: org.apache.spark.rdd.RDD[String] = FilteredRDD[18] at filter at
<console>:16

scala> val test_as_json = sqlContext.jsonRDD(filtered)
14/11/30 12:47:03 INFO spark.SparkContext: Starting job: reduce at
JsonRDD.scala:46
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Got job 5 (reduce at
JsonRDD.scala:46) with 2 output partitions (allowLocal=false)
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Final stage: Stage 5(reduce
at JsonRDD.scala:46)
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Parents of final stage:
List()
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Missing parents: List()
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Submitting Stage 5
(MappedRDD[20] at map at JsonRDD.scala:46), which has no missing parents
14/11/30 12:47:03 INFO storage.MemoryStore: ensureFreeSpace(3208) called
with curMem=250000, maxMem=278019440
14/11/30 12:47:03 INFO storage.MemoryStore: Block broadcast_10 stored as
values in memory (estimated size 3.1 KB, free 264.9 MB)
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Submitting 2 missing tasks
from Stage 5 (MappedRDD[20] at map at JsonRDD.scala:46)
14/11/30 12:47:03 INFO scheduler.TaskSchedulerImpl: Adding task set 5.0
with 2 tasks
14/11/30 12:47:03 INFO scheduler.TaskSetManager: Starting task 0.0 in stage
5.0 (TID 10, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:47:03 INFO scheduler.TaskSetManager: Starting task 1.0 in stage
5.0 (TID 11, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:47:03 INFO executor.Executor: Running task 0.0 in stage 5.0
(TID 10)
14/11/30 12:47:03 INFO executor.Executor: Running task 1.0 in stage 5.0
(TID 11)
14/11/30 12:47:03 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:0+91
14/11/30 12:47:03 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:91+91
14/11/30 12:47:03 ERROR executor.Executor: Exception in task 1.0 in stage
5.0 (TID 11)
com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate
value of type [map type; class java.util.LinkedHashMap, [simple type, class
java.lang.Object] -> [simple type, class java.lang.Object]] from String
value; no single-String constructor/factory method
    at
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)
    at
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)
    at
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)
    at
com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at
scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
    at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
14/11/30 12:47:03 ERROR executor.Executor: Exception in task 0.0 in stage
5.0 (TID 10)
com.fasterxml.jackson.core.JsonParseException: Unexpected end-of-input
within/between OBJECT entries
 at [Source: java.io.StringReader@23acbe27; line: 1, column: 15]
    at
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524)
    at
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWS(ReaderBasedJsonParser.java:1682)
    at
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:619)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(MapDeserializer.java:412)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:312)
    at
com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)
    at
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)
    at
com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)
    at
org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at
scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
    at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
    at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at
org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
14/11/30 12:47:03 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 5.0
(TID 11, localhost): com.fasterxml.jackson.databind.JsonMappingException:
Can not instantiate value of type [map type; class java.util.LinkedHashMap,
[simple type, class java.lang.Object] -> [simple type, class
java.lang.Object]] from String value; no single-String constructor/factory
method

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
14/11/30 12:47:03 ERROR scheduler.TaskSetManager: Task 1 in stage 5.0
failed 1 times; aborting job
14/11/30 12:47:03 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 5.0,
whose tasks have all completed, from pool
14/11/30 12:47:03 INFO scheduler.TaskSchedulerImpl: Cancelling stage 5
14/11/30 12:47:03 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 5.0
(TID 10, localhost): com.fasterxml.jackson.core.JsonParseException:
Unexpected end-of-input within/between OBJECT entries
 at [Source: java.io.StringReader@23acbe27; line: 1, column: 15]

com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524)

com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWS(ReaderBasedJsonParser.java:1682)

com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:619)

com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(MapDeserializer.java:412)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:312)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
14/11/30 12:47:03 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 5.0,
whose tasks have all completed, from pool
14/11/30 12:47:03 INFO scheduler.DAGScheduler: Failed to run reduce at
JsonRDD.scala:46
14/11/30 12:47:03 INFO storage.BlockManager: Removing broadcast 10
14/11/30 12:47:03 INFO storage.BlockManager: Removing block broadcast_10
14/11/30 12:47:03 INFO storage.MemoryStore: Block broadcast_10 of size 3208
dropped from memory (free 277769440)
14/11/30 12:47:03 INFO spark.ContextCleaner: Cleaned broadcast 10
14/11/30 12:47:03 INFO storage.BlockManager: Removing broadcast 9
14/11/30 12:47:03 INFO storage.BlockManager: Removing block broadcast_9
14/11/30 12:47:03 INFO storage.MemoryStore: Block broadcast_9 of size 3000
dropped from memory (free 277772440)
14/11/30 12:47:03 INFO spark.ContextCleaner: Cleaned broadcast 9
14/11/30 12:47:03 INFO storage.BlockManager: Removing broadcast 8
14/11/30 12:47:03 INFO storage.BlockManager: Removing block broadcast_8
14/11/30 12:47:03 INFO storage.MemoryStore: Block broadcast_8 of size 77323
dropped from memory (free 277849763)
14/11/30 12:47:03 INFO spark.ContextCleaner: Cleaned broadcast 8
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1
in stage 5.0 failed 1 times, most recent failure: Lost task 1.0 in stage
5.0 (TID 11, localhost):
com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate
value of type [map type; class java.util.LinkedHashMap, [simple type, class
java.lang.Object] -> [simple type, class java.lang.Object]] from String
value; no single-String constructor/factory method

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator._createFromStringFallbacks(StdValueInstantiator.java:428)

com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromString(StdValueInstantiator.java:299)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:306)

com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:26)

com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)

com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2098)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:275)

org.apache.spark.sql.json.JsonRDD$$anonfun$parseJson$1$$anonfun$apply$5.apply(JsonRDD.scala:274)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
        org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)

org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)

org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org
$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


scala>
...
scala> filtered.count()
14/11/30 12:48:20 INFO spark.SparkContext: Starting job: count at
<console>:19
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Got job 6 (count at
<console>:19) with 2 output partitions (allowLocal=false)
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Final stage: Stage 6(count
at <console>:19)
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Parents of final stage:
List()
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Missing parents: List()
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Submitting Stage 6
(FilteredRDD[18] at filter at <console>:16), which has no missing parents
14/11/30 12:48:20 INFO storage.MemoryStore: ensureFreeSpace(2616) called
with curMem=169677, maxMem=278019440
14/11/30 12:48:20 INFO storage.MemoryStore: Block broadcast_11 stored as
values in memory (estimated size 2.6 KB, free 265.0 MB)
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Submitting 2 missing tasks
from Stage 6 (FilteredRDD[18] at filter at <console>:16)
14/11/30 12:48:20 INFO scheduler.TaskSchedulerImpl: Adding task set 6.0
with 2 tasks
14/11/30 12:48:20 INFO scheduler.TaskSetManager: Starting task 0.0 in stage
6.0 (TID 12, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:48:20 INFO scheduler.TaskSetManager: Starting task 1.0 in stage
6.0 (TID 13, localhost, PROCESS_LOCAL, 1191 bytes)
14/11/30 12:48:20 INFO executor.Executor: Running task 0.0 in stage 6.0
(TID 12)
14/11/30 12:48:20 INFO executor.Executor: Running task 1.0 in stage 6.0
(TID 13)
14/11/30 12:48:20 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:0+91
14/11/30 12:48:20 INFO rdd.HadoopRDD: Input split:
file:/Users/peter_v/tmp/test_4.json:91+91
14/11/30 12:48:20 INFO executor.Executor: Finished task 0.0 in stage 6.0
(TID 12). 1731 bytes result sent to driver
14/11/30 12:48:20 INFO executor.Executor: Finished task 1.0 in stage 6.0
(TID 13). 1731 bytes result sent to driver
14/11/30 12:48:20 INFO scheduler.TaskSetManager: Finished task 1.0 in stage
6.0 (TID 13) in 7 ms on localhost (1/2)
14/11/30 12:48:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage
6.0 (TID 12) in 8 ms on localhost (2/2)
14/11/30 12:48:20 INFO scheduler.DAGScheduler: Stage 6 (count at
<console>:19) finished in 0.009 s
14/11/30 12:48:20 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 6.0,
whose tasks have all completed, from pool
14/11/30 12:48:20 INFO spark.SparkContext: Job finished: count at
<console>:19, took 0.012281729 s
res9: Long = 13


-- 
Peter Vandenabeele
http://www.allthingsdata.io

Re: Loading JSON Dataset fails with com.fasterxml.jackson.databind.JsonMappingException

Posted by Peter Vandenabeele <pe...@vandenabeele.com>.
On Sun, Nov 30, 2014 at 1:10 PM, Peter Vandenabeele <pe...@vandenabeele.com>
wrote:

> On spark 1.1.0 in Standalone mode, I am following
>
>
> https://spark.apache.org/docs/1.1.0/sql-programming-guide.html#json-datasets
>
> to try to load a simple test JSON file (on my local filesystem, not in
> hdfs).
> The file is below and was validated with jsonlint.com:
>
> ➜  tmp  cat test_4.json
> {"foo":
>     [{
>         "bar": {
>             "id": 31,
>             "name": "bar"
>         }
>     },{
>         "tux": {
>             "id": 42,
>             "name": "tux"
>         }
>     }]
> }
> ➜  tmp  wc test_4.json
>       13      19     182 test_4.json
>
>
I should have read the manual better (#rtfm):

  "jsonFile - loads data from a directory of JSON files where _each line of
the files is a JSON object_."  (emphasis mine)

So, what works is:

$ cat test_6.json   # ==> Success  (count() => 3)
{"foo":"bar"}
{"foo":"tux"}
{"foo":"ping"}

and what fails is:

$   cat test_7.json   # ==> Fail (JsonMappingException)
[
  {"foo":"bar"},
  {"foo":"tux"},
  {"foo":"ping"}
]


I got confused by the fact that test_6.json is _not_ valid JSON (but works
for this) and test_7.json is a _valid_ JSON array (and does not work for
this).
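
For a file like test_7.json (a single top-level JSON array spread over
multiple lines), a possible workaround sketch is to read the whole file with
wholeTextFiles, flatten the array into one compact JSON object per element,
and feed that to jsonRDD. This assumes each file comfortably fits in the
memory of a single task, and that test_7.json also lives under ~/tmp; the
mapper calls are plain Jackson, which Spark already ships:

import com.fasterxml.jackson.databind.ObjectMapper
import scala.collection.JavaConverters._

val perObject = sc.wholeTextFiles("file:///Users/peter_v/tmp/test_7.json")
  .flatMap { case (_, content) =>
    // parse the complete file and emit every array element as a
    // compact single-line JSON string
    val mapper = new ObjectMapper()
    mapper.readTree(content).elements().asScala.map(_.toString)
  }

val test_7_as_json = sqlContext.jsonRDD(perObject)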


I will see if I can contribute some note in the documentation.

In any case, it might be better not to name the file in the tutorial

examples/src/main/resources/people.json

because it is actually not a valid JSON file (but a file with a JSON object
on each line). Maybe

examples/src/main/resources/people.jsons

is a better name (analogous to the 'xs' pluralisation convention in Scala).

Also, just showing an example of people.jsons could avoid future confusion.
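
For reference, such a file simply has one self-contained JSON object per
line; from memory, the current people.json looks roughly like this (treat it
as illustrative rather than authoritative):

{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}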

Thanks,

Peter

-- 
Peter Vandenabeele
http://www.allthingsdata.io