Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/04/07 03:25:12 UTC

[jira] [Commented] (SPARK-6721) IllegalStateException

    [ https://issues.apache.org/jira/browse/SPARK-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482366#comment-14482366 ] 

Sean Owen commented on SPARK-6721:
----------------------------------

Isn't this an error / config problem in Mongo rather than Spark?

> IllegalStateException
> ---------------------
>
>                 Key: SPARK-6721
>                 URL: https://issues.apache.org/jira/browse/SPARK-6721
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API
>    Affects Versions: 1.2.0, 1.2.1, 1.3.0
>         Environment: Ubuntu 14.04, Java 8, MongoDB 3.0, Spark 1.3
>            Reporter: Luis Rodríguez Trejo
>              Labels: MongoDB, java.lang.IllegalStateException, saveAsNewAPIHadoopFile
>
> I get the following exception when using saveAsNewAPIHadoopFile:
> {code}
> 15/03/23 17:05:34 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 4, 10.0.2.15): java.lang.IllegalStateException: open
> at org.bson.util.Assertions.isTrue(Assertions.java:36)
> at com.mongodb.DBTCPConnector.getPrimaryPort(DBTCPConnector.java:406)
> at com.mongodb.DBCollectionImpl.insert(DBCollectionImpl.java:184)
> at com.mongodb.DBCollectionImpl.insert(DBCollectionImpl.java:167)
> at com.mongodb.DBCollection.insert(DBCollection.java:161)
> at com.mongodb.DBCollection.insert(DBCollection.java:107)
> at com.mongodb.DBCollection.save(DBCollection.java:1049)
> at com.mongodb.DBCollection.save(DBCollection.java:1014)
> at com.mongodb.hadoop.output.MongoRecordWriter.write(MongoRecordWriter.java:105)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$12.apply(PairRDDFunctions.scala:1000)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$12.apply(PairRDDFunctions.scala:979)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Before Spark 1.3.0 this would cause the application to crash; since 1.3.0 the task simply fails and the data remains unprocessed.
> There is no explicit close() call anywhere in the application code.
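> For context, a minimal sketch of how such a write is typically wired up with the mongo-hadoop connector (the URI, collection name, and sample data below are placeholders, not the actual job):
> {code}
> import java.util.Arrays;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaPairRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.bson.BSONObject;
> import org.bson.BasicBSONObject;
>
> import com.mongodb.hadoop.MongoOutputFormat;
>
> import scala.Tuple2;
>
> public class MongoWriteSketch {
>     public static void main(String[] args) {
>         JavaSparkContext sc = new JavaSparkContext(
>                 new SparkConf().setAppName("mongo-write-sketch"));
>
>         // mongo-hadoop picks up the target database/collection from this Hadoop property
>         Configuration outputConfig = new Configuration();
>         outputConfig.set("mongo.output.uri",
>                 "mongodb://127.0.0.1:27017/test.output"); // placeholder URI
>
>         // Build a (key, BSONObject) pair RDD; each value becomes one document
>         JavaPairRDD<Object, BSONObject> docs = sc
>                 .parallelize(Arrays.asList("a", "b", "c"))
>                 .mapToPair(s -> {
>                     BSONObject doc = new BasicBSONObject();
>                     doc.put("value", s);
>                     return new Tuple2<Object, BSONObject>(null, doc);
>                 });
>
>         // MongoOutputFormat ignores the output path, but the argument must be non-empty
>         docs.saveAsNewAPIHadoopFile(
>                 "file:///tmp/unused",
>                 Object.class,
>                 BSONObject.class,
>                 MongoOutputFormat.class,
>                 outputConfig);
>
>         sc.stop();
>     }
> }
> {code}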



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org