Posted to issues@spark.apache.org by "Pulkit Bhuwalka (JIRA)" <ji...@apache.org> on 2014/09/29 09:27:34 UTC
[jira] [Comment Edited] (SPARK-3274) Spark Streaming Java API reports java.lang.ClassCastException when calling collectAsMap on JavaPairDStream
[ https://issues.apache.org/jira/browse/SPARK-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151447#comment-14151447 ]
Pulkit Bhuwalka edited comment on SPARK-3274 at 9/29/14 7:27 AM:
-----------------------------------------------------------------
[~sowen] - you are right. I was making the mistake of reading the sequence file as String instead of Text. Adding toString fixed the problem. Thanks a lot for your help.
was (Author: pulkit.bosco05@gmail.com):
[~sowen] - you are right. I was making the mistake of reading the sequence file as String instead of text. Addind toString fixed the problem. Thanks a lot for your help.
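
For anyone hitting the same symptom, the fix described above (converting the Hadoop Writable with toString() instead of casting it to String) can be sketched as follows. Note this is an illustrative sketch, not the reporter's actual code: the Text class below is a minimal stand-in for org.apache.hadoop.io.Text so the snippet runs without Hadoop on the classpath, and the record value is made up.

```java
// Minimal stand-in for org.apache.hadoop.io.Text, used only so this
// example compiles and runs without Hadoop on the classpath.
class Text {
    private final String value;
    Text(String value) { this.value = value; }
    @Override public String toString() { return value; }
}

class SequenceFileFix {
    public static void main(String[] args) {
        // A record read from a sequence file arrives as a Writable
        // (here: Text), not as a java.lang.String.
        Object key = new Text("1");

        // Wrong: the runtime type is Text, so a cast to String fails.
        try {
            String bad = (String) key;
            System.out.println(bad);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }

        // Right: convert explicitly via toString(), as in the fix above.
        String good = key.toString();
        System.out.println(good); // prints "1"
    }
}
```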
> Spark Streaming Java API reports java.lang.ClassCastException when calling collectAsMap on JavaPairDStream
> ----------------------------------------------------------------------------------------------------------
>
> Key: SPARK-3274
> URL: https://issues.apache.org/jira/browse/SPARK-3274
> Project: Spark
> Issue Type: Bug
> Components: Java API
> Affects Versions: 1.0.2
> Reporter: Jack Hu
>
> Reproduce code:
> scontext
>   .socketTextStream("localhost", 18888)
>   .mapToPair(new PairFunction<String, String, String>() {
>     public Tuple2<String, String> call(String arg0) throws Exception {
>       return new Tuple2<String, String>("1", arg0);
>     }
>   })
>   .foreachRDD(new Function2<JavaPairRDD<String, String>, Time, Void>() {
>     public Void call(JavaPairRDD<String, String> v1, Time v2) throws Exception {
>       System.out.println(v2.toString() + ": " + v1.collectAsMap().toString());
>       return null;
>     }
>   });
> Exception:
> java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Lscala.Tuple2;
>         at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:447)
>         at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:464)
>         at tuk.usecase.failedcall.FailedCall$1.call(FailedCall.java:90)
>         at tuk.usecase.failedcall.FailedCall$1.call(FailedCall.java:88)
>         at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:282)
>         at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:282)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:41)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
>         at scala.util.Try$.apply(Try.scala:161)
>         at org.apache.spark.streaming.scheduler.Job.run(Job.scala:32)
>         at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobS
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)