Posted to user@spark.apache.org by sachin Singh <sa...@gmail.com> on 2014/12/30 10:43:42 UTC

Spark SQL implementation error

I have a table (a CSV file); I loaded its data by creating a POJO matching the
table structure, and created a SchemaRDD as follows:
JavaRDD<Test1> testSchema =
    sc.textFile("D:/testTable.csv").map(GetTableData); /* GetTableData
    transforms each line of the table data into a Test1 object */
JavaSchemaRDD schemaTest = sqlContext.applySchema(testSchema, Test1.class);
schemaTest.registerTempTable("testTable");

JavaSchemaRDD sqlQuery = sqlContext.sql("SELECT * FROM testTable");
List<String> totDuration = sqlQuery.map(new Function<Row, String>() {
    public String call(Row row) {
        return "Field1 is: " + row.getInt(0);
    }
}).collect();
This works fine, but if I change the query to the following (the rest of the
code is the same):
JavaSchemaRDD sqlQuery =
    sqlContext.sql("SELECT sum(field1) FROM testTable GROUP BY field2");
I get this error:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.spark.rdd.ShuffledRDD.<init>(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/Partitioner;)V

Please help and suggest a fix.
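(For reference, the grouped query is computing something like the following
plain-Java sketch; the record type and the field names field1/field2 are
hypothetical stand-ins for the actual POJO fields:)

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupBySumSketch {
    // Hypothetical stand-in for the Test1 POJO: field2 is the group key,
    // field1 is the value being summed.
    record Rec(String field2, int field1) {}

    // Plain-Java equivalent of: SELECT sum(field1) FROM testTable GROUP BY field2
    static Map<String, Integer> sumByGroup(List<Rec> rows) {
        Map<String, Integer> sums = new LinkedHashMap<>();
        for (Rec r : rows) {
            // Accumulate the sum for this row's group key.
            sums.merge(r.field2(), r.field1(), Integer::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        List<Rec> rows = List.of(new Rec("a", 1), new Rec("a", 2), new Rec("b", 5));
        System.out.println(sumByGroup(rows)); // {a=3, b=5}
    }
}
```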



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-implementation-error-tp20901.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Spark SQL implementation error

Posted by Michael Armbrust <mi...@databricks.com>.
Anytime you see "java.lang.NoSuchMethodError" it means that you have
multiple conflicting versions of a library on the classpath, or you are
trying to run code that was compiled against the wrong version of a library.
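One quick way to diagnose this is to ask the class loader which classpath
entry a class was actually loaded from. A minimal sketch (the class queried in
main is just an illustration; in the real application you would pass
org.apache.spark.rdd.ShuffledRDD.class to see which Spark jar wins):

```java
public class WhereLoadedFrom {
    // Returns the URL of the classpath entry a class was loaded from,
    // or a marker string for bootstrap/JDK classes.
    static String locate(Class<?> cls) {
        java.net.URL url =
            cls.getResource("/" + cls.getName().replace('.', '/') + ".class");
        return url == null ? "(bootstrap classpath)" : url.toString();
    }

    public static void main(String[] args) {
        // Substitute org.apache.spark.rdd.ShuffledRDD.class here to check
        // which Spark jar is on the classpath at runtime.
        System.out.println(locate(WhereLoadedFrom.class));
    }
}
```

On the build side, `mvn dependency:tree` (or the equivalent dependency-graph
plugin for sbt) shows where conflicting Spark versions enter the build.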

On Tue, Dec 30, 2014 at 1:43 AM, sachin Singh <sa...@gmail.com>
wrote:

> I have a table(csv file) loaded data on that by creating POJO as per table
> structure,and created SchemaRDD as under
> JavaRDD<Test1> testSchema =
> sc.textFile("D:/testTable.csv").map(GetTableData);/* GetTableData will
> transform the all table data in testTable object*/
> JavaSchemaRDD schemaTest = sqlContext.applySchema(testSchema, Test.class);
>                 schemaTest.registerTempTable("testTable");
>
> JavaSchemaRDD sqlQuery = sqlContext.sql("SELECT * FROM testTable");
> List<String> totDuration = sqlQuery.map(new Function<Row, String>() {
>                   public String call(Row row) {
>                     return "Field1is : " + row.getInt(0);
>                   }
>                 }).collect();
> its working fine
> but.........
> if I am changing query as(rest code is same)-  JavaSchemaRDD sqlQuery =
> sqlContext.sql("SELECT sum(field1) FROM testTable group by field2");
> error as - Exception in thread "main" java.lang.NoSuchMethodError:
>
> org.apache.spark.rdd.ShuffledRDD.<init>(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/Partitioner;)V
>
> Please help and Suggest
>

Re: Spark SQL implementation error

Posted by Pankaj Narang <pa...@gmail.com>.
As discussed on our call, here is how you can fetch the count:

val tweetsCount = sql("SELECT COUNT(*) FROM tweets")
println(f"\n\n\nThere are ${tweetsCount.collect.head.getLong(0)} Tweets on this Dataset\n\n")




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-implementation-error-tp20901p21008.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
