Posted to user@spark.apache.org by Chirag Aggarwal <Ch...@guavus.com> on 2014/11/03 06:00:42 UTC

Re: Does SparkSQL work with custom defined SerDe?

Did https://issues.apache.org/jira/browse/SPARK-3807 fix the issue you were seeing?
If yes, then please note that it will be part of 1.1.1 and 1.2.

Chirag

From: Chen Song <ch...@gmail.com>
Date: Wednesday, 15 October 2014 4:03 AM
To: "user@spark.apache.org<ma...@spark.apache.org>" <us...@spark.apache.org>>
Subject: Re: Does SparkSQL work with custom defined SerDe?

Looks like it may be related to https://issues.apache.org/jira/browse/SPARK-3807.

I will build from branch 1.1 to see if the issue is resolved.

Chen

On Tue, Oct 14, 2014 at 10:33 AM, Chen Song <ch...@gmail.com> wrote:
Sorry for bringing this up again, as I have no clue what could have caused this.

I turned on DEBUG logging and did see that the jar containing the SerDe class was scanned.

More interestingly, I saw the same exception (org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes) when running a simple select on a valid column name and on a malformed column name. Since a valid column should otherwise resolve, this led me to suspect that something breaks syntactically before column resolution even happens.

select [valid_column] from table limit 5;
select [malformed_typo_column] from table limit 5;
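
As a control, the same kind of select against a table that does not use the custom SerDe should resolve normally if the SerDe is what trips the analyzer. A minimal sketch; the table and column names here are made up:

select valid_column from plain_text_table limit 5;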


On Mon, Oct 13, 2014 at 6:04 PM, Chen Song <ch...@gmail.com> wrote:
In Hive, the table was created with a custom SerDe, in the following way.

row format serde "abc.ProtobufSerDe"
with serdeproperties ("serialization.class"="abc.protobuf.generated.LogA$log_a")
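
For reference, the full DDL would have looked roughly like the sketch below. Only the two serde clauses above come from the actual table; the column list, the partition column (guessed from the dh filter in the plan tree further down), the storage format, and the location are placeholders:

create external table log_a (
  user string
  -- remaining columns are defined by the protobuf serialization class
)
partitioned by (dh string)
row format serde "abc.ProtobufSerDe"
with serdeproperties ("serialization.class"="abc.protobuf.generated.LogA$log_a")
stored as sequencefile
location "/path/to/log_a";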

When I start the spark-sql shell, I always get the following exception, even for a simple query.

select user from log_a limit 25;

I can desc the table without any problem, but when I explain the query, I get the same exception.
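
Concretely, the behaviour splits like this:

desc log_a;                               -- works fine
explain select user from log_a limit 25;  -- throws the TreeNodeException below
select user from log_a limit 25;          -- throws the TreeNodeException below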


14/10/13 22:01:13 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
Exception in thread "Driver" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:162)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes: 'user, tree:
Project ['user]
 Filter (dh#4 = 2014-10-13 05)
  LowerCaseSchema
   MetastoreRelation test, log_a, None

        at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:72)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:165)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:156)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:70)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:68)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
        at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
        at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
        at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:34)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
        at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:397)
        at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:397)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.optimizedPlan$lzycompute(HiveContext.scala:358)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.optimizedPlan(HiveContext.scala:357)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
        at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:438)
        at com.appnexus.data.spark.sql.Test$.main(Test.scala:23)
        at com.appnexus.data.spark.sql.Test.main(Test.scala)
        ... 5 more


--
Chen Song