Posted to issues@spark.apache.org by "Venkata Ramana G (JIRA)" <ji...@apache.org> on 2014/10/19 07:30:33 UTC

[jira] [Commented] (SPARK-3815) LPAD function does not work in where predicate

    [ https://issues.apache.org/jira/browse/SPARK-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176226#comment-14176226 ] 

Venkata Ramana G commented on SPARK-3815:
-----------------------------------------

I found this working fine on the latest release. [~yanakad], can you please re-verify? Thanks.
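
For reference, a minimal re-verification sketch (assuming a Hive-enabled Spark build and an existing table mytable with a string column pkey, as in the report; the report hit the error through the Thrift server, but running the query through HiveContext directly should exercise the same query plan):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    // Build a HiveContext and run the originally failing query directly.
    val sc = new SparkContext(new SparkConf().setAppName("SPARK-3815-recheck"))
    val hiveContext = new HiveContext(sc)
    hiveContext.sql(
      "select customer_id from mytable " +
      "where pkey = concat_ws('-', LPAD('077', 4, '0'), '2014-07') LIMIT 2"
    ).collect().foreach(println)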

> LPAD function does not work in where predicate
> ----------------------------------------------
>
>                 Key: SPARK-3815
>                 URL: https://issues.apache.org/jira/browse/SPARK-3815
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Yana Kadiyska
>            Priority: Minor
>
> select customer_id from mytable where pkey=concat_ws('-',LPAD('077',4,'0'),'2014-07') LIMIT 2
> produces:
> 14/10/03 14:51:35 ERROR server.SparkSQLOperationManager: Error executing query:
> org.apache.spark.SparkException: Task not serializable
>         at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
>         at org.apache.spark.SparkContext.clean(SparkContext.scala:1242)
>         at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:597)
>         at org.apache.spark.sql.execution.Limit.execute(basicOperators.scala:146)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
>         at org.apache.spark.sql.hive.thriftserver.server.SparkSQLOperationManager$$anon$1.run(SparkSQLOperationManager.scala:185)
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:193)
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatement(HiveSessionImpl.java:175)
>         at org.apache.hive.service.cli.CLIService.executeStatement(CLIService.java:150)
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:207)
>         at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1133)
>         at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1118)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at org.apache.hive.service.auth.TUGIContainingProcessor$1.run(TUGIContainingProcessor.java:58)
>         at org.apache.hive.service.auth.TUGIContainingProcessor$1.run(TUGIContainingProcessor.java:55)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:526)
>         at org.apache.hive.service.auth.TUGIContainingProcessor.process(TUGIContainingProcessor.java:55)
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.NotSerializableException: java.lang.reflect.Constructor
>         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
>         at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
>         at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
>         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
>         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
>         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1377)
>         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1173)
>         at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
>         at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
>         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
>         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
>         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
>         at scala.collection.immutable.$colon$colon.writeObject(List.scala:379)
>         at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> The following queries work fine:
> select concat_ws('-', LPAD(cast(112717 % 1024 AS STRING),4,'0'),'2014-07') from mytable where pkey='0077-2014-07' LIMIT 2
> select customer_id from mytable  where pkey=concat_ws('-','0077','2014-07') LIMIT 2
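
The "Caused by: java.io.NotSerializableException: java.lang.reflect.Constructor" line suggests that evaluating the UDF expression inside the predicate pulls a non-serializable object into the task closure. Until that is confirmed fixed, a workaround in the spirit of the reporter's queries that do work is to precompute the key on the client so the predicate becomes a plain string literal (a hypothetical sketch, reusing the hiveContext from above):

    // "0077-2014-07" -- the same value that
    // concat_ws('-', LPAD('077', 4, '0'), '2014-07') would produce.
    val key = f"${77}%04d" + "-2014-07"
    hiveContext.sql(
      s"select customer_id from mytable where pkey = '$key' LIMIT 2"
    ).collect().foreach(println)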


