Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2017/01/25 17:39:26 UTC

[jira] [Commented] (SPARK-19363) order by cannot be parsed when group by is missing

    [ https://issues.apache.org/jira/browse/SPARK-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838222#comment-15838222 ] 

Dongjoon Hyun commented on SPARK-19363:
---------------------------------------

Hi, [~me2stk].

It works for me. Could you provide a more specific, reproducible example?

{code}
scala> spark.version
res0: String = 2.0.2

scala> sql("select a, p from tbl_x order by a desc").show
+---+---+
|  a|  p|
+---+---+
+---+---+
{code}
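For completeness, both forms can be checked in a spark-shell session against a temporary view. This is a sketch only; {{tbl_x}} is a throwaway table created just for the check, and the data is made up:

```scala
scala> spark.version
res0: String = 2.0.2

// Create a tiny hypothetical table to exercise both query shapes.
scala> Seq((1, "x"), (2, "y")).toDF("a", "b").createOrReplaceTempView("tbl_x")

// ORDER BY without GROUP BY -- the shape reported as failing.
scala> sql("select a, b from tbl_x order by a desc").show
+---+---+
|  a|  b|
+---+---+
|  2|  y|
|  1|  x|
+---+---+

// ORDER BY with GROUP BY -- the shape reported as working.
scala> sql("select a, b from tbl_x group by a, b order by a desc").show
```

Both statements parse and run here, which is why a more specific repro (exact SQL text, how it is submitted, e.g. through Zeppelin as the stack trace suggests) would help narrow this down.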

> order by cannot be parsed when group by is missing
> --------------------------------------------------
>
>                 Key: SPARK-19363
>                 URL: https://issues.apache.org/jira/browse/SPARK-19363
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2
>            Reporter: Mitko Kolev
>
> ORDER BY cannot be parsed when there is no GROUP BY expression. This used to work in Spark 1.6 but appears broken in Spark 2.0.2.
> With a GROUP BY expression, ORDER BY parses fine:
> {{select a, b from blah group by a, b order by a desc}}
> Without a GROUP BY,
> {{select a, b from blah order by a desc}} results in the exception below.
> {code}
> 	at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
> 	at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
> 	at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
> 	at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
> 	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:115)
> 	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
> 	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
> 	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
> 	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org