Posted to dev@spark.apache.org by StanZhai <ma...@zhaishidan.cn> on 2015/09/10 08:11:35 UTC

[SparkSQL] Could not alter table in Spark 1.5 using HiveContext

After upgrading Spark from 1.4.1 to 1.5.0, I encountered the following
exception when using an ALTER TABLE statement in HiveContext:

The SQL is: ALTER TABLE a RENAME TO b

The exception is:

FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Invalid
method name: 'alter_table_with_cascade'
msg: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
Unable to alter table. Invalid method name: 'alter_table_with_cascade'
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:433)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:418)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:256)
	at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:211)
	at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:248)
	at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:418)
	at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:408)
	at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:558)
	at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:69)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:140)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:138)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:138)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:927)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:927)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:129)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:719)
	at test.service.QueryService.query(QueryService.scala:28)
	at test.api.DatabaseApi$$anonfun$query$1.apply(DatabaseApi.scala:39)
	at test.api.DatabaseApi$$anonfun$query$1.apply(DatabaseApi.scala:30)
	at test.web.JettyUtils$$anon$1.getOrPost(JettyUtils.scala:81)
	at test.web.JettyUtils$$anon$1.doPost(JettyUtils.scala:119)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
	at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
	at org.eclipse.jetty.server.Server.handle(Server.java:370)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
	at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:982)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1043)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
	at java.lang.Thread.run(Thread.java:745)

The SQL runs on both Spark 1.4.1 and Hive itself, so I think this is a
bug in Spark 1.5. Any suggestions?

Best, Stan



--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/SparkSQL-Could-not-alter-table-in-Spark-1-5-use-HiveContext-tp14029.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org


Re: [SparkSQL] Could not alter table in Spark 1.5 using HiveContext

Posted by StanZhai <ma...@zhaishidan.cn>.
Thanks a lot! I've fixed this issue by setting:
spark.sql.hive.metastore.version = 0.13.1
spark.sql.hive.metastore.jars = maven
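For anyone hitting the same problem, this is roughly what the equivalent
entries look like in conf/spark-defaults.conf (a sketch, assuming a Hive
0.13.1 metastore server):

```properties
# Use the Hive 0.13.1 metastore client instead of the Hive 1.2 client
# that Spark 1.5 bundles by default.
spark.sql.hive.metastore.version   0.13.1
# "maven" makes Spark download the matching client jars automatically.
spark.sql.hive.metastore.jars      maven
```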


Yin Huai wrote:
> Yes, Spark 1.5 use Hive 1.2's metastore client by default. You can change
> it by putting the following settings in your spark conf.
> 
> spark.sql.hive.metastore.version = 0.13.1
> spark.sql.hive.metastore.jars = maven or the path of your hive 0.13 jars
> and hadoop jars
> 
> For spark.sql.hive.metastore.jars, basically, it tells spark sql where to
> find metastore client's classes of Hive 0.13.1. If you set it to maven, we
> will download needed jars directly (it is an easy way to do testing work).
> 
> On Thu, Sep 10, 2015 at 7:45 PM, StanZhai <ma...@zhaishidan.cn> wrote:
>
> 
>> Thank you for the swift reply!
>>
>> The version of my hive metastore server is 0.13.1, I've build spark use
>> sbt
>> like this:
>> build/sbt -Pyarn -Phadoop-2.4 -Phive -Phive-thriftserver assembly
>>
>> Is spark 1.5 bind the hive client version of 1.2 by default?
>>





--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/SparkSQL-Could-not-alter-table-in-Spark-1-5-use-HiveContext-tp14029p14047.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.



Re: [SparkSQL] Could not alter table in Spark 1.5 using HiveContext

Posted by Yin Huai <yh...@databricks.com>.
Yes, Spark 1.5 uses Hive 1.2's metastore client by default. You can change
it by putting the following settings in your Spark conf:

spark.sql.hive.metastore.version = 0.13.1
spark.sql.hive.metastore.jars = maven (or the path of your Hive 0.13 jars
and Hadoop jars)

As for spark.sql.hive.metastore.jars: it tells Spark SQL where to find the
metastore client classes for Hive 0.13.1. If you set it to maven, the
needed jars are downloaded directly (an easy way to do testing).
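As a sketch of the non-maven form (the jar locations below are hypothetical
placeholders, not paths from this thread), spark.sql.hive.metastore.jars
can instead point at a classpath containing your Hive 0.13 and Hadoop jars:

```properties
spark.sql.hive.metastore.version   0.13.1
# Replace these placeholder paths with the actual locations of your
# Hive 0.13.1 jars and Hadoop jars.
spark.sql.hive.metastore.jars      /opt/hive-0.13.1/lib/*:/opt/hadoop/lib/*
```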

On Thu, Sep 10, 2015 at 7:45 PM, StanZhai <ma...@zhaishidan.cn> wrote:

> Thank you for the swift reply!
>
> The version of my hive metastore server is 0.13.1, I've build spark use sbt
> like this:
> build/sbt -Pyarn -Phadoop-2.4 -Phive -Phive-thriftserver assembly
>
> Is spark 1.5 bind the hive client version of 1.2 by default?
>
>
>

Re: [SparkSQL] Could not alter table in Spark 1.5 using HiveContext

Posted by StanZhai <ma...@zhaishidan.cn>.
Thank you for the swift reply!

The version of my Hive metastore server is 0.13.1. I've built Spark using
sbt like this:
build/sbt -Pyarn -Phadoop-2.4 -Phive -Phive-thriftserver assembly

Does Spark 1.5 bind to the Hive 1.2 client by default?



--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/SparkSQL-Could-not-alter-table-in-Spark-1-5-use-HiveContext-tp14029p14044.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.



Re: [SparkSQL] Could not alter table in Spark 1.5 using HiveContext

Posted by Yin Huai <yh...@databricks.com>.
What is the Hive version of your metastore server?

It looks like you are using Hive 1.2's metastore client to talk to your
existing Hive 0.13.1 metastore server.

On Thu, Sep 10, 2015 at 10:48 AM, Michael Armbrust <mi...@databricks.com>
wrote:

> Can you open a JIRA?
>
> On Wed, Sep 9, 2015 at 11:11 PM, StanZhai <ma...@zhaishidan.cn> wrote:
>
>> After upgrade spark from 1.4.1 to 1.5.0, I encountered the following
>> exception when use alter table statement in HiveContext:
>>
>> The sql is: ALTER TABLE a RENAME TO b
>>
>> The exception is:
>>
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Invalid
>> method name: 'alter_table_with_cascade'
>>
>> [rest of the quoted message and full stack trace snipped; see the
>> original post above]
>

Re: [SparkSQL] Could not alter table in Spark 1.5 using HiveContext

Posted by Michael Armbrust <mi...@databricks.com>.
Can you open a JIRA?

On Wed, Sep 9, 2015 at 11:11 PM, StanZhai <ma...@zhaishidan.cn> wrote:

> After upgrade spark from 1.4.1 to 1.5.0, I encountered the following
> exception when use alter table statement in HiveContext:
>
> The sql is: ALTER TABLE a RENAME TO b
>
> The exception is:
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Invalid
> method name: 'alter_table_with_cascade'
>
> [rest of the quoted message and full stack trace snipped; see the
> original post above]