Posted to user@spark.apache.org by alexandria1101 <al...@gmail.com> on 2014/09/10 02:16:42 UTC

Table not found: using jdbc console to query sparksql hive thriftserver

Hi,

I want to use the SparkSQL thrift server in my application and make sure
everything is loading and working. I built Spark 1.1 SNAPSHOT and ran the
thrift server using ./sbin/start-thrift-server. In my application I load
tables into SchemaRDDs and I expect the thrift server to pick them up. In
the app I then perform SQL queries on a table called mutation (the same
name as the table I registered from the SchemaRDD).

I set the driver to "org.apache.hive.jdbc.HiveDriver" and the url to
"jdbc:hive2://localhost:10000/mutation?zeroDateTimeBehavior=convertToNull".

When I check the terminal output of the thrift server, it receives the
query. However, I cannot get a JDBC console to communicate with it to list
the databases and tables and confirm that mutation is loaded.


I get the following errors:

14/09/09 16:51:02 WARN component.AbstractLifeCycle: FAILED
SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already
in use
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1446)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1442)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:202)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:224)
	at
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
	at com.illumina.phoenix.util.Runner.createSparkContext(Runner.java:144)
	at
com.illumina.phoenix.etl.EtlPipelineRunner.main(EtlPipelineRunner.java:116)
1053 [main] WARN org.eclipse.jetty.util.component.AbstractLifeCycle  -
FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address
already in use
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1446)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1442)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:202)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:224)
	at
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
	at com.illumina.phoenix.util.Runner.createSparkContext(Runner.java:144)
	at
com.illumina.phoenix.etl.EtlPipelineRunner.main(EtlPipelineRunner.java:116)
14/09/09 16:51:02 WARN component.AbstractLifeCycle: FAILED
org.eclipse.jetty.server.Server@35241119: java.net.BindException: Address
already in use
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1446)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1442)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:202)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:224)
	at
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
	at com.illumina.phoenix.util.Runner.createSparkContext(Runner.java:144)
	at
com.illumina.phoenix.etl.EtlPipelineRunner.main(EtlPipelineRunner.java:116)
1055 [main] WARN org.eclipse.jetty.util.component.AbstractLifeCycle  -
FAILED org.eclipse.jetty.server.Server@35241119: java.net.BindException:
Address already in use
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at org.apache.spark.ui.JettyUtils$$anonfun$3.apply(JettyUtils.scala:202)
	at
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1446)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1442)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:202)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:224)
	at
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
	at com.illumina.phoenix.util.Runner.createSparkContext(Runner.java:144)
	at
com.illumina.phoenix.etl.EtlPipelineRunner.main(EtlPipelineRunner.java:116)

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0
(TID 17, localhost): org.springframework.jdbc.UncategorizedSQLException:
StatementCallback; uncategorized SQLException for SQL [SELECT mrnafeatureid,
mappedid, COUNT(DISTINCT pos) FROM mutation WHERE chromosomeid = 1 AND pos
BETWEEN 10617 AND 10637 GROUP BY mrnafeatureid, mappedid]; SQL state [null];
error code [0]; org.apache.hadoop.hive.ql.metadata.InvalidTableException:
Table not found mutation; nested exception is java.sql.SQLException:
org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found
mutation
       
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
       
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
       
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
       
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:413)
       
org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:468)
       
com.illumina.phoenix.genomedb.jdbc.MutationDAOJdbc.getMutationEntriesBetween(MutationDAOJdbc.java:143)
       
com.illumina.phoenix.etl.ClassificationService.assignMutationClassIndel(ClassificationService.java:342)
       
com.illumina.phoenix.etl.ClassificationService.call(ClassificationService.java:663)
        com.illumina.phoenix.etl.Classifier.call(Classifier.java:72)
        com.illumina.phoenix.etl.Classifier.call(Classifier.java:19)
       
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:923)
       
org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)
       
org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
       
org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:236)
       
org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:163)
        org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
       
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
       
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
       
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
	at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
	at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at scala.Option.foreach(Option.scala:236)
	at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
	at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)









Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by alexandria1101 <al...@gmail.com>.
Thank you!! I can do this using saveAsTable with the schemaRDD, right? 
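
Something like this is what I have in mind (just a sketch; mutationRDD is
the SchemaRDD I build in the app, and I am assuming it was created through
a HiveContext):

    // persist the SchemaRDD as a Hive table so the thrift server can see it
    mutationRDD.saveAsTable("mutation")
    // instead of only the in-process temp table:
    // mutationRDD.registerTempTable("mutation")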





Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Du Li <li...@yahoo-inc.com.INVALID>.
SchemaRDD has a method insertInto(table). When the table is partitioned, it would be more sensible and convenient to extend it to take a list of partition keys and values.
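
For a non-partitioned table the call is just the following (a sketch; rdd
is assumed to be a SchemaRDD created through a HiveContext, and the target
Hive table is assumed to already exist):

    rdd.insertInto("mutation")                    // append
    rdd.insertInto("mutation", overwrite = true)  // or overwrite
    // there is no variant taking partition keys/values yet, hence the suggestion above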


From: Denny Lee <de...@gmail.com>
Date: Thursday, September 11, 2014 at 6:39 PM
To: Du Li <li...@yahoo-inc.com>
Cc: user@spark.incubator.apache.org, alexandria1101 <al...@gmail.com>
Subject: Re: Table not found: using jdbc console to query sparksql hive thriftserver

[...]


Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Denny Lee <de...@gmail.com>.
It sort of depends on the definition of efficiently. From a workflow perspective I would agree, but from an I/O perspective, wouldn't there be the same multi-pass from the standpoint of the Hive context needing to push the data into HDFS? Saying this, if you're pushing the data into HDFS and then creating Hive tables via load (vs. a reference point a la external tables), I would agree with you.
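
The two options look roughly like this (a sketch only; hc is assumed to be
a HiveContext, the column types are guesses, and /path/to/mutation stands
in for wherever the data was written on HDFS):

    // copy/move the files into the Hive warehouse
    hc.hql("LOAD DATA INPATH '/path/to/mutation' INTO TABLE mutation")
    // vs. an external table that just references the existing location
    hc.hql("CREATE EXTERNAL TABLE mutation_ext " +
      "(mrnafeatureid INT, mappedid INT, pos INT, chromosomeid INT) " +
      "LOCATION '/path/to/mutation'")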

And thanks for correcting me; registerTempTable is in the SQLContext.


On September 10, 2014 at 13:47:24, Du Li (lidu@yahoo-inc.com) wrote:

[...]


Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Du Li <li...@yahoo-inc.com.INVALID>.
Hi Denny,

There is a related question by the way.

I have a program that reads in a stream of RDDs, each of which is to be
loaded into a Hive table as one partition. Currently I do this by first
writing the RDDs to HDFS and then loading them into Hive, which requires
multiple passes of HDFS I/O and serialization/deserialization.

I wonder if it is possible to do this more efficiently with Spark 1.1
streaming + SQL, e.g. by registering the RDDs in a Hive context so that the
data is loaded directly into the Hive table in cache and is meanwhile
visible to JDBC/ODBC clients. In the Spark source code, the method
registerTempTable you mentioned works on SQLContext instead of HiveContext.
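
What I have in mind is roughly the following (just a sketch of the idea,
not tested; stream is a DStream[Row], schema is the matching StructType, hc
is a HiveContext, and the Hive table mutation is assumed to exist already):

    // load each micro-batch straight into the Hive table via the HiveContext,
    // so it becomes visible to JDBC/ODBC clients without a separate HDFS write + load
    stream.foreachRDD { rdd =>
      val batch = hc.applySchema(rdd, schema)
      batch.insertInto("mutation")
    }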

Thanks,
Du



On 9/10/14, 1:21 PM, "Denny Lee" <de...@gmail.com> wrote:

>[...]




Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Denny Lee <de...@gmail.com>.
Actually, when registering the table, it is only available within the sc context you are running it in. For Spark 1.1, the method name has changed to registerTempTable to better reflect that.

The Thrift server runs as a separate process, meaning that it cannot see any of the tables generated within the sc context. You would need to save the sc table into Hive, and then the Thrift process would be able to see them.
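
For example, once the table has been saved into Hive you should be able to
see it from a separate client, e.g. with the beeline script that ships with
Spark 1.1 (just a sketch of the check; adjust host/port as needed):

    ./bin/beeline -u jdbc:hive2://localhost:10000 -e "SHOW TABLES;"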

HTH!

> On Sep 10, 2014, at 13:08, alexandria1101 <al...@gmail.com> wrote:
> 
> [...]



Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by alexandria1101 <al...@gmail.com>.
I used the hiveContext to register the tables and the tables are still not
being found by the thrift server.  Do I have to pass the hiveContext to JDBC
somehow?





Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Du Li <li...@yahoo-inc.com.INVALID>.
You need to run mvn install so that the package you built is put into the
local maven repo. Then when compiling your own app (with the right
dependency specified), the package will be retrieved.
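
For example (assuming you are building from the Spark source tree; the
exact flags depend on your build):

    mvn -DskipTests clean install

The version in your app's pom then has to match whatever version the local
build installed (for a 1.1 snapshot build that is typically something like
1.1.0-SNAPSHOT rather than 1.1.1).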



On 9/9/14, 8:16 PM, "alexandria1101" <al...@gmail.com> wrote:

>[...]




Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by alexandria1101 <al...@gmail.com>.
I think the package does not exist because I need to change the pom file:

  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-assembly_2.10</artifactId>
    <version>1.0.1</version>
    <type>pom</type>
    <scope>provided</scope>
  </dependency>

I changed the version number to 1.1.1, but that still causes the build
error:

Failure to find org.apache.spark:spark-assembly_2.10:pom:1.1.1 in
http://repo.maven.apache.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced -> [Help 1]

Is there a way to get past this?





Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by alexandria1101 <al...@gmail.com>.
Thanks so much!

That makes complete sense. However, when I compile I get an error: "package
org.apache.spark.sql.hive does not exist."

Has anyone else seen this, and any idea why this might be?
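
For context, the failing import is just the usual one, something like:

    import org.apache.spark.sql.hive.HiveContext  // the package the compiler cannot find
    val hiveContext = new HiveContext(sc)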





Re: Table not found: using jdbc console to query sparksql hive thriftserver

Posted by Du Li <li...@yahoo-inc.com.INVALID>.
Your tables were registered in the SQLContext, whereas the thrift server
works with a HiveContext. They seem to be in two different worlds today.
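
In other words, roughly (a sketch; rowRDD and schema stand in for however
the table is built in your app):

    // in the application: a table registered on a SQLContext lives only in this process
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val mutationRDD = sqlContext.applySchema(rowRDD, schema)
    mutationRDD.registerTempTable("mutation")  // the thrift server cannot see this

    // the thrift server runs its own HiveContext in another process and only sees
    // tables that exist in the Hive metastore, e.g. ones written with saveAsTable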



On 9/9/14, 5:16 PM, "alexandria1101" <al...@gmail.com> wrote:

>[...]

