Posted to user@hive.apache.org by Todd <bi...@163.com> on 2016/01/28 15:11:29 UTC

Two questions about working with Hive using JDBC

Hi,

I am using Hive 0.14 and connecting to the Hive Thrift server over JDBC to run queries. I have run into two questions:

1. When a query is issued, how can I get the ID of the MapReduce job that runs it, so that I have a chance to kill the job? (See the first sketch below for roughly what I have in mind.)
2. I want to feed a SQL file to the Hive JDBC driver, the way the Hive CLI does with a script file (hive -f sqlfile). Does Hive JDBC support this? (See the second sketch below.)
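
For reference, here are two rough, unverified sketches of what I have in mind. The host name, credentials, table name, and file path are placeholders, and both sketches assume a HiveServer2 endpoint reached through the hive2 JDBC driver (jdbc:hive2://...).

For question 1, I am hoping to scrape the MapReduce job id out of the operation log that HiveStatement.getQueryLog() exposes (I believe this also needs hive.server2.logging.operation.enabled=true on the server), and then either kill the job externally or call Statement.cancel():

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hive.jdbc.HiveStatement;

public class FindJobId {

    public static void main(String[] args) throws Exception {
        // Placeholders: host, database, user.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hs2-host:10000/default", "user", "");
        final HiveStatement stmt = (HiveStatement) conn.createStatement();

        // Run the query in another thread so the operation log can be polled while it runs.
        Thread runner = new Thread(new Runnable() {
            public void run() {
                try {
                    ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table");
                    while (rs.next()) {
                        System.out.println("result: " + rs.getLong(1));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        runner.start();

        // Poll the HiveServer2 operation log and grep it for a MapReduce job id.
        Pattern jobIdPattern = Pattern.compile("(job_\\d+_\\d+)");
        while (stmt.hasMoreLogs()) {
            for (String line : stmt.getQueryLog()) {
                Matcher m = jobIdPattern.matcher(line);
                if (m.find()) {
                    System.out.println("MapReduce job id: " + m.group(1));
                    // With the job id I could kill it externally (hadoop job -kill <id>),
                    // or call stmt.cancel() to ask HiveServer2 to cancel the query.
                }
            }
            Thread.sleep(500);
        }

        runner.join();
        stmt.close();
        conn.close();
    }
}

For question 2, since JDBC executes one statement at a time, the only thing I can think of is to read the file myself and split it into individual statements before sending them, e.g.:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RunSqlFile {

    public static void main(String[] args) throws Exception {
        // Placeholders: file path, host, database, user.
        String script = new String(
                Files.readAllBytes(Paths.get("/path/to/queries.sql")), StandardCharsets.UTF_8);

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hs2-host:10000/default", "user", "");
        Statement stmt = conn.createStatement();

        // Naive split on ';' -- this breaks if a semicolon appears inside a string
        // literal or a comment, so it is only good enough for simple scripts.
        for (String sql : script.split(";")) {
            sql = sql.trim();
            if (!sql.isEmpty()) {
                System.out.println("executing: " + sql);
                stmt.execute(sql);
            }
        }

        stmt.close();
        conn.close();
    }
}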

Thanks!



Re: Two questions about working with Hive using JDBC

Posted by Todd <bi...@163.com>.

Can someone help answer the questions? Thanks




--
Sent from my NetEase Mail (tablet edition)


