Posted to user@hbase.apache.org by Ibrar Ahmed <ib...@gmail.com> on 2015/05/13 21:28:13 UTC

Hive/Hbase Integration issue

Hi,

I am creating a table using Hive and getting this error.

[127.0.0.1:10000] hive> CREATE TABLE hbase_table_1(key int, value string)
                      > STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
                      > WITH SERDEPROPERTIES ("hbase.columns.mapping" =
":key,cf1:val")
                      > TBLPROPERTIES ("hbase.table.name" = "xyz");



[Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException:
Can't get the locations
    at
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
    at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:147)
    at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
    at
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
    at
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
    at
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
    at
org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
    at
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
    at
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:281)
    at
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:291)
    at
org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:162)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:554)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:547)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
    at com.sun.proxy.$Proxy7.createTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:613)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4194)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:281)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1472)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1239)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1057)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:880)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:870)
    at
org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:198)
    at
org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at
org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
)


Any help/clue would be appreciated.

Re: Hive/Hbase Integration issue

Posted by Talat Uyarer <ta...@uyarer.com>.
This issue usually points to some missing settings. What did you do for your
Hive/HBase integration? Can you give some information about your cluster?

BTW, in [1] someone had the same issue. Maybe it will help you.

[1] http://mail-archives.apache.org/mod_mbox/hive-user/201307.mbox/%3CCE01CDA1.9221%25sanjay.subramanian@wizecommerce.com%3E

2015-05-13 22:28 GMT+03:00 Ibrar Ahmed <ib...@gmail.com>:
> Hi,
>
> I am creating a table using hive and getting this error.
>
> [127.0.0.1:10000] hive> CREATE TABLE hbase_table_1(key int, value string)
>                       > STORED BY
> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>                       > WITH SERDEPROPERTIES ("hbase.columns.mapping" =
> ":key,cf1:val")
>                       > TBLPROPERTIES ("hbase.table.name" = "xyz");
>
>
>
> [Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Can't get the locations
>     [stack trace snipped; identical to the trace in the original message above]
> )
>
>
> Any help/clue can help.



-- 
Talat UYARER
Websitesi: http://talat.uyarer.com
Twitter: http://twitter.com/talatuyarer
Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304

Re: Hive/Hbase Integration issue

Posted by Talat Uyarer <ta...@uyarer.com>.
Your ZooKeeper is managed by HBase. Could you check your
hbase.zookeeper.quorum setting? It should match the ZooKeeper quorum that
HBase itself uses.
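
For example (an illustrative sketch, not your actual topology): if HBase runs
in standalone mode with its embedded ZooKeeper on the local machine, the
hive-site.xml quorum should name that same machine, not separate hosts. The
hostname and port below are the usual defaults, adjust as needed:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

If hive-site.xml points at hosts that do not serve HBase's ZooKeeper, the
client cannot look up the hbase:meta region locations, which is consistent
with the "Can't get the locations" error above.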

Talat

2015-05-13 23:03 GMT+03:00 Ibrar Ahmed <ib...@gmail.com>:
> Here is my hbase-site.xml
>
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>file:///usr/local/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.dataDir</name>
>     <value>/usr/local/hbase/zookeeperdata</value>
>   </property>
> </configuration>
>
>
> And hive-site.xml
>
> <configuration>
>  <property>
>     <name>hive.aux.jars.path</name>
>
> <value>file:///usr/local/hive/lib/zookeeper-3.4.5.jar,file:/usr/local/hive/lib/hive-hbase-handler-0.13.1.jar,file:///usr/local/hive/lib/guava-11.0.2.jar,file:///usr/local/hbase/lib/hbase-client-0.98.2-
> hadoop2.jar,file:///usr/local/hbase/lib/hbase-common-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-protocol-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-server-0.98.2-hadoop2.jar,file:///usr
> /local/hbase/lib/hbase-shell-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-thrift-0.98.2-hadoop2.jar</value>
>   </property>
>
> <property>
>    <name>hbase.zookeeper.quorum</name>
>    <value>zk1,zk2,zk3</value>
> </property>
>
>
> <property>
>     <name>hive.exec.scratchdir</name>
>     <value>/usr/local/hive/mydir</value>
>     <description>Scratch space for Hive jobs</description>
> </property>
>
> </configuration>
>
>
>
> Hadoop classpath
>
>
> /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*::/usr/local/hbase/conf/hbase-site.xml:/contrib/capacity-scheduler/*.jar
>
>
> On Thu, May 14, 2015 at 12:28 AM, Ibrar Ahmed <ib...@gmail.com> wrote:
>
>> Hi,
>>
>> I am creating a table using hive and getting this error.
>>
>> [127.0.0.1:10000] hive> CREATE TABLE hbase_table_1(key int, value string)
>>                       > STORED BY
>> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>                       > WITH SERDEPROPERTIES ("hbase.columns.mapping" =
>> ":key,cf1:val")
>>                       > TBLPROPERTIES ("hbase.table.name" = "xyz");
>>
>>
>>
>> [Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
>> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>> MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException:
>> Can't get the locations
>>     [stack trace snipped; identical to the trace in the original message above]
>> )
>>
>>
>> Any help/clue can help.
>>
>
>
>
> --
> Ibrar Ahmed



-- 
Talat UYARER
Websitesi: http://talat.uyarer.com
Twitter: http://twitter.com/talatuyarer
Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304

Re: Hive/Hbase Integration issue

Posted by Ibrar Ahmed <ib...@gmail.com>.
Here is my hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///usr/local/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbase/zookeeperdata</value>
  </property>
</configuration>


And hive-site.xml

<configuration>
 <property>
    <name>hive.aux.jars.path</name>

<value>file:///usr/local/hive/lib/zookeeper-3.4.5.jar,file:/usr/local/hive/lib/hive-hbase-handler-0.13.1.jar,file:///usr/local/hive/lib/guava-11.0.2.jar,file:///usr/local/hbase/lib/hbase-client-0.98.2-
hadoop2.jar,file:///usr/local/hbase/lib/hbase-common-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-protocol-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-server-0.98.2-hadoop2.jar,file:///usr
/local/hbase/lib/hbase-shell-0.98.2-hadoop2.jar,file:///usr/local/hbase/lib/hbase-thrift-0.98.2-hadoop2.jar</value>
  </property>

<property>
   <name>hbase.zookeeper.quorum</name>
   <value>zk1,zk2,zk3</value>
</property>


<property>
    <name>hive.exec.scratchdir</name>
    <value>/usr/local/hive/mydir</value>
    <description>Scratch space for Hive jobs</description>
</property>

</configuration>



Hadoop classpath


/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*::/usr/local/hbase/conf/hbase-site.xml:/contrib/capacity-scheduler/*.jar


On Thu, May 14, 2015 at 12:28 AM, Ibrar Ahmed <ib...@gmail.com> wrote:

> Hi,
>
> I am creating a table using hive and getting this error.
>
> [127.0.0.1:10000] hive> CREATE TABLE hbase_table_1(key int, value string)
>                       > STORED BY
> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>                       > WITH SERDEPROPERTIES ("hbase.columns.mapping" =
> ":key,cf1:val")
>                       > TBLPROPERTIES ("hbase.table.name" = "xyz");
>
>
>
> [Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Can't get the locations
>     [stack trace snipped; identical to the trace in the original message above]
> )
>
>
> Any help/clue can help.
>



-- 
Ibrar Ahmed
