Posted to user@hive.apache.org by Adarsh Sharma <ad...@orkash.com> on 2011/01/06 11:34:18 UTC

Hive/Hbase Integration Error

Dear all,

I am sorry for posting this message again, but I have not been able to
locate the root cause despite searching extensively.

I have been trying Hive/HBase integration for the past 2 days. I am facing
the issue below while creating an external table in Hive.

I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as metastore)
and java-1.6.0_20. I have also checked with hbase-0.20.3.

The problem arises when I issue the command below:

hive> CREATE TABLE hive_hbasetable_k(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");

FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask


The error suggests my HMaster is not running, but I checked through the web
UI at IP:60010 that it is running, and I am able to create and insert into
tables in HBase directly.
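As a sanity check of my own (not something from the docs), the Hive machine's connectivity to each ZooKeeper server can be probed with ZooKeeper's four-letter "ruok" command; a healthy server replies "imok". This is a small bash sketch using the built-in /dev/tcp; the host and port are placeholders for the actual quorum members:

```shell
#!/bin/bash
# Probe a ZooKeeper server with the four-letter "ruok" command over
# bash's built-in /dev/tcp. A healthy server replies "imok"; an empty
# reply means this client cannot reach it. Host/port are placeholders.
probe_zk() {
  local host=$1 port=$2 reply
  reply=$( { exec 3<>"/dev/tcp/$host/$port" &&
             echo ruok >&3 &&
             head -c 4 <&3; } 2>/dev/null )
  echo "$host:$port -> ${reply:-no response}"
}

probe_zk 127.0.0.1 2181   # replace with each quorum member, e.g. 192.168.1.101
```

If every quorum member answers "imok" from the Hive machine, the problem is more likely a configuration mismatch than connectivity.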

Below are the contents of my hive.log:

 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
 java.net.ConnectException: Connection refused
       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
       at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
       at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b

 
Please help me, as I am not able to solve this problem.

I also want to add that my Hadoop cluster consists of 9 nodes; 8 of the
nodes act as Datanodes, Tasktrackers, and Regionservers.

Among these nodes, I set hbase.zookeeper.quorum to 5 of the Datanodes. I
don't know how many servers are needed for ZooKeeper in fully distributed
mode.
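From what I have read about ZooKeeper (my understanding, so please correct me if wrong): the ensemble only stays available while a strict majority of its servers are up, which is why odd sizes such as 3 or 5 are recommended; with 5 servers the ensemble tolerates 2 failures. The arithmetic:

```shell
# Majority quorum arithmetic: an ensemble of N servers needs
# floor(N/2) + 1 members up, so it tolerates floor((N-1)/2) failures.
for n in 1 3 5 7; do
  echo "ensemble=$n -> quorum=$(( n/2 + 1 )), tolerates $(( (n-1)/2 )) failure(s)"
done
```

So 5 quorum members on the Datanodes is not wrong in itself, as long as all 5 hosts are actually reachable on the ZooKeeper ports.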
 
 
 Best Regards

 Adarsh Sharma



Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Dear all,

Sorry for replying to my own question, but I found the likely cause and
want to discuss it in case anyone has an idea about it.

From the wiki page, I found the following:


http://wiki.apache.org/hadoop/Hive/HBaseIntegration

***************************************************************************************************************************

The handler requires Hadoop 0.20 or higher, and has only been tested 
with dependency versions hadoop-0.20.0, hbase-0.20.3 and 
zookeeper-3.2.2. If you are not using hbase-0.20.3, you will need to 
rebuild the handler with the HBase jar matching your version, and change 
the --auxpath above accordingly. Failure to use matching versions will 
lead to misleading connection failures such as MasterNotRunningException 
<http://wiki.apache.org/hadoop/MasterNotRunningException> since the 
HBase RPC protocol changes often.

****************************************************************************************************************************
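If the versions do indeed need to match, I believe the fix translates into replacing the bundled jars and launching Hive with an --auxpath that points at jars matching the running cluster. The command line below is illustrative only; the paths and jar names are placeholders for my installation, not verified:

```shell
# Illustrative only: --auxpath must list the HBase handler plus HBase and
# ZooKeeper jars that match the versions actually running on the cluster.
# All paths and version numbers here are placeholders.
HIVE_HOME=/home/hadoop/project/hive-0.6.0/build/dist
$HIVE_HOME/bin/hive \
  --auxpath $HIVE_HOME/lib/hive-hbase-handler-0.6.0.jar,$HIVE_HOME/lib/hbase-0.20.6.jar,$HIVE_HOME/lib/zookeeper-3.2.2.jar \
  -hiveconf hbase.zookeeper.quorum=192.168.1.101,192.168.1.102,192.168.1.103
```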

I am facing exactly this problem, and I want to know whether it is
possible to integrate hbase-0.20.6 with hive-0.6.0. Would it require a
separate ZooKeeper package and configuration to run?


Please help me to solve this.


Thanks






Adarsh Sharma wrote:
> Jean-Daniel Cryans wrote:
>>
>> You also need to create the table in order to see the relevant debug 
>> information, it won't create it until it needs it.
>>
> Sir
> Check the output :
>
> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
> FAILED: Error in metadata: 
> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>         at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>         at 
> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>         at 
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>         at 
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>         at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>         at 
> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>         at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>         at 
> org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>         at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> )
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask
>
>
> And the head of Hive.log says :
>
> 2011-01-10 11:57:59,467 ERROR DataNucleus.Plugin 
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin 
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin 
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin 
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin 
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-10 11:58:38,210 WARN  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to 
> sun.nio.ch.SelectionKeyImpl@6070c38c
> java.io.IOException: TIMED OUT
>         at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
> 2011-01-10 11:58:38,215 WARN  zookeeper.ClientCnxn 
> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown 
> output
> java.net.SocketException: Transport endpoint is not connected
>         at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>         at 
> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>         at 
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>         at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-10 11:58:39,019 WARN  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to 
> sun.nio.ch.SelectionKeyImpl@34b6a6d6
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>         at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>
> And I am researching on Zookeeper errors in Hbase logs that says :
>
> Mon Jan 10 11:35:01 IST 2011 Starting zookeeper on s1-ser
> ulimit -n 1024
> 2011-01-10 11:35:01,703 INFO 
> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to 
> majority quorums
> 2011-01-10 11:35:01,759 DEBUG 
> org.apache.hadoop.hbase.zookeeper.HQuorumPeer: preRegister called. 
> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4, 
> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.HQuorumPeer
> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase: preRegister 
> called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4, 
> name=log4j:logger=org.apache.hadoop.hbase
> 2011-01-10 11:35:01,759 INFO 
> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
> 2011-01-10 11:35:01,807 INFO 
> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind 
> port: 3888
> 2011-01-10 11:35:01,833 INFO 
> org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
> 2011-01-10 11:35:01,837 INFO 
> org.apache.zookeeper.server.quorum.FastLeaderElection: New election: 
> 12884901931
> 2011-01-10 11:35:01,842 INFO 
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 
> 0, 12884901931, 1, 0, LOOKING, LOOKING, 0
> 2011-01-10 11:35:01,842 INFO 
> org.apache.zookeeper.server.quorum.FastLeaderElection: Adding vote
> 2011-01-10 11:35:01,844 WARN 
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open 
> channel to 1 at election address /192.168.1.103:3888
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.Net.connect(Native Method)
>         at 
> sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
>         at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
>         at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:323)
>         at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:302)
>         at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:323)
>         at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:296)
>         at java.lang.Thread.run(Thread.java:619)
> 2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
>
>
> If you require any other information, Please let me know.
>
>
> Best regards
>
> Adarsh
>
>
>> J-D
>>
>> On Jan 9, 2011 9:30 PM, "Adarsh Sharma" <adarsh.sharma@orkash.com> wrote:
>> > Jean-Daniel Cryans wrote:
>> >> Just figured that running the shell with this command will give all
>> >> the info you need:
>> >>
>> >> bin/hive -hiveconf hive.root.logger=INFO,console
>> >>
>> >
>> >
>> > Thanks JD, below is the output of this command :
>> >
>> > hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf
>> > hive.root.logger=INFO,console
>> > Hive history
>> > file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
>> > 11/01/10 10:24:47 INFO exec.HiveHistory: Hive history
>> > file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
>> > hive> show tables;
>> > 11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
>> > 11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
>> > 11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
>> > 11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema:
>> > Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string,
>> > comment:from deserializer)], properties:null)
>> > 11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
>> > 11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store
>> > with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
>> > 11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, 
>> initialize called
>> > *11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
>> > "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it
>> > cannot be resolved.
>> > 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
>> > "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
>> cannot
>> > be resolved.
>> > 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
>> > "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be
>> > resolved.*
>> > 11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
>> > 11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables:
>> > db=default pat=.*
>> > OK
>> > 11/01/10 10:25:15 INFO ql.Driver: OK
>> > Time taken: 7.897 seconds
>> > 11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
>> > hive> exit;
>> >
>> > It seems that Hive is working but I am facing issues while integrating
>> > with Hbase.
>> >
>> >
>> > Best Regards
>> >
>> > Adarsh Sharma
>> >
>> >
>> >> J-D
>> >>
>> >> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
>> >>
>> >>> While testing other things yesterday on my local machine, I
>> >>> encountered the same stack traces. Like I said the other day, which
>> >>> you seem to have discarded while debugging your issue, is that it's
>> >>> not able to connect to Zookeeper.
>> >>>
>> >>> Following the cue, I added these lines in 
>> HBaseStorageHandler.setConf():
>> >>>
>> >>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
>> >>> 
>> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>> >>>
>> >>> It showed me this when trying to create a table (after recompiling):
>> >>>
>> >>> localhost
>> >>> 21810
>> >>>
>> >>> I was testing with 0.89 and the test jar includes a hbase-site.xml
>> >>> which has the port 21810 instead of the default 2181. I remembered
>> >>> that it's a known issue that has since been fixed for 0.90.0, so
>> >>> removing that jar fixed it for me.
>> >>>
>> >>> I'm not saying that in your case it's the same fix, but at least by
>> >>> debugging those configurations you'll know where it's trying to
>> >>> connect and then you'll be able to get to the bottom of your issue.
>> >>>
>> >>> J-D
>> >>>
>> >>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <adarsh.sharma@orkash.com> wrote:
>> >>>
>> >>>> John Sichi wrote:
>> >>>>
>> >>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>> >>>>
>> >>>>
>> >>>> I want to know why it occurs in hive.log
>> >>>>
>> >>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>> >>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" 
>> requires
>> >>>> "org.eclipse.core.resources" but it cannot be resolved.
>> >>>>
>> >>>>
>> >>>>
>> >>>> That is a bogus error; it always shows up, so you can ignore it.
>> >>>>
>> >>>>
>> >>>>
>> >>>> And use this new Hive build but I am sorry but the error remains 
>> the same.
>> >>>>
>> >>>>
>> >>>> Then I don't know...probably still some remaining configuration 
>> error. This
>> >>>> guy seems to have gotten it working:
>> >>>>
>> >>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>> >>>>
>> >>>>
>> >>>> Thanks a lot, John. I know this link, as I started working by
>> >>>> following it in the past.
>> >>>>
>> >>>> But I think I have to research the exception or warning below to
>> >>>> solve this issue.
>> >>>>
>> >>>> 2011-01-05 15:20:12,185 WARN zookeeper.ClientCnxn
>> >>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> >>>> sun.nio.ch.SelectionKeyImpl@561279c8
>> >>>> java.net.ConnectException: Connection refused
>> >>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> >>>> at
>> >>>> 
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>> >>>> at
>> >>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>> >>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>> >>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during 
>> shutdown input
>> >>>> java.nio.channels.ClosedChannelException
>> >>>> at
>> >>>> 
>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>> >>>> at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>> >>>> at
>> >>>> 
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>> >>>> at
>> >>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> >>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>> >>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during 
>> shutdown output
>> >>>> java.nio.channels.ClosedChannelException
>> >>>> at
>> >>>> 
>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>> >>>> at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>> >>>> at
>> >>>> 
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>> >>>> at
>> >>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> >>>> 2011-01-05 15:20:12,621 WARN zookeeper.ClientCnxn
>> >>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> >>>> sun.nio.ch.SelectionKeyImpl@799dbc3b
>> >>>>
>> >>>> Please help me, as I am not able to solve this problem.
>> >>>>
>> >>>> Also, my Hadoop cluster consists of 9 nodes, and 8 of the nodes
>> >>>> act as Datanodes, Tasktrackers, and Regionservers.
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Best Regards
>> >>>>
>> >>>> Adarsh Sharma
>> >>>>
>> >>>> JVS
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >
>


Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Jean-Daniel Cryans wrote:
> Sorry if that wasn't obvious, but you need to run hive using this command:
>
> bin/hive -hiveconf hive.root.logger=INFO,console
>
> AND in the same shell create the table in order to have more
> information. What we're trying to get looks like this:
> http://pastebin.com/gvTXDxtE
>
> Specifically, this line is very important (it should have different
> values in your case):
>
> INFO zookeeper.ClientCnxn: Priming connection to
> java.nio.channels.SocketChannel[connected local=/10.10.20.42:53187
> remote=sv4borg13/10.10.21.13:2181]
>
> Regarding your second log paste, it could be harmless if the process
> on that other machine just took more time to boot, also it happened 20
> minutes before your test. Do verify that hbase works before trying to
> create a table.
>
> J-D
>
> On Sun, Jan 9, 2011 at 10:37 PM, Adarsh Sharma <ad...@orkash.com> wrote:
>   
>> Jean-Daniel Cryans wrote:
>>
>> You also need to create the table in order to see the relevant debug
>> information, it won't create it until it needs it.
>>
>> Sir
>> Check the output :
>>
>> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
>> FAILED: Error in metadata:
>> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>>         at
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>>         at
>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>>         at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>>         at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>>         at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>>         at
>> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>>         at
>> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>>         at
>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> )
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask
>>
>>
>> And the head of Hive.log says :
>>
>> 2011-01-10 11:57:59,467 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.resources" but it cannot be resolved.
>> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.runtime" but it cannot be resolved.
>> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.runtime" but it cannot be resolved.
>> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.text" but it cannot be resolved.
>> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.text" but it cannot be resolved.
>> 2011-01-10 11:58:38,210 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@6070c38c
>> java.io.IOException: TIMED OUT
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
>> 2011-01-10 11:58:38,215 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>> java.net.SocketException: Transport endpoint is not connected
>>         at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>>         at
>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> 2011-01-10 11:58:39,019 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@34b6a6d6
>> java.net.ConnectException: Connection refused
>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>         at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>
>>
>> And I am researching on Zookeeper errors in Hbase logs that says :
>>
>> Mon Jan 10 11:35:01 IST 2011 Starting zookeeper on s1-ser
>> ulimit -n 1024
>> 2011-01-10 11:35:01,703 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
>> quorums
>> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase.zookeeper.HQuorumPeer:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
>> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.HQuorumPeer
>> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase: preRegister called.
>> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
>> name=log4j:logger=org.apache.hadoop.hbase
>> 2011-01-10 11:35:01,759 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>> 2011-01-10 11:35:01,807 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
>> 3888
>> 2011-01-10 11:35:01,833 INFO org.apache.zookeeper.server.quorum.QuorumPeer:
>> LOOKING
>> 2011-01-10 11:35:01,837 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: New election:
>> 12884901931
>> 2011-01-10 11:35:01,842 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0,
>> 12884901931, 1, 0, LOOKING, LOOKING, 0
>> 2011-01-10 11:35:01,842 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Adding vote
>> 2011-01-10 11:35:01,844 WARN
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel to
>> 1 at election address /192.168.1.103:3888
>> java.net.ConnectException: Connection refused
>>         at sun.nio.ch.Net.connect(Native Method)
>>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
>>         at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
>>         at
>> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:323)
>>         at
>> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:302)
>>         at
>> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:323)
>>         at
>> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:296)
>>         at java.lang.Thread.run(Thread.java:619)
>> 2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
>>
>>
>> If you require any other information, Please let me know.
>>
>>
>> Best regards
>>
>> Adarsh
>>
>>     
Sir, I want a suggestion from your side.

I configured hadoop-0.20.2 on 9 servers:

192.168.0.173 ( Namenode, Jobtracker, HMaster, Hive package holder )
192.168.0.174 ... 182 ( Datanodes, Tasktrackers, HRegionservers )

I set 192.168.0.174 ... 176 in hbase.zookeeper.quorum. I think this causes
the errors in the ZooKeeper logs.

On the other hand, I have a separate cluster of 4 nodes:
1 server ( Namenode, Jobtracker, HMaster, Hive package holder )
The remaining 3 servers ( Datanodes, Tasktrackers, HRegionservers ).
I also set these 3 servers' IPs in hbase.zookeeper.quorum.

I have a doubt regarding hbase.zookeeper.quorum: is it correct to reuse
these nodes, or do I need separate servers for it?



Thanks

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Jean-Daniel Cryans wrote:
> Can you confirm that the machine you're running hive from can "telnet
> 192.168.1.101 2181" and pals? Can it even ping? If not, then either
> the problem is that the machine cannot reach your zookeeper servers or
> they aren't running. There's not that many other options.
>
> J-D
>   
Sir, I am able to ssh and ping these ZooKeeper servers successfully, as
shown below:

hadoop@s2-ratw-1:~/project/hadoop-0.20.2$ ping 192.168.1.101
PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.142 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.130 ms
64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=0.109 ms

But my problem is that my ZooKeeper servers are not running properly.

I am using the zookeeper-3.2.2 jar, which is in hbase-0.20.3/lib and also
in the Hive build/dist/lib folder.

Is it necessary to configure ZooKeeper separately or not? I think this is
required for large clusters.
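My current understanding (an assumption on my part, to be corrected if wrong): with HBASE_MANAGES_ZK=true in hbase-env.sh, which is the default, HBase 0.20.x starts and stops its own quorum peers on the hosts listed in hbase.zookeeper.quorum, so a separate ZooKeeper installation is only needed when that variable is set to false. A quick way to check a node (the install path is a placeholder):

```shell
# Illustrative check of whether HBase manages its own ZooKeeper quorum.
# HBASE_HOME is a placeholder for the actual install directory.
HBASE_HOME=${HBASE_HOME:-/home/hadoop/hbase-0.20.6}
grep 'HBASE_MANAGES_ZK' "$HBASE_HOME/conf/hbase-env.sh"
grep -B1 -A2 'hbase.zookeeper.quorum' "$HBASE_HOME/conf/hbase-site.xml"
```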


Thanks & Regards

Adarsh Sharma
I didn't configure ZooKeeper separately; I suspect this might be the
issue.

I attached my zookeeper logs and hbase-site.xml.



Re: Hive/Hbase Integration Error

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Can you confirm that the machine you're running hive from can "telnet
192.168.1.101 2181" and pals? Can it even ping? If not, then either
the problem is that the machine cannot reach your zookeeper servers or
they aren't running. There's not that many other options.

J-D
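
The connectivity check described above can be scripted. A sketch, assuming the quorum hosts and default client port 2181 from this thread; the helper function just interprets the reply to ZooKeeper's "ruok" four-letter command (a live server answers "imok"):

```shell
# Interpret the reply to ZooKeeper's "ruok" probe.
classify_zk_reply() {
  if [ "$1" = "imok" ]; then echo "OK"; else echo "NOT RESPONDING"; fi
}

# Probe one quorum member: $1 = host, $2 = client port.
probe_zk() {
  reply=$(echo ruok | nc -w 2 "$1" "$2" 2>/dev/null)
  echo "$1:$2 $(classify_zk_reply "$reply")"
}

# Hosts below are assumptions taken from the thread, not verified values.
if command -v nc >/dev/null 2>&1; then
  for host in 192.168.1.101 192.168.1.102 192.168.1.103; do
    probe_zk "$host" 2181
  done
fi
```

This goes one step beyond ping: ping only proves the host is up, while "ruok" proves a ZooKeeper server is actually listening and serving on the client port.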

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Jean-Daniel Cryans wrote:
> Sorry if that wasn't obvious, but you need to run hive using this command:
>   


I am extremely sorry, Sir.

As per your instructions I am sending you the output of the create table 
command.

Please check the attachment.


Thanks & Warm Regards

Adarsh Sharma
> bin/hive -hiveconf hive.root.logger=INFO,console
>
> AND in the same shell create the table in order to have more
> information. What we're trying to get looks like this:
> http://pastebin.com/gvTXDxtE
>
> Specifically, this line is very important (it should have different
> values in your case):
>
> INFO zookeeper.ClientCnxn: Priming connection to
> java.nio.channels.SocketChannel[connected local=/10.10.20.42:53187
> remote=sv4borg13/10.10.21.13:2181]
>
> Regarding your second log paste, it could be harmless if the process
> on that other machine just took more time to boot, also it happened 20
> minutes before your test. Do verify that hbase works before trying to
> create a table.
>
> J-D
>
> On Sun, Jan 9, 2011 at 10:37 PM, Adarsh Sharma <ad...@orkash.com> wrote:
>   
>> Jean-Daniel Cryans wrote:
>>
>> You also need to create the table in order to see the relevant debug
>> information, it won't create it until it needs it.
>>
>> Sir
>> Check the output :
>>
>> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
>> FAILED: Error in metadata:
>> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>>         at
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>>         at
>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>>         at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>>         at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>>         at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>>         at
>> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>>         at
>> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>>         at
>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> )
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask
>>
>>
>> And the head of Hive.log says :
>>
>> 2011-01-10 11:57:59,467 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.resources" but it cannot be resolved.
>> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.runtime" but it cannot be resolved.
>> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.runtime" but it cannot be resolved.
>> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.text" but it cannot be resolved.
>> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.text" but it cannot be resolved.
>> 2011-01-10 11:58:38,210 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@6070c38c
>> java.io.IOException: TIMED OUT
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
>> 2011-01-10 11:58:38,215 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>> java.net.SocketException: Transport endpoint is not connected
>>         at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>>         at
>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> 2011-01-10 11:58:39,019 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@34b6a6d6
>> java.net.ConnectException: Connection refused
>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>         at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>         at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>
>>
>> And I am researching on Zookeeper errors in Hbase logs that says :
>>
>> Mon Jan 10 11:35:01 IST 2011 Starting zookeeper on s1-ser
>> ulimit -n 1024
>> 2011-01-10 11:35:01,703 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
>> quorums
>> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase.zookeeper.HQuorumPeer:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
>> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.HQuorumPeer
>> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase: preRegister called.
>> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
>> name=log4j:logger=org.apache.hadoop.hbase
>> 2011-01-10 11:35:01,759 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>> 2011-01-10 11:35:01,807 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
>> 3888
>> 2011-01-10 11:35:01,833 INFO org.apache.zookeeper.server.quorum.QuorumPeer:
>> LOOKING
>> 2011-01-10 11:35:01,837 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: New election:
>> 12884901931
>> 2011-01-10 11:35:01,842 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0,
>> 12884901931, 1, 0, LOOKING, LOOKING, 0
>> 2011-01-10 11:35:01,842 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Adding vote
>> 2011-01-10 11:35:01,844 WARN
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel to
>> 1 at election address /192.168.1.103:3888
>> java.net.ConnectException: Connection refused
>>         at sun.nio.ch.Net.connect(Native Method)
>>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
>>         at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
>>         at
>> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:323)
>>         at
>> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:302)
>>         at
>> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:323)
>>         at
>> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:296)
>>         at java.lang.Thread.run(Thread.java:619)
>> 2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
>>
>>
>> If you require any other information, Please let me know.
>>
>>
>> Best regards
>>
>> Adarsh
>>
>>     


Re: Hive/Hbase Integration Error

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Sorry if that wasn't obvious, but you need to run hive using this command:

bin/hive -hiveconf hive.root.logger=INFO,console

AND in the same shell create the table in order to have more
information. What we're trying to get looks like this:
http://pastebin.com/gvTXDxtE

Specifically, this line is very important (it should have different
values in your case):

INFO zookeeper.ClientCnxn: Priming connection to
java.nio.channels.SocketChannel[connected local=/10.10.20.42:53187
remote=sv4borg13/10.10.21.13:2181]

Regarding your second log paste, it could be harmless if the process
on that other machine just took more time to boot, also it happened 20
minutes before your test. Do verify that hbase works before trying to
create a table.

J-D

On Sun, Jan 9, 2011 at 10:37 PM, Adarsh Sharma <ad...@orkash.com> wrote:
> Jean-Daniel Cryans wrote:
>
> You also need to create the table in order to see the relevant debug
> information, it won't create it until it needs it.
>
> Sir
> Check the output :
>
> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
> FAILED: Error in metadata:
> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>         at
> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>         at
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>         at
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>         at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>         at
> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>         at
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>         at
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> )
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
>
> And the head of Hive.log says :
>
> 2011-01-10 11:57:59,467 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-10 11:58:38,210 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@6070c38c
> java.io.IOException: TIMED OUT
>         at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
> 2011-01-10 11:58:38,215 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>         at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>         at
> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>         at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>         at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-10 11:58:39,019 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@34b6a6d6
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>         at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>
>
> And I am researching on Zookeeper errors in Hbase logs that says :
>
> Mon Jan 10 11:35:01 IST 2011 Starting zookeeper on s1-ser
> ulimit -n 1024
> 2011-01-10 11:35:01,703 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
> quorums
> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase.zookeeper.HQuorumPeer:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.HQuorumPeer
> 2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase: preRegister called.
> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4,
> name=log4j:logger=org.apache.hadoop.hbase
> 2011-01-10 11:35:01,759 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
> 2011-01-10 11:35:01,807 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
> 3888
> 2011-01-10 11:35:01,833 INFO org.apache.zookeeper.server.quorum.QuorumPeer:
> LOOKING
> 2011-01-10 11:35:01,837 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: New election:
> 12884901931
> 2011-01-10 11:35:01,842 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0,
> 12884901931, 1, 0, LOOKING, LOOKING, 0
> 2011-01-10 11:35:01,842 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Adding vote
> 2011-01-10 11:35:01,844 WARN
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel to
> 1 at election address /192.168.1.103:3888
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.Net.connect(Native Method)
>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
>         at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
>         at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:323)
>         at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:302)
>         at
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:323)
>         at
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:296)
>         at java.lang.Thread.run(Thread.java:619)
> 2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
>
>
> If you require any other information, Please let me know.
>
>
> Best regards
>
> Adarsh
>

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Jean-Daniel Cryans wrote:
>
> You also need to create the table in order to see the relevant debug 
> information, it won't create it until it needs it.
>
Sir
Check the output :

hive> CREATE TABLE hive_hbasetable_k(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
FAILED: Error in metadata: 
MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
        at 
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
        at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
        at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
        at 
org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
        at 
org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
        at 
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
        at 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask


And the head of Hive.log says :

2011-01-10 11:57:59,467 ERROR DataNucleus.Plugin 
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
"org.eclipse.core.resources" but it cannot be resolved.
2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin 
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
"org.eclipse.core.runtime" but it cannot be resolved.
2011-01-10 11:57:59,470 ERROR DataNucleus.Plugin 
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
"org.eclipse.core.runtime" but it cannot be resolved.
2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin 
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
"org.eclipse.text" but it cannot be resolved.
2011-01-10 11:57:59,471 ERROR DataNucleus.Plugin 
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires 
"org.eclipse.text" but it cannot be resolved.
2011-01-10 11:58:38,210 WARN  zookeeper.ClientCnxn 
(ClientCnxn.java:run(967)) - Exception closing session 0x0 to 
sun.nio.ch.SelectionKeyImpl@6070c38c
java.io.IOException: TIMED OUT
        at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
2011-01-10 11:58:38,215 WARN  zookeeper.ClientCnxn 
(ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at 
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at 
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2011-01-10 11:58:39,019 WARN  zookeeper.ClientCnxn 
(ClientCnxn.java:run(967)) - Exception closing session 0x0 to 
sun.nio.ch.SelectionKeyImpl@34b6a6d6
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
                                                                                                           

And I am researching on Zookeeper errors in Hbase logs that says :

Mon Jan 10 11:35:01 IST 2011 Starting zookeeper on s1-ser
ulimit -n 1024
2011-01-10 11:35:01,703 INFO 
org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to 
majority quorums
2011-01-10 11:35:01,759 DEBUG 
org.apache.hadoop.hbase.zookeeper.HQuorumPeer: preRegister called. 
Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4, 
name=log4j:logger=org.apache.hadoop.hbase.zookeeper.HQuorumPeer
2011-01-10 11:35:01,759 DEBUG org.apache.hadoop.hbase: preRegister 
called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2d342ba4, 
name=log4j:logger=org.apache.hadoop.hbase
2011-01-10 11:35:01,759 INFO 
org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
2011-01-10 11:35:01,807 INFO 
org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind 
port: 3888
2011-01-10 11:35:01,833 INFO 
org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
2011-01-10 11:35:01,837 INFO 
org.apache.zookeeper.server.quorum.FastLeaderElection: New election: 
12884901931
2011-01-10 11:35:01,842 INFO 
org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0, 
12884901931, 1, 0, LOOKING, LOOKING, 0
2011-01-10 11:35:01,842 INFO 
org.apache.zookeeper.server.quorum.FastLeaderElection: Adding vote
2011-01-10 11:35:01,844 WARN 
org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel 
to 1 at election address /192.168.1.103:3888
java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect(Native Method)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
        at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:323)
        at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:302)
        at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:323)
        at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:296)
        at java.lang.Thread.run(Thread.java:619)
2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
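
The "Cannot open channel to 1 at election address /192.168.1.103:3888" warning in the log above concerns the peer election port, not the client port. With an HBase-managed quorum, each member must be reachable on three ports; a sketch of the defaults (illustrative values, not taken from the poster's configuration):

```xml
<!-- hbase-site.xml: the three ZooKeeper ports (defaults shown) -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>   <!-- client connections (Hive, HBase clients) -->
</property>
<property>
  <name>hbase.zookeeper.peerport</name>
  <value>2888</value>   <!-- quorum peer-to-peer traffic -->
</property>
<property>
  <name>hbase.zookeeper.leaderport</name>
  <value>3888</value>   <!-- leader election: the port in the warning above -->
</property>
```

A refused connection on 3888 typically means the peer process on that host is not running (or not yet started), or a firewall is blocking the port; until a majority of peers can elect a leader, clients see the timeouts and refusals shown in hive.log.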


If you require any other information, Please let me know.


Best regards

Adarsh


> J-D
>
> On Jan 9, 2011 9:30 PM, "Adarsh Sharma" <adarsh.sharma@orkash.com> wrote:
> > Jean-Daniel Cryans wrote:
> >> Just figured that running the shell with this command will give all
> >> the info you need:
> >>
> >> bin/hive -hiveconf hive.root.logger=INFO,console
> >>
> >
> >
> > Thanks JD, below is the output of this command :
> >
> > hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf
> > hive.root.logger=INFO,console
> > Hive history
> > file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> > 11/01/10 10:24:47 INFO exec.HiveHistory: Hive history
> > file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> > hive> show tables;
> > 11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
> > 11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
> > 11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
> > 11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema:
> > Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string,
> > comment:from deserializer)], properties:null)
> > 11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
> > 11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store
> > with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> > 11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, initialize called
> > *11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> > "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it
> > cannot be resolved.
> > 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> > "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot
> > be resolved.
> > 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> > "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be
> > resolved.*
> > 11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
> > 11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables:
> > db=default pat=.*
> > OK
> > 11/01/10 10:25:15 INFO ql.Driver: OK
> > Time taken: 7.897 seconds
> > 11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
> > hive> exit;
> >
> > It seems that Hive is working but I am facing issues while integrating
> > with Hbase.
> >
> >
> > Best Regards
> >
> > Adarsh Sharma
> >
> >
> >> J-D
> >>
> >> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
> >>
> >>> While testing other things yesterday on my local machine, I
> >>> encountered the same stack traces. Like I said the other day, which
> >>> you seem to have discarded while debugging your issue, is that it's
> >>> not able to connect to Zookeeper.
> >>>
> >>> Following the cue, I added these lines in HBaseStorageHandler.setConf():
> >>>
> >>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
> >>> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
> >>>
> >>> It showed me this when trying to create a table (after recompiling):
> >>>
> >>> localhost
> >>> 21810
> >>>
> >>> I was testing with 0.89 and the test jar includes a hbase-site.xml
> >>> which has the port 21810 instead of the default 2181. I remembered
> >>> that it's a known issue that has since been fixed for 0.90.0, so
> >>> removing that jar fixed it for me.
> >>>
> >>> I'm not saying that in your case it's the same fix, but at least by
> >>> debugging those configurations you'll know where it's trying to
> >>> connect and then you'll be able to get to the bottom of your issue.
> >>>
> >>> J-D
> >>>
> >>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <adarsh.sharma@orkash.com> wrote:
> >>>
> >>>> John Sichi wrote:
> >>>>
> >>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
> >>>>
> >>>>
> >>>> I want to know why it occurs in hive.log
> >>>>
> >>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> >>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> >>>> "org.eclipse.core.resources" but it cannot be resolved.
> >>>>
> >>>>
> >>>>
> >>>> That is a bogus error; it always shows up, so you can ignore it.
> >>>>
> >>>>
> >>>>
> >>>> And I used this new Hive build, but I am sorry, the error remains the
> >>>> same.
> >>>>
> >>>>
> >>>> Then I don't know...probably still some remaining configuration error.
> >>>> This guy seems to have gotten it working:
> >>>>
> >>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
> >>>>
> >>>>
> >>>> Thanks a lot John, I know this link, as I started working by
> >>>> following it in the past.
> >>>>
> >>>> But I think I have to research the exception/warning below to solve
> >>>> this issue.
> >>>>
> >>>> 2011-01-05 15:20:12,185 WARN zookeeper.ClientCnxn
> >>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> >>>> sun.nio.ch.SelectionKeyImpl@561279c8
> >>>> java.net.ConnectException: Connection refused
> >>>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >>>>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
> >>>>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
> >>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
> >>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
> >>>> java.nio.channels.ClosedChannelException
> >>>>         at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
> >>>>         at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
> >>>>         at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
> >>>>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> >>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
> >>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
> >>>> java.nio.channels.ClosedChannelException
> >>>>         at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
> >>>>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
> >>>>         at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
> >>>>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> >>>> 2011-01-05 15:20:12,621 WARN zookeeper.ClientCnxn
> >>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> >>>> sun.nio.ch.SelectionKeyImpl@799dbc3b
> >>>>
> >>>> Please help me, as i am not able to solve this problem.
> >>>>
> >>>> Also, I want to add one more thing: my Hadoop cluster has 9 nodes, and
> >>>> 8 nodes act as Datanodes, Tasktrackers, and Regionservers.
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Best Regards
> >>>>
> >>>> Adarsh Sharma
> >>>>
> >>>> JVS
> >>>>
> >>>>
> >>>>
> >>>>
> >


Re: Hive/Hbase Integration Error

Posted by Jean-Daniel Cryans <jd...@apache.org>.
You also need to create the table in order to see the relevant debug
information, it won't create it until it needs it.

J-D
On Jan 9, 2011 9:30 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:
> Jean-Daniel Cryans wrote:
>> Just figured that running the shell with this command will give all
>> the info you need:
>>
>> bin/hive -hiveconf hive.root.logger=INFO,console
>>
>
>
> Thanks JD, below is the output of this command :
>
> hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf
> hive.root.logger=INFO,console
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> 11/01/10 10:24:47 INFO exec.HiveHistory: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> hive> show tables;
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
> 11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
> 11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema:
> Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string,
> comment:from deserializer)], properties:null)
> 11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
> 11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store
> with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, initialize called
> *11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it
> cannot be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot
> be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be
> resolved.*
> 11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
> 11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables:
> db=default pat=.*
> OK
> 11/01/10 10:25:15 INFO ql.Driver: OK
> Time taken: 7.897 seconds
> 11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
> hive> exit;
>
> It seems that Hive is working but I am facing issues while integrating
> with Hbase.
>
>
> Best Regards
>
> Adarsh Sharma
>
>
>> J-D
>>
>> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jd...@apache.org>
wrote:
>>
>>> While testing other things yesterday on my local machine, I
>>> encountered the same stack traces. Like I said the other day, which
>>> you seem to have discarded while debugging your issue, is that it's
>>> not able to connect to Zookeeper.
>>>
>>> Following the cue, I added these lines in HBaseStorageHandler.setConf():
>>>
>>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
>>>
System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>>>
>>> It showed me this when trying to create a table (after recompiling):
>>>
>>> localhost
>>> 21810
>>>
>>> I was testing with 0.89 and the test jar includes a hbase-site.xml
>>> which has the port 21810 instead of the default 2181. I remembered
>>> that it's a known issue that has since been fixed for 0.90.0, so
>>> removing that jar fixed it for me.
>>>
>>> I'm not saying that in your case it's the same fix, but at least by
>>> debugging those configurations you'll know where it's trying to
>>> connect and then you'll be able to get to the bottom of your issue.
>>>
>>> J-D
>>>
>>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <ad...@orkash.com>
wrote:
>>>
>>>> John Sichi wrote:
>>>>
>>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>>>>
>>>>
>>>> I want to know why it occurs in hive.log
>>>>
>>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>>>> "org.eclipse.core.resources" but it cannot be resolved.
>>>>
>>>>
>>>>
>>>> That is a bogus error; it always shows up, so you can ignore it.
>>>>
>>>>
>>>>
>>>> And use this new Hive build but I am sorry but the error remains the
same.
>>>>
>>>>
>>>> Then I don't know...probably still some remaining configuration error.
This
>>>> guy seems to have gotten it working:
>>>>
>>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>>>>
>>>>
>>>> Thanks a lot John , I know this link as i have start working by
following
>>>> this link in the past.
>>>>
>>>> But I think I have to research on below exception or warning to solve
this
>>>> issue.
>>>>
>>>> 2011-01-05 15:20:12,185 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.SelectionKeyImpl@561279c8
>>>> java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown
input
>>>> java.nio.channels.ClosedChannelException
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>> at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown
output
>>>> java.nio.channels.ClosedChannelException
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>> at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>> at
>>>>
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,621 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.SelectionKeyImpl@799dbc3b
>>>>
>>>> Please help me, as i am not able to solve this problem.
>>>>
>>>> Also I want to add one more thing that my hadoop Cluster is of 9 nodes
and
>>>> 8 nodes act as Datanodes,Tasktrackers and Regionservers.
>>>>
>>>>
>>>>
>>>>
>>>> Best Regards
>>>>
>>>> Adarsh Sharma
>>>>
>>>> JVS
>>>>
>>>>
>>>>
>>>>
>

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Jean-Daniel Cryans wrote:
> Just figured that running the shell with this command will give all
> the info you need:
>
> bin/hive -hiveconf hive.root.logger=INFO,console
>   


Thanks JD, below is the output of this command:

hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf 
hive.root.logger=INFO,console
Hive history 
file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
11/01/10 10:24:47 INFO exec.HiveHistory: Hive history 
file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
hive> show tables;
11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema: 
Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, 
comment:from deserializer)], properties:null)
11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store 
with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, initialize called
*11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle 
"org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle 
"org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot 
be resolved.
11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle 
"org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.*
11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables: 
db=default pat=.*
OK
11/01/10 10:25:15 INFO ql.Driver: OK
Time taken: 7.897 seconds
11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
hive> exit;

It seems that Hive itself is working, but I am facing issues when 
integrating it with HBase.


Best Regards

Adarsh Sharma


> J-D
>
> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jd...@apache.org> wrote:
>   
>> While testing other things yesterday on my local machine, I
>> encountered the same stack traces. Like I said the other day, which
>> you seem to have discarded while debugging your issue, is that it's
>> not able to connect to Zookeeper.
>>
>> Following the cue, I added these lines in HBaseStorageHandler.setConf():
>>
>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
>> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>>
>> It showed me this when trying to create a table (after recompiling):
>>
>> localhost
>> 21810
>>
>> I was testing with 0.89 and the test jar includes a hbase-site.xml
>> which has the port 21810 instead of the default 2181. I remembered
>> that it's a known issue that has since been fixed for 0.90.0, so
>> removing that jar fixed it for me.
>>
>> I'm not saying that in your case it's the same fix, but at least by
>> debugging those configurations you'll know where it's trying to
>> connect and then you'll be able to get to the bottom of your issue.
>>
>> J-D
>>
>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <ad...@orkash.com> wrote:
>>     
>>> John Sichi wrote:
>>>
>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>>>
>>>
>>> I want to know why it occurs in hive.log
>>>
>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>>> "org.eclipse.core.resources" but it cannot be resolved.
>>>
>>>
>>>
>>> That is a bogus error; it always shows up, so you can ignore it.
>>>
>>>
>>>
>>> And use this new Hive build but I am sorry but the error remains the same.
>>>
>>>
>>> Then I don't know...probably still some remaining configuration error.  This
>>> guy seems to have gotten it working:
>>>
>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>>>
>>>
>>> Thanks a lot John , I know this link as i have start working by following
>>> this link in the past.
>>>
>>> But I think I have to research on below exception or warning to solve this
>>> issue.
>>>
>>>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>> sun.nio.ch.SelectionKeyImpl@561279c8
>>>  java.net.ConnectException: Connection refused
>>>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>        at
>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>>  java.nio.channels.ClosedChannelException
>>>        at
>>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>>  java.nio.channels.ClosedChannelException
>>>        at
>>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn
>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>> sun.nio.ch.SelectionKeyImpl@799dbc3b
>>>
>>>   Please help me, as i am not able to solve this problem.
>>>
>>>  Also I want to add one more thing that my hadoop Cluster is of 9 nodes and
>>> 8 nodes act as Datanodes,Tasktrackers and Regionservers.
>>>
>>>
>>>
>>>
>>>  Best Regards
>>>
>>>  Adarsh Sharma
>>>
>>> JVS
>>>
>>>
>>>
>>>       


Re: Hive/Hbase Integration Error

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Just figured that running the shell with this command will give all
the info you need:

bin/hive -hiveconf hive.root.logger=INFO,console

J-D

On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> While testing other things yesterday on my local machine, I
> encountered the same stack traces. Like I said the other day, which
> you seem to have discarded while debugging your issue, is that it's
> not able to connect to Zookeeper.
>
> Following the cue, I added these lines in HBaseStorageHandler.setConf():
>
> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>
> It showed me this when trying to create a table (after recompiling):
>
> localhost
> 21810
>
> I was testing with 0.89 and the test jar includes a hbase-site.xml
> which has the port 21810 instead of the default 2181. I remembered
> that it's a known issue that has since been fixed for 0.90.0, so
> removing that jar fixed it for me.
>
> I'm not saying that in your case it's the same fix, but at least by
> debugging those configurations you'll know where it's trying to
> connect and then you'll be able to get to the bottom of your issue.
>
> J-D
>
> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <ad...@orkash.com> wrote:
>> John Sichi wrote:
>>
>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>>
>>
>> I want to know why it occurs in hive.log
>>
>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>> "org.eclipse.core.resources" but it cannot be resolved.
>>
>>
>>
>> That is a bogus error; it always shows up, so you can ignore it.
>>
>>
>>
>> And use this new Hive build but I am sorry but the error remains the same.
>>
>>
>> Then I don't know...probably still some remaining configuration error.  This
>> guy seems to have gotten it working:
>>
>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>>
>>
>> Thanks a lot John , I know this link as i have start working by following
>> this link in the past.
>>
>> But I think I have to research on below exception or warning to solve this
>> issue.
>>
>>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@561279c8
>>  java.net.ConnectException: Connection refused
>>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>        at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>        at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>  java.nio.channels.ClosedChannelException
>>        at
>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>        at
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>        at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>  java.nio.channels.ClosedChannelException
>>        at
>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>        at
>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>        at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn
>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>> sun.nio.ch.SelectionKeyImpl@799dbc3b
>>
>>   Please help me, as i am not able to solve this problem.
>>
>>  Also I want to add one more thing that my hadoop Cluster is of 9 nodes and
>> 8 nodes act as Datanodes,Tasktrackers and Regionservers.
>>
>>
>>
>>
>>  Best Regards
>>
>>  Adarsh Sharma
>>
>> JVS
>>
>>
>>
>

Re: Hive/Hbase Integration Error

Posted by Jean-Daniel Cryans <jd...@apache.org>.
While testing other things yesterday on my local machine, I
encountered the same stack traces. Like I said the other day, which
you seem to have discarded while debugging your issue, is that it's
not able to connect to Zookeeper.

Following the cue, I added these lines in HBaseStorageHandler.setConf():

System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));

It showed me this when trying to create a table (after recompiling):

localhost
21810

I was testing with 0.89 and the test jar includes a hbase-site.xml
which has the port 21810 instead of the default 2181. I remembered
that it's a known issue that has since been fixed for 0.90.0, so
removing that jar fixed it for me.

I'm not saying that in your case it's the same fix, but at least by
debugging those configurations you'll know where it's trying to
connect and then you'll be able to get to the bottom of your issue.

J-D
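The two println calls above amount to dumping the client configuration the handler actually resolved. The same check can be done outside the JVM by parsing the client-side hbase-site.xml directly; a minimal sketch (the sample XML below is a stand-in for a real config file, and the property names are the standard HBase ones):

```python
# Sketch: extract a property value from a Hadoop/HBase-style XML config,
# replicating the two debug prints without recompiling Hive.
import xml.etree.ElementTree as ET

def hbase_conf_value(xml_text, name, default=None):
    """Return the <value> for a named <property>, or `default` if absent."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return default

# Stand-in for the hbase-site.xml that caused the bad port in this thread:
sample = """<configuration>
  <property><name>hbase.zookeeper.quorum</name><value>localhost</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name><value>21810</value></property>
</configuration>"""

print(hbase_conf_value(sample, "hbase.zookeeper.quorum"))                       # localhost
print(hbase_conf_value(sample, "hbase.zookeeper.property.clientPort", "2181"))  # 21810
```

In practice you would read the file from the conf directory (or jar) that is actually on Hive's classpath, since that is the copy the storage handler sees.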

On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <ad...@orkash.com> wrote:
> John Sichi wrote:
>
> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>
>
> I want to know why it occurs in hive.log
>
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
>
>
>
> That is a bogus error; it always shows up, so you can ignore it.
>
>
>
> And use this new Hive build but I am sorry but the error remains the same.
>
>
> Then I don't know...probably still some remaining configuration error.  This
> guy seems to have gotten it working:
>
> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>
>
> Thanks a lot John , I know this link as i have start working by following
> this link in the past.
>
> But I think I have to research on below exception or warning to solve this
> issue.
>
>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@561279c8
>  java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>  java.nio.channels.ClosedChannelException
>        at
> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>  java.nio.channels.ClosedChannelException
>        at
> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@799dbc3b
>
>   Please help me, as i am not able to solve this problem.
>
>  Also I want to add one more thing that my hadoop Cluster is of 9 nodes and
> 8 nodes act as Datanodes,Tasktrackers and Regionservers.
>
>
>
>
>  Best Regards
>
>  Adarsh Sharma
>
> JVS
>
>
>

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
John Sichi wrote:
> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>   
>> I want to know why it occurs in hive.log
>>
>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>
>>     
>
> That is a bogus error; it always shows up, so you can ignore it.
>
>   
>> And use this new Hive build but I am sorry but the error remains the same.
>>     
>
> Then I don't know...probably still some remaining configuration error.  This guy seems to have gotten it working:
>
> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>   

Thanks a lot John, I know this link, as I started out by following it 
in the past.

But I think I have to dig into the exception/warning below to solve 
this issue.

 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
 java.net.ConnectException: Connection refused
       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
       at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
       at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
 
  Please help me, as I am not able to solve this problem.
 
 Also, I want to add one more thing: my Hadoop cluster has 9 nodes, and 8 of them act as Datanodes, Tasktrackers, and Regionservers.
 
 
 
 
 Best Regards

 Adarsh Sharma
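The ZooKeeper "Connection refused" warnings above mean nothing is listening at the address and port the client resolved. A quick probe, independent of Hive and HBase, is ZooKeeper's four-letter "ruok" command, which a healthy server answers with "imok"; a sketch (the host and port are placeholders for your actual quorum settings):

```python
import socket

def zk_four_letter(host, port, cmd=b"ruok", timeout=5.0):
    """Send a ZooKeeper four-letter command and return the raw reply.

    A healthy server answers b"ruok" with b"imok"; a connection error
    here means the client-side quorum/port settings point at nothing.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd)
        s.shutdown(socket.SHUT_WR)   # signal end of request
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Example (placeholder host/port -- use the values from hbase-site.xml):
# print(zk_four_letter("zk1.example.com", 2181))
```

Running this against each host in hbase.zookeeper.quorum quickly shows whether the failure is a wrong address/port in the config or a ZooKeeper ensemble that is actually down.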


> JVS
>
>   


Re: Hive/Hbase Integration Error

Posted by John Sichi <js...@fb.com>.
On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
> I want to know why it occurs in hive.log
> 
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
> 

That is a bogus error; it always shows up, so you can ignore it.

> 
> And use this new Hive build but I am sorry but the error remains the same.

Then I don't know...probably still some remaining configuration error.  This guy seems to have gotten it working:

http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/

JVS


Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Viral Bajaria wrote:
> Hi Adarsh,
>  
> Can you provide some more details as follows. I have had issues before 
> on communicating between my HMaster etc. but the initial issue was 
> ACL(s) and my zookeeper settings were a little messed up too (I can't 
> recollect what was wrong with zookeeper). I would suggest that you get 
> things up and running with default zookeeper settings before playing 
> around with them. You could also run a custom java program which 
> connects to hbase to make sure the issue is with hbase settings and 
> has nothing to do with Hive settings.
>  
Thanks Viral; below are my specifications:

You are absolutely right.

> 1) Are you running Namenode, HMaster, Hive on the same machine ?


*I am running the Namenode, HMaster, and Hive on the same machine.

We have a cluster of 10 servers:-

1 server acts as ( Namenode, JobTracker, HMaster, and the Hive node 
containing the Hive-0.6.0 package ). Would this affect anything?

8 servers act as ( Datanode, TaskTracker, HRegionServer ).

Out of these 8 servers I set the IPs of three ( sometimes five ) as 
Zookeeper nodes. Do I need hbase.zookeeper.quorum set to separate 
servers, or is this configuration right?*
> 2) Are you able to access the HMaster through the Web UI ? I think the 
> default port is 60010 (or you can check in your hbase-site.xml)
>  

*I am able to connect to HBase through the Web UI and can also create 
tables in it. But the error occurs when Hive/HBase integration comes into play.*
> Thanks,
> Viral
>  
> On Thu, Jan 6, 2011 at 9:53 PM, Adarsh Sharma 
> <adarsh.sharma@orkash.com <ma...@orkash.com>> wrote:
>
>     John Sichi wrote:
>>     Here is what you need to do:
>>
>>     1) Use svn to check out the source for Hive 0.6
>>       
>     I download Hive-0.6.0 source code with the command
>
>      svn co http://svn.apache.org/repos/asf/hive/branches/branch-0.6/
>     hive-0.6.0
>
>
>
>>     2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6
>>       
>     Replace hbase-0.20.3.jar,hbase-0.20.3.test.jar by hbase-0.20.6.jar
>     and hbase-0.20.6.test jars in Hive-0.6.0/hbase-handler/lib folder
>
>>     3) Build Hive 0.6 from source
>>       
>     Then Build the hive package by *ant -Dhadoop.version=0.20.0
>     package *command
>     Am I doing something wrong.
>
>     I want to know why it occurs in hive.log
>
>     2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>
>
>     With Best Regards
>
>     Adarsh Sharma
>         
>
>
>>     4) Use your new Hive build
>>       
>
>     And use this new Hive build but I am sorry but the error remains
>     the same.
>
>
>>     JVS
>>
>>     On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:
>>
>>       
>>>     Dear all,
>>>
>>>     I am sorry I am posting this message again but I can't able to locate the root cause after googled a lot.
>>>
>>>     I am trying Hive/Hbase Integration from the past 2 days. I am facing the below issue while creating external table in Hive.
>>>
>>>     I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 ( Mysql as metstore ) and java-1.6.0_20. Hbase-0.20.3 is also checked.
>>>
>>>     Problem arises when I issue the below command :
>>>
>>>     hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>>>         > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>>         > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>>>         > TBLPROPERTIES ("hbase.table.name <http://hbase.table.name/>" = "hivehbasek");
>>>
>>>
>>>     FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>>>             at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>>>             at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>>>             at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>>>             at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>>>             at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>>>             at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>>>             at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>>>             at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>>>             at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>>>             at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>>>             at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>>>             at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>>>             at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>>>             at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>>             at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>>             at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>>             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>             at java.lang.reflect.Method.invoke(Method.java:597)
>>>             at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>     FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>>>
>>>
>>>     It seems my HMaster is not Running but I checked from IP:60010 that it is running and I am able to create,insert tables in Hbase Properly.
>>>
>>>     Below is the contents of my hive.log :
>>>
>>>       2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>>      2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>>      2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>>      2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>>      2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>>      2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>>      2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
>>>      java.net.ConnectException: Connection refused
>>>            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>>      2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>>      java.nio.channels.ClosedChannelException
>>>            at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>            at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>            at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>>            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>      2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>>      java.nio.channels.ClosedChannelException
>>>            at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>            at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>            at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>>            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>      2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
>>>      
>>>       Please help me, as i am not able to solve this problem.
>>>      
>>>      Also I want to add one more thing that my hadoop Cluster is of 9 nodes and 8 nodes act as Datanodes,Tasktrackers and Regionservers.
>>>      
>>>      Among these nodes is set zookeeper.quorum.property to have 5 Datanodes. I don't know the number of servers needed for Zookeeper in fully distributed mode.
>>>      
>>>      
>>>      Best Regards
>>>
>>>      Adarsh Sharma
>>>
>>>
>>>
>>>         
>>       
>
>


Re: Hive/Hbase Integration Error

Posted by Viral Bajaria <vi...@gmail.com>.
Hi Adarsh,

Can you provide some more details, as follows? I have had issues before
communicating with my HMaster etc., but the initial issue was ACL(s), and my
zookeeper settings were a little messed up too (I can't recollect what
was wrong with zookeeper). I would suggest that you get things up and
running with default zookeeper settings before playing around with them. You
could also run a custom Java program that connects to HBase, to make sure
the issue is with the HBase settings and has nothing to do with the Hive settings.
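
For reference, the ZooKeeper quorum that HBase clients (including Hive's HBase storage handler) read is normally configured in hbase-site.xml. A hypothetical fragment, with placeholder host names rather than your actual cluster's:

```xml
<!-- hbase-site.xml (client side): placeholder hosts, adjust to your cluster -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <!-- comma-separated quorum members; an odd count (3 or 5) is the usual advice -->
  <value>zk-host1,zk-host2,zk-host3</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```

The Hive CLI has to see the same quorum settings that your working HBase shell sees, otherwise it falls back to defaults (localhost) and gets "Connection refused".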

1) Are you running the Namenode, HMaster, and Hive on the same machine?
2) Are you able to access the HMaster through the Web UI? I think the
default port is 60010 (or you can check it in your hbase-site.xml).
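
Before writing a full HBase client program, a plain-JDK socket probe can tell you whether the HMaster and ZooKeeper ports are reachable at all from the machine running Hive. A minimal sketch — "master-host", "zk-host" and the ports are placeholders, substitute the values from your own hbase-site.xml:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean canConnect(String host, int port, int timeoutMs) {
        Socket s = new Socket();
        try {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Covers "Connection refused", unknown host, and timeouts alike.
            return false;
        } finally {
            try { s.close(); } catch (IOException ignored) { }
        }
    }

    public static void main(String[] args) {
        // Placeholder endpoints -- substitute your own hosts:
        // HMaster RPC (60000), HMaster web UI (60010), a ZooKeeper peer (2181).
        System.out.println("hmaster rpc: " + canConnect("master-host", 60000, 2000));
        System.out.println("hmaster ui:  " + canConnect("master-host", 60010, 2000));
        System.out.println("zookeeper:   " + canConnect("zk-host", 2181, 2000));
    }
}
```

If the ZooKeeper probe fails the same way your hive.log does ("Connection refused"), the problem is connectivity or quorum configuration rather than Hive itself.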

Thanks,
Viral

On Thu, Jan 6, 2011 at 9:53 PM, Adarsh Sharma <ad...@orkash.com> wrote:

>  John Sichi wrote:
>
> Here is what you need to do:
>
> 1) Use svn to check out the source for Hive 0.6
>
>
> I downloaded the Hive 0.6.0 source code with the command:
>
>  svn co http://svn.apache.org/repos/asf/hive/branches/branch-0.6/ hive-0.6.0
>
>
>
> 2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6
>
>
> I replaced hbase-0.20.3.jar and hbase-0.20.3.test.jar with the hbase-0.20.6.jar and
> hbase-0.20.6.test jars in the Hive-0.6.0/hbase-handler/lib folder.
>
> 3) Build Hive 0.6 from source
>
>
> Then I built the Hive package with the *ant -Dhadoop.version=0.20.0 package*
> command.
> Am I doing something wrong?
>
> I want to know why this occurs in hive.log:
>
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>
>
> With Best Regards
>
> Adarsh Sharma
>
>
> 4) Use your new Hive build
>
>
>
> I am using this new Hive build, but I am sorry to say the error remains the same.
>
>
> JVS
>
> On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:
>
>
>
> Dear all,
>
> I am sorry to post this message again, but I have not been able to locate the root cause even after a lot of googling.
>
> I have been trying the Hive/HBase integration for the past 2 days. I am facing the issue below while creating an external table in Hive.
>
> I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as metastore) and java-1.6.0_20. I have also tried Hbase-0.20.3.
>
> The problem arises when I issue the command below:
>
> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
>
>
> FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>         at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
>
> It seems my HMaster is not running, but I checked from IP:60010 that it is, and I am able to create and insert into tables in HBase properly.
>
> Below are the contents of my hive.log:
>
>   2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>  2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
>  java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>  java.nio.channels.ClosedChannelException
>        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>  java.nio.channels.ClosedChannelException
>        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
>
>   Please help me, as I am not able to solve this problem.
>
>  I also want to add that my Hadoop cluster has 9 nodes, and 8 of them act as Datanodes, Tasktrackers and Regionservers.
>
>  Among these nodes, the zookeeper quorum property is set to include 5 Datanodes. I don't know how many servers ZooKeeper needs in fully distributed mode.
>
>
>  Best Regards
>
>  Adarsh Sharma
>
>
>
>
>
>
>

Re: Hive/Hbase Integration Error

Posted by Adarsh Sharma <ad...@orkash.com>.
John Sichi wrote:
> Here is what you need to do:
>
> 1) Use svn to check out the source for Hive 0.6
>   
I downloaded the Hive 0.6.0 source code with the command:

 svn co http://svn.apache.org/repos/asf/hive/branches/branch-0.6/ hive-0.6.0


> 2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6
>   
I replaced hbase-0.20.3.jar and hbase-0.20.3.test.jar with the hbase-0.20.6.jar and 
hbase-0.20.6.test jars in the Hive-0.6.0/hbase-handler/lib folder.
> 3) Build Hive 0.6 from source
>   
Then I built the Hive package with the *ant -Dhadoop.version=0.20.0 package* command.
Am I doing something wrong?

I want to know why this occurs in hive.log:

2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.


With Best Regards

Adarsh Sharma


> 4) Use your new Hive build
>   

I am using this new Hive build, but I am sorry to say the error remains the same.

> JVS
>
> On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:
>
>   
>> Dear all,
>>
>> I am sorry to post this message again, but I have not been able to locate the root cause even after a lot of googling.
>>
>> I have been trying the Hive/HBase integration for the past 2 days. I am facing the issue below while creating an external table in Hive.
>>
>> I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as metastore) and java-1.6.0_20. I have also tried Hbase-0.20.3.
>>
>> The problem arises when I issue the command below:
>>
>> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
>>
>>
>> FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>>         at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>>         at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>>
>>
>> It seems my HMaster is not running, but I checked from IP:60010 that it is, and I am able to create and insert into tables in HBase properly.
>>
>> Below are the contents of my hive.log:
>>
>>   2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>  2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
>>  java.net.ConnectException: Connection refused
>>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>  java.nio.channels.ClosedChannelException
>>        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>  java.nio.channels.ClosedChannelException
>>        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
>>  
>>   Please help me, as I am not able to solve this problem.
>>
>>  I also want to add that my Hadoop cluster has 9 nodes, and 8 of them act as Datanodes, Tasktrackers and Regionservers.
>>
>>  Among these nodes, the zookeeper quorum property is set to include 5 Datanodes. I don't know how many servers ZooKeeper needs in fully distributed mode.
>>  
>>  
>>  Best Regards
>>
>>  Adarsh Sharma
>>
>>
>>
>>     
>
>   


Re: Hive/Hbase Integration Error

Posted by John Sichi <js...@fb.com>.
Here is what you need to do:

1) Use svn to check out the source for Hive 0.6

2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6

3) Build Hive 0.6 from source

4) Use your new Hive build

JVS

On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:

> Dear all,
> 
> I am sorry to post this message again, but I have not been able to locate the root cause even after a lot of googling.
> 
> I have been trying the Hive/HBase integration for the past 2 days. I am facing the issue below while creating an external table in Hive.
> 
> I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as metastore) and java-1.6.0_20. I have also tried Hbase-0.20.3.
> 
> The problem arises when I issue the command below:
> 
> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
> 
> 
> FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>         at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
> 
> 
> It seems my HMaster is not running, but I checked from IP:60010 that it is, and I am able to create and insert into tables in HBase properly.
> 
> Below are the contents of my hive.log:
> 
>   2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>  2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>  2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>  2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>  2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
>  java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>  java.nio.channels.ClosedChannelException
>        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>  java.nio.channels.ClosedChannelException
>        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>  2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
>  
>   Please help me, as I am not able to solve this problem.
>  
>  I also want to add that my Hadoop cluster has 9 nodes, and 8 of them act as Datanodes, Tasktrackers and Regionservers.
>  
>  Among these nodes, the zookeeper quorum property is set to include 5 Datanodes. I don't know how many servers ZooKeeper needs in fully distributed mode.
>  
>  
>  Best Regards
> 
>  Adarsh Sharma
> 
> 
> 

