Posted to user@hive.apache.org by Adarsh Sharma <ad...@orkash.com> on 2011/01/05 11:14:56 UTC

Error in metadata: javax.jdo.JDOFatalDataStoreException




Dear all,

I have been trying Hive/HBase integration for the past 2 days. I am facing the 
issue below while creating an external table in Hive.

Command-Line Error:

hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive --auxpath /home/hadoop/project/hive-0.6.0/build/dist/lib/hive_hbase-handler.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/hbase-0.20.3.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/zookeeper-3.2.2.jar -hiveconf hbase.zookeeper.quorum=192.168.1.103,192.168.1.114,192.168.1.115,192.168.1.104,192.168.1.107
Hive history file=/tmp/hadoop/hive_job_log_hadoop_201101051527_1728376885.txt
hive> show tables;
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
NestedThrowables:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
hive> exit;
hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$
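The "Communications link failure" above comes from the MySQL JDBC driver failing to reach the metastore database at all. A quick first check (a minimal sketch, not part of the original thread; the host and port are placeholders that should be taken from javax.jdo.option.ConnectionURL in hive-site.xml) is to probe the database port directly:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection performs the full TCP handshake, so a False result
        # here corresponds to the driver's "Communications link failure".
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage -- substitute the host from javax.jdo.option.ConnectionURL:
# can_connect("192.168.1.103", 3306)
```

If this returns False, the problem is the database server or the network, not Hive itself.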

My hive.log file says:

2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b

I got past the previous MasterNotRunningException, which occurred due to 
incompatibilities in the hive_hbase jars.

Now I'm using Hadoop-0.20.2, Hive-0.6.0 (with the default Derby metastore) 
and HBase-0.20.3.

Please tell me how this could be resolved.

I also want to add that my Hadoop cluster consists of 9 nodes, 8 of which 
act as Datanodes, Tasktrackers and Regionservers.

Among these nodes I set the hbase.zookeeper.quorum property to 5 of the 
Datanodes. Could this be the issue?
I don't know how many servers ZooKeeper needs in fully 
distributed mode.


Best Regards

Adarsh Sharma



Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

Posted by Adarsh Sharma <ad...@orkash.com>.
John Sichi wrote:
> Since the exception below is from JDO, it has to do with the configuration of Hive's metastore (not HBase/Zookeeper).
>
> JVS
Thank you all, I overcame this issue, but now I have hit a strange problem 
while creating an external table managed by HBase (Hive/HBase 
integration).

Error while running the CREATE TABLE command in Hive to store in HBase:

hive> CREATE TABLE hive_hbasetable_k(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");

FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

First of all, I was using HBase-0.20.6 with Hive-0.6.0; I posted this issue to 
this list and got the reply below:

From that wiki page:

"If you are not using hbase-0.20.3, you will need to rebuild the handler with the HBase jar matching your version, and change the --auxpath above accordingly. Failure to use matching versions will lead to misleading connection failures such as MasterNotRunningException since the HBase RPC protocol changes often."

JVS

On Dec 29, 2010, at 5:20 AM, Adarsh Sharma wrote:
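The version-matching advice quoted above can be checked mechanically. Below is a minimal sketch (my own illustration, assuming jars follow the usual hbase-<version>.jar naming; the helper name is hypothetical) that pulls the HBase version out of an --auxpath string so it can be compared against the version of the running HBase cluster:

```python
import re

def hbase_jar_version(auxpath):
    """Extract the version from an hbase-<version>.jar entry in a
    comma-separated --auxpath string; return None if no such jar is listed."""
    for entry in auxpath.split(","):
        m = re.search(r"hbase-(\d+(?:\.\d+)*)\.jar$", entry.strip())
        if m:
            return m.group(1)
    return None

# The --auxpath from the session at the top of this thread:
auxpath = ("/home/hadoop/project/hive-0.6.0/build/dist/lib/hive_hbase-handler.jar,"
           "/home/hadoop/project/hive-0.6.0/build/dist/lib/hbase-0.20.3.jar,"
           "/home/hadoop/project/hive-0.6.0/build/dist/lib/zookeeper-3.2.2.jar")
print(hbase_jar_version(auxpath))  # -> 0.20.3
```

The extracted version must equal the version the HBase cluster is actually running; a mismatch produces exactly the misleading MasterNotRunningException described in the quote.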


I think the Hive-0.6.0 jars are incompatible with the Hadoop jars.

My cluster is running correctly. HBase and Hive both run properly and I can 
create tables in each of them, but the issue arises when Hive/HBase 
integration comes into play.
Now I use HBase-0.20.3, but the problem remains the same. Please guide 
me on how to solve this error.

My hive.log says :

2011-01-06 09:53:15,056 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2011-01-06 09:53:15,056 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2011-01-06 09:53:15,058 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2011-01-06 09:53:15,058 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2011-01-06 09:53:15,059 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2011-01-06 09:53:15,059 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
and when I issue the CREATE TABLE command it says:

2011-01-06 09:55:33,076 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@2d8b4ccb
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
2011-01-06 09:55:33,078 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2011-01-06 09:55:33,079 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2011-01-06 09:55:34,050 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@1bdb52c8
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)

I attached my hbase-site.xml.

Please help me find the root cause.
                                    

Thanks and Regards
Adarsh Sharma

Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

Posted by John Sichi <js...@fb.com>.
Since the exception below is from JDO, it has to do with the configuration of Hive's metastore (not HBase/Zookeeper).

JVS
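As background to the diagnosis above: a MySQL-backed Hive metastore is driven by four JDO properties in hive-site.xml. The sketch below is my own illustration (the host, database name and credentials are placeholders, not values from this thread); the property names are Hive's standard metastore settings, and a "Communications link failure" generally means the host/port in ConnectionURL is unreachable or MySQL is not running:

```python
def metastore_jdo_props(host, db, user, password, port=3306):
    """Assemble the JDO properties Hive reads from hive-site.xml for a
    MySQL-backed metastore. All argument values here are placeholders."""
    return {
        "javax.jdo.option.ConnectionURL":
            "jdbc:mysql://%s:%d/%s?createDatabaseIfNotExist=true" % (host, port, db),
        "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
        "javax.jdo.option.ConnectionUserName": user,
        "javax.jdo.option.ConnectionPassword": password,
    }

# Hypothetical example values:
props = metastore_jdo_props("metastore-host", "metastore", "hiveuser", "secret")
```

Each key/value pair corresponds to one <property> element in hive-site.xml; checking these four values (and that the MySQL user has access rights from the client host) is the first step for any JDOFatalDataStoreException.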




Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

Posted by vaibhav negi <ss...@gmail.com>.
Hi Adarsh,

It may be because of a wrong configuration of the metastore server or a lack 
of access rights.

Vaibhav Negi


On Wed, Jan 5, 2011 at 11:04 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:

> With one cluster you really only need one, and it doesn't seem to be
> running from what I can tell:
>
> 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@561279c8
> java.net.ConnectException: Connection refused
>
> And this is only the tail of the log, the head will tell you where
> it's trying to connect. My guess is that there's either a problem with
> your hbase configuration for hive, or the ZK peers aren't running, or
> both issues at the same time. Although if you can see that HBase is
> already running properly, then it must be a configuration issue.
>
> J-D

Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

Posted by Jean-Daniel Cryans <jd...@apache.org>.
With one cluster you really only need one, and it doesn't seem to be
running from what I can tell:

2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
java.net.ConnectException: Connection refused

And this is only the tail of the log, the head will tell you where
it's trying to connect. My guess is that there's either a problem with
your hbase configuration for hive, or the ZK peers aren't running, or
both issues at the same time. Although if you can see that HBase is
already running properly, then it must be a configuration issue.

J-D
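Following J-D's advice, a quick way to see which quorum members actually accept connections on the ZooKeeper client port (2181 by default) is to probe them directly. A small Python sketch, using the quorum string from the hive command line above:

```python
import socket

def parse_quorum(quorum, default_port=2181):
    """Split an hbase.zookeeper.quorum value into (host, port) pairs."""
    hosts = []
    for entry in quorum.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if ":" in entry:
            host, port = entry.rsplit(":", 1)
            hosts.append((host, int(port)))
        else:
            hosts.append((entry, default_port))
    return hosts

def probe(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

# The quorum string passed via -hiveconf above:
quorum = "192.168.1.103,192.168.1.114,192.168.1.115,192.168.1.104,192.168.1.107"
print(parse_quorum(quorum)[:2])
# → [('192.168.1.103', 2181), ('192.168.1.114', 2181)]
```

Calling `probe(host, port)` for each parsed pair will return False on the hosts whose ZK peer is down, matching the `ConnectException: Connection refused` in the log.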

On Wed, Jan 5, 2011 at 2:14 AM, Adarsh Sharma <ad...@orkash.com> wrote:
>
>
>
>
> Dear all,
>
> I have been trying Hive/HBase integration for the past 2 days. I am
> facing the issue below while creating an external table in Hive.
>
> *Command-Line Error :-
>
> *hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive --auxpath
> /home/hadoop/project/hive-0.6.0/build/dist/lib/hive_hbase-handler.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/hbase-0.20.3.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/zookeeper-3.2.2.jar
>  -hiveconf
> hbase.zookeeper.quorum=192.168.1.103,192.168.1.114,192.168.1.115,192.168.1.104,192.168.1.107
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101051527_1728376885.txt
> hive> show tables;
> FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException:
> Communications link failure
>
> The last packet sent successfully to the server was 0 milliseconds ago. The
> driver has not received any packets from the server.
> NestedThrowables:
> com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link
> failure
>
> The last packet sent successfully to the server was 0 milliseconds ago. The
> driver has not received any packets from the server.
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> hive> exit;
> hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$
>
> *My hive.log file says :*
>
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@561279c8
> java.net.ConnectException: Connection refused
>       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>       at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
> java.nio.channels.ClosedChannelException
>       at
> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>       at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>       at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
> java.nio.channels.ClosedChannelException
>       at
> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>       at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>       at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@799dbc3b
>
> I got past the earlier MasterNotRunningException, which occurred due to
> incompatibilities in the hive_hbase jars.
>
> Now I'm using Hadoop-0.20.2, Hive-0.6.0 (with the default Derby metastore)
> and HBase-0.20.3.
>
> Please tell me how this can be resolved.
>
> I should also add that my Hadoop cluster has 9 nodes, 8 of which act as
> DataNodes, TaskTrackers, and RegionServers.
>
> Among these nodes I set hbase.zookeeper.quorum to list 5 of the DataNodes.
> Could this be the issue?
> I don't know how many servers ZooKeeper needs in fully distributed mode.
>
>
> Best Regards
>
> Adarsh Sharma
>
>
>
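One detail worth noting about the original error itself: although the post says the default Derby metastore is in use, the NestedThrowable is from the MySQL Connector/J driver, so hive-site.xml is evidently pointing the metastore at a MySQL server that is not reachable. The properties to check look roughly like this (host, database, and credentials hypothetical):

```
<!-- hive-site.xml: JDO metastore connection (values hypothetical) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```

If those values are correct, verify that the MySQL server is running and reachable from the Hive host before retrying `show tables`.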
