Posted to user@hbase.apache.org by Jean-Daniel Cryans <jd...@apache.org> on 2011/01/05 18:34:04 UTC

Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

With one cluster you really only need one ZooKeeper server, and it doesn't
seem to be running from what I can tell:

2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
(ClientCnxn.java:run(967)) - Exception closing session 0x0 to
sun.nio.ch.SelectionKeyImpl@561279c8
java.net.ConnectException: Connection refused

And this is only the tail of the log; the head will tell you where
it's trying to connect. My guess is that there's either a problem with
your HBase configuration for Hive, or the ZK peers aren't running, or
both. That said, if you can see that HBase itself is already running
properly, then it must be a configuration issue.
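
If it helps, a minimal sanity check (assuming the default ZooKeeper client
port 2181; the addresses are just the ones from your -hiveconf line) is
ZooKeeper's four-letter "ruok" command, plus making sure Hive's HBase
handler points at the same quorum, e.g. in hive-site.xml instead of on
the command line:

  # a live ZK peer answers "imok"; a refusal here matches your ConnectException
  echo ruok | nc 192.168.1.103 2181

  <!-- hive-site.xml -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.1.103,192.168.1.114,192.168.1.115,192.168.1.104,192.168.1.107</value>
  </property>

Run the nc check against each member of the quorum you pass to Hive.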

J-D

On Wed, Jan 5, 2011 at 2:14 AM, Adarsh Sharma <ad...@orkash.com> wrote:
>
> Dear all,
>
> I have been trying Hive/HBase integration for the past 2 days. I am facing
> the issue below while creating an external table in Hive.
>
> Command-Line Error:
>
> hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive --auxpath
> /home/hadoop/project/hive-0.6.0/build/dist/lib/hive_hbase-handler.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/hbase-0.20.3.jar,/home/hadoop/project/hive-0.6.0/build/dist/lib/zookeeper-3.2.2.jar
>  -hiveconf
> hbase.zookeeper.quorum=192.168.1.103,192.168.1.114,192.168.1.115,192.168.1.104,192.168.1.107
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101051527_1728376885.txt
> hive> show tables;
> FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException:
> Communications link failure
>
> The last packet sent successfully to the server was 0 milliseconds ago. The
> driver has not received any packets from the server.
> NestedThrowables:
> com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link
> failure
>
> The last packet sent successfully to the server was 0 milliseconds ago. The
> driver has not received any packets from the server.
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> hive> exit;
> hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$
>
> My hive.log file says:
>
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
> "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@561279c8
> java.net.ConnectException: Connection refused
>       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>       at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
> java.nio.channels.ClosedChannelException
>       at
> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>       at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>       at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
> java.nio.channels.ClosedChannelException
>       at
> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>       at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>       at
> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@799dbc3b
>
> I overcame the earlier MasterNotRunningException, which occurred due to
> incompatibilities in the hive_hbase jars.
>
> Now I'm using Hadoop-0.20.2, Hive-0.6.0 (with the default Derby metastore)
> and HBase-0.20.3.
>
> Please tell me how this could be resolved.
>
> I also want to add that my Hadoop cluster consists of 9 nodes, 8 of which
> act as DataNodes, TaskTrackers and RegionServers.
>
> Among these nodes, the hbase.zookeeper.quorum property is set to 5 of the
> DataNodes. Could this be the issue?
> I don't know how many servers are needed for ZooKeeper in fully distributed
> mode.
>
>
> Best Regards
>
> Adarsh Sharma
>
>
>

Re: Error in metadata: javax.jdo.JDOFatalDataStoreException

Posted by vaibhav negi <ss...@gmail.com>.
Hi Adarsh,

It may be because of a wrong configuration for the metastore server or a lack
of access rights.
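
For example, since the stack trace shows the MySQL JDBC driver, the usual
suspects are the JDO connection settings in hive-site.xml and whether that
MySQL server is reachable by, and grants access to, the Hive user. A rough
sketch (host, database name and credentials below are only placeholders,
not taken from the thread):

  <!-- hive-site.xml: where the metastore database lives and how to log in -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>

and on the MySQL side, check that the server is listening and the user can
connect from the Hive host:

  mysql -h metastore-host -u hiveuser -p
  mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hiveuser'@'%' IDENTIFIED BY 'hivepassword';

A "Communications link failure ... 0 milliseconds ago" generally means the
TCP connection to that host/port was never established at all.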

Vaibhav Negi


On Wed, Jan 5, 2011 at 11:04 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:

> With one cluster you really only need one ZooKeeper server, and it doesn't
> seem to be running from what I can tell:
>
> 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn
> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
> sun.nio.ch.SelectionKeyImpl@561279c8
> java.net.ConnectException: Connection refused