Posted to user@hadoop.apache.org by Vimal Jain <vk...@gmail.com> on 2013/10/18 12:21:55 UTC
Exceptions in Hadoop and Hbase log files
Hi,
I am running HBase in pseudo-distributed mode (HBase 0.94.7 and Hadoop 1.1.2).
I am getting certain exceptions in Hadoop's namenode and datanode log files,
which are:
Namenode :-
2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/192.168.20.30:50010
2013-10-18 10:35:27,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 64 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e, DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from 192.168.20.30:44990: error: java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
Datanode :-
2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075, ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:662)
--
Thanks and Regards,
Vimal Jain
Re: Exceptions in Hadoop and Hbase log files
Posted by Vimal Jain <vk...@gmail.com>.
I will try that if I get them next time.
Could anyone please explain the cause of these exceptions?
On Fri, Oct 18, 2013 at 4:03 PM, divye sheth <di...@gmail.com> wrote:
> I would recommend stopping the cluster and then starting the daemons one by
> one.
> 1. stop-dfs.sh
> 2. hadoop-daemon.sh start namenode
> 3. hadoop-daemon.sh start datanode
>
> This will surface any startup errors; also verify that the datanode is able
> to communicate with the namenode.
>
> Thanks
> Divye Sheth
>
>
--
Thanks and Regards,
Vimal Jain
Re: Exceptions in Hadoop and Hbase log files
Posted by divye sheth <di...@gmail.com>.
I would recommend stopping the cluster and then starting the daemons one by
one.
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode
This will surface any startup errors; also verify that the datanode is able
to communicate with the namenode.
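The steps above can be sketched as a shell session. Assumptions (not stated in the thread): the Hadoop 1.x scripts are on the PATH, and `hadoop dfsadmin -report` uses its default wording ("Datanodes available: N (N total, M dead)"); the `live_datanodes` helper is mine, for illustration.

```shell
# Restart the daemons one at a time so startup errors are visible
# (commented out here; run these on the cluster node):
#
#   stop-dfs.sh
#   hadoop-daemon.sh start namenode
#   hadoop-daemon.sh start datanode
#
# Then confirm the datanode registered with the namenode:
#
#   hadoop dfsadmin -report

# Helper (hypothetical): extract the live-datanode count from the report text.
live_datanodes() {
  sed -n 's/^Datanodes available: \([0-9][0-9]*\).*/\1/p'
}

# With 0 live datanodes, writes fail with "could only be replicated to 0 nodes".
report="Datanodes available: 0 (0 total, 0 dead)"
echo "$report" | live_datanodes   # prints 0
```

A count of 0 here matches the namenode symptoms in the logs: the heartbeat was lost, the node was removed from the topology, and addBlock then had no datanode to place a replica on.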
Thanks
Divye Sheth
Re: Exceptions in Hadoop and Hbase log files
Posted by Vimal Jain <vk...@gmail.com>.
Some more exceptions in the datanode log:
2013-10-18 10:37:53,693 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
eived message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at sun.proxy.$Proxy5.blockReceived(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
at java.lang.Thread.run(Thread.java:662)
2013-10-18 10:37:53,696 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived
message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
These exceptions keep filling up my disk space.
Let me know if you need more information.
Please help.
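On the disk-space point, the daemon log volume itself can be capped through Hadoop's conf/log4j.properties. The sketch below uses the RollingFileAppender (RFA) that ships, commented out, with Hadoop 1.x; the exact property names and size values here are assumptions to verify against your own copy of the file:

```properties
# Sketch: cap daemon logs with the bundled RollingFileAppender (RFA).
# Select it by setting hadoop.root.logger (or HADOOP_ROOT_LOGGER) to INFO,RFA.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Roll when the file reaches 10MB; keep at most 10 old files (~100MB per daemon).
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

This bounds disk usage but does not fix the underlying exceptions, so it is only a stopgap.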
On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain <vk...@gmail.com> wrote:
> Hi,
> I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
> 1.1.2).
> I am getting certain exceptions in Hadoop's namenode and data node files
> which are :-
>
> Namenode :-
>
> 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
> 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/192.168.20.30:50010
> 2013-10-18 10:35:27,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of transactions: 64 Total time for transactions(ms): 1 Number of
> transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
> 2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call
> addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
> DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
> 192.168.20.30:44990: error: java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
> Data node :-
>
> 2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.20.30:50010,
> storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
> ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for
> channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
> local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> at java.lang.Thread.run(Thread.java:662)
>
> --
> Thanks and Regards,
> Vimal Jain
>
--
Thanks and Regards,
Vimal Jain
Re: Exceptions in Hadoop and Hbase log files
Posted by Steve Loughran <st...@hortonworks.com>.
Have you tried putting the key phrases from these logs into your favourite
search engine, e.g. "could only be replicated to 0 nodes"?
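Those searchable phrases can also be pulled out mechanically by collapsing a log down to its distinct WARN/ERROR messages, normalising away block ids and paths so repeats group together. A minimal sketch, not from the thread; the regexes are assumptions tuned to the log4j layout shown above, and the sample lines are shortened versions of the ones in this thread:

```python
import re
from collections import Counter

def distinct_errors(log_text):
    """Count distinct WARN/ERROR messages, normalising away volatile
    details (block ids, paths) so repeated errors group together."""
    counts = Counter()
    for line in log_text.splitlines():
        m = re.match(r"\d{4}-\d{2}-\d{2} [\d:,]+ (WARN|ERROR)\s+(\S+): (.*)", line)
        if not m:
            continue
        level, logger, msg = m.groups()
        msg = re.sub(r"blk_-?\d+_\d+", "blk_<id>", msg)  # block ids vary per error
        msg = re.sub(r"/\S+", "<path>", msg)             # file paths vary per error
        counts[(level, logger, msg)] += 1
    return counts

sample = """\
2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /hbase/demo could only be replicated to 0 nodes, instead of 1
2013-10-18 10:37:53,693 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Got blockReceived message from unregistered or dead node blk_-2949905629769882833_52274
2013-10-18 10:37:53,696 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Got blockReceived message from unregistered or dead node blk_-2949905629769882833_52274
"""

# Print each distinct message with how often it occurred.
for (level, logger, msg), n in distinct_errors(sample).items():
    print(f"{n}x {level} {msg}")
```

Each normalised message line is then a ready-made search query.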
On 18 October 2013 11:21, Vimal Jain <vk...@gmail.com> wrote:
> Hi,
> I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
> 1.1.2).
> I am getting certain exceptions in Hadoop's namenode and data node files
> which are :-
>
> Namenode :-
>
> 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
> 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/192.168.20.30:50010
> 2013-10-18 10:35:27,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of transactions: 64 Total time for transactions(ms): 1 Number of
> transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
> 2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call
> addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
> DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
> 192.168.20.30:44990: error: java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
> Data node :-
>
> 2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.20.30:50010,
> storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
> ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for
> channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
> local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> at java.lang.Thread.run(Thread.java:662)
>
> --
> Thanks and Regards,
> Vimal Jain
>
Re: Exceptions in Hadoop and Hbase log files
Posted by divye sheth <di...@gmail.com>.
I would recommend stopping the cluster and then starting the daemons one by
one:
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode
This will surface any startup errors; also verify that the datanode is able
to communicate with the namenode.
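After each daemon starts, the quickest check for startup errors is to scan its log for fresh ERROR/FATAL lines. A minimal sketch — the heredoc sample stands in for a real log file, and the real log path mentioned in the comments is an assumption to adjust for your install:

```shell
# Count ERROR/FATAL lines in a Hadoop daemon log to spot startup trouble.
# Real usage: log=${HADOOP_HOME}/logs/hadoop-hadoop-namenode-$(hostname).log
log=$(mktemp)
cat > "$log" <<'EOF'
2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File could only be replicated to 0 nodes, instead of 1
EOF
n=$(grep -cE ' (ERROR|FATAL) ' "$log")
echo "serious lines: $n"
grep -E ' (ERROR|FATAL) ' "$log" | tail -n 5
```

Running this right after each hadoop-daemon.sh start shows whether the daemon came up cleanly before you start the next one.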
Thanks
Divye Sheth
On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain <vk...@gmail.com> wrote:
> Hi,
> I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
> 1.1.2).
> I am getting certain exceptions in Hadoop's namenode and data node files
> which are :-
>
> Namenode :-
>
> 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
> 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/192.168.20.30:50010
> 2013-10-18 10:35:27,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of transactions: 64 Total time for transactions(ms): 1 Number of
> transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
> 2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call
> addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
> DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
> 192.168.20.30:44990: error: java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File
> /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
> could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
> Data node :-
>
> 2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.20.30:50010,
> storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
> ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for
> channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
> local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> at java.lang.Thread.run(Thread.java:662)
>
> --
> Thanks and Regards,
> Vimal Jain
>
Re: Exceptions in Hadoop and Hbase log files
Posted by Vimal Jain <vk...@gmail.com>.
Some more exceptions in the data node log:
2013-10-18 10:37:53,693 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at sun.proxy.$Proxy5.blockReceived(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
at java.lang.Thread.run(Thread.java:662)
2013-10-18 10:37:53,696 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
These exceptions keep filling up my disk space.
Let me know if you need more information.
Please help here.
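For triaging a log that is filling the disk, a quick tally of which exception classes dominate can narrow things down. Below is a minimal sketch; the two sample lines and the /tmp path are inlined purely for illustration, so in practice point grep at the real DataNode log file for your install:

```shell
# Minimal sketch: tally exception classes in a Hadoop log.
# The sample log below is inlined for illustration only; replace
# /tmp/dn-sample.log with the path to the real DataNode log.
cat > /tmp/dn-sample.log <<'EOF'
2013-10-18 10:37:53,693 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived message from unregistered or dead node
2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.SocketTimeoutException: 480000 millis timeout
EOF
# Extract fully-qualified names ending in Exception/Error and count them,
# most frequent first.
grep -oE '[A-Za-z0-9.]+(Exception|Error)' /tmp/dn-sample.log | sort | uniq -c | sort -rn
```

The counts make it obvious whether one failure mode (e.g. the blockReceived rejection) accounts for most of the log growth.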
On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain <vk...@gmail.com> wrote:
> Hi,
> I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
> 1.1.2).
> I am getting certain exceptions in Hadoop's namenode and data node files
> which are :-
>
> Namenode :-
>
> 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
> 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/192.168.20.30:50010
> 2013-10-18 10:35:27,606 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 64 Total time for transactions(ms): 1Number
> of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
> 2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e, DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from 192.168.20.30:44990: error: java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
> Data node :-
>
> 2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075, ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> at java.lang.Thread.run(Thread.java:662)
>
>
>
>
>
>
>
> --
> Thanks and Regards,
> Vimal Jain
>
--
Thanks and Regards,
Vimal Jain
Re: Exceptions in Hadoop and Hbase log files
Posted by divye sheth <di...@gmail.com>.
I would recommend stopping the cluster and then starting the daemons one
by one:
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode
This will surface any startup errors; also verify that the datanode is
able to communicate with the namenode.
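If the daemons come up clean but the 480000 ms write timeouts from the DataNode log persist, one knob sometimes adjusted is the DataNode socket write timeout in hdfs-site.xml. Treat the property and value below as a sketch to verify against your Hadoop 1.1.2 install (480000 ms is the default, which matches the timeout in the log), not a recommended setting:

```xml
<!-- hdfs-site.xml: raise the DataNode socket write timeout
     (default is 480000 ms = 8 minutes); the value here is
     illustrative only. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value>
</property>
```

Raising the timeout only masks a slow reader, though; on a single-node setup it is usually worth finding out first why the local client stalled for 8 minutes.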
Thanks
Divye Sheth
On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain <vk...@gmail.com> wrote:
> Hi,
> I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
> 1.1.2).
> I am getting certain exceptions in Hadoop's namenode and data node files
> which are :-
>
> Namenode :-
>
> 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
> 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/192.168.20.30:50010
> 2013-10-18 10:35:27,606 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 64 Total time for transactions(ms): 1Number
> of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
> 2013-10-18 10:35:27,614 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e, DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from 192.168.20.30:44990: error: java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
> Data node :-
>
> 2013-10-18 06:13:14,499 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075, ipcPort=50020):Got exception while serving blk_-3215981820534544354_52215 to /192.168.20.30:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> at java.lang.Thread.run(Thread.java:662)
>
>
>
>
>
>
>
> --
> Thanks and Regards,
> Vimal Jain
>