Posted to user@hbase.apache.org by AnushaGuntaka <an...@tcs.com> on 2014/05/20 16:10:28 UTC

DataXceiver java.io.InterruptedIOException error on scanning HBase table

Hi ,

Thanks in advance. Please help me figure out the cause of the following
error and how to fix it.

I am facing the error below while scanning an HBase table with a partial
RowKey filter through a MapReduce program.
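
For reference, a minimal sketch of how a scan job of this shape is wired up,
against the HBase 0.94-era API that matches Hadoop 1.1.1. The table name is
the one that appears in my logs below; the mapper class, the output path,
and the row-key prefix are hypothetical placeholders, not my actual code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PartialRowKeyScanJob {

  // Hypothetical mapper: just emits each matching row key.
  static class RowKeyMapper extends TableMapper<Text, Text> {
    @Override
    protected void map(ImmutableBytesWritable key, Result row, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(key.get())), new Text(""));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "partial-rowkey-scan");
    job.setJarByClass(PartialRowKeyScanJob.class);

    Scan scan = new Scan();
    scan.setFilter(new PrefixFilter(Bytes.toBytes("SKU125600"))); // partial row-key match (placeholder prefix)
    scan.setCaching(500);       // rows fetched per RPC; cuts round trips on long scans
    scan.setCacheBlocks(false); // keep MR scans from churning the region server block cache

    TableMapReduceUtil.initTableMapperJob(
        "performance_weekly_sku", scan, RowKeyMapper.class,
        Text.class, Text.class, job);
    job.setNumReduceTasks(0); // map-only in this sketch
    FileOutputFormat.setOutputPath(job, new Path(args[0])); // e.g. /tmp/rowkeys-out

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}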

Error: org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration():DataXceiver java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[closed]

The DataNode on the slave node gets shut down by this error.

My MapReduce program runs map tasks up to 95% and then fails with this
error.

I have a Hadoop cluster with two machines.

Table size: 652 GB (223 GB on the master node and 514 GB on the slave node)

System disk details:

Node            space available
---------------------------------
master   ----   22 GB
slave    ----   210 GB

------------------------------- core-site.xml -----------------------
<configuration>
  <property>
        <name>fs.tmp.dir</name>
        <value>/home/e521596/hadoop-1.1.1/full</value>
  </property>

  <property>
        <name>fs.default.name</name>
        <value>hdfs://172.20.193.234:9000</value>
  </property>

  <property>
        <name>io.sort.factor</name>
        <value>15</value>
        <description>More streams merged at once while sorting files.</description>
  </property>

  <property>
        <name>io.sort.mb</name>
        <value>1000</value>
        <description>Higher memory limit while sorting data.</description>
  </property>

  <property>
        <name>io.sort.record.percent</name>
        <value>0.207</value>
        <description>Fraction of the sort buffer reserved for per-record accounting metadata.</description>
  </property>

  <property>
        <name>io.sort.spill.percent</name>
        <value>1</value>
        <description>Buffer-fill fraction at which map output starts spilling to disk in the background.</description>
  </property>

</configuration>
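
As an aside, the io.sort.* entries above are MapReduce-framework knobs
rather than HDFS ones, so they can also be set per job from the driver
instead of cluster-wide. A minimal sketch, using the Hadoop 1.x property
names and illustrative values (not my production settings):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SortKnobsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Per-job overrides of the Hadoop 1.x sort/spill knobs:
    conf.setInt("io.sort.mb", 1000);               // in-memory sort buffer, in MB
    conf.setInt("io.sort.factor", 15);             // number of streams merged at once
    conf.setFloat("io.sort.spill.percent", 0.80f); // buffer-fill fraction that triggers a background spill
    Job job = new Job(conf, "sort-knobs-example");
    System.out.println(job.getConfiguration().get("io.sort.mb")); // prints 1000
  }
}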
------------------------------- mapred-site.xml -----------------------

<configuration>
  <property>
        <name>mapred.job.tracker</name>
        <value>fedora3:9001</value>
  </property>
  <property>
       <name>mapred.reduce.tasks</name>
       <value>6</value>
  </property>
  <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>6</value>
  </property>
  <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>6</value>
  </property>
  <property>
       <name>mapred.textoutputformat.separator</name>
       <value>#</value>
  </property>

  <property>
        <name>mapred.compress.map.output</name>
        <value>true</value>
  </property>

  <property>
        <name>mapred.child.java.opts</name>
        <value>-Xms1024M -Xmx2048M</value>
  </property>


</configuration>
---------------------------------------- hdfs-site.xml --------------------

<configuration>
  <property>
        <name>dfs.name.dir</name>
        <value>/home/e521596/hadoop-1.1.1/full/dfs/name</value>
  </property>
  <property>
       <name>dfs.data.dir</name>
       <value>/home/e521596/hadoop-1.1.1/full/dfs/data</value>
  </property>
  <property>
     <name>dfs.replication</name>
       <value>1</value>
  </property>
  <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>5096</value>
  </property>

  <property>
        <name>dfs.datanode.handler.count</name>
        <value>200</value>
  </property>

  <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>0</value>
  </property>


</configuration>
---------------------------------------------------------------------







Re: DataXceiver java.io.InterruptedIOException error on scanning HBase table

Posted by AnushaGuntaka <an...@tcs.com>.
Hi Ted,

Thanks for the response :) 

Please find the logs below:

----------------------- Logs on console -----------------------------

14/05/21 12:12:27 INFO zookeeper.ClientCnxn: Socket connection established to slave3/172.20.89.10:2181, initiating session
14/05/21 12:12:27 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1461d46e8a10004, likely server has closed socket, closing socket connection and attempting reconnect
14/05/21 12:12:28 INFO zookeeper.ClientCnxn: Opening socket connection to server slave2/172.20.193.234:2181. Will not attempt to authenticate using SASL (unknown error)
14/05/21 12:12:28 INFO zookeeper.ClientCnxn: Socket connection established to slave2/172.20.193.234:2181, initiating session
14/05/21 12:12:28 INFO zookeeper.ClientCnxn: Session establishment complete on server slave2/172.20.193.234:2181, sessionid = 0x1461d46e8a10004, negotiated timeout = 40000
14/05/21 12:13:14 INFO mapred.JobClient:  map 87% reduce 26%
14/05/21 12:13:24 INFO mapred.JobClient:  map 87% reduce 29%
14/05/21 12:13:48 INFO mapred.JobClient:  map 88% reduce 29%
14/05/21 12:13:49 INFO mapred.JobClient:  map 89% reduce 29%
14/05/21 12:14:32 INFO mapred.JobClient:  map 90% reduce 29%
14/05/21 12:14:39 INFO mapred.JobClient:  map 90% reduce 30%
14/05/21 12:15:23 INFO mapred.JobClient:  map 91% reduce 30%

Wed May 21 20:25:28 IST 2014, org.apache.hadoop.hbase.client.ScannerCallable@454b7177, java.io.IOException:
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://172.20.193.234:9000/assortmentLinking/performance_weekly_sku/16b9d994146958a8ab66c709d077a3c7/cf/4df17d21166640298824c9680ffb5bf3, compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=SKU125600STORE3938WEEK13/cf:facings/1397825072820/Put, lastKey=SKU126145STORE3971WEEK22/cf:week_id/1397848697370/Put, avgKeyLen=53, avgValueLen=3, entries=155950935, length=10173377648, cur=null] to key SKU125600STORE3938WEEK13/cf:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/ts=0
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:186)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:205)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:120)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:698)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.restart(TableRecordReaderImpl.java:80)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.initialize(TableRecordReaderImpl.java:142)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.initialize(TableRecordReader.java:122)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:132)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)



14/05/21 12:36:50 INFO mapred.JobClient:     Launched reduce tasks=1
14/05/21 12:36:50 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=54097436
14/05/21 12:36:50 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/05/21 12:36:50 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/05/21 12:36:50 INFO mapred.JobClient:     Rack-local map tasks=19
14/05/21 12:36:50 INFO mapred.JobClient:     Launched map tasks=112
14/05/21 12:36:50 INFO mapred.JobClient:     Data-local map tasks=93
14/05/21 12:36:50 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=4906841
14/05/21 12:36:50 INFO mapred.JobClient:     Failed map tasks=1
14/05/21 12:36:50 INFO mapred.JobClient:   HBase Counters
14/05/21 12:36:50 INFO mapred.JobClient:     REMOTE_RPC_CALLS=17
14/05/21 12:36:50 INFO mapred.JobClient:     RPC_CALLS=110
14/05/21 12:36:50 INFO mapred.JobClient:     RPC_RETRIES=9
14/05/21 12:36:50 INFO mapred.JobClient:     NOT_SERVING_REGION_EXCEPTION=0
14/05/21 12:36:50 INFO mapred.JobClient:     NUM_SCANNER_RESTARTS=0
14/05/21 12:36:50 INFO mapred.JobClient:     MILLIS_BETWEEN_NEXTS=3526062
14/05/21 12:36:50 INFO mapred.JobClient:     BYTES_IN_RESULTS=94309527
14/05/21 12:36:50 INFO mapred.JobClient:     BYTES_IN_REMOTE_RESULTS=8701608
14/05/21 12:36:50 INFO mapred.JobClient:     REGIONS_SCANNED=12
14/05/21 12:36:50 INFO mapred.JobClient:     REMOTE_RPC_RETRIES=2
14/05/21 12:36:50 INFO mapred.JobClient:   FileSystemCounters
14/05/21 12:36:50 INFO mapred.JobClient:     HDFS_BYTES_READ=11003
14/05/21 12:36:50 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=27091327
14/05/21 12:36:50 INFO mapred.JobClient:   File Input Format Counters
14/05/21 12:36:50 INFO mapred.JobClient:     Bytes Read=0
14/05/21 12:36:50 INFO mapred.JobClient:   Map-Reduce Framework
14/05/21 12:36:50 INFO mapred.JobClient:     Map output materialized bytes=23805827
14/05/21 12:36:50 INFO mapred.JobClient:     Combine output records=0
14/05/21 12:36:50 INFO mapred.JobClient:     Map input records=1280708
14/05/21 12:36:50 INFO mapred.JobClient:     Physical memory (bytes) snapshot=39474577408
14/05/21 12:36:50 INFO mapred.JobClient:     Spilled Records=1280708
14/05/21 12:36:50 INFO mapred.JobClient:     Map output bytes=140649886
14/05/21 12:36:50 INFO mapred.JobClient:     CPU time spent (ms)=1139710
14/05/21 12:36:50 INFO mapred.JobClient:     Total committed heap usage (bytes)=97243037696
14/05/21 12:36:50 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=266259857408
14/05/21 12:36:50 INFO mapred.JobClient:     Combine input records=0
14/05/21 12:36:50 INFO mapred.JobClient:     Map output records=1280708
14/05/21 12:36:50 INFO mapred.JobClient:     SPLIT_RAW_BYTES=11003
Exception in thread "main" java.io.IOException: error with job!
        at ItemSelectionPerfCrunching.main(ItemSelectionPerfCrunching.java:268)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[pts/1][12:36:50:e521596@fedora3 ] ~/optumera/hadoop-1.1.1>


------------------------------------------ HBase region server logs --------------------------------

2014-05-21 12:26:04,209 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://172.20.193.234:9000/assortmentLinking/performance_weekly_sku/fa0fb91bd58f2117443db90278c3a3fe/cf/1597dcfc99e54025bc7b848cfb998b1f, compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=SKU128331STORE3942WEEK37/cf:facings/1397826519184/Put, lastKey=SKU129999STORE3966WEEK9/cf:week_id/1397827347036/Put, avgKeyLen=53, avgValueLen=3, entries=120178401, length=7838467097, cur=null] to key SKU128331STORE3942WEEK37/cf:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/ts=0
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:131)
        at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2210)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3818)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1825)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1817)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1794)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2550)
        at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
Caused by: java.io.IOException: Could not obtain block: blk_-8577073124906886383_46721 file=/assortmentLinking/performance_weekly_sku/fa0fb91bd58f2117443db90278c3a3fe/cf/1597dcfc99e54025bc7b848cfb998b1f
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2269)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2063)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:153)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1409)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1921)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1703)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:342)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:736)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:223)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:140)
        ... 12 more
2014-05-21 12:26:04,802 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_3026506654091753923_46711 file=/assortmentLinking/performance_weekly_sku/214b370da35f5df2120bb1796327da86/cf/72c08b14261a441b979bcd504cda4f3e
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2269)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2063)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:153)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1409)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1921)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1703)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:342)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:736)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:223)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:140)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:131)
        at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2210)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3818)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1825)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1817)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1794)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2550)
        at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)



--------------------------------------- Data node log on slave -------------------------------


2014-05-21 20:17:09,427 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.20.89.10:50010, storageID=DS-753110379-172.20.89.10-50010-1386096463554, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/home/e521596/hadoop-1.1.1/full/dfs/data/current'}
2014-05-21 20:17:09,428 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50075
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2014-05-21 20:17:09,531 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2014-05-21 20:17:09,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2014-05-21 20:17:09,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_3925365670405691575_47237 0 : Thread is interrupted.
2014-05-21 20:17:09,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_3925365670405691575_47237 terminating
2014-05-21 20:17:09,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 44
2014-05-21 20:17:09,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for block blk_3925365670405691575_47237 java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 0 millis timeout left.
2014-05-21 20:17:09,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_3925365670405691575_47237 received exception java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 0 millis timeout left.
2014-05-21 20:17:09,537 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.20.89.10:50010, storageID=DS-753110379-172.20.89.10-50010-1386096463554, infoPort=50075, ipcPort=50020):DataXceiver
java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 0 millis timeout left.
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:292)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:339)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:403)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:581)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
        at java.lang.Thread.run(Thread.java:722)
2014-05-21 20:17:10,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2014-05-21 20:17:33,618 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_331134978823774520_46752
2014-05-21 20:17:33,618 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread.
2014-05-21 20:17:33,619 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
2014-05-21 20:17:33,619 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
2014-05-21 20:17:33,619 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-05-21 20:17:33,621 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at INCHNSNRRETREW/172.20.89.10
************************************************************/



Please also clarify the following doubt:

In the very beginning, I installed Hadoop 1.1.1 under the home directory on
both the master and slave nodes, with HDFS pointed to the "full" folder
where the data was loaded (/home/hadoop-1.1.1/full).

Then, when I faced an issue, I installed Hadoop in another folder (FolderA),
pointed dfs.name.dir to the previous "full" directory, and loaded the
remaining data (/home/FolderA/hadoop-1.1.1/). All configuration files are in
the new path; please refer to the core-site.xml and hdfs-site.xml in my
previous post.

On the slave node, Hadoop is present both in home and in FolderA:
(/home/hadoop-1.1.1/) and (/home/FolderA/hadoop-1.1.1/).

The .bashrc file on the slave node has /home/hadoop-1.1.1/ as the Hadoop
home.

But when I start Hadoop from the master, the logs on the slave node are
generated under /home/FolderA/hadoop-1.1.1/.

Please tell me why Hadoop on the slave starts from
/home/FolderA/hadoop-1.1.1/ even though .bashrc points to
/home/hadoop-1.1.1/.

Thanks and Regards,
Anusha.

Ted Yu-3 wrote:
> Looks like you're using hadoop-1.1.1
>
> Have you looked at the DataNode log?
>
> It would be helpful if you pastebin the portion of the DataNode log from
> when it shut down.
>
> Cheers






Re: DataXceiver java.io.InterruptedIOException error on scanning HBase table

Posted by Ted Yu <yu...@gmail.com>.
Looks like you're using hadoop-1.1.1

Have you looked at the DataNode log?

It would be helpful if you pastebin the portion of the DataNode log from
when it shut down.

Cheers

