Posted to user@hbase.apache.org by zxh116116 <zx...@sina.com> on 2009/03/30 04:16:53 UTC

Re: received exception java.net.SocketTimeoutException: connect timed out

Yes, I have read 'Getting Started', and the xceiver limit is set:
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>
I run all the daemons on every host;
after starting Hadoop and HBase I can see 5 regionservers.


stack-3 wrote:
> 
> Have you read the hbase 'Getting Started' and the mail archive for issues
> like those described below?  Have you made the necessary file system and
> xceiver changes?
> 
> 512MB of RAM is also very little if you are running multiple daemons on
> the
> one host -- are you running datanodes, tasktrackers and regionservers on
> these nodes?
> 
> This configuration ensures you use more memory than usual:
> 
>>    <name>hbase.io.index.interval</name>
>>    <value>32</value>
> 
> 
> How many regions have you loaded when you start seeing the below?
> 
> Yours,
> St.Ack
> 
> On Sat, Mar 28, 2009 at 9:12 AM, zxh116116 <zx...@sina.com> wrote:
> 
>>
>> Hi all,
>> I am new to HBase and have a couple of questions (and my English is poor).
>> Now, when I test inserting data into HBase, I am hitting some problems.
>> My cluster has one master and five region machines, based on Hadoop
>> 0.19.0 and HBase 0.19.1.
>> Machines:
>> memory: 512M
>> cpu: xxNHZ
>> hard disk: 80G
>>
>> When I insert data into HBase, my datanode logs show:
>> 2009-03-28 00:42:41,699 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>> Error
>> in deleting blocks.
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1299)
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:807)
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:677)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1100)
>>        at java.lang.Thread.run(Thread.java:619)
>> 2009-03-28 01:18:36,623 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode:
>> DatanodeRegistration(123.15.51.71:50010,
>> storageID=DS-629033738-123.15.51.71-50010-1238216938880, infoPort=50075,
>> ipcPort=50020):Failed to transfer blk_7832063470499311421_1802 to
>> 123.15.51.84:50010 got java.net.SocketException: Connection reset
>>        at
>> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
>>        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>>        at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
>>        at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:299)
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
>>        at
>>
>> org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1067)
>>        at java.lang.Thread.run(Thread.java:619)
>> <configuration>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://123.15.51.76:9000/</value>
>> <description>The name of the default file system. Either the literal string
>> "local" or a host:port for DFS.</description>
>> </property>
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>ubuntu3:9001</value>
>> <description>The host and port that the MapReduce job tracker runs at. If
>> "local", then jobs are run in-process as a single map and reduce
>> task.</description>
>> </property>
>> <property>
>> <name>dfs.replication</name>
>> <value>3</value>
>> <description>Default block replication. The actual number of replications
>> can be specified when the file is created. The default is used if replication
>> is not specified in create time.</description>
>> </property>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/home/hadoop/hadoop/tmp/</value>
>> </property>
>> <property>
>> <name>mapred.reduce.tasks</name>
>> <value>8</value>
>> </property>
>> <property>
>> <name>mapred.tasktracker.reduce.tasks.maximum</name>
>> <value>8</value>
>> </property>
>> <property>
>> <name>mapred.child.java.opts</name>
>> <value>-Xmx1024m</value>
>> </property>
>> <property>
>> <name>dfs.datanode.socket.write.timeout</name>
>> <value>0</value>
>> </property>
>> <property>
>> <name>dfs.datanode.max.xcievers</name>
>> <value>8192</value>
>> </property>
>> <property>
>> <name>dfs.datanode.handler.count</name>
>> <value>10</value>
>> </property>
>> </configuration>
>>
>>
>> <configuration>
>> <property>
>> <name>hbase.master</name>
>> <value>123.15.51.76:60000</value>
>> </property>
>> <property>
>> <name>hbase.rootdir</name>
>> <value>hdfs://ubuntu3:9000/hbase</value>
>> </property>
>> <property>
>> <name>dfs.datanode.socket.write.timeout</name>
>> <value>0</value>
>> </property>
>> <property>
>>    <name>hbase.io.index.interval</name>
>>    <value>32</value>
>>    <description>The interval at which we record offsets in hbase
>>    store files/mapfiles.  Default for stock mapfiles is 128.  Index
>>    files are read into memory.  If there are many of them, could prove
>>    a burden.  If so play with the hadoop io.map.index.skip property and
>>    skip every nth index member when reading back the index into memory.
>>    </description>
>>  </property>
>> </configuration>
>> hadoop-hadoop-datanode-ubuntu6.log:
>> http://www.nabble.com/file/p22754309/hadoop-hadoop-datanode-ubuntu6.log
>> hadoop-hadoop-datanode-ubuntu6.rar:
>> http://www.nabble.com/file/p22754309/hadoop-hadoop-datanode-ubuntu6.rar
>> --
>> View this message in context:
>> http://www.nabble.com/received-exception-java.net.SocketTimeoutException%3A-connect-timed-out-tp22754309p22754309.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: http://www.nabble.com/received-exception-java.net.SocketTimeoutException%3A-connect-timed-out-tp22754309p22775273.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: received exception java.net.SocketTimeoutException: connect timed out

Posted by stack <st...@duboce.net>.
Can you answer my other questions above?

Have you upped the number of file descriptors in your shell?  (You can see
what ulimit -n is at the head of your regionserver logs.)
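If not, the usual fix is to raise the nofile limit for the user that runs the
daemons (a sketch; the 'hadoop' user name and the 32768 value are only examples
to adapt, and you need to log in again for the new limit to take effect):

  # /etc/security/limits.conf -- raise the open-file limit for the hadoop user
  hadoop  soft  nofile  32768
  hadoop  hard  nofile  32768

  # verify in the shell that starts hadoop/hbase
  ulimit -n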

The below looks bad.  It's as though one process removed a file that another
was trying to use (you said you were on 0.19.x hadoop and hbase -- I thought
we'd gotten rid of all such problems).

Do you have DEBUG enabled?  It might give you more clues on what's going on.
Check your .out files to see if they have anything in them.
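If DEBUG is not on, the usual way is one line in HBase's conf/log4j.properties
(a sketch; restart the daemons afterwards so it takes effect):

  # conf/log4j.properties -- switch the hbase loggers from INFO to DEBUG
  log4j.logger.org.apache.hadoop.hbase=DEBUG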

St.Ack

On Tue, Mar 31, 2009 at 9:54 AM, zxh116116 <zx...@sina.com> wrote:

>
> Thank you very much, St.Ack.
> I use a multithreaded client to insert data into HBase; no MapReduce is used.
> Now I have set the regionserver, datanode and tasktracker heaps down to 256,
> but sometimes there are other exceptions like this:
> 2009-03-31 09:04:01,774 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_1533760219750822914_4283 received exception
> org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
> blk_1533760219750822914_4283 is valid, and cannot be written to.
> 2009-03-31 09:04:02,335 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(123.15.51.84:50010,
> storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
> ipcPort=50020):DataXceiver
> org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
> blk_1533760219750822914_4283 is valid, and cannot be written to.
>        at
>
> org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>         at java.lang.Thread.run(Thread.java:619)
> 2009-03-31 09:36:59,322 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-4180713842201249509_6201 received exception
> org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
> blk_-4180713842201249509_6201 is valid, and cannot be written to.
> 2009-03-31 09:36:59,323 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(123.15.51.84:50010,
> storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
> ipcPort=50020):DataXceiver
> org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
> blk_-4180713842201249509_6201 is valid, and cannot be written to.
>        at
>
> org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>         at java.lang.Thread.run(Thread.java:619)
> 2009-03-31 09:38:18,415 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_-4310030737271067904_6230 1 Exception java.net.SocketTimeoutException:
> Read timed out
>        at java.net.SocketInputStream.socketRead0(Native Method)
>        at java.net.SocketInputStream.read(SocketInputStream.java:129)
>        at java.io.DataInputStream.readFully(DataInputStream.java:178)
>        at java.io.DataInputStream.readLong(DataInputStream.java:399)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
>         at java.lang.Thread.run(Thread.java:619)
>
> 2009-03-31 09:38:18,415 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block
> blk_-4310030737271067904_6230 terminating
> 2009-03-31 09:38:18,423 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_-4310030737271067904_6230 java.io.EOFException: while trying
> to read 65557 bytes
> 2009-03-31 09:38:18,424 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-4310030737271067904_6230 received exception java.io.EOFException:
> while
> trying to read 65557 bytes
> 2009-03-31 09:38:18,425 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(123.15.51.84:50010,
> storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:264)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:308)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:372)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:524)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>         at java.lang.Thread.run(Thread.java:619)
> 2009-03-31 09:38:18,415 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_-4310030737271067904_6230 1 Exception java.net.SocketTimeoutException:
> Read timed out
>        at java.net.SocketInputStream.socketRead0(Native Method)
>        at java.net.SocketInputStream.read(SocketInputStream.java:129)
>        at java.io.DataInputStream.readFully(DataInputStream.java:178)
>        at java.io.DataInputStream.readLong(DataInputStream.java:399)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
>         at java.lang.Thread.run(Thread.java:619)
>
> 2009-03-31 09:38:18,415 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block
> blk_-4310030737271067904_6230 terminating
> 2009-03-31 09:38:18,423 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_-4310030737271067904_6230 java.io.EOFException: while trying
> to read 65557 bytes
> 2009-03-31 09:38:18,424 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-4310030737271067904_6230 received exception java.io.EOFException:
> while
> trying to read 65557 bytes
> 2009-03-31 09:38:18,425 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(123.15.51.84:50010,
> storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:264)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:308)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:372)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:524)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
>        at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>         at java.lang.Thread.run(Thread.java:619)
> and on the namenode:
> 2009-03-31 00:26:53,043 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 9000, call
>
> addBlock(/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207,
> DFSClient_1203536607) from 123.15.51.78:38612: error:
> org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not
> replicated
> yet:/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207
> org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not
> replicated
> yet:/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207
>        at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1266)
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
>        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>        at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
> Is this a problem with my hardware?
>
>
> --
> View this message in context:
> http://www.nabble.com/received-exception-java.net.SocketTimeoutException%3A-connect-timed-out-tp22754309p22800095.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>

Re: received exception java.net.SocketTimeoutException: connect timed out

Posted by zxh116116 <zx...@sina.com>.
Thank you very much, St.Ack.
I use a multithreaded client to insert data into HBase; no MapReduce is used.
Now I have set the regionserver, datanode and tasktracker heaps down to 256,
but sometimes there are other exceptions like this:
2009-03-31 09:04:01,774 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
blk_1533760219750822914_4283 received exception
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
blk_1533760219750822914_4283 is valid, and cannot be written to.
2009-03-31 09:04:02,335 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(123.15.51.84:50010,
storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
blk_1533760219750822914_4283 is valid, and cannot be written to.
	at
org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
	at java.lang.Thread.run(Thread.java:619)
2009-03-31 09:36:59,322 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
blk_-4180713842201249509_6201 received exception
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
blk_-4180713842201249509_6201 is valid, and cannot be written to.
2009-03-31 09:36:59,323 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(123.15.51.84:50010,
storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block
blk_-4180713842201249509_6201 is valid, and cannot be written to.
	at
org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
	at java.lang.Thread.run(Thread.java:619)
2009-03-31 09:38:18,415 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
blk_-4310030737271067904_6230 1 Exception java.net.SocketTimeoutException:
Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:129)
	at java.io.DataInputStream.readFully(DataInputStream.java:178)
	at java.io.DataInputStream.readLong(DataInputStream.java:399)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
	at java.lang.Thread.run(Thread.java:619)

2009-03-31 09:38:18,415 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block
blk_-4310030737271067904_6230 terminating
2009-03-31 09:38:18,423 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
for block blk_-4310030737271067904_6230 java.io.EOFException: while trying
to read 65557 bytes
2009-03-31 09:38:18,424 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
blk_-4310030737271067904_6230 received exception java.io.EOFException: while
trying to read 65557 bytes
2009-03-31 09:38:18,425 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(123.15.51.84:50010,
storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:264)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:308)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:372)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:524)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
	at java.lang.Thread.run(Thread.java:619)
2009-03-31 09:38:18,415 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
blk_-4310030737271067904_6230 1 Exception java.net.SocketTimeoutException:
Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:129)
	at java.io.DataInputStream.readFully(DataInputStream.java:178)
	at java.io.DataInputStream.readLong(DataInputStream.java:399)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
	at java.lang.Thread.run(Thread.java:619)

2009-03-31 09:38:18,415 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block
blk_-4310030737271067904_6230 terminating
2009-03-31 09:38:18,423 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
for block blk_-4310030737271067904_6230 java.io.EOFException: while trying
to read 65557 bytes
2009-03-31 09:38:18,424 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
blk_-4310030737271067904_6230 received exception java.io.EOFException: while
trying to read 65557 bytes
2009-03-31 09:38:18,425 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(123.15.51.84:50010,
storageID=DS-321195265-123.15.51.84-50010-1238498955405, infoPort=50075,
ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:264)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:308)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:372)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:524)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
	at java.lang.Thread.run(Thread.java:619)
and on the namenode:
2009-03-31 00:26:53,043 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 9000, call
addBlock(/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207,
DFSClient_1203536607) from 123.15.51.78:38612: error:
org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not
replicated
yet:/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207
org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not
replicated
yet:/hbase/log_123.15.51.78_1238499031960_60020/hlog.dat.1238499034207
	at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1266)
	at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
Is this a problem with my hardware?
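For context, the multithreaded insert client mentioned above follows roughly
this pattern with the 0.19-era BatchUpdate API (a simplified sketch from
memory; the table name, column and row keys are made up, and each thread gets
its own HTable because HTable instances are not thread-safe -- check the 0.19
client javadoc before reusing this):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.io.BatchUpdate;

  public class InsertWorker extends Thread {
    private final int id;

    public InsertWorker(int id) { this.id = id; }

    public void run() {
      try {
        // one HTable per thread: HTable is not safe to share across threads
        HTable table = new HTable(new HBaseConfiguration(), "testtable");
        for (int i = 0; i < 100000; i++) {
          BatchUpdate update = new BatchUpdate("row-" + id + "-" + i);
          update.put("data:value", ("value-" + i).getBytes());
          table.commit(update);
        }
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

    public static void main(String[] args) {
      // a handful of concurrent writers, similar to the test described above
      for (int t = 0; t < 4; t++) {
        new InsertWorker(t).start();
      }
    }
  }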

stack-3 wrote:
> 
> OK.  Thanks for setting xceivers, etc.
> 
> How many regions do you have loaded when you start to see issues?
> 
> Looking in the regionserver logs, do you see OutOfMemoryErrors?
> 
> I'd be surprised if it all works in 512MB of RAM.  You might need to set down
> the size of the regionserver, datanode and tasktracker heaps so they don't
> grow to their default 1GB size, to avoid swapping (swapping will give your
> cluster a headache).
> 
> St.Ack
> 

-- 
View this message in context: http://www.nabble.com/received-exception-java.net.SocketTimeoutException%3A-connect-timed-out-tp22754309p22800095.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: received exception java.net.SocketTimeoutException: connect timed out

Posted by stack <st...@duboce.net>.
OK.  Thanks for setting xceivers, etc.

How many regions do you have loaded when you start to see issues?

Looking in the regionserver logs, do you see OutOfMemoryErrors?

I'd be surprised if it all works in 512MB of RAM.  You might need to set down
the size of the regionserver, datanode and tasktracker heaps so they don't grow
to their default 1GB size, to avoid swapping (swapping will give your
cluster a headache).
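Concretely, capping the heaps usually means something like the following in
the daemon env files (a sketch; the 256 MB value just mirrors what was tried
elsewhere in this thread, and HADOOP_HEAPSIZE applies to all the Hadoop
daemons the start scripts launch, including the datanode and tasktracker):

  # conf/hbase-env.sh
  export HBASE_HEAPSIZE=256

  # conf/hadoop-env.sh
  export HADOOP_HEAPSIZE=256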

St.Ack

On Mon, Mar 30, 2009 at 4:16 AM, zxh116116 <zx...@sina.com> wrote:

>
> Yes, I have read 'Getting Started', and the xceiver limit is set:
> <property>
>  <name>dfs.datanode.max.xcievers</name>
> <value>8192</value>
> </property>
> I run all the daemons on every host;
> after starting Hadoop and HBase I can see 5 regionservers.
>
>
> --
> View this message in context:
> http://www.nabble.com/received-exception-java.net.SocketTimeoutException%3A-connect-timed-out-tp22754309p22775273.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>