Posted to common-dev@hadoop.apache.org by "Mahadev konar (JIRA)" <ji...@apache.org> on 2006/05/11 02:10:05 UTC

[jira] Created: (HADOOP-210) Namenode not able to accept connections

Namenode not able to accept connections
---------------------------------------

         Key: HADOOP-210
         URL: http://issues.apache.org/jira/browse/HADOOP-210
     Project: Hadoop
        Type: Bug

  Components: dfs  
 Environment: linux
    Reporter: Mahadev konar
 Assigned to: Mahadev konar 


I am running Owen's random writer on a 627-node cluster (writing 10 GB/node). After running for a while (map 12%, reduce 1%) I get the following error on the Namenode:

Exception in thread "Server listener on port 60000" java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:574)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:105)

After this, the namenode no longer seems to accept connections from any of the clients. All the DFSClient calls time out. Here is a trace for one of them:
java.net.SocketTimeoutException: timed out waiting for rpc response
	at org.apache.hadoop.ipc.Client.call(Client.java:305)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
	at org.apache.hadoop.dfs.$Proxy1.open(Unknown Source)
	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:419)
	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:406)
	at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:171)
	at org.apache.hadoop.dfs.DistributedFileSystem.openRaw(DistributedFileSystem.java:78)
	at org.apache.hadoop.fs.FSDataInputStream$Checker.<init>(FSDataInputStream.java:46)
	at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:228)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:157)
	at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:43)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:105)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:785).


The namenode has around 1% CPU utilization at this time (after the OutOfMemoryError has been thrown). I have profiled the NameNode and it seems to be using a maximum heap size of around 57 MB (which is not much), so heap size does not seem to be the problem. Could it be happening due to a lack of stack space? Any pointers?


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "alan wootton (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12413192 ] 

alan wootton commented on HADOOP-210:
-------------------------------------

It seems clear to me that we need to change the RPC server to use java.nio.channels.ServerSocketChannel instead of the current thread-per-connection model.
Is that what we're talking about?

Who is the nio expert here? :-)
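
For concreteness, here is a minimal sketch of that model (not Hadoop code; the class name and the port are made up for illustration): one selector thread accepts connections and watches all of them for readable data, instead of parking a thread on every socket.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioListenerSketch {
  public static void main(String[] args) throws IOException {
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.configureBlocking(false);
    server.socket().bind(new InetSocketAddress(60000));
    server.register(selector, SelectionKey.OP_ACCEPT);
    ByteBuffer buf = ByteBuffer.allocate(8192);
    while (true) {
      selector.select();                                   // block until something is ready
      Iterator<SelectionKey> it = selector.selectedKeys().iterator();
      while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
          SocketChannel ch = server.accept();              // no new thread per connection
          ch.configureBlocking(false);
          ch.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
          SocketChannel ch = (SocketChannel) key.channel();
          buf.clear();
          if (ch.read(buf) < 0) {
            ch.close();                                    // client went away
          }
          // otherwise: accumulate bytes until a whole request has arrived, then
          // hand it to a worker thread (framing is the hard part, discussed later in this thread)
        }
      }
    }
  }
}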



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "eric baldeschwieler (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379173 ] 

eric baldeschwieler commented on HADOOP-210:
--------------------------------------------

Let's not argue the point in the abstract. 

If someone does submit a patch that reduces the overhead of having many RPCs/connections without complicating the programming model or tanking performance, I assume it would be acceptable, right?

If someone feels they can achieve these aims, I'd encourage them to sign up / implement something.  Then we can test it.

Otherwise, let's let it lie.





[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12415927 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

This looks great to me & passes my tests.

One improvement I'd like to see is for Client.java to not serialize the call twice.  This could easily be done with Hadoop's DataOutputBuffer: write the call to a DataOutputBuffer, write the length, then write the data from the buffer.
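
For reference, a rough sketch of that change (not the actual Client.java code; the method and parameter names are illustrative, assuming DataOutputBuffer's getData()/getLength() accessors): serialize the call once into the buffer, then emit its length followed by the already-serialized bytes.

import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Writable;

class ClientWriteSketch {
  // Hypothetical helper, roughly where Client.java writes a call to the socket.
  static void writeCall(DataOutputStream out, int callId, Writable param) throws IOException {
    DataOutputBuffer buf = new DataOutputBuffer();
    buf.writeInt(callId);                     // serialize the call exactly once
    param.write(buf);
    int length = buf.getLength();
    out.writeInt(length);                     // length prefix lets the server frame the request
    out.write(buf.getData(), 0, length);      // reuse the buffered bytes, no second serialization
    out.flush();
  }
}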

Thanks!



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12413019 ] 

Konstantin Shvachko commented on HADOOP-210:
--------------------------------------------

Sorry, I put the comment in the wrong thread.



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12413195 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

Alan, yes, changing Server.java to use nio's Selector is what's under discussion, using a single thread on the server to buffer requests until they are complete, then dispatching each request to a worker thread.  Client.java must also be modified to buffer request objects so that they can be written preceded by their size, permitting Server.java to determine when each request has fully arrived.
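
To sketch the server half of that (again, not the actual Server.java code; the class name, the executor, and the process() hook are illustrative), the selector thread could keep this per-connection state and hand each completed, length-prefixed request to a worker:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;

// Per-connection state owned by the single selector thread.
class ConnectionSketch {
  private final SocketChannel channel;
  private final ExecutorService workers;               // pool of handler threads
  private final ByteBuffer lengthBuf = ByteBuffer.allocate(4);
  private ByteBuffer dataBuf;                           // allocated once the length is known

  ConnectionSketch(SocketChannel channel, ExecutorService workers) {
    this.channel = channel;
    this.workers = workers;
  }

  // Called by the selector thread whenever this channel is readable.
  void readAvailable() throws IOException {
    if (dataBuf == null) {
      channel.read(lengthBuf);
      if (lengthBuf.hasRemaining()) return;             // length prefix not complete yet
      lengthBuf.flip();
      dataBuf = ByteBuffer.allocate(lengthBuf.getInt());
    }
    channel.read(dataBuf);
    if (!dataBuf.hasRemaining()) {                      // the whole request has arrived
      final ByteBuffer request = dataBuf;
      dataBuf = null;
      lengthBuf.clear();
      workers.execute(new Runnable() {
        public void run() { process(request); }         // dispatch off the selector thread
      });
    }
  }

  void process(ByteBuffer request) {
    // deserialize the call via readFields() and invoke it (omitted)
  }
}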



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379098 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

I'd guess you're out of file handles or threads (both of which can appear as an OutOfMemoryError).  Each DFS client JVM and each datanode keeps a connection open to the namenode with a corresponding thread.  The number of threads per process was limited in some older kernels, but more recent kernels have mostly removed that limit, and the scheduler now also handles large numbers of threads effectively.  But you may need to change some limits.  Use 'ulimit -n' to see how many file handles you are permitted, and increase that to at least 4x the number of nodes in your cluster.  You may need to change some kernel options to increase the number of threads:

http://www.kegel.com/c10k.html#limits.threads

You can monitor the number of open file handles with 'lsof', and the number of threads with 'ps'.

I spent some time a while back trying to get Hadoop's IPC to use non-blocking IO (and hence far fewer threads).  The problem is that, since IPC requests include objects, we cannot start processing a request until we've received the complete request, and requests can be bigger than a single packet.  Moreover, the end of one request and the beginning of the next can be combined in a packet.  So it's easy to accumulate buffers for many connections using just a single thread; the problem is knowing when a buffer holds a complete request that should be dispatched to a worker thread.  So we'd need to length-prefix requests, or break them into length-prefixed chunks.  This may be required for effective operation of very large clusters, or perhaps Linux kernel threads are now up to the task.  We'll soon see.




[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379125 ] 

Owen O'Malley commented on HADOOP-210:
--------------------------------------

It seems clear to me that before we get to a 2000-node Hadoop cluster, we will be using select to manage the incoming connections. Even with 128k stacks, 8000 threads would need a gig of RAM, which is too much on our current hardware. The servers already have thread pools, but they also have a thread per socket.



[jira] Reopened: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]
     
Doug Cutting reopened HADOOP-210:
---------------------------------

     Assign To: Sameer Paranjpye  (was: Mahadev konar)

I reverted this for now, since it (for unknown reasons) seemed to break distributed operation.



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12413017 ] 

Konstantin Shvachko commented on HADOOP-210:
--------------------------------------------

I thought that we might want to use Java reflection to make versioning support more generic.
This would require some programming discipline, and the reflection framework would do the rest.

So I propose that all classes that require versioning implement a Versioned interface (let me know
if the name does not sound right), which should include a getVersion() method.
Additionally, all versions of the same class should implement the same interface, which declares
the methods that are version dependent.
For example, let's consider the INode class.

public class INode implements INodeReader, Versioned {
    String name;
    int nrBlocks;
....
}

INodeReader is the interface that declares, e.g., only one method, readFields(in),
and each version of INode should implement this interface.
For each field declared in the INode class we must have the corresponding methods:
get<fieldName>
set<fieldName>
setDefault<fieldName>
For now I assume that all fields are of primitive types, and that a version transition
means only adding or removing fields in the class.

Then we should have a procedure for retiring old versions, which I see as renaming
the package of the Versioned class so that it includes the version number. E.g.,
org.apache.hadoop.dfs.INode
is renamed to
org.apache.hadoop.dfs.v0.INode
if the old version is 0 and the new one is 1.
The retired classes are placed in a separate jar file.
I haven't thought about whether the retiring can be automated with an ant script or not.

Finally we have a VersionFactory class, which can be either
a member or a superclass of INode.
The implementation of readFields is simple:
INode.readFields( in ) {
    int storedVersion = in.readVersion();
    VersionFactory.read( getCurrentVersion(), storedVersion, this );
}

And then VersionFactory.read() does the actual job:
VersionFactory.read( targetVersion, sourceVersion, targetClass ) {
    get the class name of the required version based on the targetClass name and sourceVersion;
    construct the sourceClass;
    if the class is not found, report that the version is not supported;
    targetFields = targetClass.getFields(), sorted lexicographically by field name;
    sourceFields = sourceClass.getFields(), sorted lexicographically by field name;
    Then we scan the two lists.
    If both of them contain field A (of type T) then {
        // this field is common to the two versions
        T value = in.readT();
        and then invoke targetClass.setA( value )
    }
    If field A is contained only in targetClass {
        // this is a new field
        invoke targetClass.setDefaultA();
    }
    If field A belongs only to sourceClass {
        // this field was removed in the new version
        T value = in.readT();
        and do not assign it to anything;
    }
}

Advantages:
This way we can read data from any previous version, not only the preceding one.
And when defining a new version of the class we do not need any knowledge
of the previous version(s).
Also, we can and should have many Versioned classes (for INode; for the add, delete,
and rename operation logs; ...), but we can use the same VersionFactory for all of them.
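
A bare-bones sketch of that read procedure in actual Java (not Hadoop code; the helper names, the restriction to a few field types, and the use of getDeclaredFields() are my own simplifications of the pseudocode above):

import java.io.DataInput;
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

class VersionFactorySketch {
  // Read an object stored by sourceClass (the old version) into target (the current version).
  static void read(Class<?> targetClass, Class<?> sourceClass, Object target, DataInput in)
      throws Exception {
    Field[] sourceFields = sourceClass.getDeclaredFields();
    Arrays.sort(sourceFields, new Comparator<Field>() {
      public int compare(Field a, Field b) { return a.getName().compareTo(b.getName()); }
    });
    // Every stored field is present in the stream, in field-name order; keep the ones
    // the current class still declares, and discard the ones that were removed.
    for (Field src : sourceFields) {
      Object value = readValue(in, src.getType());
      Field dst = findField(targetClass, src.getName());
      if (dst != null) {
        dst.setAccessible(true);
        dst.set(target, value);
      }
    }
    // Fields that exist only in the current class keep their defaults (setDefault<fieldName>).
  }

  static Object readValue(DataInput in, Class<?> type) throws Exception {
    if (type == int.class)     return in.readInt();
    if (type == long.class)    return in.readLong();
    if (type == boolean.class) return in.readBoolean();
    if (type == String.class)  return in.readUTF();
    throw new IllegalArgumentException("unsupported field type: " + type);
  }

  static Field findField(Class<?> c, String name) {
    try { return c.getDeclaredField(name); } catch (NoSuchFieldException e) { return null; }
  }
}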





[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379107 ] 

Owen O'Malley commented on HADOOP-210:
--------------------------------------

The file handles are fine at 32768.
The kernel is 2.6.9, so it should be fine too.

The problem seems to be that the default thread stack size is 512k, which is more than a gig of stack for his 2036 threads. Mahadev is going to take the stack size on the Listener threads down to 128k, which should take the pressure off.
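
For reference, the per-thread stack size can be requested via the four-argument Thread constructor (a sketch; the group, name, and body are made up, and per the javadoc the JVM is free to treat the stackSize value as a hint only):

public class SmallStackThreadSketch {
  public static void main(String[] args) {
    ThreadGroup group = new ThreadGroup("IPC handlers");
    Runnable work = new Runnable() {
      public void run() {
        // read a request, process it, write the response (omitted)
      }
    };
    // Fourth argument is the requested stack size in bytes (128 KB here);
    // the JVM-wide default can also be lowered with the -Xss option.
    Thread handler = new Thread(group, work, "Server handler", 128 * 1024);
    handler.setDaemon(true);
    handler.start();
  }
}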



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12383230 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

> Let's not argue the point in the abstract.  If someone [...] it would be acceptable, right?

Is that a question for me?  I can't answer in the abstract.  Show me code & I'll give an opinion.  Other committers can too; folks can cause me to change my opinion, etc.  Heck, if someone convincingly demonstrates that the thread-per-connection model has reached the end of its tether, then I might implement it myself.

Having explored this a few times now, I currently think some sort of chunked encoding for requests is required.  We could also chunk responses, which might solve some other issues.



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "eric baldeschwieler (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12412989 ] 

eric baldeschwieler commented on HADOOP-210:
--------------------------------------------

I think we need to break the one-thread-per-connection model.  Otherwise our servers will not scale, so "selectors" are needed.

Also, we probably need to break the very long connection caching model and the invariant of one connection per VM.  Connection setup is nearly free, and serializing requests from different threads creates race conditions and other failure cases.




[jira] Updated: (HADOOP-210) Namenode not able to accept connections

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]

Konstantin Shvachko updated HADOOP-210:
---------------------------------------

    Comment: was deleted



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12412915 ] 

Arun C Murthy commented on HADOOP-210:
--------------------------------------

I'm exploring possible solutions to this problem, kicking off a discussion...

a) Procrastinate

  Get a really beefy 64-bit namenode. 
  Run it with lots of RAM in 64-bit mode (assuming none of the code needs changes and the JVM works), or with the (almost) full 4GB virtual address space in 32-bit mode.

b) Thread pool
   
  i) Create one thread per (persistent) datanode connection and use a thread pool to handle incoming client connections (a rough sketch follows after this list). This would ensure that only incoming client connections are penalized during times of very high memory usage, while the datanodes themselves keep a persistent connection to the namenode.

  ii) Everyone (clients & datanodes) goes through the (possibly separate) thread pool(s).

c) Selectors

 Owen: I'm not very clear on how selects would help; could you please chime in? (I'm only casually acquainted with Selectors.)
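
Here is the rough sketch promised under option (b). It is not Hadoop code, and the pool size and port are made up; the point is just that a single accept loop plus a bounded pool caps the number of handler threads no matter how many clients connect.

import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServerSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(64);   // bounded handler pool
    ServerSocket server = new ServerSocket(60000);
    while (true) {
      final Socket socket = server.accept();                   // one acceptor thread
      pool.execute(new Runnable() {
        public void run() { handle(socket); }                  // queued if all workers are busy
      });
    }
  }

  static void handle(Socket socket) {
    try {
      // read a request, process it, write the response (omitted)
    } finally {
      try { socket.close(); } catch (Exception ignored) { }
    }
  }
}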

thanks,
Arun



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12416324 ] 

Devaraj Das commented on HADOOP-210:
------------------------------------

I think I will need to work with Owen to have a quick resolution on this.



[jira] Updated: (HADOOP-210) Namenode not able to accept connections

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]

Konstantin Shvachko updated HADOOP-210:
---------------------------------------

    Comment: was deleted



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12412958 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

Threads are already pooled, with a single thread per client JVM.

The solution is either to not cache connections, opening a new connection per request, or to use selectors, so that a single thread can efficiently handle requests on all connections. In the latter case, we need to alter the request protocol so that incoming requests can be buffered until they are complete and then dispatched to a worker thread. Currently a request cannot be parsed except by a readFields method, so there's no way for generic server code to tell where one request ends and the next begins. To fix that, we can simply write each request to a buffer on the client first, then send it length-prefixed.
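
As a minimal sketch of the client-side framing described above (the class and method names are illustrative, not the actual Hadoop IPC code, and the request is assumed to have already been serialized to a byte array by its Writable write method):

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class FramedRequestWriter {
      // Serialize the request into a byte array first, then send it length-prefixed,
      // so generic server code can find request boundaries without parsing the payload.
      public static void writeFramed(OutputStream socketOut, byte[] serializedRequest)
          throws IOException {
        DataOutputStream out = new DataOutputStream(socketOut);
        out.writeInt(serializedRequest.length); // 4-byte length prefix
        out.write(serializedRequest);           // complete request body
        out.flush();
      }
    }

The server side can then read the 4-byte length, accumulate exactly that many bytes (possibly across several non-blocking reads), and only hand the completed buffer to readFields on a worker thread.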



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12415008 ] 

Devaraj Das commented on HADOOP-210:
------------------------------------

I am implementing this. For now I am using nio only for client accepts and the subsequent reads from clients; the handler threads still write the output/response directly to the clients concerned. Clients are disconnected if they don't communicate within a certain timeout. The complication is that the idle intervals could differ between protocols (e.g., dfs datanodes' heartbeats versus client leases), so for now I am assuming a single maximum timeout for IPC communication (read from the conf file) that applies to all RPC protocols. The servers keep track of when each client last communicated with them (either through a TCP connect or a TCP write). Comments?
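
For illustration only, here is a rough sketch of the kind of selector loop described above: one thread accepts connections and reads from clients, records when each client last communicated, and closes clients that exceed the timeout. The class name, port, buffer size, and timeout below are assumptions for the example, not values taken from the patch.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class SelectorListenerSketch {
      public static void main(String[] args) throws IOException {
        int port = 60000;             // illustrative; a real server reads this from the conf
        long maxIdleMillis = 60000L;  // illustrative stand-in for the conf-driven idle timeout
        Map<SocketChannel, Long> lastContact = new HashMap<SocketChannel, Long>();

        Selector selector = Selector.open();
        ServerSocketChannel acceptor = ServerSocketChannel.open();
        acceptor.configureBlocking(false);
        acceptor.socket().bind(new InetSocketAddress(port));
        acceptor.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer readBuf = ByteBuffer.allocate(8192);
        while (true) {
          selector.select(1000);      // wake up periodically to sweep idle clients
          Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
          while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            if (key.isAcceptable()) {
              SocketChannel client = acceptor.accept();
              client.configureBlocking(false);
              client.register(selector, SelectionKey.OP_READ);
              lastContact.put(client, Long.valueOf(System.currentTimeMillis()));
            } else if (key.isReadable()) {
              SocketChannel client = (SocketChannel) key.channel();
              readBuf.clear();
              int n = client.read(readBuf);
              if (n < 0) {            // client closed its end of the connection
                key.cancel();
                client.close();
                lastContact.remove(client);
                continue;
              }
              lastContact.put(client, Long.valueOf(System.currentTimeMillis()));
              // ...accumulate bytes until a complete length-prefixed request has arrived,
              // then hand it to a handler thread, which writes the response itself.
            }
          }
          // Disconnect clients that have not communicated within the timeout.
          long now = System.currentTimeMillis();
          Iterator<Map.Entry<SocketChannel, Long>> idle = lastContact.entrySet().iterator();
          while (idle.hasNext()) {
            Map.Entry<SocketChannel, Long> entry = idle.next();
            if (now - entry.getValue().longValue() > maxIdleMillis) {
              entry.getKey().close();  // closing the channel also cancels its selection key
              idle.remove();
            }
          }
        }
      }
    }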



[jira] Updated: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]

Devaraj Das updated HADOOP-210:
-------------------------------

    Attachment: nio.patch

Thanks Doug for the comment. I have updated the patch accordingly.



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379122 ] 

Doug Cutting commented on HADOOP-210:
-------------------------------------

> Reducing stack size may relieve the problem temporarily, but will not solve the problem completely.

This remains to be seen. What you say is possible, but it is also possible that, e.g., a 2GB heap may gracefully handle 10k or more connection threads. We need to determine this.

It's a question of constants. We know we need to allocate some buffer memory per connection, perhaps a few K, but perhaps more in some cases (e.g., block reports). With a thread per connection, we need some stack space but less buffer space, which is probably more memory on the whole. But there's no point in optimizing this if we can handle as many threads as we need with the amount of memory we have.
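
As a rough, back-of-the-envelope illustration (the numbers are assumptions, not measurements from this cluster): with a 512 KB stack per thread, 10,000 connection threads reserve on the order of 10,000 x 512 KB, roughly 5 GB, of stack space, while a selector-based server holding, say, 4 KB of buffer per connection needs about 10,000 x 4 KB, roughly 40 MB, although a single block report could transiently require much more for one connection.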



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379117 ] 

Runping Qi commented on HADOOP-210:
-----------------------------------


Reducing the stack size may relieve the problem temporarily, but it will not solve the problem completely.
It seems to me that the problem is due to the fact that a thread is created per RPC connection. A better solution is to use a thread pool and a connection queue; this way it is easier to manage the resource limits.
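
As a sketch of the thread-pool-plus-connection-queue idea (this is not how the Hadoop IPC Server is implemented; the handler count, queue capacity, and port below are illustrative):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PooledConnectionServerSketch {
      public static void main(String[] args) throws IOException {
        int handlerCount = 10;     // illustrative; a real server would read this from the conf
        int queueCapacity = 1000;  // bounds memory instead of growing one thread per connection
        final BlockingQueue<Socket> pending = new ArrayBlockingQueue<Socket>(queueCapacity);

        // Fixed pool of handler threads: resource use is bounded by the pool and queue sizes.
        for (int i = 0; i < handlerCount; i++) {
          new Thread(new Runnable() {
            public void run() {
              while (true) {
                try {
                  Socket s = pending.take();  // blocks until a connection is queued
                  handle(s);                  // read the request, write the response, close
                } catch (InterruptedException e) {
                  return;
                }
              }
            }
          }, "handler-" + i).start();
        }

        ServerSocket listener = new ServerSocket(60000);
        while (true) {
          Socket s = listener.accept();
          if (!pending.offer(s)) {   // queue full: shed load rather than run out of memory
            try { s.close(); } catch (IOException e) { /* ignore */ }
          }
        }
      }

      private static void handle(Socket s) {
        // Placeholder for request processing; a real handler would parse and dispatch the RPC.
        try { s.close(); } catch (IOException e) { /* ignore */ }
      }
    }

The point is that memory use is bounded by the handler count plus the queue capacity, rather than growing by one thread per open connection.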





[jira] Updated: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]

Devaraj Das updated HADOOP-210:
-------------------------------

    Attachment: nio.new.patch

This patch was tested by Owen.



[jira] Updated: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]

Devaraj Das updated HADOOP-210:
-------------------------------

    Attachment: nio.patch

Attached is the patch for doing selector-based RPC communication.



[jira] Assigned: (HADOOP-210) Namenode not able to accept connections

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj Das reassigned HADOOP-210:
----------------------------------

    Assignee: Devaraj Das  (was: Sameer Paranjpye)



[jira] Resolved: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]
     
Doug Cutting resolved HADOOP-210:
---------------------------------

    Resolution: Fixed

I just committed this.  Thanks, Devaraj!



[jira] Commented: (HADOOP-210) Namenode not able to accept connections

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12416253 ] 

Owen O'Malley commented on HADOOP-210:
--------------------------------------

I'm having problems with this patch: it seems to cause servers to stop serving requests. Usually I can do a bit of work, but when I try to submit a job, the job never seems to show up in the webapp.



[jira] Resolved: (HADOOP-210) Namenode not able to accept connections

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ http://issues.apache.org/jira/browse/HADOOP-210?page=all ]
     
Doug Cutting resolved HADOOP-210:
---------------------------------

    Fix Version: 0.4.0
     Resolution: Fixed

I just committed this.  Thanks, Devaraj!
