Posted to common-user@hadoop.apache.org by sam liu <sa...@gmail.com> on 2013/12/21 17:30:03 UTC

hdfs unable to create new block with 'Too many open files' exception

Hi Experts,

We failed to run an MR job that accesses Hive, because HDFS is unable to
create a new block during the reduce phase. The exceptions:
  1) In tasklog:
hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
create new block
  2) In HDFS data node log:
DataXceiveServer: IOException due to:java.io.IOException: Too many open
files
  ... ...
  at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
  at
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)

In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
time, we modified /etc/security/limits.conf to increase the nofile limit of
the mapred user to 1048576. But this issue still happens.
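
For reference, the changes we applied look roughly like this (just a sketch;
the property value and the nofile number are the ones stated above):

  <!-- hdfs-site.xml -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8196</value>
  </property>

  # /etc/security/limits.conf ("-" raises both the soft and hard limit)
  mapred  -  nofile  1048576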

Any suggestions?

Thanks a lot!

Re: hdfs unable to create new block with 'Too many open files' exception

Posted by sam liu <sa...@gmail.com>.
In this cluster, the data nodes run as user 'mapred'. Actually, all Hadoop
daemons run as user 'mapred'.
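
One thing worth checking is whether the raised limit actually applies to the
running DataNode process, since limits.conf only affects sessions started
after the change. A rough sketch (the pgrep pattern assumes the daemon shows
up under its usual main class):

  # pid of the running DataNode (assumes a single DataNode process on this host)
  DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -1)
  # effective open-files limit of that process
  grep 'Max open files' /proc/$DN_PID/limits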


2013/12/22 Ted Yu <yu...@gmail.com>

> Are your data nodes running as user 'hdfs' or 'mapred'?
>
> If the former, you need to increase the file limit for the 'hdfs' user.
>
> Cheers
>
>
> On Sat, Dec 21, 2013 at 8:30 AM, sam liu <sa...@gmail.com> wrote:
>
>> Hi Experts,
>>
>> We failed to run an MR job that accesses Hive, because HDFS is unable to
>> create a new block during the reduce phase. The exceptions:
>>   1) In tasklog:
>> hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
>> create new block
>>   2) In HDFS data node log:
>> DataXceiveServer: IOException due to:java.io.IOException: Too many open
>> files
>>   ... ...
>>   at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>>   at
>> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
>>
>> In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
>> time, we modified /etc/security/limits.conf to increase the nofile limit of
>> the mapred user to 1048576. But this issue still happens.
>>
>> Any suggestions?
>>
>> Thanks a lot!
>>
>>
>

Re: hdfs unable to create new block with 'Too many open files' exception

Posted by Ted Yu <yu...@gmail.com>.
Are your data nodes running as user 'hdfs' or 'mapred'?

If the former, you need to increase the file limit for the 'hdfs' user.
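
A quick way to check is something like this (just a sketch; it assumes the
DataNode main class appears on the daemon's command line):

  # the first column of the output is the user the DataNode process runs as
  ps -ef | grep org.apache.hadoop.hdfs.server.datanode.DataNode | grep -v grep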

Cheers


On Sat, Dec 21, 2013 at 8:30 AM, sam liu <sa...@gmail.com> wrote:

> Hi Experts,
>
> We failed to run an MR job that accesses Hive, because HDFS is unable to
> create a new block during the reduce phase. The exceptions:
>   1) In tasklog:
> hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
> create new block
>   2) In HDFS data node log:
> DataXceiveServer: IOException due to:java.io.IOException: Too many open
> files
>   ... ...
>   at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>   at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
>
> In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
> time, we modified /etc/security/limits.conf to increase the nofile limit of
> the mapred user to 1048576. But this issue still happens.
>
> Any suggestions?
>
> Thanks a lot!
>
>
