Posted to common-user@hadoop.apache.org by Fourie Joubert <fo...@up.ac.za> on 2012/05/09 17:14:48 UTC

DataNodeRegistration problem

Hi

I am running Hadoop-1.0.1 with Sun jdk1.6.0_23.

My system is a head node with 14 compute blades.

When trying to start Hadoop, I get the following message in the logs for 
each data node:


2012-05-09 16:53:35,548 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(137.215.75.201:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020):DataXceiver

java.net.SocketException: Protocol not available
...
...

The full log is shown below.

I can't seem to get past this problem - any help or advice would be 
sincerely appreciated.

Kindest regards!

Fourie





2012-05-09 16:53:31,800 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = wonko1/137.215.75.201
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = 
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 
1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2012-05-09 16:53:31,934 INFO 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2.properties
2012-05-09 16:53:31,945 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
MetricsSystem,sub=Stats registered.
2012-05-09 16:53:31,946 INFO 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot 
period at 10 second(s).
2012-05-09 16:53:31,946 INFO 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics 
system started
2012-05-09 16:53:32,022 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
ugi registered.
2012-05-09 16:53:32,232 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered 
FSDatasetStatusMBean
2012-05-09 16:53:32,242 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2012-05-09 16:53:32,244 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 
1048576 bytes/s
2012-05-09 16:53:32,291 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
2012-05-09 16:53:32,347 INFO org.apache.hadoop.http.HttpServer: Added 
global filtersafety 
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-05-09 16:53:32,359 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2012-05-09 16:53:32,359 INFO org.apache.hadoop.http.HttpServer: Port 
returned by webServer.getConnectors()[0].getLocalPort() before open() is 
-1. Opening the listener on 50075
2012-05-09 16:53:32,359 INFO org.apache.hadoop.http.HttpServer: 
listener.getLocalPort() returned 50075 
webServer.getConnectors()[0].getLocalPort() returned 50075
2012-05-09 16:53:32,360 INFO org.apache.hadoop.http.HttpServer: Jetty 
bound to port 50075
2012-05-09 16:53:32,360 INFO org.mortbay.log: jetty-6.1.26
2012-05-09 16:53:32,590 INFO org.mortbay.log: Started 
SelectChannelConnector@0.0.0.0:50075
2012-05-09 16:53:32,594 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
jvm registered.
2012-05-09 16:53:32,595 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
DataNode registered.
2012-05-09 16:53:32,614 INFO org.apache.hadoop.ipc.Server: Starting 
SocketReader
2012-05-09 16:53:32,616 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
RpcDetailedActivityForPort50020 registered.
2012-05-09 16:53:32,616 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
RpcActivityForPort50020 registered.
2012-05-09 16:53:32,618 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = 
DatanodeRegistration(wonko1.bi.up.ac.za:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020)
2012-05-09 16:53:32,620 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous 
block report scan
2012-05-09 16:53:32,620 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(137.215.75.201:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020)In DataNode.run, data = 
FSDataset{dirpath='/hadooplocal/datadir/current'}
2012-05-09 16:53:32,620 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous 
block report scan in 0ms
2012-05-09 16:53:32,621 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2012-05-09 16:53:32,621 INFO org.apache.hadoop.ipc.Server: IPC Server 
listener on 50020: starting
2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 0 on 50020: starting
2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 1 on 50020: starting
2012-05-09 16:53:32,623 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: using 
BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 2 on 50020: starting
2012-05-09 16:53:32,626 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous 
block report against current state in 0 ms
2012-05-09 16:53:32,628 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks 
took 0 msec to generate and 2 msecs for RPC and NN processing
2012-05-09 16:53:32,628 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block 
scanner.
2012-05-09 16:53:32,629 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough 
(lockless) block report in 0 ms
2012-05-09 16:53:32,629 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous 
block report against current state in 0 ms
2012-05-09 16:53:35,548 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(137.215.75.201:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Protocol not available
         at sun.nio.ch.Net.getIntOption0(Native Method)
         at sun.nio.ch.Net.getIntOption(Net.java:181)
         at 
sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
         at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
         at 
sun.nio.ch.SocketOptsImpl.receiveBufferSize(SocketOptsImpl.java:142)
         at 
sun.nio.ch.SocketOptsImpl$IP$TCP.receiveBufferSize(SocketOptsImpl.java:286)
         at 
sun.nio.ch.OptionAdaptor.getReceiveBufferSize(OptionAdaptor.java:148)
         at 
sun.nio.ch.SocketAdaptor.getReceiveBufferSize(SocketAdaptor.java:336)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:238)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
         at java.lang.Thread.run(Thread.java:636)
2012-05-09 16:54:36,377 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(137.215.75.201:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Protocol not available
         at sun.nio.ch.Net.getIntOption0(Native Method)
         at sun.nio.ch.Net.getIntOption(Net.java:181)
         at 
sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
         at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
         at 
sun.nio.ch.SocketOptsImpl.receiveBufferSize(SocketOptsImpl.java:142)
         at 
sun.nio.ch.SocketOptsImpl$IP$TCP.receiveBufferSize(SocketOptsImpl.java:286)
         at 
sun.nio.ch.OptionAdaptor.getReceiveBufferSize(OptionAdaptor.java:148)
         at 
sun.nio.ch.SocketAdaptor.getReceiveBufferSize(SocketAdaptor.java:336)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:238)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
         at java.lang.Thread.run(Thread.java:636)
2012-05-09 16:54:46,427 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(137.215.75.201:50010, 
storageID=DS-2067460883-137.215.75.201-50010-1336575105195, 
infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Protocol not available
         at sun.nio.ch.Net.getIntOption0(Native Method)



-- 
--------------
Prof Fourie Joubert
Bioinformatics and Computational Biology Unit
Department of Biochemistry
University of Pretoria
fourie.joubert@up.ac.za
http://www.bi.up.ac.za
Tel. +27-12-420-5825
Fax. +27-12-420-5800



Re: DataNodeRegistration problem

Posted by Fourie Joubert <fo...@up.ac.za>.
Thanks - I'll check!

Regards!

Fourie

On 05/09/2012 05:47 PM, Harsh J wrote:
> You may be hitting https://issues.apache.org/jira/browse/HDFS-1115.
> Have you ensured Sun JDK is the only JDK available on the machines and
> your services aren't using OpenJDK accidentally?




Re: DataNodeRegistration problem

Posted by Harsh J <ha...@cloudera.com>.
Fourie,

Good to know. Just for the record, do you remember which version of
OpenJDK was installed, so that we can mark it as unusable in our
docs/wiki?

On Thu, May 10, 2012 at 12:22 PM, Fourie Joubert
<fo...@up.ac.za> wrote:
> Hi
>
> Yes - that was indeed the problem...
>
> I cleaned up the Java installations on all the nodes, did a clean
> reinstall of Sun jdk1.6.0_23, and the problem is gone.
>
> Many thanks and regards!
>
>
> Fourie



-- 
Harsh J

Re: DataNodeRegistration problem

Posted by Fourie Joubert <fo...@up.ac.za>.
Hi

Yes - that was indeed the problem...

I cleaned up the Java installations on all the nodes, did a clean 
reinstall of Sun jdk1.6.0_23, and the problem is gone.
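
For anyone who finds this thread later: a quick way to confirm which JVM 
a node's Hadoop daemons actually pick up is a small check along these 
lines. This is only a sketch (the class name is arbitrary, and it assumes 
you compile and run it with the same java binary that hadoop-env.sh 
points at), not anything shipped with Hadoop:

    public class WhichJvm {
        public static void main(String[] args) {
            // A java.vm.name starting with "OpenJDK" means an OpenJDK
            // build is still on the path somewhere, which is the
            // HDFS-1115 trigger discussed in this thread.
            System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("java.home    = " + System.getProperty("java.home"));
        }
    }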

Many thanks and regards!

Fourie

On 05/09/2012 05:47 PM, Harsh J wrote:
> You may be hitting https://issues.apache.org/jira/browse/HDFS-1115.
> Have you ensured Sun JDK is the only JDK available on the machines and
> your services aren't using OpenJDK accidentally?




Re: DataNodeRegistration problem

Posted by Harsh J <ha...@cloudera.com>.
You may be hitting https://issues.apache.org/jira/browse/HDFS-1115.
Have you ensured Sun JDK is the only JDK available on the machines and
your services aren't using OpenJDK accidentally?
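
The failing frame in the stack traces above is the SO_RCVBUF query that 
DataXceiver.writeBlock makes on a channel-backed socket 
(sun.nio.ch.SocketAdaptor.getReceiveBufferSize). A minimal sketch of that 
same JDK call path, outside Hadoop, looks like the following; on the 
affected OpenJDK 6 builds the getReceiveBufferSize() call throws the same 
"Protocol not available" SocketException, while on the Sun JDK it prints 
the buffer size. The class name and loopback setup are illustrative only:

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class RcvBufRepro {
        public static void main(String[] args) throws Exception {
            // Accept a channel-backed socket on loopback, mirroring what
            // the DataNode does for an incoming block transfer.
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress("127.0.0.1", 0));
            int port = server.socket().getLocalPort();

            SocketChannel client =
                    SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
            SocketChannel accepted = server.accept();

            // This call goes through
            // sun.nio.ch.SocketAdaptor.getReceiveBufferSize(), the frame
            // at the top of the stack traces in the log.
            System.out.println("SO_RCVBUF = "
                    + accepted.socket().getReceiveBufferSize());

            accepted.close();
            client.close();
            server.close();
        }
    }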




-- 
Harsh J