Posted to hdfs-dev@hadoop.apache.org by "Dhaval Patel (JIRA)" <ji...@apache.org> on 2017/10/04 17:16:01 UTC

[jira] [Created] (HDFS-12590) datanode process running in dead state for over 24 hours

Dhaval Patel created HDFS-12590:
-----------------------------------

             Summary: datanode process running in dead state for over 24 hours
                 Key: HDFS-12590
                 URL: https://issues.apache.org/jira/browse/HDFS-12590
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Dhaval Patel



{code:java}
2017-10-02 14:04:44,862 INFO  datanode.DataNode (BPServiceActor.java:run(733)) - Block pool <registering> (Datanode Uuid unassigned) service to master5.xxxxxx.local/10.10.10.10:8020 starting to offer service
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(1045)) - IPC Server Responder: starting
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(881)) - IPC Server listener on 8010: starting
2017-10-02 14:04:45,066 INFO  common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(384)) - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2017-10-03 14:06:10,525 ERROR common.Storage (Storage.java:tryLock(783)) - Failed to acquire lock on /data1/hadoop/hdfs/data/in_use.lock. If this storage directory is mounted via NFS, ensure that the appropriate nfs lock services are running.
java.io.IOException: Resource temporarily unavailable
        at java.io.RandomAccessFile.writeBytes(Native Method)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
        at java.lang.Thread.run(Thread.java:745)
2017-10-03 14:06:10,542 WARN  common.Storage (DataStorage.java:loadDataStorage(449)) - Failed to add storage directory [DISK]file:/data1/hadoop/hdfs/data/
java.io.IOException: Resource temporarily unavailable
        at java.io.RandomAccessFile.writeBytes(Native Method)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
        at java.lang.Thread.run(Thread.java:745)
2017-10-03 18:03:16,928 ERROR datanode.DataNode (LogAdapter.java:error(71)) - RECEIVED SIGNAL 15: SIGTERM
2017-10-03 18:03:16,934 INFO  datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dc-slave29.xxxxxx.local/10.10.10.10
************************************************************/
2017-10-03 18:03:23,093 INFO  datanode.DataNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = xx-slave29.xxxxxx.local/10.10.10.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.5.3.0-37
 
{code}
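For context on the failure above: the stack trace points at Storage.tryLock(), which opens the storage directory's in_use.lock, takes a non-blocking FileChannel lock, and then writes an identifying string into the file. On an NFS mount without working lock services, either the lock or the subsequent write can fail with "Resource temporarily unavailable" (EAGAIN), which would match the IOException in the log. A rough standalone sketch of that pattern (class name, payload, and CLI argument are illustrative, not Hadoop's actual code):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.charset.StandardCharsets;

public class InUseLockCheck {
    // Hypothetical helper mirroring the lock-then-write pattern seen in
    // the Storage.tryLock() stack trace above.
    static FileLock tryLock(File lockFile) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
        FileLock lock;
        try {
            lock = raf.getChannel().tryLock();    // non-blocking exclusive lock
        } catch (OverlappingFileLockException e) {
            lock = null;                          // already held by this JVM
        }
        if (lock == null) {                       // held by another process
            raf.close();
            return null;
        }
        // On NFS without a running lock daemon, the tryLock above or this
        // write can throw IOException: Resource temporarily unavailable.
        raf.setLength(0);
        raf.write("jvm".getBytes(StandardCharsets.UTF_8));
        return lock;
    }

    public static void main(String[] args) throws IOException {
        File f = new File(args.length > 0 ? args[0] : "in_use.lock");
        FileLock lock = tryLock(f);
        System.out.println(lock != null ? "locked" : "already locked");
    }
}
{code}

Running this twice against the same file from two processes shows the normal contention path (the second caller gets null); the EAGAIN IOException in the log is a different path, raised by the OS rather than by lock contention.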




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org