Posted to hdfs-dev@hadoop.apache.org by "Suresh Srinivas (Resolved) (JIRA)" <ji...@apache.org> on 2012/01/25 22:45:39 UTC

[jira] [Resolved] (HDFS-60) loss of VERSION file on datanode when trying to startup with full disk

     [ https://issues.apache.org/jira/browse/HDFS-60?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-60.
---------------------------------

    Resolution: Won't Fix

This is an old bug that has not been observed recently. Closing it for now; reopen if the problem still happens.
                
> loss of VERSION file on datanode when trying to startup with full disk
> ----------------------------------------------------------------------
>
>                 Key: HDFS-60
>                 URL: https://issues.apache.org/jira/browse/HDFS-60
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>         Environment: FCLinux
>            Reporter: Joydeep Sen Sarma
>            Priority: Critical
>             Fix For: 0.24.0
>
>
> The datanode was working OK previously; a subsequent bringup of the datanode fails:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = hadoop003.sf2p.facebook.com/10.16.159.103
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-01-08 08:23:38,400 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-01-08 08:23:48,491 INFO org.apache.hadoop.ipc.RPC: Problem connecting to server: hadoop001.sf2p.facebook.com/10.16.159.101:9000
> 2008-01-08 08:23:59,495 INFO org.apache.hadoop.ipc.RPC: Problem connecting to server: hadoop001.sf2p.facebook.com/10.16.159.101:9000
> 2008-01-08 08:24:01,597 ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: No space left on device
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder$CharsetSE.writeBytes(StreamEncoder.java:336)
>         at sun.nio.cs.StreamEncoder$CharsetSE.implFlushBuffer(StreamEncoder.java:404)
>         at sun.nio.cs.StreamEncoder$CharsetSE.implFlush(StreamEncoder.java:408)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:152)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:213)
>         at java.io.BufferedWriter.flush(BufferedWriter.java:236)
>         at java.util.Properties.store(Properties.java:666)
>         at org.apache.hadoop.dfs.Storage$StorageDirectory.write(Storage.java:176)
>         at org.apache.hadoop.dfs.Storage$StorageDirectory.write(Storage.java:164)
>         at org.apache.hadoop.dfs.Storage.writeAll(Storage.java:510)
>         at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:146)
>         at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:243)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:206)
>         at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1391)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1335)
>         at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1356)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1525)
> 2008-01-08 08:24:01,597 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at hadoop003.sf2p.facebook.com/10.16.159.103
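
The stack trace above shows Properties.store() hitting "No space left on device" while Storage$StorageDirectory.write() rewrites the VERSION file in place, which is how a full disk can leave the file truncated or empty. Below is a minimal sketch of a write-to-temp-then-rename pattern that avoids that failure mode; it is illustrative only, not the actual Hadoop Storage code, and the class, file names, and helper method are hypothetical:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.util.Properties;

    // Hypothetical sketch: write storage metadata to a temp file first,
    // then rename it over VERSION, so a full disk never corrupts the
    // existing file.
    public class AtomicVersionWrite {

        static void writeVersion(File storageDir, Properties props) throws IOException {
            File tmp = new File(storageDir, "VERSION.tmp");   // hypothetical temp name
            File version = new File(storageDir, "VERSION");

            // Write the new contents to the temp file; an ENOSPC error surfaces here.
            try (Writer out = new OutputStreamWriter(
                    new FileOutputStream(tmp), StandardCharsets.UTF_8)) {
                props.store(out, "datanode storage metadata");
            } catch (IOException e) {
                tmp.delete();   // discard the partial temp file
                throw e;        // the existing VERSION file is untouched
            }

            // Replace the old file only after the new contents are fully on disk.
            if (!tmp.renameTo(version)) {
                throw new IOException("rename of " + tmp + " to " + version + " failed");
            }
        }

        public static void main(String[] args) throws IOException {
            Properties props = new Properties();
            props.setProperty("storageID", "DS-example");
            props.setProperty("layoutVersion", "-16");
            writeVersion(new File(args.length > 0 ? args[0] : "."), props);
        }
    }

With this pattern, a disk-full error during the temp-file write leaves the previous VERSION intact, and the rename happens only once the replacement is complete.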
