Posted to common-dev@hadoop.apache.org by "Runping Qi (JIRA)" <ji...@apache.org> on 2007/03/28 18:37:25 UTC

[jira] Created: (HADOOP-1172) Reduce job failed due to error in logging

Reduce job failed due to error in logging
-----------------------------------------

                 Key: HADOOP-1172
                 URL: https://issues.apache.org/jira/browse/HADOOP-1172
             Project: Hadoop
          Issue Type: Bug
            Reporter: Runping Qi



Here is the stack trace:

java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:260)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
	at org.apache.hadoop.mapred.TaskLog$Writer.writeIndexRecord(TaskLog.java:251)
	at org.apache.hadoop.mapred.TaskLog$Writer.close(TaskLog.java:235)
	at org.apache.hadoop.mapred.TaskRunner.runChild(TaskRunner.java:406)
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:281)

A failure to log should not fail the task, especially when closing the log writer; by that point the mapper had actually completed.
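For illustration, the reporter's proposal could be sketched as a small helper that closes the task log writer without letting an IOException (such as "No space left on device") propagate and kill an otherwise-completed task. The class and method names here are hypothetical, not part of the Hadoop codebase:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of the behavior proposed in this issue:
// close a log writer without rethrowing, so a logging failure
// cannot fail a task whose real work is already done.
class QuietLogCloser {
    // Returns true if close succeeded, false if it threw; never rethrows.
    static boolean closeQuietly(Closeable logWriter) {
        try {
            logWriter.close();
            return true;
        } catch (IOException e) {
            // Log-and-continue: the task's output is already written,
            // so record the problem instead of failing the task.
            System.err.println("Warning: failed to close task log: " + e);
            return false;
        }
    }
}
```

TaskRunner.runChild could then call such a helper instead of letting the exception from TaskLog$Writer.close escape; whether that trade-off is acceptable is exactly what the comments below debate.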

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Resolved: (HADOOP-1172) Reduce job failed due to error in logging

Posted by "Sameer Paranjpye (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sameer Paranjpye resolved HADOOP-1172.
--------------------------------------

    Resolution: Won't Fix



[jira] Updated: (HADOOP-1172) Reduce job failed due to error in logging

Posted by "Sameer Paranjpye (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sameer Paranjpye updated HADOOP-1172:
-------------------------------------

    Component/s: mapred
    Description: 
Here is the stack trace:

java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:260)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
	at org.apache.hadoop.mapred.TaskLog$Writer.writeIndexRecord(TaskLog.java:251)
	at org.apache.hadoop.mapred.TaskLog$Writer.close(TaskLog.java:235)
	at org.apache.hadoop.mapred.TaskRunner.runChild(TaskRunner.java:406)
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:281)

Fail to log should not fail the task. Especially when closing the logwriter. At that time, the mapper was actually complete.




[jira] Commented: (HADOOP-1172) Reduce job failed due to error in logging

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485669 ] 

Owen O'Malley commented on HADOOP-1172:
---------------------------------------

-1

This is not logging from the framework. This is user data, and if we lose user data due to resource constraints, it is appropriate to have the task die.



[jira] Commented: (HADOOP-1172) Reduce job failed due to error in logging

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485269 ] 

Doug Cutting commented on HADOOP-1172:
--------------------------------------

+0 If the logging disk is full, then the node is not useful. Optimizing for the case where the disk fills just as a task completes, but before it logs that completion, doesn't seem worth the effort to me.
