Posted to common-dev@hadoop.apache.org by "Jothi Padmanabhan (JIRA)" <ji...@apache.org> on 2008/08/18 07:35:44 UTC
[jira] Created: (HADOOP-3967) Task Tracker logs show a lot of DiskChecker$DiskErrorException
Task Tracker logs show a lot of DiskChecker$DiskErrorException
--------------------------------------------------------------
Key: HADOOP-3967
URL: https://issues.apache.org/jira/browse/HADOOP-3967
Project: Hadoop Core
Issue Type: Bug
Reporter: Jothi Padmanabhan
Fix For: 0.19.0
The task tracker logs show a lot of messages like
789 INFO org.apache.hadoop.mapred.TaskTracker: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_200808181059_0001/attempt_200808181059_0001_m_000097_0/output/file.out in any of the configured local directories
This appears to have been introduced by HADOOP-657. The message originates in
transmitHeartBeat -> cloneAndResetRunningTaskStatuses -> tryToGetOutputSize, which tries to estimate the map output size before the output file has actually been created.
This error has no bearing on overall functionality; all jobs still run to completion successfully.
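The race described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names mirror the call chain named in the issue but are not the actual Hadoop code): when the heartbeat path asks for the output size before the map task has written its file, a defensive existence check can report size 0 instead of raising a DiskErrorException on every heartbeat.

```java
import java.io.File;

// Hypothetical sketch of a defensive output-size estimate, assuming the
// heartbeat may run before the map output file exists. This is NOT the
// TaskTracker implementation, only an illustration of the race.
public class OutputSizeEstimator {

    /**
     * Returns the size of the map output file, or 0 if the file has not
     * been created yet, rather than letting a "could not find ... in any
     * of the configured local directories" error reach the logs on every
     * heartbeat while the task is still running.
     */
    static long tryToGetOutputSize(File outputFile) {
        if (!outputFile.exists()) {
            // Output not written yet: report 0 instead of throwing.
            return 0L;
        }
        return outputFile.length();
    }

    public static void main(String[] args) {
        // Simulates a heartbeat firing before the map task writes its output.
        File notYetWritten = new File("does-not-exist/file.out");
        System.out.println(tryToGetOutputSize(notYetWritten)); // prints 0
    }
}
```

Under this sketch the heartbeat simply reports an estimate of 0 until the file appears, which matches the observation in the issue that the exception is harmless noise.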
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-3967) Task Tracker logs show a lot of DiskChecker$DiskErrorException
Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Devaraj Das updated HADOOP-3967:
--------------------------------
Fix Version/s: (was: 0.19.0)
[jira] Resolved: (HADOOP-3967) Task Tracker logs show a lot of DiskChecker$DiskErrorException
Posted by "Amareshwari Sriramadasu (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Amareshwari Sriramadasu resolved HADOOP-3967.
---------------------------------------------
Resolution: Duplicate
Fixed by HADOOP-4963.