Posted to common-dev@hadoop.apache.org by "Vinod K V (JIRA)" <ji...@apache.org> on 2008/09/09 13:45:44 UTC

[jira] Created: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Memory limits of TaskTracker and Tasks should be in kiloBytes.
--------------------------------------------------------------

                 Key: HADOOP-4129
                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
             Project: Hadoop Core
          Issue Type: Bug
          Components: mapred
            Reporter: Vinod K V
            Priority: Blocker


HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.
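
To make the intended convention concrete, here is a minimal sketch of treating a configured limit as kilobytes and converting to bytes only at the point of comparison. The property name and class are hypothetical illustrations, not the actual Hadoop code touched by this issue.

    // Sketch only: "mapred.task.maxmemory.kb" and this class are hypothetical,
    // not actual Hadoop configuration keys or classes.
    import org.apache.hadoop.conf.Configuration;

    public class KilobyteLimitSketch {
      /** Hypothetical key; the configured value is interpreted as kilobytes. */
      static final String TASK_MAX_MEMORY_KB = "mapred.task.maxmemory.kb";

      /** Returns true if measured memory usage (reported in bytes) is within the limit. */
      static boolean withinLimit(Configuration conf, long measuredBytes) {
        long limitKb = conf.getLong(TASK_MAX_MEMORY_KB, -1L);
        if (limitKb < 0) {
          return true;                           // no limit configured
        }
        return measuredBytes <= limitKb * 1024L; // convert to bytes once, at the check
      }
    }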



[jira] Commented: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629594#action_12629594 ] 

Arun C Murthy commented on HADOOP-4129:
---------------------------------------

How about keeping the calculations in 'bytes' but allowing people to use GB/MB/KB suffixes in the config file?
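
A rough sketch of what that could look like (illustrative only; this is not an existing Hadoop API): a small parser that accepts a plain number or a k/m/g suffix and normalizes the value to bytes.

    // Illustrative parser for the suffix idea; not an existing Hadoop API.
    public class MemorySuffixSketch {
      /** Parses values such as "1048576", "1024k", "512m" or "2g" into bytes. */
      static long parseToBytes(String value) {
        String v = value.trim().toLowerCase();
        char last = v.charAt(v.length() - 1);
        long multiplier = 1L;
        if (last == 'k' || last == 'm' || last == 'g') {
          multiplier = (last == 'k') ? 1L << 10 : (last == 'm') ? 1L << 20 : 1L << 30;
          v = v.substring(0, v.length() - 1);
        }
        return Long.parseLong(v) * multiplier;   // e.g. "512m" -> 536870912
      }
    }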

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Commented: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Hemanth Yamijala (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630978#action_12630978 ] 

Hemanth Yamijala commented on HADOOP-4129:
------------------------------------------

Patch looks good to me. 

I agree that we can keep the computation in KB instead of bytes.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Commented: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629586#action_12629586 ] 

Hadoop QA commented on HADOOP-4129:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12389746/HADOOP-4129
  against trunk revision 693524.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 6 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3220/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3220/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3220/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3220/console

This message is automatically generated.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Updated: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Vinod K V (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-4129:
------------------------------

    Attachment: HADOOP-4129

Attaching a patch.
 - TestHighRAMJobs, TestProcfsBasedProcessTree and TestTaskMemoryManager all run successfully now.
 - Changed documentation to reflect the change to kilobytes.
 - For the sake of clarity, also noted in JobConf, TaskTracker and TaskTrackerStatus that memory values are in kilobytes.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Commented: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12633337#action_12633337 ] 

Hudson commented on HADOOP-4129:
--------------------------------

Integrated in Hadoop-trunk #611 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/611/])

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Updated: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4129:
----------------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

I just committed this. Thanks, Vinod!

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Commented: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Vinod K V (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630086#action_12630086 ] 

Vinod K V commented on HADOOP-4129:
-----------------------------------

The smallest useful unit for the memory needs of the TaskTracker and tasks is KB, or perhaps even MB; we don't need to track memory sizes at byte granularity. Further, as Hemanth pointed out, this is consistent with the fact that ulimits are also specified in KB. So I'll leave the computations in KB.
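
For illustration only, and assuming the TaskTracker wraps its child command with a shell ulimit (the real command construction may differ): since "ulimit -v" expects its argument in kilobytes, a kilobyte-based limit can be passed through without any conversion.

    // Sketch only: assumes the child command line is prefixed with a shell ulimit,
    // which may not match the real TaskTracker code. "ulimit -v" takes kilobytes.
    public class UlimitPrefixSketch {
      static String ulimitPrefix(long taskLimitKb) {
        return "ulimit -v " + taskLimitKb + "; ";  // e.g. "ulimit -v 2097152; " for a 2 GB limit
      }
    }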

I discussed with Arun the ability to specify memory with GB/MB/KB suffixes in config files. It seems to be a good feature to have, as it can prevent problems like the one that occurred with the test case. I'll create another JIRA and mark it for 0.19.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Updated: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nigel Daley updated HADOOP-4129:
--------------------------------

    Fix Version/s: 0.19.0

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Updated: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Vinod K V (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-4129:
------------------------------

    Status: Patch Available  (was: Open)

Running it through Hudson.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.



[jira] Assigned: (HADOOP-4129) Memory limits of TaskTracker and Tasks should be in kiloBytes.

Posted by "Vinod K V (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V reassigned HADOOP-4129:
---------------------------------

    Assignee: Vinod K V

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>
> HADOOP-3759 specified memory limits in kilobytes, and HADOOP-3581 changed them to be in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should change this behaviour so that all memory limits are consistently interpreted as kilobytes.
