Posted to common-dev@hadoop.apache.org by "Eric Yang (JIRA)" <ji...@apache.org> on 2009/01/14 20:02:00 UTC
[jira] Created: (HADOOP-5031) metrics aggregation is incorrect in database
metrics aggregation is incorrect in database
--------------------------------------------
Key: HADOOP-5031
URL: https://issues.apache.org/jira/browse/HADOOP-5031
Project: Hadoop Core
Issue Type: Bug
Components: contrib/chukwa
Environment: Redhat 5.1, Java 6
Reporter: Eric Yang
A few problems with the aggregation SQL statements:
HDFS throughput should be calculated with two-level aggregation:
First, calculate the rate for each Hadoop datanode metric from its accumulated values.
Second, sum up all datanode rates to produce a single number representing current cluster performance.
Disable HOD job utilization measurement - the data provides a rough view of cluster performance but is mostly inaccurate.
Disable user utilization measurement generated from HOD jobs - the data is derived from HOD job metrics and is mostly inaccurate.
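The two-level aggregation described above can be sketched as follows. The table name, column names, and sampling interval below are hypothetical stand-ins for the Chukwa metrics schema, not the actual tables touched by the patch; the point is only the shape of the query: derive a per-datanode rate from consecutive accumulated samples first, then sum the rates across nodes.

```python
import sqlite3

# Hypothetical schema standing in for a Chukwa datanode metrics table:
# each row is a monotonically increasing byte counter sampled per host.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dfs_datanode (ts INTEGER, host TEXT, bytes_written INTEGER)"
)
conn.executemany(
    "INSERT INTO dfs_datanode VALUES (?, ?, ?)",
    [
        (0,  "dn1", 0),
        (60, "dn1", 6000),    # dn1 wrote 6000 bytes in 60 s -> 100 B/s
        (0,  "dn2", 0),
        (60, "dn2", 12000),   # dn2 wrote 12000 bytes in 60 s -> 200 B/s
    ],
)

# Level 1: per-datanode rate from consecutive accumulated samples.
# Level 2: sum the per-node rates into one cluster throughput figure.
(cluster_rate,) = conn.execute(
    """
    WITH rates AS (
        SELECT cur.host,
               (cur.bytes_written - prev.bytes_written) * 1.0
                   / (cur.ts - prev.ts) AS bytes_per_sec
        FROM dfs_datanode AS cur
        JOIN dfs_datanode AS prev
          ON prev.host = cur.host AND prev.ts = cur.ts - 60
    )
    SELECT SUM(bytes_per_sec) FROM rates
    """
).fetchone()
print(cluster_rate)  # 300.0 B/s across the two datanodes
```

Aggregating the raw accumulated counters directly across hosts, instead of converting each node to a rate first, is the kind of mistake this issue describes.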
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-5031:
------------------------------
Resolution: Fixed
Hadoop Flags: [Reviewed]
Status: Resolved (was: Patch Available)
I just committed this. Thanks Kevin.
[jira] Updated: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-5031:
------------------------------
Status: Patch Available (was: Open)
- SQL statement change.
[jira] Commented: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Kevin (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12679672#action_12679672 ]
Kevin commented on HADOOP-5031:
-------------------------------
+1, minor change. It's actually in prod already.
[jira] Updated: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-5031:
------------------------------
Attachment: HADOOP-5031.patch
- Change aggregation to calculate the rate for each datanode first, then sum the datanode rates to get the cluster rate.
[jira] Assigned: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang reassigned HADOOP-5031:
---------------------------------
Assignee: Eric Yang
[jira] Updated: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-5031:
----------------------------------
Fix Version/s: 0.21.0
Setting fix version for the commit.
[jira] Commented: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Hudson (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681738#action_12681738 ]
Hudson commented on HADOOP-5031:
--------------------------------
Integrated in Hadoop-trunk #778 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/778/])
[jira] Commented: (HADOOP-5031) metrics aggregation is incorrect in database
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12679492#action_12679492 ]
Hadoop QA commented on HADOOP-5031:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12401538/HADOOP-5031.patch
against trunk revision 750703.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no tests are needed for this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/25/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/25/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/25/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/25/console
This message is automatically generated.