Posted to common-dev@hadoop.apache.org by "Eric Yang (JIRA)" <ji...@apache.org> on 2008/09/20 01:46:44 UTC
[jira] Created: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
-------------------------------------------------------------------------------------
Key: HADOOP-4228
URL: https://issues.apache.org/jira/browse/HADOOP-4228
Project: Hadoop Core
Issue Type: Bug
Components: metrics
Affects Versions: 0.18.1
Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
Reporter: Eric Yang
The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
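The failure mode is ordinary Java int wraparound: a busy datanode can easily move more than 2^31 - 1 (~2.1 GB) bytes, at which point an int counter wraps negative. A minimal standalone sketch of the problem and the fix (illustrative only, not Hadoop's actual metrics classes):

```java
public class CounterOverflowDemo {
    public static void main(String[] args) {
        // An int byte counter wraps once it passes Integer.MAX_VALUE (~2.1 GB).
        int intBytes = Integer.MAX_VALUE;
        intBytes += 1;                  // wraps to Integer.MIN_VALUE
        System.out.println(intBytes);   // prints -2147483648

        // A long counter has headroom up to ~9.2 exabytes.
        long longBytes = Integer.MAX_VALUE;
        longBytes += 1;                 // 2147483648, no wrap
        System.out.println(longBytes);
    }
}
```

With int, a single day of moderate datanode traffic is enough to wrap the counter; long makes overflow a practical impossibility.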
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Patch Available (was: Open)
Resubmitting the patch since the Hudson patch process is back.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636721#action_12636721 ]
Sanjay Radia commented on HADOOP-4228:
--------------------------------------
+1
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Mac Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637722#action_12637722 ]
Mac Yang commented on HADOOP-4228:
----------------------------------
> Ganglia has no Long type - perhaps a solution would be to map Longs in Ganglia to a float?
Short of supporting the Long data type, this seems like a reasonable thing to do.
Brian, would you mind coming up with the patch for the Ganglia context?
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Open (was: Patch Available)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Open (was: Patch Available)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12633892#action_12633892 ]
Eric Yang commented on HADOOP-4228:
-----------------------------------
The failed test is unrelated to this patch. Something else committed to trunk probably caused the Balancer test failure.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Robert Chansler (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Chansler updated HADOOP-4228:
------------------------------------
Release Note: Changed bytes_read, bytes_written to type long to prevent metrics overflow. (was: Change bytes_read, bytes_written to type long to prevent metrics overflow.)
Hadoop Flags: [Incompatible change, Reviewed] (was: [Reviewed, Incompatible change])
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Open (was: Patch Available)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-4228:
------------------------------
Release Note: Change bytes_read, bytes_written to type long to prevent metrics overflow.
Status: Patch Available (was: Open)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Open (was: Patch Available)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Resolution: Fixed
Hadoop Flags: [Incompatible change, Reviewed] (was: [Reviewed])
Status: Resolved (was: Patch Available)
I just committed this. Thanks Eric!
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-4228:
------------------------------
Attachment: (was: HADOOP-4228.patch)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Priority: Blocker
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636774#action_12636774 ]
Hadoop QA commented on HADOOP-4228:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12391383/metricsLong.patch
against trunk revision 701476.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 6 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3431/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3431/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3431/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3431/console
This message is automatically generated.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Assigned: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nigel Daley reassigned HADOOP-4228:
-----------------------------------
Assignee: Eric Yang
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635913#action_12635913 ]
Hairong Kuang commented on HADOOP-4228:
---------------------------------------
SimulatedDataNodeMetrics seems to be little more than a stub, introduced only for testing purposes.
A side note: DataNodeStatics and DataNodeStaticsMBean collect bytesRead but not bytesWritten. They should collect bytesWritten as well.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-4228:
------------------------------
Attachment: HADOOP-4228.patch
Patch to change the metrics type from int to long.
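In spirit, the change swaps an int-backed incremental counter for a long-backed one. A hedged sketch of that pattern (an illustrative counter, not Hadoop's actual MetricsTimeVaryingInt/MetricsLongValue implementation):

```java
// Illustrative long-backed incremental metric counter mirroring the
// int -> long change described above; not Hadoop's actual metrics code.
public class LongCounter {
    private final String name;
    private long value;             // was int in the buggy version

    public LongCounter(String name) {
        this.name = name;
    }

    // Accumulate bytes; safe far beyond the 2^31 - 1 limit of int.
    public synchronized void inc(long delta) {
        value += delta;
    }

    public synchronized long get() {
        return value;
    }

    public String getName() {
        return name;
    }
}
```

Accumulating just over 2 GB of reads already wraps an int-backed counter; the long-backed version stays correct.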
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Priority: Blocker
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Patch Available (was: Open)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-4228:
------------------------------
Attachment: HADOOP-4228-trunk.patch
Same patch for trunk.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12633612#action_12633612 ]
Hadoop QA commented on HADOOP-4228:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12390702/HADOOP-4228-trunk.patch
against trunk revision 697306.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 6 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
-1 core tests. The patch failed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3349/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3349/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3349/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3349/console
This message is automatically generated.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Patch Available (was: Open)
Thanks to Brian for submitting a patch to HADOOP-4137 to handle the Long type in the Ganglia context. I am resubmitting my patch to Hudson.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use an int counter (MetricsTimeVaryingInt). This type is too small to hold the bytes_read and bytes_written values. Recommend changing them to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Brian Bockelman (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637305#action_12637305 ]
Brian Bockelman commented on HADOOP-4228:
-----------------------------------------
Changing the type to Long breaks Ganglia metrics reporting: there is a static mapping of Java type -> Ganglia type with no entry for Long, so the lookup returns null and causes an NPE.
Ganglia has no Long type - perhaps a solution would be to map Longs in Ganglia to a float?
See also:
HADOOP-4137
HADOOP-3422
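For illustration, the static-mapping failure described above can be sketched as follows. This is a minimal, hypothetical stand-in: the table name, its contents, and the lookup method are assumptions for the sketch, not the actual GangliaContext code.

```java
import java.util.HashMap;
import java.util.Map;

public class GangliaTypeMapSketch {
    // Hypothetical stand-in for the static Java-type -> Ganglia-type table.
    static final Map<Class<?>, String> typeTable = new HashMap<>();
    static {
        typeTable.put(String.class, "string");
        typeTable.put(Byte.class, "int8");
        typeTable.put(Short.class, "int16");
        typeTable.put(Integer.class, "int32");
        typeTable.put(Float.class, "float");
        // No entry for Long: typeTable.get(Long.class) returns null,
        // and dereferencing that null downstream is the reported NPE.
    }

    static String gangliaType(Object metricValue) {
        String t = typeTable.get(metricValue.getClass());
        if (t == null) {
            throw new NullPointerException("no Ganglia type for " + metricValue.getClass());
        }
        return t;
    }

    public static void main(String[] args) {
        System.out.println(gangliaType(42));        // prints "int32"
        // Suggested workaround: map Long onto a floating-point Ganglia type.
        typeTable.put(Long.class, "double");
        System.out.println(gangliaType(1L << 40));  // prints "double"
    }
}
```

Mapping Long to "double" (or "float") trades exactness for not crashing: counter values above 2^53 would lose precision, which is usually acceptable for monitoring.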
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Yang updated HADOOP-4228:
------------------------------
Attachment: HADOOP-4228.patch
Patch to change the bytes_read and bytes_written metrics from int to long.
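The overflow itself is easy to reproduce: a signed 32-bit counter wraps after Integer.MAX_VALUE (about 2 GiB), which a busy datanode reaches quickly. A minimal sketch, using plain fields independent of the actual metrics classes:

```java
public class CounterOverflowSketch {
    public static void main(String[] args) {
        // Simulate accumulating 64 KiB packets with an int vs. a long counter.
        int intBytes = 0;
        long longBytes = 0L;
        final int packet = 64 * 1024;
        // 40,000 packets = 2,621,440,000 bytes, past Integer.MAX_VALUE.
        for (int i = 0; i < 40_000; i++) {
            intBytes += packet;
            longBytes += packet;
        }
        System.out.println(intBytes);   // negative: wrapped past 2^31 - 1
        System.out.println(longBytes);  // prints 2621440000: correct
    }
}
```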
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Patch Available (was: Open)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Eric Yang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638184#action_12638184 ]
Eric Yang commented on HADOOP-4228:
-----------------------------------
+1 for Hairong's patch. The patch provides real test cases.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Attachment: metricsLong1.patch
This is a patch for the trunk. It removes the change to DFSTestUtil that caused the test failures.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Status: Open (was: Patch Available)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hudson (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12639150#action_12639150 ]
Hudson commented on HADOOP-4228:
--------------------------------
Integrated in Hadoop-trunk #632 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/632/])
. The type of the dfs datanode metrics bytes_read and bytes_written is changed to long. Contributed by Eric Yang and Hairong Kuang.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Attachment: metricsLong.patch
This patch makes the following changes:
1. DataNode's bytesRead and bytesWritten metrics are changed to long;
2. A getBytesWritten method is added to DataNodeStatisticsMBean;
3. A unit test is added to verify that DataNodeMetrics can handle a long count of bytes written.
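A test along the lines of item 3 might look like the following sketch. The counter class and method names here are illustrative stand-ins, not the actual MetricsLongValue/DataNodeMetrics API from the attached patches:

```java
// Illustrative long-valued counter and an overflow-boundary check;
// names are hypothetical, not the real metrics classes.
public class LongMetricSketch {
    static class LongCounter {
        private long value;
        synchronized void inc(long delta) { value += delta; }
        synchronized long get() { return value; }
    }

    public static void main(String[] args) {
        LongCounter bytesWritten = new LongCounter();
        // Write past the int range: three increments of 1 GiB each.
        long oneGiB = 1L << 30;
        for (int i = 0; i < 3; i++) {
            bytesWritten.inc(oneGiB);
        }
        // 3 GiB exceeds Integer.MAX_VALUE but is exact in a long.
        System.out.println(bytesWritten.get());  // prints 3221225472
    }
}
```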
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Commented: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638714#action_12638714 ]
Hadoop QA commented on HADOOP-4228:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12391556/metricsLong1.patch
against trunk revision 703609.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3442/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3442/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3442/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3442/console
This message is automatically generated.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch, metricsLong1.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Fix Version/s: 0.19.0
0.18.2
Hadoop Flags: [Reviewed]
Status: Patch Available (was: Open)
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch
>
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-4228:
----------------------------------
Attachment: metricsLong1-br18.patch
This is a patch for branch 18.
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Blocker
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4228-trunk.patch, HADOOP-4228.patch, metricsLong.patch, metricsLong1-br18.patch
>
>
> bytes_read, and bytes_written metrics are using int (MetricsTimeVaryingInt) as counter. This type is too small to store the bytes_read and bytes_written metrics. Recommend to change this to long (metricsLongValue).
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-4228) dfs datanode metrics, bytes_read,
bytes_written overflows due to incorrect type used.
Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nigel Daley updated HADOOP-4228:
--------------------------------
Priority: Blocker (was: Major)
Affects Version/s: (was: 0.18.1)
0.19.0
0.18.2
> dfs datanode metrics, bytes_read, bytes_written overflows due to incorrect type used.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-4228
> URL: https://issues.apache.org/jira/browse/HADOOP-4228
> Project: Hadoop Core
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.18.2, 0.19.0
> Environment: Red Hat Enterprise Linux AS release 4 (Nahant Update 5), Java 1.6.
> Reporter: Eric Yang
> Priority: Blocker
>
> The bytes_read and bytes_written metrics use int (MetricsTimeVaryingInt) as a counter, which is too small to hold these values. Recommend changing this to long (MetricsLongValue).