Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2007/05/31 23:59:15 UTC

[jira] Created: (HADOOP-1450) checksums should be closer to data generation and consumption

checksums should be closer to data generation and consumption
-------------------------------------------------------------

                 Key: HADOOP-1450
                 URL: https://issues.apache.org/jira/browse/HADOOP-1450
             Project: Hadoop
          Issue Type: Improvement
          Components: fs
            Reporter: Doug Cutting
             Fix For: 0.14.0


ChecksumFileSystem checksums data by inserting a filter between two buffers.  The outermost buffer should be as small as possible, so that, when writing, checksums are computed before the data has spent much time in memory, and, when reading, checksums are validated as close to their time of use as possible.  Currently the outer buffer is the larger, using the bufferSize specified by the user, and the inner is small, so that most reads and writes will bypass it, as an optimization.  Instead, the outer buffer should be made to be bytesPerChecksum, and the inner buffer should be the user-specified buffer size.
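The proposed layering can be sketched as follows. This is a hypothetical, simplified illustration (class and method names invented, one CRC32 per bytesPerChecksum chunk), not the actual ChecksumFileSystem code:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

// Hypothetical sketch, not the actual ChecksumFileSystem code: the write
// path layered as the issue proposes, with the SMALL buffer outside the
// checksum filter and the LARGE user-specified buffer inside it, so data
// is checksummed soon after it is generated.
class ChecksumWriteSketch {

    // Filter that computes one CRC32 per bytesPerChecksum bytes written.
    static class ChecksumFilter extends FilterOutputStream {
        private final int bytesPerChecksum;
        private final CRC32 crc = new CRC32();
        private int inChunk = 0;
        final List<Long> sums = new ArrayList<>();

        ChecksumFilter(OutputStream out, int bytesPerChecksum) {
            super(out);
            this.bytesPerChecksum = bytesPerChecksum;
        }

        @Override
        public void write(int b) throws IOException {
            crc.update(b);
            out.write(b);
            if (++inChunk == bytesPerChecksum) finishChunk();
        }

        @Override
        public void close() throws IOException {
            if (inChunk > 0) finishChunk();  // checksum the final partial chunk
            super.close();
        }

        private void finishChunk() {
            sums.add(crc.getValue());
            crc.reset();
            inChunk = 0;
        }
    }

    public static void main(String[] args) throws IOException {
        int bytesPerChecksum = 8, userBufferSize = 4096;
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Inner buffer: large, user-specified size, below the filter.
        ChecksumFilter filter = new ChecksumFilter(
                new BufferedOutputStream(sink, userBufferSize), bytesPerChecksum);
        // Outer buffer: just bytesPerChecksum, as the issue proposes.
        OutputStream out = new BufferedOutputStream(filter, bytesPerChecksum);
        out.write("hello checksum world".getBytes("UTF-8"));  // 20 bytes
        out.close();
        System.out.println(sink.size() + " data bytes, "
                + filter.sums.size() + " checksums");
    }
}
```

With 20 bytes and bytesPerChecksum = 8, three checksums are produced (for chunks of 8, 8, and 4 bytes), and the data spends at most 8 bytes' worth of time in the outer buffer before being checksummed.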

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1450:
---------------------------------

    Attachment: HADOOP-1450.patch

This patch changes the outer buffers to contain just bytesPerSum, and uses the user-specified buffer size for inner buffers.  This should catch more memory errors, especially when large buffers are used.



[jira] Commented: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12503531 ] 

Raghu Angadi commented on HADOOP-1450:
--------------------------------------

Also note that this patch adds one more layer of buffering to input and output streams.  Do we really need the small buffer close to the user?




[jira] Commented: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500556 ] 

Raghu Angadi commented on HADOOP-1450:
--------------------------------------

minor: maybe we should remove the comment {{// open with an extremly small buffer size ...}}



[jira] Commented: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500511 ] 

Hadoop QA commented on HADOOP-1450:
-----------------------------------

+1

http://issues.apache.org/jira/secure/attachment/12358664/HADOOP-1450.patch applied and successfully tested against trunk revision r543222.

Test results:   http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/221/testReport/
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/221/console



[jira] Updated: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1450:
---------------------------------

    Resolution: Fixed
      Assignee: Doug Cutting
        Status: Resolved  (was: Patch Available)

I just committed this.



[jira] Commented: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12503911 ] 

Hadoop QA commented on HADOOP-1450:
-----------------------------------

Integrated in Hadoop-Nightly #119 (See [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/119/])



[jira] Updated: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1450:
---------------------------------

    Status: Patch Available  (was: Open)



[jira] Commented: (HADOOP-1450) checksums should be closer to data generation and consumption

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12503585 ] 

Raghu Angadi commented on HADOOP-1450:
--------------------------------------

> Also note that this patch adds one more buffering to input and output streams. Do we really need the small buffer close to the user?
Maybe only when the user reads less than bytesPerChecksum.  In that sense it's OK.
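The sub-chunk-read case can be illustrated with a sketch (hypothetical names, not Hadoop code). A checksum filter must consume whole bytesPerChecksum chunks to verify them, so a small outer buffer lets many tiny user reads be served from a single already-verified chunk:

```java
import java.io.*;

// Hypothetical sketch, not Hadoop code: a stand-in for the checksum filter
// that hands out at most one bytesPerChecksum chunk per call (a real filter
// would verify each chunk's CRC before returning it). The small outer
// buffer turns many one-byte user reads into a few chunk-sized filter reads.
class SmallReadSketch {

    // Reads `data` one byte at a time through a bytesPerChecksum-sized
    // outer buffer; returns {bytesRead, chunkReadsAgainstFilter}.
    static int[] drainOneByteAtATime(byte[] data, final int bytesPerChecksum)
            throws IOException {
        final int[] chunkReads = {0};
        InputStream chunked = new FilterInputStream(new ByteArrayInputStream(data)) {
            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                // Serve at most one chunk per call, like a checksum filter would.
                int n = super.read(b, off, Math.min(len, bytesPerChecksum));
                if (n > 0) chunkReads[0]++;
                return n;
            }
        };
        // Outer buffer of just bytesPerChecksum, as the patch proposes.
        InputStream in = new BufferedInputStream(chunked, bytesPerChecksum);
        int bytes = 0;
        while (in.read() != -1) bytes++;   // user reads far less than a chunk
        return new int[] { bytes, chunkReads[0] };
    }

    public static void main(String[] args) throws IOException {
        int[] r = drainOneByteAtATime("hello checksum world".getBytes("UTF-8"), 8);
        System.out.println(r[0] + " bytes via " + r[1] + " chunk reads");
    }
}
```

For 20 bytes with bytesPerChecksum = 8, twenty one-byte user reads reach the filter as only three chunk reads; without the small outer buffer, each sub-chunk read would hit the filter directly.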

