Posted to common-dev@hadoop.apache.org by "Chris Douglas (JIRA)" <ji...@apache.org> on 2009/03/11 06:42:50 UTC

[jira] Created: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

CRC errors not detected reading intermediate output into memory with problematic length
---------------------------------------------------------------------------------------

                 Key: HADOOP-5459
                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
             Project: Hadoop Core
          Issue Type: Bug
    Affects Versions: 0.20.0
            Reporter: Chris Douglas
            Priority: Blocker


It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst cases for compression, but it is exceedingly rare. It is also possible (though fantastically unlikely) for the data to deflate to a size greater than that reported by the map. In either case, CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:

{code}
// bytesRead (initialized to 0 before this excerpt) counts decompressed bytes;
// shuffleData is sized to the expected, decompressed length of the segment.
int n = input.read(shuffleData, 0, shuffleData.length);
while (n > 0) {
  bytesRead += n;
  // Never requests more than shuffleData.length bytes in total, so the
  // stream is never advanced past the expected length.
  n = input.read(shuffleData, bytesRead,
                 (shuffleData.length-bytesRead));
}
{code}

This loop reads only up to the expected length. Without reading the whole segment, the checksum is never validated. IFileInputStream instances should validate their checksums even when they are closed before the end of the segment has been read.
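
For illustration, here is a minimal sketch of the "validate on close" behavior described above: a wrapper stream that drains whatever the caller did not read and then verifies the checksum before closing. The class name, the use of java.util.zip.CRC32, and passing the expected checksum through the constructor are assumptions made for the sketch; the real IFileInputStream carries its checksum at the end of the segment and has a different API.

{code}
// Sketch only: illustrates the "drain and verify on close" idea; this is
// not the org.apache.hadoop.mapred.IFileInputStream implementation.
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ChecksumOnCloseInputStream extends FilterInputStream {
  private final Checksum sum = new CRC32();
  private final long dataLength;   // bytes covered by the checksum
  private final long expectedCrc;  // checksum recorded by the writer
  private long consumed = 0;

  public ChecksumOnCloseInputStream(InputStream in, long dataLength,
                                    long expectedCrc) {
    super(in);
    this.dataLength = dataLength;
    this.expectedCrc = expectedCrc;
  }

  @Override
  public int read() throws IOException {
    int c = super.read();
    if (c >= 0) {
      sum.update(c);
      consumed++;
    }
    return c;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    int n = super.read(b, off, len);
    if (n > 0) {
      sum.update(b, off, n);
      consumed += n;
    }
    return n;
  }

  @Override
  public void close() throws IOException {
    // Drain whatever the caller did not consume so the checksum always
    // covers the full segment, even after a short read like the loop above.
    byte[] buf = new byte[4096];
    while (consumed < dataLength) {
      int n = read(buf, 0, (int) Math.min(buf.length, dataLength - consumed));
      if (n < 0) {
        break; // segment ended before the advertised length
      }
    }
    try {
      if (consumed != dataLength || sum.getValue() != expectedCrc) {
        throw new IOException("Checksum or length mismatch: read " + consumed
            + " of " + dataLength + " bytes, crc " + sum.getValue());
      }
    } finally {
      super.close();
    }
  }
}
{code}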

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-5459:
----------------------------------

    Attachment: 5459-1.patch

Added unit tests for IFile*Streams.
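
For reference, this is roughly the shape such a test can take, written here against the ChecksumOnCloseInputStream sketch from the issue description rather than the real IFile*Streams, whose constructors are not reproduced in this thread: corrupt a byte beyond the portion the reader consumes, close early, and expect the failure.

{code}
// Sketch only: not the actual tests attached in 5459-1.patch.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import org.junit.Assert;
import org.junit.Test;

public class TestChecksumOnClose {
  @Test
  public void corruptionPastShortReadIsDetectedOnClose() throws IOException {
    byte[] segment = new byte[1024];
    for (int i = 0; i < segment.length; ++i) {
      segment[i] = (byte) i;
    }
    CRC32 crc = new CRC32();
    crc.update(segment, 0, segment.length);
    long expected = crc.getValue();

    // Corrupt a byte the reader never looks at directly.
    segment[900] ^= 0x1;

    ChecksumOnCloseInputStream in = new ChecksumOnCloseInputStream(
        new ByteArrayInputStream(segment), segment.length, expected);
    byte[] shuffleData = new byte[512];  // shorter than the segment
    Assert.assertEquals(512, in.read(shuffleData, 0, shuffleData.length));
    try {
      in.close();  // drains the remaining bytes and validates
      Assert.fail("Corruption went undetected");
    } catch (IOException expectedFailure) {
      // expected: checksum mismatch reported at close
    }
  }
}
{code}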

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Priority: Blocker
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Sameer Paranjpye (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sameer Paranjpye updated HADOOP-5459:
-------------------------------------

    Priority: Major  (was: Blocker)

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681948#action_12681948 ] 

Arun C Murthy commented on HADOOP-5459:
---------------------------------------

+1

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-5459:
----------------------------------

    Assignee: Chris Douglas
      Status: Patch Available  (was: Open)

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>            Priority: Blocker
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-5459:
----------------------------------

       Resolution: Fixed
    Fix Version/s: 0.20.0
     Hadoop Flags: [Reviewed]
           Status: Resolved  (was: Patch Available)

I committed this.

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>             Fix For: 0.20.0
>
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680851#action_12680851 ] 

Hadoop QA commented on HADOOP-5459:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12401904/5459-1.patch
  against trunk revision 752405.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/console

This message is automatically generated.

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>            Priority: Blocker
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12695439#action_12695439 ] 

Hudson commented on HADOOP-5459:
--------------------------------

Integrated in Hadoop-trunk #796 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/796/])
    

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>             Fix For: 0.20.0
>
>         Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5459) CRC errors not detected reading intermediate output into memory with problematic length

Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-5459:
----------------------------------

    Attachment: 5459-0.patch

Patch that reads the remainder of the segment. This needs a unit test, but writing one is not trivial. IFileInputStream should probably also take a Progressable, so that reading a large remainder doesn't time the task out.
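
On the Progressable point, a hedged sketch of what draining with progress reporting could look like; the helper and its name are hypothetical and not part of 5459-0.patch:

{code}
// Sketch only: reads and discards the remainder of a segment while
// reporting progress so the TaskTracker does not time the task out.
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.util.Progressable;

class SegmentDrainer {
  static long drain(InputStream in, long remaining, Progressable progress)
      throws IOException {
    byte[] buf = new byte[64 * 1024];
    long discarded = 0;
    while (remaining > 0) {
      int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
      if (n < 0) {
        break; // stream ended before the advertised length
      }
      discarded += n;
      remaining -= n;
      if (progress != null) {
        progress.progress();  // keep the reduce task alive during the drain
      }
    }
    return discarded;
  }
}
{code}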

> CRC errors not detected reading intermediate output into memory with problematic length
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5459
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5459
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Chris Douglas
>            Priority: Blocker
>         Attachments: 5459-0.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst-cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) { 
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead, 
>                  (shuffleData.length-bytesRead));
> } 
> {code}
> Will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.