Posted to common-dev@hadoop.apache.org by "LN (JIRA)" <ji...@apache.org> on 2008/07/17 11:19:31 UTC

[jira] Created: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

seek(long) in DFSInputStream should catch socket exception for retry later
--------------------------------------------------------------------------

                 Key: HADOOP-3778
                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
             Project: Hadoop Core
          Issue Type: Bug
    Affects Versions: 0.17.1
            Reporter: LN


HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.

I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.
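
To make this concrete, a rough Java sketch of the intended seek(long) behaviour follows. The names pos, blockEnd, blockReader, LOG and getFileLength() are assumed from the surrounding DFSInputStream code; this is only an illustration of the idea, not the actual attached patch.

    public synchronized void seek(long targetPos) throws IOException {
      if (targetPos > getFileLength()) {
        throw new IOException("Cannot seek after EOF");
      }
      boolean done = false;
      if (pos <= targetPos && targetPos <= blockEnd) {
        // Forward seek within the current block: try to skip over the
        // intervening bytes on the already-open connection.
        long diff = targetPos - pos;
        try {
          pos += blockReader.skip(diff);
          if (pos == targetPos) {
            done = true;
          }
        } catch (IOException e) {
          // Swallow the socket/stream error instead of propagating it;
          // 'done' stays false, so the fallback below resets blockEnd.
          LOG.debug("Exception while seeking to " + targetPos + ": " + e);
        }
      }
      if (!done) {
        pos = targetPos;
        blockEnd = -1;  // next read() sees an invalid block range and re-opens the block
      }
    }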

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614801#action_12614801 ] 

Raghu Angadi commented on HADOOP-3778:
--------------------------------------

Patch looks good. One minor change: could you wrap the try block around just the skip() call, so that it is clear where we expect the exception? It will also minimize the changes in the patch.
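
Concretely, the suggestion is to keep the try scoped to the skip() call itself, roughly like this (illustrative only; names as in the issue description):

    try {
      pos += blockReader.skip(diff);   // only the network call is guarded
      if (pos == targetPos) {
        done = true;
      }
    } catch (IOException e) {
      // expected failure point: leave 'done' false so the blockEnd = -1
      // fallback in seek() makes the next read() retry
    }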


> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "LN (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LN updated HADOOP-3778:
-----------------------

    Component/s: dfs

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "LN (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615148#action_12615148 ] 

LN commented on HADOOP-3778:
----------------------------

Thanks Raghu and Stack, I'm very glad to. What should I do next? Fix the patch and commit it to SVN? What about the integration test requirement?

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Priority: Minor
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3778:
---------------------------------

    Attachment: HADOOP-3778.patch

Minor rearrangement of LN's patch. Hopefully it's OK.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3778:
---------------------------------

    Status: Patch Available  (was: Open)

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "LN (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LN updated HADOOP-3778:
-----------------------

    Priority: Minor  (was: Major)

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615056#action_12615056 ] 

stack commented on HADOOP-3778:
-------------------------------

I added him as a contributor under HBase with the user name LN. Does that work for you, Raghu? (https://issues.apache.org/jira/secure/ViewProfile.jspa?name=ln%40webcate.net)

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Priority: Minor
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3778:
---------------------------------

    Affects Version/s:     (was: 0.17.1)
                       0.16.0
        Fix Version/s: 0.18.0

This bug has existed in Hadoop for a long time. Marking it for 0.18. How do I assign it to LN?

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Priority: Minor
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3778:
---------------------------------

    Fix Version/s:     (was: 0.18.0)
                   0.19.0

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614492#action_12614492 ] 

Raghu Angadi commented on HADOOP-3778:
--------------------------------------

More regarding the patch:

We don't need to print a warning; maybe you could make it a debug message. Also, remove the comments with your name.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12624679#action_12624679 ] 

Hudson commented on HADOOP-3778:
--------------------------------

Integrated in Hadoop-trunk #581 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/581/])

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Luo Ning
>            Assignee: Luo Ning
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615787#action_12615787 ] 

Hadoop QA commented on HADOOP-3778:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12386648/HADOOP-3778.patch
  against trunk revision 678845.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2922/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2922/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2922/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2922/console

This message is automatically generated.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Assigned: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reassigned HADOOP-3778:
-----------------------------

    Assignee: LN

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3778:
---------------------------------

      Resolution: Fixed
    Release Note: DFSInputStream.seek() retries in case of errors just like read().
    Hadoop Flags: [Reviewed]
          Status: Resolved  (was: Patch Available)

I just committed this. Thanks LN!

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "LN (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LN updated HADOOP-3778:
-----------------------

    Attachment: HADOOP-3778.patch

Corrected indentation, whitespace, and comments.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615379#action_12615379 ] 

Raghu Angadi commented on HADOOP-3778:
--------------------------------------

Are you planning to make any more changes to the patch? Once you think it is ready, I can commit it after sending it through Hudson, etc.

> What about the integration test requirement?
I haven't seen anyone insisting on a test for this JIRA.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: LN
>            Assignee: LN
>            Priority: Minor
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "LN (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LN updated HADOOP-3778:
-----------------------

    Attachment: HADOOP-3778.patch

Only catch IOException, print a warning message, and leave the local variable 'done' false.

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.17.1
>            Reporter: LN
>         Attachments: HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Robert Chansler (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler updated HADOOP-3778:
------------------------------------

    Release Note:   (was: DFSInputStream.seek() retries in case of errors just like read().)

> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Luo Ning
>            Assignee: Luo Ning
>            Priority: Minor
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3778.patch, HADOOP-3778.patch, HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3778) seek(long) in DFSInputStream should catch socket exception for retry later

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614487#action_12614487 ] 

Raghu Angadi commented on HADOOP-3778:
--------------------------------------

> DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582);

That's right. seek() should retry just like read(). Your fix seems correct.

> HADOOP-2346 introduced data read/write timeouts.
I don't see why this is related to HADOOP-2346.

Could you clean up the patch so that it has the right indentation and does not include the whitespace changes?


> seek(long) in DFSInputStream should catch socket exception for retry later
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-3778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3778
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: LN
>            Priority: Minor
>         Attachments: HADOOP-3778.patch
>
>
> HADOOP-2346 introduced data read/write timeouts. When the data stream is broken, DFSClient retries inside its read/write methods, but there is no such mechanism when seek(long) calls blockReader.skip(diff) (DFSClient.java #1582); the IOException is thrown straight up to the application. I hit an NPE when using MapFile in HBase.
> I suggest that in the seek(long) method, leaving 'done' false should trigger a retry later (via 'blockEnd = -1'); a patch will be attached shortly for review.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.