Posted to common-dev@hadoop.apache.org by "Runping Qi (JIRA)" <ji...@apache.org> on 2008/04/05 21:31:26 UTC

[jira] Created: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

ReduceTask should handle rpc timeout exception for getting recordWriter
-----------------------------------------------------------------------

                 Key: HADOOP-3198
                 URL: https://issues.apache.org/jira/browse/HADOOP-3198
             Project: Hadoop Core
          Issue Type: Bug
          Components: mapred
            Reporter: Runping Qi


After shuffling and sorting, the reduce task is ready for the final phase: reduce. 
The first thing it does is create a record writer. 
That call may fail due to an rpc timeout: 

java.net.SocketTimeoutException: timed out waiting for rpc response 
at org.apache.hadoop.ipc.Client.call(Client.java:559) 
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212) 
at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) 
at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source) 
at org.apache.hadoop.dfs.DFSClient.getFileInfo(DFSClient.java:548) 
at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:380) 
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:598) 
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:106) 
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:366) 
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2126) 


Then the whole reduce task fails, and all the work of shuffling and sorting is gone! 

The reduce task should handle this case better. It is worthwhile to retry a few times before giving up. 
The stakes are too high to give up on the first try. 
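
For context, the top frames of the trace come from the existence check inside TextOutputFormat.getRecordWriter. The following is an illustrative reconstruction of that path, simplified and not the exact Hadoop source of that version, showing why creating a record writer requires a namenode rpc in the first place:

{code}
// Illustrative reconstruction of the code path in the stack trace above;
// simplified sketch, not the exact TextOutputFormat source.
public RecordWriter<K, V> getRecordWriter(FileSystem fs, JobConf job,
                                          String name, Progressable progress)
    throws IOException {
  Path dir = job.getOutputPath();
  // FileSystem.exists() translates into a getFileInfo rpc to the namenode;
  // this is the call that threw the SocketTimeoutException.
  if (!fs.exists(dir)) {
    throw new IOException("Output directory does not exist");
  }
  FSDataOutputStream fileOut = fs.create(new Path(dir, name), progress);
  return new LineRecordWriter<K, V>(fileOut);
}
{code}

A single timed-out rpc in that exists() call propagates straight up to ReduceTask.run() and kills the task.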


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12609701#action_12609701 ] 

Runping Qi commented on HADOOP-3198:
------------------------------------


I saw a job where 13% of its reducers failed due to this rpc timeout problem.
That is a serious flaw.

 



[jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587088#action_12587088 ] 

Owen O'Malley commented on HADOOP-3198:
---------------------------------------

This is the wrong place to do this.

In particular, I believe the HDFS client already does a retry on exists. If it doesn't, it should. If the rpc timeout is coming out of the record writer creation, it means that the call has already failed several times.

-1
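
For reference, the retry layer being referred to here is org.apache.hadoop.io.retry, whose RetryInvocationHandler frames are visible in the stack trace above. A simplified sketch of how a client wraps an rpc proxy with it follows; the retry count and sleep are illustrative values, not the actual DFSClient defaults, and {{rawProxy}} stands in for the raw rpc proxy object:

{code}
// Sketch of rpc-level retry wiring via org.apache.hadoop.io.retry.
// Policy values are illustrative, not DFSClient's actual defaults.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;

// Retry each failed method call up to 5 times, sleeping 1 second between tries.
RetryPolicy policy =
    RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 1, TimeUnit.SECONDS);

// Every call through this proxy goes via RetryInvocationHandler, which
// re-invokes the underlying rpc according to the policy before giving up.
ClientProtocol namenode =
    (ClientProtocol) RetryProxy.create(ClientProtocol.class, rawProxy, policy);
{code}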



[jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587185#action_12587185 ] 

Runping Qi commented on HADOOP-3198:
------------------------------------

BTW, why does the output format class bother to check the existence of the output dir at all (see https://issues.apache.org/jira/browse/HADOOP-3218)?




[jira] Updated: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley updated HADOOP-3198:
----------------------------------

    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

This will lead to very unmaintainable code. We absolutely do not want to have nested retries for different contexts.



[jira] Updated: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Runping Qi updated HADOOP-3198:
-------------------------------

    Status: Patch Available  (was: Open)


Added simple retry logic.




[jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587181#action_12587181 ] 

Runping Qi commented on HADOOP-3198:
------------------------------------


The HDFS client has a retry on exists.
It is likely that it tried and failed several times. 
That is perhaps fine for the exists call in general.

However, for this particular call in getRecordWriter in the reduce task, the cost of failure is too high.
Thus, the reduce task has to do something special.
In this sense, I think it is the reduce task's responsibility to retry further.

I am open to any suggestions to fix the problem.
However, I am not convinced that retrying at the rpc level is the right answer.





[jira] Updated: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Runping Qi updated HADOOP-3198:
-------------------------------

    Attachment: patch-3198.txt



[jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter

Posted by "Amar Kamat (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586698#action_12586698 ] 

Amar Kamat commented on HADOOP-3198:
------------------------------------

Some comments:
1) Declare a _private static final_ constant {{MAX_DFS_RETRIES}} and initialize it to 10. Use this in the for loop. 
2) Remove the extra spaces after {{reporter}} (line 14 of the patch).
3) The 1-second sleep needs to be justified. Also, a log message is required before waiting. 
4) Some extra code slipped in (regarding the log message). 
5) After 10 retries we should throw the exception rather than silently falling out of the loop (which would lead to a NullPointerException). A sketch incorporating points 1, 3, and 5 follows below.

+_Points to ponder_+
Can we do something timeout-based, where we wait for _shuffle-run-time / 2_ before bailing out, with multiple retries within that timeout? That would make sure that we don't kill the reducer too early.
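
Since the attached patch-3198.txt is not quoted in this thread, the following is only a hypothetical sketch of the retry loop with points 1, 3, and 5 above applied: a named {{MAX_DFS_RETRIES}} constant, a log message before each sleep, and rethrowing the last exception instead of silently leaving the loop. The variable names ({{fs}}, {{finalName}}, {{reporter}}) follow ReduceTask.run(), but the details are assumptions, not the actual patch:

{code}
// Hypothetical sketch only; not the attached patch-3198.txt.
private static final int MAX_DFS_RETRIES = 10;      // point 1: named constant
private static final long RETRY_SLEEP_MSEC = 1000;  // the 1-sec sleep from point 3

RecordWriter out = null;
IOException lastException = null;
for (int attempt = 1; attempt <= MAX_DFS_RETRIES && out == null; attempt++) {
  try {
    out = job.getOutputFormat().getRecordWriter(fs, job, finalName, reporter);
  } catch (IOException ioe) {
    lastException = ioe;
    // point 3: log before waiting
    LOG.warn("getRecordWriter attempt " + attempt + " of " + MAX_DFS_RETRIES
             + " failed; sleeping " + RETRY_SLEEP_MSEC + " ms before retry", ioe);
    try {
      Thread.sleep(RETRY_SLEEP_MSEC);
    } catch (InterruptedException ie) {
      throw new IOException("Interrupted while waiting to retry getRecordWriter");
    }
  }
}
if (out == null) {
  // point 5: fail loudly instead of continuing with a null writer
  throw lastException;
}
{code}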
