Posted to common-dev@hadoop.apache.org by "Kan Zhang (JIRA)" <ji...@apache.org> on 2009/06/16 07:02:07 UTC

[jira] Updated: (HADOOP-6048) Need to handle access token expiration when re-establishing the pipeline for dfs write

     [ https://issues.apache.org/jira/browse/HADOOP-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kan Zhang updated HADOOP-6048:
------------------------------

    Attachment: 6048-09.patch

This patch does the following:
1. When the primary datanode recovers a block, it sends back a newly generated access token along with the new generation stamp, if any.
2. Adds a namenode method that a DFSClient can call to get a new access token when its access token has expired during datanode error recovery.
3. To differentiate an access token error from other errors, the datanode now sends back a status code together with firstBadLink during pipeline setup for dfs write (only for DFSClients, not for other datanodes). A rough sketch of the client-side flow is given after this list.
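To make the interplay of items 2 and 3 concrete, below is a minimal, hypothetical Java sketch of the client-side flow: the datanode reports a distinct status alongside firstBadLink, and on seeing the access-token status the client asks the namenode for a fresh token and retries pipeline setup instead of marking the datanode bad. All names here (Status.ERROR_ACCESS_TOKEN, AccessTokenService, PipelineSetup, etc.) are illustrative placeholders, not the actual identifiers used in 6048-09.patch.

    // Hedged sketch only: these types and method names are illustrative,
    // not the HDFS classes touched by the patch.
    import java.io.IOException;

    public class PipelineRecoverySketch {

        // Status a datanode could return with firstBadLink during pipeline setup.
        enum Status { SUCCESS, ERROR, ERROR_ACCESS_TOKEN }

        // Result of one pipeline-setup attempt as seen by the client.
        static class SetupResult {
            final Status status;
            final String firstBadLink;
            SetupResult(Status status, String firstBadLink) {
                this.status = status;
                this.firstBadLink = firstBadLink;
            }
        }

        // Hypothetical namenode-side call that hands out a fresh access token (item 2).
        interface AccessTokenService {
            byte[] getNewAccessToken(long blockId) throws IOException;
        }

        // Hypothetical client hook that attempts to set up the write pipeline.
        interface PipelineSetup {
            SetupResult setup(long blockId, byte[] accessToken) throws IOException;
        }

        // Client-side recovery loop: if the datanode reports an expired access
        // token (rather than a generic error), refresh the token from the
        // namenode and retry, instead of excluding the datanode from the pipeline.
        static boolean setupWithTokenRefresh(long blockId,
                                             byte[] token,
                                             PipelineSetup pipeline,
                                             AccessTokenService namenode,
                                             int maxRetries) throws IOException {
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                SetupResult result = pipeline.setup(blockId, token);
                switch (result.status) {
                    case SUCCESS:
                        return true;
                    case ERROR_ACCESS_TOKEN:
                        // Token expired during error recovery: get a new one and retry.
                        token = namenode.getNewAccessToken(blockId);
                        break;
                    default:
                        // Any other failure: let the caller exclude firstBadLink
                        // and rebuild the pipeline as before.
                        throw new IOException("Pipeline setup failed at "
                                + result.firstBadLink);
                }
            }
            return false;
        }
    }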

> Need to handle access token expiration when re-establishing the pipeline for dfs write
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6048
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6048
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs, security
>            Reporter: Kan Zhang
>            Assignee: Kan Zhang
>         Attachments: 6048-09.patch
>
>
> The original access token may have expired by the time the pipeline is re-established within processDatanodeError().

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.