Posted to common-dev@hadoop.apache.org by "Koji Noguchi (JIRA)" <ji...@apache.org> on 2007/05/05 00:17:15 UTC
[jira] Created: (HADOOP-1331) Multiple entries for 'dfs.client.buffer.dir'
Multiple entries for 'dfs.client.buffer.dir'
--------------------------------------------
Key: HADOOP-1331
URL: https://issues.apache.org/jira/browse/HADOOP-1331
Project: Hadoop
Issue Type: Improvement
Components: dfs
Reporter: Koji Noguchi
Priority: Minor
If the (DFS) client host has multiple drives, I'd like different 'dfs -put' calls to utilize these drives.
Also:
- It might be helpful when we have multiple reducers writing to DFS.
- If we want the datanode/tasktracker to skip a dead drive, we probably need this?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-1331) Multiple entries for 'dfs.client.buffer.dir'
Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur resolved HADOOP-1331.
--------------------------------------
Resolution: Duplicate
Duplicate of HADOOP-1372
[jira] Commented: (HADOOP-1331) Multiple entries for 'dfs.client.buffer.dir'
Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493991 ]
Devaraj Das commented on HADOOP-1331:
-------------------------------------
The DFSClient already uses the Configuration.getLocalPath API, which allocates a directory (corresponding to a drive) based on the hash of the pathname. So yes, all the drives will be utilized (subject to how the hash values distribute). But HADOOP-1252 can improve this situation IMO, and the DFSClient should use the new APIs provided there.
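For illustration, the hash-based directory selection described above can be sketched as follows. This is a minimal standalone sketch, not Hadoop's actual Configuration.getLocalPath implementation; the LocalDirSelector class and selectDir method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: given the configured buffer directories (one per drive)
// and a pathname, pick a directory deterministically by hashing the
// pathname, so distinct paths tend to spread across the drives.
public class LocalDirSelector {
    private final List<String> dirs;

    public LocalDirSelector(List<String> dirs) {
        this.dirs = dirs;
    }

    // Returns the directory chosen for this pathname.
    public String selectDir(String pathname) {
        // Mask the sign bit rather than using Math.abs, which
        // overflows for Integer.MIN_VALUE.
        int index = (pathname.hashCode() & Integer.MAX_VALUE) % dirs.size();
        return dirs.get(index);
    }

    public static void main(String[] args) {
        LocalDirSelector selector = new LocalDirSelector(
                Arrays.asList("/mnt/d1/buffer", "/mnt/d2/buffer"));
        System.out.println(selector.selectDir("user/koji/part-00000"));
    }
}
```

With a scheme like this, concurrent 'dfs -put' invocations writing distinct files tend to land on different drives, but the spread is only as good as the hash distribution over the actual filenames, which is the caveat noted above.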
[jira] Assigned: (HADOOP-1331) Multiple entries for 'dfs.client.buffer.dir'
Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur reassigned HADOOP-1331:
----------------------------------------
Assignee: dhruba borthakur