Posted to mapreduce-issues@hadoop.apache.org by "Mahadev konar (Commented) (JIRA)" <ji...@apache.org> on 2012/02/03 01:10:55 UTC
[jira] [Commented] (MAPREDUCE-3736) Variable substitution depth too
large for fs.default.name causes jobs to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13199386#comment-13199386 ]
Mahadev konar commented on MAPREDUCE-3736:
------------------------------------------
@Ahmed,
Any update on this?
> Variable substitution depth too large for fs.default.name causes jobs to fail
> -----------------------------------------------------------------------------
>
> Key: MAPREDUCE-3736
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3736
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: mrv2
> Affects Versions: 0.23.1
> Reporter: Eli Collins
> Assignee: Ahmed Radwan
> Priority: Blocker
> Attachments: MAPREDUCE-3736.patch, MAPREDUCE-3736_rev2.patch
>
>
> I'm seeing the same failure as MAPREDUCE-3462 in downstream projects running against a recent build of branch-23. MR-3462 modified the tests rather than fixing the framework. In that jira Ravi mentioned "I'm still ignorant of the change which made the tests start to fail. I should probably understand better the reasons for that change before proposing a more generalized fix." Let's figure out the general fix (rather than requiring all projects to set mapreduce.job.hdfs-servers in their conf, we should fix this in the framework). Perhaps we should not default this config to "${fs.default.name}"?
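For context on the failure mode: Hadoop's Configuration expands ${var} references in property values, but caps the number of expansion passes at a fixed depth and fails once the cap is hit. The sketch below is a minimal, self-contained illustration of that mechanism, not Hadoop's actual implementation; the MAX_SUBST value of 20 and the SubstDemo/substitute names are assumptions for illustration only. It shows both the normal one-step expansion of mapreduce.job.hdfs-servers via ${fs.default.name} and how a self-referential value exhausts the depth limit.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubstDemo {
    // Illustrative depth cap, in the spirit of Hadoop Configuration's
    // bounded variable expansion (value assumed here for the sketch).
    static final int MAX_SUBST = 20;
    static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String substitute(Map<String, String> props, String expr) {
        for (int depth = 0; depth < MAX_SUBST; depth++) {
            Matcher m = VAR.matcher(expr);
            if (!m.find()) {
                return expr; // fully expanded
            }
            String val = props.get(m.group(1));
            if (val == null) {
                return expr; // unknown variable: leave the reference as-is
            }
            // Replace this one occurrence and try another pass.
            expr = expr.substring(0, m.start()) + val + expr.substring(m.end());
        }
        // Mirrors the "Variable substitution depth too large" failure mode.
        throw new IllegalStateException(
            "Variable substitution depth too large: " + MAX_SUBST + " " + expr);
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("fs.default.name", "hdfs://namenode:8020");
        props.put("mapreduce.job.hdfs-servers", "${fs.default.name}");

        // Normal case: one expansion pass resolves the reference.
        System.out.println(
            substitute(props, props.get("mapreduce.job.hdfs-servers")));

        // Self-referential value: every pass reproduces the same reference,
        // so the depth cap is exhausted and substitution fails.
        props.put("fs.default.name", "${fs.default.name}");
        try {
            substitute(props, "${fs.default.name}");
        } catch (IllegalStateException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

This is why a framework-side default of ${fs.default.name} is fragile: it works only as long as nothing in the resolution chain loops back on itself.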
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira