Posted to common-issues@hadoop.apache.org by "Brahma Reddy Battula (JIRA)" <ji...@apache.org> on 2015/11/01 00:39:27 UTC
[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)
[ https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984220#comment-14984220 ]
Brahma Reddy Battula commented on HADOOP-12053:
-----------------------------------------------
I feel this should be considered for 2.7.2, as I mentioned in an earlier comment. [~cnauroth], could you please take a look at this issue?
> Harfs defaulturiport should be Zero ( should not -1)
> ----------------------------------------------------
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Brahma Reddy Battula
> Assignee: Gera Shegalov
> Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, HADOOP-12053.003.patch
>
>
> The harfs overrides the "getUriDefaultPort" method of AbstractFileSystem and returns "-1". But "-1" can't pass the "checkPath" method when {{fs.defaultfs}} is set without a port (like hdfs://hacluster).
> *Test Code :*
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0
>       && applicationId.toString().compareTo(edges[1]) <= 0) {
>     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
>     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
>     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
>         harPath, applicationId, appOwner,
>         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
>     if (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir)) {
>       remoteDirSet.add(remoteAppDir);
>     }
>   }
> }
> {code}
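To illustrate the underlying problem, here is a minimal, self-contained sketch (hypothetical, not the actual Hadoop checkPath implementation): java.net.URI reports a missing port as -1, so a filesystem default port of -1 is indistinguishable from "no port at all", while a sentinel like 0 lets the fallback comparison succeed for a portless authority such as hdfs://hacluster.

```java
import java.net.URI;

public class HarDefaultPortSketch {
    // Simplified stand-in for a checkPath-style port comparison: each URI
    // falls back to the filesystem's default port when it carries no port,
    // and the check only passes when a usable (non-negative) port results.
    static boolean samePort(URI fsUri, URI pathUri, int defaultPort) {
        int fsPort = fsUri.getPort() == -1 ? defaultPort : fsUri.getPort();
        int pathPort = pathUri.getPort() == -1 ? defaultPort : pathUri.getPort();
        return fsPort == pathPort && fsPort != -1;
    }

    public static void main(String[] args) {
        // HA-style authority with no port, as in the issue description.
        URI fs = URI.create("har://hdfs-hacluster/");
        URI path = URI.create("har://hdfs-hacluster/app/logs");

        // java.net.URI reports a missing port as -1.
        System.out.println(fs.getPort());           // prints -1

        // With a default port of -1 there is no usable port to compare.
        System.out.println(samePort(fs, path, -1)); // prints false
        // With a default port of 0 the fallback comparison succeeds.
        System.out.println(samePort(fs, path, 0));  // prints true
    }
}
```

This is only a sketch of why -1 is a problematic default-port sentinel; the exact failure path through AbstractFileSystem.checkPath in Hadoop differs in detail.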
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)