Posted to dev@falcon.apache.org by "Balu Vellanki (JIRA)" <ji...@apache.org> on 2015/09/22 17:59:04 UTC

[jira] [Updated] (FALCON-1343) Fix validation of read/write endpoints in ClusterEntityParser.

     [ https://issues.apache.org/jira/browse/FALCON-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balu Vellanki updated FALCON-1343:
----------------------------------
    Summary: Fix validation of read/write endpoints in ClusterEntityParser.  (was: validation of read/write endpoints is not reliable)

> Fix validation of read/write endpoints in ClusterEntityParser.
> --------------------------------------------------------------
>
>                 Key: FALCON-1343
>                 URL: https://issues.apache.org/jira/browse/FALCON-1343
>             Project: Falcon
>          Issue Type: Sub-task
>          Components: general
>            Reporter: Balu Vellanki
>            Assignee: Balu Vellanki
>             Fix For: 0.8
>
>         Attachments: FALCON-1343-v1.patch, FALCON-1343.patch
>
>
> A read/write endpoint is currently validated by creating a filesystem with the endpoint url. 
> {code}
>             HadoopClientFactory.get().createProxiedFileSystem(conf);
> {code}
> This is not sufficient validation for a read/write endpoint: it won't catch typos in the endpoint URL, because merely creating the filesystem object does not contact the endpoint. Cluster validation then fails later, in the validateLocations(...) method, with an exception that can confuse the user. A better validation is to check that the path "/" exists in HDFS.
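> The proposed check could look roughly like the sketch below. This assumes Hadoop's standard FileSystem API (FileSystem.exists and Path are real Hadoop classes); the endpointUrl variable and the exception wording are illustrative, not taken from the actual patch.
> {code}
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> FileSystem fs = HadoopClientFactory.get().createProxiedFileSystem(conf);
> // Constructing the FileSystem alone does not verify the endpoint.
> // Probing a known path ("/") forces a round trip to the endpoint,
> // so a mistyped URL fails here with a clear error instead of later
> // in validateLocations(...).
> if (!fs.exists(new Path("/"))) {
>     throw new ValidationException("Unable to reach endpoint: " + endpointUrl);
> }
> {code}
> Probing "/" is a cheap existence check that every healthy HDFS namespace satisfies, so a false negative would itself indicate a misconfigured or unreachable endpoint.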



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)