Posted to dev@falcon.apache.org by "Adam Kawa (JIRA)" <ji...@apache.org> on 2014/11/21 12:01:33 UTC
[jira] [Created] (FALCON-910) Better error messages when creating cluster's directories
Adam Kawa created FALCON-910:
--------------------------------
Summary: Better error messages when creating cluster's directories
Key: FALCON-910
URL: https://issues.apache.org/jira/browse/FALCON-910
Project: Falcon
Issue Type: Improvement
Components: client
Affects Versions: 0.7
Reporter: Adam Kawa
Priority: Minor
I followed the example from http://hortonworks.com/blog/introduction-apache-falcon-hadoop, where all of the cluster's locations (i.e. staging, working, temp) point to the same directory.
{code}
<?xml version="1.0" encoding="UTF-8"?>
<cluster colo="toronto" description="Primary Cluster"
(...)
  <locations>
    <location name="staging" path="/tmp/falcon"/>
    <location name="working" path="/tmp/falcon"/>
    <location name="temp" path="/tmp/falcon"/>
  </locations>
</cluster>
{code}
When submitting such a cluster entity, I got the following error. After I changed the permissions as instructed and resubmitted, I got the opposite error:
{code}
bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
Stacktrace:
org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxr-xr-x, should be rwxrwxrwx
        at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
        at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
        at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
        at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
        at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
        at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
        at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
Stacktrace:
org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxrwxrwx, should be rwxr-xr-x
        at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
        at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
        at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
        at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
        at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
        at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
        at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
{code}
I could keep changing these permissions forever with the same effect :)
The cause is visible in https://github.com/apache/incubator-falcon/blob/master/common/src/main/java/org/apache/falcon/entity/parser/ClusterEntityParser.java:
{code}
for (Location location : cluster.getLocations().getLocations()) {
    final String locationName = location.getName();
    if (locationName.equals("temp")) {
        continue;
    }
    try {
        checkPathOwnerAndPermission(cluster.getName(), location.getPath(), fs,
                "staging".equals(locationName)
                        ? HadoopClientFactory.ALL_PERMISSION
                        : HadoopClientFactory.READ_EXECUTE_PERMISSION);
    } catch (IOException e) {
        (...)
    }
}
{code}
This basically means:
* the staging directory must have exactly ALL permissions (rwxrwxrwx)
* the working directory must have exactly READ_EXECUTE permissions (rwxr-xr-x)
If the staging and working directories are the same path, the configuration can never be valid, and this misconfiguration is hard to detect from the current message. Therefore:
* a better (less confusing) message could be printed,
* or the code could be changed so that the working directory needs at least (not exactly) READ_EXECUTE permissions.
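The contradiction can be sketched with plain permission bits. This is a hypothetical illustration, not Falcon code; the class and method names are made up:

```java
// Sketch: why a shared staging/working path can never pass an exact-match check,
// but would pass an "at least" check.
public class PermissionCheck {
    static final int ALL = 0777;          // rwxrwxrwx, required for staging
    static final int READ_EXECUTE = 0755; // rwxr-xr-x, required for working

    // Current behavior: the actual mode must equal the required mode exactly.
    static boolean exactly(int actual, int required) {
        return actual == required;
    }

    // Proposed behavior: the actual mode must include at least the required bits.
    static boolean atLeast(int actual, int required) {
        return (actual & required) == required;
    }

    public static void main(String[] args) {
        int shared = 0777; // staging == working == /tmp/falcon, chmod 777
        // Exact matching: no single mode satisfies both requirements.
        System.out.println(exactly(shared, ALL));          // true
        System.out.println(exactly(shared, READ_EXECUTE)); // false -> validation fails
        // "At least" matching: 0777 covers both.
        System.out.println(atLeast(shared, ALL));          // true
        System.out.println(atLeast(shared, READ_EXECUTE)); // true
    }
}
```

With exact matching, chmod 755 fails the staging check and chmod 777 fails the working check, which is exactly the ping-pong seen in the stack traces above.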
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)