Posted to common-dev@hadoop.apache.org by "Robert Chansler (JIRA)" <ji...@apache.org> on 2008/03/31 23:58:24 UTC
[jira] Updated: (HADOOP-3138) distcp fail copying to /user/<username>/<newtarget> (with permission on)
[ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Chansler updated HADOOP-3138:
------------------------------------
Fix Version/s: 0.17.0
> distcp fail copying to /user/<username>/<newtarget> (with permission on)
> ------------------------------------------------------------------------
>
> Key: HADOOP-3138
> URL: https://issues.apache.org/jira/browse/HADOOP-3138
> Project: Hadoop Core
> Issue Type: Bug
> Components: util
> Affects Versions: 0.16.1
> Reporter: Koji Noguchi
> Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.17.0
>
>
> When distcp-ing to /user/<username>/<newtarget>, I get an error with
> {noformat}
> Copy failed: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=knoguchi, access=WRITE, inode="user":superuser:superusergroup:rwxr-xr-x
> at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:173)
> at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:154)
> at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:102)
> at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:4037)
> at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4007)
> at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1576)
> at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1559)
> at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:422)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:899)
> at org.apache.hadoop.ipc.Client.call(Client.java:512)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
> at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient.mkdirs(DFSClient.java:550)
> at org.apache.hadoop.dfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:184)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:980)
> at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:735)
> at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:525)
> at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:596)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:612)
> {noformat}
> In the distcp setup code, we have
> {noformat}
> if (!dstExists || !dstIsDir) {
>   Path parent = destPath.getParent();
>   dstfs.mkdirs(parent);
>   logPath = new Path(parent, filename);
> }
> {noformat}
> Should we check whether the parent path exists before calling mkdirs?
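The guard suggested above can be sketched as follows. This is a minimal, self-contained illustration using java.nio.file rather than the Hadoop FileSystem API, so it can run without a cluster; the class and method names here are hypothetical and are not from CopyFiles.java.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirsGuard {
    // Only create the destination's parent when it does not already exist,
    // so the create call (and the permission check on its ancestors) is
    // skipped when the parent is already in place -- the situation that
    // made distcp fail under /user/<username>/.
    static Path ensureParent(Path dest) throws IOException {
        Path parent = dest.getParent();
        if (parent != null && !Files.exists(parent)) {
            Files.createDirectories(parent);
        }
        return parent;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("distcp-demo");
        // The parent (tmp) already exists, so no creation is attempted
        // and no ancestor needs to be writable by this user.
        Path parent = ensureParent(tmp.resolve("newtarget"));
        System.out.println(Files.exists(parent)); // prints "true"
    }
}
```

The same existence check in CopyFiles.setup() would avoid invoking mkdirs on /user when the user's home directory already exists, which is what triggers the AccessControlException in the trace above.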
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.