Posted to dev@pig.apache.org by "Ian Holsman (JIRA)" <ji...@apache.org> on 2008/12/04 05:27:44 UTC

[jira] Created: (PIG-555) ORDER by requires write access to /tmp

ORDER by requires write access to /tmp
--------------------------------------

                 Key: PIG-555
                 URL: https://issues.apache.org/jira/browse/PIG-555
             Project: Pig
          Issue Type: Bug
          Components: grunt
    Affects Versions: types_branch
            Reporter: Ian Holsman
            Priority: Minor
             Fix For: types_branch


When doing an ORDER BY in Pig, it relies on you having write access to /tmp/ in HDFS (and, I think, on that directory being present in the first place).

This comes down to FileLocalizer creating the default:

    relativeRoot = pigContext.getDfs().asContainer("/tmp/temp" + r.nextInt())

I'm not sure if this is due to someone on our side deleting the /tmp dir accidentally, so apologies for the spam if it is.
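
In case it helps anyone else who hits the error below, here is a minimal sketch (untested, and purely an admin-side workaround rather than a fix in Pig) of pre-creating the HDFS /tmp directory with open permissions through the plain Hadoop FileSystem API. Running hadoop fs -mkdir /tmp followed by hadoop fs -chmod 777 /tmp from the shell should amount to the same thing.

    // Workaround sketch, not Pig code: make sure the /tmp container that
    // FileLocalizer defaults to exists in HDFS and is writable by everyone.
    // The 0777 mode here is an assumption; pick whatever your admins prefer.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class EnsureHdfsTmp {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up the usual hadoop-*.xml config
            FileSystem fs = FileSystem.get(conf);       // the DFS that Pig talks to
            Path tmp = new Path("/tmp");
            if (!fs.exists(tmp)) {
                fs.mkdirs(tmp, new FsPermission((short) 0777));
            }
            // set the mode explicitly in case mkdirs applied the configured umask
            fs.setPermission(tmp, new FsPermission((short) 0777));
            fs.close();
        }
    }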


2008-12-03 22:36:41,716 [main] ERROR org.apache.pig.tools.grunt.Grunt - You don't have permission to perform the operation. Error from the server: Unable to store for alias: 7 [org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x]
java.io.IOException: Unable to store for alias: 7 [org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x]
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:255)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:647)
        at org.apache.pig.PigServer.execute(PigServer.java:638)
        at org.apache.pig.PigServer.registerQuery(PigServer.java:278)
        at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:439)
        at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:249)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:84)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:64)
        at org.apache.pig.Main.main(Main.java:242)
Caused by: org.apache.pig.backend.executionengine.ExecException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
        ... 9 more
Caused by: org.apache.pig.impl.plan.VisitorException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.visitSort(MRCompiler.java:858)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSort.visit(POSort.java:304)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.compile(MRCompiler.java:266)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.compile(MRCompiler.java:251)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.compile(MRCompiler.java:193)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.compile(MapReduceLauncher.java:134)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:63)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:245)
        ... 8 more
Caused by: org.apache.hadoop.fs.permission.AccessControlException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:52)
        at org.apache.hadoop.dfs.DFSClient.mkdirs(DFSClient.java:704)
        at org.apache.hadoop.dfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:236)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1116)
        at org.apache.pig.backend.hadoop.datastorage.HDirectory.create(HDirectory.java:64)
        at org.apache.pig.backend.hadoop.datastorage.HPath.create(HPath.java:155)
        at org.apache.pig.impl.io.FileLocalizer.getTemporaryPath(FileLocalizer.java:391)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.getTempFileSpec(MRCompiler.java:479)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.visitSort(MRCompiler.java:846)
        ... 15 more
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=jholsma, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
        at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:175)
        at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:156)
        at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:104)
        at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:4228)
        at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4198)
        at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1595)
        at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1564)
        at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:450)
        at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)

        at org.apache.hadoop.ipc.Client.call(Client.java:715)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient.mkdirs(DFSClient.java:702)
        ... 22 more

shell returned 1



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


Re: [jira] Created: (PIG-555) ORDER by requires write access to /tmp

Posted by Mridul Muralidharan <mr...@yahoo-inc.com>.
Shouldn't Pig check whether the directory exists and, if it does, use
another one? Or does 'asContainer' already do that? (I guess not ...)
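
Something along these lines is what I had in mind (just a rough sketch against the raw Hadoop FileSystem API rather than Pig's DataStorage layer, and it would only dodge name collisions, not the permission error above):

    // Rough sketch only, not the actual FileLocalizer code: keep drawing
    // random names under /tmp until one does not exist yet.
    import java.io.IOException;
    import java.util.Random;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TempContainerPicker {
        private static final Random R = new Random();

        public static Path newTempContainer(FileSystem fs) throws IOException {
            Path candidate;
            do {
                candidate = new Path("/tmp/temp" + R.nextInt());  // same naming scheme as FileLocalizer
            } while (fs.exists(candidate));                       // if it is already taken, try another
            return candidate;
        }
    }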

Regards,
Mridul


Ian Holsman (JIRA) wrote:
> ORDER by requires write access to /tmp
> --------------------------------------
> 
>                  Key: PIG-555
>                  URL: https://issues.apache.org/jira/browse/PIG-555
>              Project: Pig
>           Issue Type: Bug
>           Components: grunt
>     Affects Versions: types_branch
>             Reporter: Ian Holsman
>             Priority: Minor
>              Fix For: types_branch
> 
> 
> When doing an ORDER BY in Pig, it relies on you having write access to /tmp/ in HDFS (and, I think, on that directory being present in the first place).
> 
> This comes down to FileLocalizer creating the default:
> 
>     relativeRoot = pigContext.getDfs().asContainer("/tmp/temp" + r.nextInt())
> 
> I'm not sure if this is due to someone on our side deleting the /tmp dir accidentally, so apologies for the spam if it is.


[jira] Resolved: (PIG-555) ORDER by requires write access to /tmp

Posted by "Olga Natkovich (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/PIG-555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Olga Natkovich resolved PIG-555.
--------------------------------

    Resolution: Duplicate

> ORDER by requires write access to /tmp
> --------------------------------------
>
>                 Key: PIG-555
>                 URL: https://issues.apache.org/jira/browse/PIG-555
>             Project: Pig
>          Issue Type: Bug
>          Components: grunt
>    Affects Versions: 0.2.0
>            Reporter: Ian Holsman
>            Priority: Minor
>
> When doing an ORDER BY in Pig, it relies on you having write access to /tmp/ in HDFS (and, I think, on that directory being present in the first place).
> This comes down to FileLocalizer creating the default:
>     relativeRoot = pigContext.getDfs().asContainer("/tmp/temp" + r.nextInt())
> I'm not sure if this is due to someone on our side deleting the /tmp dir accidentally, so apologies for the spam if it is.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (PIG-555) ORDER by requires write access to /tmp

Posted by "Olga Natkovich (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/PIG-555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Olga Natkovich updated PIG-555:
-------------------------------


This will be addressed in Pig 0.8.0 by allowing you to configure your temp location.
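
For example (assuming the new property ends up being called pig.temp.dir, and that /user/jholsma/pigtmp is a directory the user can write to), the temp location could be pointed away from /tmp either in pig.properties, on the command line, or from an embedded PigServer, roughly like this:

    // Sketch of overriding the temp location from embedded Java; the property
    // name pig.temp.dir and the /user/jholsma/pigtmp path are assumptions here.
    import java.util.Properties;
    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class TempDirExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("pig.temp.dir", "/user/jholsma/pigtmp");  // instead of the default /tmp
            PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
            pig.registerQuery("a = LOAD 'input' AS (f1:chararray, f2:int);");
            pig.registerQuery("b = ORDER a BY f2;");   // ORDER BY temp files now land under pig.temp.dir
            pig.store("b", "sorted");
        }
    }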

> ORDER by requires write access to /tmp
> --------------------------------------
>
>                 Key: PIG-555
>                 URL: https://issues.apache.org/jira/browse/PIG-555
>             Project: Pig
>          Issue Type: Bug
>          Components: grunt
>    Affects Versions: 0.2.0
>            Reporter: Ian Holsman
>            Priority: Minor
>
> When doing an ORDER BY in Pig, it relies on you having write access to /tmp/ in HDFS (and, I think, on that directory being present in the first place).
> This comes down to FileLocalizer creating the default:
>     relativeRoot = pigContext.getDfs().asContainer("/tmp/temp" + r.nextInt())
> I'm not sure if this is due to someone on our side deleting the /tmp dir accidentally, so apologies for the spam if it is.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.