Posted to mapreduce-user@hadoop.apache.org by Samuel <sa...@gmail.com> on 2015/12/11 09:19:51 UTC

Dependency on absolute path /bin/ls in Shell.java

Hi,

I am experiencing some crashes when using Spark over local files (mainly
for testing). Some operations fail with

java.lang.RuntimeException: Error while running command to get file
permissions : java.io.IOException: Cannot run program "/bin/ls": error=2,
No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
        at org.apache.hadoop.util.Shell.run(Shell.java:188)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
        at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
        at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)

etcetera...

This seems to be related to Shell.java in org.apache.hadoop.util, which
uses ls -ld to figure out file permissions (in
RawLocalFileSystem.loadPermissionInfo). The problem is that instead of
just calling ls, Shell.java calls /bin/ls, which is usually available but
in certain circumstances might not be. Regardless of the reasons not to
have ls in /bin, hardcoding the directory prevents users from using the
standard mechanism their systems provide for deciding which binaries to
run (in this case, $PATH), so I wonder if there is a particular reason why
that path has been hardcoded to an absolute path instead of something
resolvable via $PATH.

Or in other words, is this a bug or a feature?
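
For illustration, here is a minimal sketch (not the actual Hadoop code) of
why the distinction matters: ProcessBuilder only consults $PATH when the
command name contains no slash, so the hardcoded form fails with error=2
wherever ls is not exactly at /bin/ls, while the bare name resolves fine:

    import java.io.IOException;

    public class LsResolution {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Hardcoded absolute path: throws IOException (error=2, ENOENT)
            // when no binary exists at exactly /bin/ls.
            // new ProcessBuilder("/bin/ls", "-ld", ".").start();

            // Bare command name: resolved through $PATH by the OS, so it
            // works wherever ls is installed.
            Process p = new ProcessBuilder("ls", "-ld", ".").inheritIO().start();
            p.waitFor();
        }
    }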

Best

-- 
Samuel

Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Samuel <sa...@gmail.com>.
Thanks for the information. If you are already aware of the problem,
that is enough for me :)

Best

On 11 December 2015 at 18:49, Chris Nauroth <cn...@hortonworks.com> wrote:
> Hello Samuel,
>
> Issue HADOOP-11935 tracks an improvement to re-implement this code path
> using either JNI to OS syscalls or the JDK 7 java.nio.file APIs (probably
> the latter).
>
> https://issues.apache.org/jira/browse/HADOOP-11935
>
>
> For right now, I don't see a viable workaround besides ensuring that the
> command is accessible at /bin/ls on your system.
>
> --Chris Nauroth
>
>
>
>
> On 12/11/15, 8:05 AM, "Namikaze Minato" <ll...@gmail.com> wrote:
>
>>My question was, which spark command are you using, and since you
>>already did the analysis, which function of Shell.java is this spark
>>code using?
>>
>>Regards,
>>LLoyd
>>
>>On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>>> I am not using hadoop-util directly, it is Spark code that uses it
>>> (i.e. not directly under my control).
>>>
>>> Regarding ls, for my particular use case it is fine if you use "ls"
>>> instead of "/bin/ls".
>>>
>>> However, I do agree that using ls to fetch file permissions is
>>> incorrect, so a better solution (in terms of code quality) would be
>>> not to use ls at all.
>>>
>>>
>>> On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com>
>>>wrote:
>>>> So what you ultimately need is a piece of java code listing the rwx
>>>> permissions for user, group and others that is not using ls
>>>> internally, is that correct?
>>>> If "RawLocalFileSystem" is not HDFS, do you really need to use
>>>> hadoop-util for that?
>>>> Can you tell us more about your use case?
>>>>
>>>> Regards,
>>>> LLoyd
>>>>
>>>>
>>>>
>>>>
>>>> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>>>>>> Using ls to figure out permissions is a bad design anyway, so I would
>>>>>> not be surprised if this hardcode was reported as a bug.
>>>>>
>>>>> Of course, I have no idea why it was implemented like this. I assume
>>>>> it was written at some point in time when Java didn't provide the
>>>>> needed APIs (?)
>>>>>
>>>>> Implementing the permission check without relying on ls at all is also
>>>>> a solution for the problem I have :)
>>>>>
>>>>>> LLoyd
>>>>>>
>>>>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>>>>>> > Hi,
>>>>>> >
>>>>>> > I am experiencing some crashes when using Spark over local files
>>>>>> > (mainly for testing). Some operations fail with
>>>>>> >
>>>>>> > java.lang.RuntimeException: Error while running command to get file
>>>>>> > permissions : java.io.IOException: Cannot run program "/bin/ls":
>>>>>> > error=2, No such file or directory
>>>>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>>>>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>>>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>>>> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>>>>>> >         at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>>>>>> >
>>>>>> > etcetera...
>>>>>> >
>>>>>> > This seems to be related to Shell.java in org.apache.hadoop.util,
>>>>>> > which uses ls -ld to figure out file permissions (in
>>>>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead
>>>>>> > of just calling ls, Shell.java calls /bin/ls, which is usually
>>>>>> > available but in certain circumstances might not be. Regardless of
>>>>>> > the reasons not to have ls in /bin, hardcoding the directory
>>>>>> > prevents users from using the standard mechanism their systems
>>>>>> > provide for deciding which binaries to run (in this case, $PATH),
>>>>>> > so I wonder if there is a particular reason why that path has been
>>>>>> > hardcoded to an absolute path instead of something resolvable via
>>>>>> > $PATH.
>>>>>> >
>>>>>> > Or in other words, is this a bug or a feature?
>>>>>> >
>>>>>> > Best
>>>>>> >
>>>>>> > --
>>>>>> > Samuel
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Samuel
>>>
>>>
>>>
>>> --
>>> Samuel
>>
>>---------------------------------------------------------------------
>>To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
>>For additional commands, e-mail: user-help@hadoop.apache.org
>>
>>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: user-help@hadoop.apache.org
>



-- 
Samuel

Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hello Samuel,

Issue HADOOP-11935 tracks an improvement to re-implement this code path
using either JNI to OS syscalls or the JDK 7 java.nio.file APIs (probably
the latter).

https://issues.apache.org/jira/browse/HADOOP-11935


For right now, I don't see a viable workaround besides ensuring that the
command is accessible at /bin/ls on your system.
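
As a rough illustration of the java.nio.file route that HADOOP-11935
contemplates (a minimal sketch, not the actual patch), the permission bits
can be read through a syscall with no external ls process:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    public class PermissionsViaNio {
        public static void main(String[] args) throws Exception {
            Path path = Paths.get(args.length > 0 ? args[0] : ".");
            // Queries the filesystem directly; no /bin/ls involved.
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(path);
            // Prints e.g. "rwxr-xr-x", the same string ls -ld would show.
            System.out.println(PosixFilePermissions.toString(perms));
        }
    }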

--Chris Nauroth




On 12/11/15, 8:05 AM, "Namikaze Minato" <ll...@gmail.com> wrote:

>My question was, which spark command are you using, and since you
>already did the analysis, which function of Shell.java is this spark
>code using?
>
>Regards,
>LLoyd
>
>On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> I am not using hadoop-util directly, it is Spark code that uses it
>> (i.e. not directly under my control).
>>
>> Regarding ls, for my particular use case it is fine if you use "ls"
>> instead of "/bin/ls".
>>
>> However, I do agree that using ls to fetch file permissions is
>> incorrect, so a better solution (in terms of code quality) would be
>> not to use ls at all.
>>
>>
>> On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com>
>>wrote:
>>> So what you ultimately need is a piece of java code listing the rwx
>>> permissions for user, group and others that is not using ls
>>> internally, is that correct?
>>> If "RawLocalFileSystem" is not HDFS, do you really need to use
>>> hadoop-util for that?
>>> Can you tell us more about your use case?
>>>
>>> Regards,
>>> LLoyd
>>>
>>>
>>>
>>>
>>> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>>>>> Using ls to figure out permissions is a bad design anyway, so I would
>>>>> not be surprised if this hardcode was reported as a bug.
>>>>
>>>> Of course, I have no idea why it was implemented like this. I assume
>>>> it was written at some point in time when Java didn't provide the
>>>> needed APIs (?)
>>>>
>>>> Implementing the permission check without relying on ls at all is also
>>>> a solution for the problem I have :)
>>>>
>>>>> LLoyd
>>>>>
>>>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I am experiencing some crashes when using Spark over local files
>>>>> > (mainly for testing). Some operations fail with
>>>>> >
>>>>> > java.lang.RuntimeException: Error while running command to get file
>>>>> > permissions : java.io.IOException: Cannot run program "/bin/ls":
>>>>> > error=2, No such file or directory
>>>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>>>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>>> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>>>>> >         at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>>>>> >
>>>>> > etcetera...
>>>>> >
>>>>> > This seems to be related to Shell.java in org.apache.hadoop.util,
>>>>> > which uses ls -ld to figure out file permissions (in
>>>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead
>>>>> > of just calling ls, Shell.java calls /bin/ls, which is usually
>>>>> > available but in certain circumstances might not be. Regardless of
>>>>> > the reasons not to have ls in /bin, hardcoding the directory
>>>>> > prevents users from using the standard mechanism their systems
>>>>> > provide for deciding which binaries to run (in this case, $PATH),
>>>>> > so I wonder if there is a particular reason why that path has been
>>>>> > hardcoded to an absolute path instead of something resolvable via
>>>>> > $PATH.
>>>>> >
>>>>> > Or in other words, is this a bug or a feature?
>>>>> >
>>>>> > Best
>>>>> >
>>>>> > --
>>>>> > Samuel
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Samuel
>>
>>
>>
>> --
>> Samuel
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
>For additional commands, e-mail: user-help@hadoop.apache.org
>
>


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
Then it is enough :)

Regards,
LLoyd

On 11 December 2015 at 23:27, Samuel <sa...@gmail.com> wrote:
>> My question was, which spark command are you using, and since you
>> already did the analysis, which function of Shell.java is this spark
>> code using?
>
> Sorry, I misunderstood you. It was something using RawLocalFileSystem
> to load parquet files. The problem seemed to go away after I upgraded
> to Spark 1.5.2 (it was there in Spark 1.4.0).
>
> I don't have the code with me now, but if the information is useful in
> any way I can provide an example later.
>
> Otherwise, there seems to be a ticket already to reimplement that code
> path, so that is good enough for me.
>
> Best
>
>>
>> Regards,
>> LLoyd
>>
>> On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> > I am not using hadoop-util directly, it is Spark code that uses it
>> > (i.e. not directly under my control).
>> >
>> > Regarding ls, for my particular use case it is fine if you use "ls"
>> > instead of "/bin/ls".
>> >
>> > However, I do agree that using ls to fetch file permissions is
>> > incorrect, so a better solution (in terms of code quality) would be
>> > not to use ls at all.
>> >
>> >
>> > On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>> >> So what you ultimately need is a piece of java code listing the rwx
>> >> permissions for user, group and others that is not using ls
>> >> internally, is that correct?
>> >> If "RawLocalFileSystem" is not HDFS, do you really need to use
>> >> hadoop-util for that?
>> >> Can you tell us more about your use case?
>> >>
>> >> Regards,
>> >> LLoyd
>> >>
>> >>
>> >>
>> >>
>> >> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>> >>>> Using ls to figure out permissions is a bad design anyway, so I would
>> >>>> not be surprised if this hardcode was reported as a bug.
>> >>>
>> >>> Of course, I have no idea why it was implemented like this. I assume
>> >>> it was written at some point in time when Java didn't provide the
>> >>> needed APIs (?)
>> >>>
>> >>> Implementing the permission check without relying on ls at all is also
>> >>> a solution for the problem I have :)
>> >>>
>> >>>> LLoyd
>> >>>>
>> >>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>> >>>> > Hi,
>> >>>> >
>> >>>> > I am experiencing some crashes when using Spark over local files (mainly for
>> >>>> > testing). Some operations fail with
>> >>>> >
>> >>>> > java.lang.RuntimeException: Error while running command to get file
>> >>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>> >>>> > such file or directory
>> >>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>> >>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>> >>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>> >>>> >         at
>> >>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>> >>>> >         at
>> >>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>> >>>> >
>> >>>> > etcetera...
>> >>>> >
>> >>>> > This seems to be related to Shell.java in org.apache.hadoop.util, which uses
>> >>>> > ls -ld to figure out file permissions (in
>> >>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead of just
>> >>>> > calling ls, Shell.java calls /bin/ls, which is usually available, but in
>> >>>> > certain circumstances might not be. Regardless of the reasons not to have ls in
>> >>>> > /bin, hardcoding the directory prevents users from using the standard mechanisms
>> >>>> > to decide which binaries to run in their systems (in this case, $PATH), so I
>> >>>> > wonder if there is a particular reason why that path has been hardcoded to
>> >>>> > an absolute path instead of something resolvable via $PATH.
>> >>>> >
>> >>>> > Or in other words, is this a bug or a feature?
>> >>>> >
>> >>>> > Best
>> >>>> >
>> >>>> > --
>> >>>> > Samuel
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Samuel
>> >
>> >
>> >
>> > --
>> > Samuel
>
>
>
>
> --
> Samuel

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
Then it is enough :)

Regards,
LLoyd

On 11 December 2015 at 23:27, Samuel <sa...@gmail.com> wrote:
>> My question was, which spark command are you using, and since you
>> already did the analysis, which function of Shell.java is this spark
>> code using?
>
> Sorry, I misunderstood you. It was something using RawLocalFileSystem
> to load parquet files. The problem seemed to go away after I upgraded
> to spark 1.5.2 (that was in spark 1.4.0).
>
> I don't have the code with me now, but if the information is useful in
> any way I can provide an example later.
>
> Otherwise, there seems to be a ticket already to reimplement that code
> path, so that is good enough for me.
>
> Best
>
>>
>> Regards,
>> LLoyd
>>
>> On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> > I am not using hadoop-util directly, it is Spark code what uses it
>> > (i,e. not directly under my control).
>> >
>> > Regarding ls, for my particular use case it is fine if you use "ls"
>> > instead of "/bin/ls".
>> >
>> > However, I do agree that using ls to fetch file permissions is
>> > incorrect, so a better solution (in terms of code quality) would be
>> > not to use ls at all.
>> >
>> >
>> > On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>> >> So what you ultimately need is a piece of java code listing the rwx
>> >> permissions for user, group and others that is not using ls
>> >> internally, is that correct?
>> >> If "RawLocalFileSystem" is not HDFS, do you really need to use
>> >> hadoop-util for that?
>> >> Can you tell us more about your use case?
>> >>
>> >> Regards,
>> >> LLoyd
>> >>
>> >>
>> >>
>> >>
>> >> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>> >>>> Using ls to figure out permissions is a bad design anyway, so I would
>> >>>> not be surprised if this hardcode was reported as a bug.
>> >>>
>> >>> Of course, I have no idea why it was implemented like this. I assume
>> >>> it was written at some point in time where Java didn't provide the
>> >>> needed APIS (?)
>> >>>
>> >>> Implementing the permission check without relying in ls at all is also
>> >>> a solution for the problem I have :)
>> >>>
>> >>>> LLoyd
>> >>>>
>> >>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>> >>>> > Hi,
>> >>>> >
>> >>>> > I am experiencing some crashes when using spark over local files (mainly for
>> >>>> > testing). Some operations fail with
>> >>>> >
>> >>>> > java.lang.RuntimeException: Error while running command to get file
>> >>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>> >>>> > such file or directory
>> >>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>> >>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>> >>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>> >>>> >         at
>> >>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>> >>>> >         at
>> >>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>> >>>> >
>> >>>> > etcetera...
>> >>>> >
>> >>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, that uses
>> >>>> > ls -ld to figure out file permissions (that is in
>> >>>> > RawLocalFileSystem.loadPermissionsInfo). The problem is that instead of just
>> >>>> > calling ls, Shell .java calls /bin/ls, which is usually available, but in
>> >>>> > certain circumstances might not. Regardless of the reasons not to have ls in
>> >>>> > /bin, hardcoding the directory bans users from using the standard mechanisms
>> >>>> > to decide which binaries to run in their systems (in this case, $PATH), so I
>> >>>> > wonder if there is a particular reason why that path has been hardcoded to
>> >>>> > an absolute path instead to something resolvable using$PATH.
>> >>>> >
>> >>>> > Or in other words, is this a bug or a feature?
>> >>>> >
>> >>>> > Best
>> >>>> >
>> >>>> > --
>> >>>> > Samuel
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Samuel
>> >
>> >
>> >
>> > --
>> > Samuel
>
>
>
>
> --
> Samuel

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
Then it is enough :)

Regards,
LLoyd

On 11 December 2015 at 23:27, Samuel <sa...@gmail.com> wrote:
>> My question was, which spark command are you using, and since you
>> already did the analysis, which function of Shell.java is this spark
>> code using?
>
> Sorry, I misunderstood you. It was something using RawLocalFileSystem
> to load parquet files. The problem seemed to go away after I upgraded
> to spark 1.5.2 (that was in spark 1.4.0).
>
> I don't have the code with me now, but if the information is useful in
> any way I can provide an example later.
>
> Otherwise, there seems to be a ticket already to reimplement that code
> path, so that is good enough for me.
>
> Best
>
>>
>> Regards,
>> LLoyd
>>
>> On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> > I am not using hadoop-util directly, it is Spark code what uses it
>> > (i,e. not directly under my control).
>> >
>> > Regarding ls, for my particular use case it is fine if you use "ls"
>> > instead of "/bin/ls".
>> >
>> > However, I do agree that using ls to fetch file permissions is
>> > incorrect, so a better solution (in terms of code quality) would be
>> > not to use ls at all.
>> >
>> >
>> > On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>> >> So what you ultimately need is a piece of java code listing the rwx
>> >> permissions for user, group and others that is not using ls
>> >> internally, is that correct?
>> >> If "RawLocalFileSystem" is not HDFS, do you really need to use
>> >> hadoop-util for that?
>> >> Can you tell us more about your use case?
>> >>
>> >> Regards,
>> >> LLoyd
>> >>
>> >>
>> >>
>> >>
>> >> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>> >>>> Using ls to figure out permissions is a bad design anyway, so I would
>> >>>> not be surprised if this hardcode was reported as a bug.
>> >>>
>> >>> Of course, I have no idea why it was implemented like this. I assume
>> >>> it was written at some point in time where Java didn't provide the
>> >>> needed APIS (?)
>> >>>
>> >>> Implementing the permission check without relying in ls at all is also
>> >>> a solution for the problem I have :)
>> >>>
>> >>>> LLoyd
>> >>>>
>> >>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>> >>>> > Hi,
>> >>>> >
>> >>>> > I am experiencing some crashes when using spark over local files (mainly for
>> >>>> > testing). Some operations fail with
>> >>>> >
>> >>>> > java.lang.RuntimeException: Error while running command to get file
>> >>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>> >>>> > such file or directory
>> >>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>> >>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>> >>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>> >>>> >         at
>> >>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>> >>>> >         at
>> >>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>> >>>> >
>> >>>> > etcetera...
>> >>>> >
>> >>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, that uses
>> >>>> > ls -ld to figure out file permissions (that is in
>> >>>> > RawLocalFileSystem.loadPermissionsInfo). The problem is that instead of just
>> >>>> > calling ls, Shell .java calls /bin/ls, which is usually available, but in
>> >>>> > certain circumstances might not. Regardless of the reasons not to have ls in
>> >>>> > /bin, hardcoding the directory bans users from using the standard mechanisms
>> >>>> > to decide which binaries to run in their systems (in this case, $PATH), so I
>> >>>> > wonder if there is a particular reason why that path has been hardcoded to
>> >>>> > an absolute path instead to something resolvable using$PATH.
>> >>>> >
>> >>>> > Or in other words, is this a bug or a feature?
>> >>>> >
>> >>>> > Best
>> >>>> >
>> >>>> > --
>> >>>> > Samuel
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Samuel
>> >
>> >
>> >
>> > --
>> > Samuel
>
>
>
>
> --
> Samuel

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
Then it is enough :)

Regards,
LLoyd

On 11 December 2015 at 23:27, Samuel <sa...@gmail.com> wrote:
>> My question was, which spark command are you using, and since you
>> already did the analysis, which function of Shell.java is this spark
>> code using?
>
> Sorry, I misunderstood you. It was something using RawLocalFileSystem
> to load parquet files. The problem seemed to go away after I upgraded
> to spark 1.5.2 (that was in spark 1.4.0).
>
> I don't have the code with me now, but if the information is useful in
> any way I can provide an example later.
>
> Otherwise, there seems to be a ticket already to reimplement that code
> path, so that is good enough for me.
>
> Best
>
>>
>> Regards,
>> LLoyd
>>
>> On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> > I am not using hadoop-util directly, it is Spark code what uses it
>> > (i,e. not directly under my control).
>> >
>> > Regarding ls, for my particular use case it is fine if you use "ls"
>> > instead of "/bin/ls".
>> >
>> > However, I do agree that using ls to fetch file permissions is
>> > incorrect, so a better solution (in terms of code quality) would be
>> > not to use ls at all.
>> >
>> >
>> > On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>> >> So what you ultimately need is a piece of java code listing the rwx
>> >> permissions for user, group and others that is not using ls
>> >> internally, is that correct?
>> >> If "RawLocalFileSystem" is not HDFS, do you really need to use
>> >> hadoop-util for that?
>> >> Can you tell us more about your use case?
>> >>
>> >> Regards,
>> >> LLoyd
>> >>
>> >>
>> >>
>> >>
>> >> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>> >>>> Using ls to figure out permissions is a bad design anyway, so I would
>> >>>> not be surprised if this hardcode was reported as a bug.
>> >>>
>> >>> Of course, I have no idea why it was implemented like this. I assume
>> >>> it was written at some point in time when Java didn't provide the
>> >>> needed APIs (?)
>> >>>
>> >>> Implementing the permission check without relying on ls at all is also
>> >>> a solution for the problem I have :)
>> >>>
>> >>>> LLoyd
>> >>>>
>> >>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>> >>>> > Hi,
>> >>>> >
>> >>>> > I am experiencing some crashes when using spark over local files (mainly for
>> >>>> > testing). Some operations fail with
>> >>>> >
>> >>>> > java.lang.RuntimeException: Error while running command to get file
>> >>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>> >>>> > such file or directory
>> >>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>> >>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>> >>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>> >>>> >         at
>> >>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>> >>>> >         at
>> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>> >>>> >         at
>> >>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>> >>>> >
>> >>>> > etcetera...
>> >>>> >
>> >>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, which uses
>> >>>> > ls -ld to figure out file permissions (that is in
>> >>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead of just
>> >>>> > calling ls, Shell.java calls /bin/ls, which is usually available, but in
>> >>>> > certain circumstances might not be. Regardless of the reasons not to have ls in
>> >>>> > /bin, hardcoding the directory prevents users from using the standard mechanisms
>> >>>> > to decide which binaries to run on their systems (in this case, $PATH), so I
>> >>>> > wonder if there is a particular reason why that path has been hardcoded to
>> >>>> > an absolute path instead of something resolvable using $PATH.
>> >>>> >
>> >>>> > Or in other words, is this a bug or a feature?
>> >>>> >
>> >>>> > Best
>> >>>> >
>> >>>> > --
>> >>>> > Samuel
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Samuel
>> >
>> >
>> >
>> > --
>> > Samuel
>
>
>
>
> --
> Samuel



Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Samuel <sa...@gmail.com>.
> My question was, which spark command are you using, and since you
> already did the analysis, which function of Shell.java is this spark
> code using?

Sorry, I misunderstood you. It was something using RawLocalFileSystem
to load parquet files. The problem seemed to go away after I upgraded
to Spark 1.5.2 (the problem occurred with Spark 1.4.0).

I don't have the code with me now, but if the information is useful in
any way I can provide an example later.

Otherwise, there seems to be a ticket already to reimplement that code
path, so that is good enough for me.

Best

>
> Regards,
> LLoyd
>
> On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
> > I am not using hadoop-util directly; it is Spark code that uses it
> > (i.e. not directly under my control).
> >
> > Regarding ls, for my particular use case it is fine if you use "ls"
> > instead of "/bin/ls".
> >
> > However, I do agree that using ls to fetch file permissions is
> > incorrect, so a better solution (in terms of code quality) would be
> > not to use ls at all.
> >
> >
> > On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
> >> So what you ultimately need is a piece of java code listing the rwx
> >> permissions for user, group and others that is not using ls
> >> internally, is that correct?
> >> If "RawLocalFileSystem" is not HDFS, do you really need to use
> >> hadoop-util for that?
> >> Can you tell us more about your use case?
> >>
> >> Regards,
> >> LLoyd
> >>
> >>
> >>
> >>
> >> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
> >>>> Using ls to figure out permissions is a bad design anyway, so I would
> >>>> not be surprised if this hardcoded path was reported as a bug.
> >>>
> >>> Of course, I have no idea why it was implemented like this. I assume
> >>> it was written at some point in time when Java didn't provide the
> >>> needed APIs (?)
> >>>
> >>> Implementing the permission check without relying on ls at all is also
> >>> a solution for the problem I have :)
> >>>
> >>>> LLoyd
> >>>>
> >>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
> >>>> > Hi,
> >>>> >
> >>>> > I am experiencing some crashes when using spark over local files (mainly for
> >>>> > testing). Some operations fail with
> >>>> >
> >>>> > java.lang.RuntimeException: Error while running command to get file
> >>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
> >>>> > such file or directory
> >>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> >>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
> >>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
> >>>> >         at
> >>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
> >>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
> >>>> >         at
> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
> >>>> >         at
> >>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
> >>>> >         at
> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
> >>>> >         at
> >>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
> >>>> >         at
> >>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
> >>>> >
> >>>> > etcetera...
> >>>> >
> >>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, which uses
> >>>> > ls -ld to figure out file permissions (that is in
> >>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead of just
> >>>> > calling ls, Shell.java calls /bin/ls, which is usually available, but in
> >>>> > certain circumstances might not be. Regardless of the reasons not to have ls in
> >>>> > /bin, hardcoding the directory prevents users from using the standard mechanisms
> >>>> > to decide which binaries to run on their systems (in this case, $PATH), so I
> >>>> > wonder if there is a particular reason why that path has been hardcoded to
> >>>> > an absolute path instead of something resolvable using $PATH.
> >>>> >
> >>>> > Or in other words, is this a bug or a feature?
> >>>> >
> >>>> > Best
> >>>> >
> >>>> > --
> >>>> > Samuel
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Samuel
> >
> >
> >
> > --
> > Samuel




-- 
Samuel

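For context, the ls-based code path discussed in this thread boils down to
roughly the following. This is a simplified illustration only, not Hadoop's
actual implementation: run ls -ld on the path and take the mode string from
the first column of the output.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class LsPermissionSketch {
        // Sketch of the ls-based approach: shell out to "ls -ld <path>" and
        // parse the mode string (e.g. "drwxr-xr-x") from the first column.
        static String permissionString(String path) throws IOException {
            // Note: "ls" is resolved through $PATH here, unlike the hardcoded
            // /bin/ls in Shell.java that this thread is about.
            Process p = new ProcessBuilder("ls", "-ld", path).start();
            try (BufferedReader r =
                     new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();                // "drwxr-xr-x 2 sam sam 4096 ..."
                return line.split("\\s+")[0].substring(1); // drop the file-type character
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(permissionString(args.length > 0 ? args[0] : "/tmp"));
        }
    }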


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hello Samuel,

Issue HADOOP-11935 tracks an improvement to re-implement this code path
using either JNI to OS syscalls or the JDK 7 java.nio.file APIs (probably
the latter).

https://issues.apache.org/jira/browse/HADOOP-11935


For right now, I don't see a viable workaround besides ensuring that the
command is accessible at /bin/ls on your system.

--Chris Nauroth




On 12/11/15, 8:05 AM, "Namikaze Minato" <ll...@gmail.com> wrote:

>My question was, which spark command are you using, and since you
>already did the analysis, which function of Shell.java is this spark
>code using?
>
>Regards,
>LLoyd
>
>On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
>> I am not using hadoop-util directly; it is Spark code that uses it
>> (i.e. not directly under my control).
>>
>> Regarding ls, for my particular use case it is fine if you use "ls"
>> instead of "/bin/ls".
>>
>> However, I do agree that using ls to fetch file permissions is
>> incorrect, so a better solution (in terms of code quality) would be
>> not to use ls at all.
>>
>>
>> On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>>> So what you ultimately need is a piece of java code listing the rwx
>>> permissions for user, group and others that is not using ls
>>> internally, is that correct?
>>> If "RawLocalFileSystem" is not HDFS, do you really need to use
>>> hadoop-util for that?
>>> Can you tell us more about your use case?
>>>
>>> Regards,
>>> LLoyd
>>>
>>>
>>>
>>>
>>> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>>>>> Using ls to figure out permissions is a bad design anyway, so I would
>>>>> not be surprised if this hardcoded path was reported as a bug.
>>>>
>>>> Of course, I have no idea why it was implemented like this. I assume
>>>> it was written at some point in time when Java didn't provide the
>>>> needed APIs (?)
>>>>
>>>> Implementing the permission check without relying on ls at all is also
>>>> a solution for the problem I have :)
>>>>
>>>>> LLoyd
>>>>>
>>>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I am experiencing some crashes when using spark over local files
>>>>> > (mainly for testing). Some operations fail with
>>>>> >
>>>>> > java.lang.RuntimeException: Error while running command to get file
>>>>> > permissions : java.io.IOException: Cannot run program "/bin/ls":
>>>>> > error=2, No such file or directory
>>>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>>>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>>> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>>>>> >         at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>>>>> >         at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>>>>> >
>>>>> > etcetera...
>>>>> >
>>>>> > Which seems to be related to Shell.java in org.apache.hadoop-util,
>>>>> > which uses ls -ld to figure out file permissions (that is in
>>>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead
>>>>> > of just calling ls, Shell.java calls /bin/ls, which is usually
>>>>> > available, but in certain circumstances might not be. Regardless of
>>>>> > the reasons not to have ls in /bin, hardcoding the directory prevents
>>>>> > users from using the standard mechanisms to decide which binaries to
>>>>> > run on their systems (in this case, $PATH), so I wonder if there is a
>>>>> > particular reason why that path has been hardcoded to an absolute
>>>>> > path instead of something resolvable using $PATH.
>>>>> >
>>>>> > Or in other words, is this a bug or a feature?
>>>>> >
>>>>> > Best
>>>>> >
>>>>> > --
>>>>> > Samuel
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Samuel
>>
>>
>>
>> --
>> Samuel
>


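For reference, the JDK 7 java.nio.file route mentioned above can list the
rwx bits for owner, group, and others without spawning ls at all. A minimal
sketch, assuming a POSIX file system (an illustration only, not the
eventual HADOOP-11935 patch):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    public class NioPermissionsSketch {
        public static void main(String[] args) throws Exception {
            Path path = Paths.get(args.length > 0 ? args[0] : "/tmp");
            // Reads the permission bits directly from the file system metadata;
            // no external process is involved.
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(path);
            // Renders the familiar "rwxr-xr-x" form.
            System.out.println(PosixFilePermissions.toString(perms));
        }
    }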


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
My question was, which spark command are you using, and since you
already did the analysis, which function of Shell.java is this spark
code using?

Regards,
LLoyd

On 11 December 2015 at 15:43, Samuel <sa...@gmail.com> wrote:
> I am not using hadoop-util directly; it is Spark code that uses it
> (i.e. not directly under my control).
>
> Regarding ls, for my particular use case it is fine if you use "ls"
> instead of "/bin/ls".
>
> However, I do agree that using ls to fetch file permissions is
> incorrect, so a better solution (in terms of code quality) would be
> not to use ls at all.
>
>
> On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
>> So what you ultimately need is a piece of java code listing the rwx
>> permissions for user, group and others that is not using ls
>> internally, is that correct?
>> If "RawLocalFileSystem" is not HDFS, do you really need to use
>> hadoop-util for that?
>> Can you tell us more about your use case?
>>
>> Regards,
>> LLoyd
>>
>>
>>
>>
>> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>>>> Using ls to figure out permissions is a bad design anyway, so I would
>>>> not be surprised if this hardcoded path was reported as a bug.
>>>
>>> Of course, I have no idea why it was implemented like this. I assume
>>> it was written at some point in time when Java didn't provide the
>>> needed APIs (?)
>>>
>>> Implementing the permission check without relying on ls at all is also
>>> a solution for the problem I have :)
>>>
>>>> LLoyd
>>>>
>>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>>>> > Hi,
>>>> >
>>>> > I am experiencing some crashes when using spark over local files (mainly for
>>>> > testing). Some operations fail with
>>>> >
>>>> > java.lang.RuntimeException: Error while running command to get file
>>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>>>> > such file or directory
>>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>> >         at
>>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>>> >         at
>>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>>> >         at
>>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>>>> >         at
>>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>>>> >         at
>>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>>>> >         at
>>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>>>> >
>>>> > etcetera...
>>>> >
>>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, which uses
>>>> > ls -ld to figure out file permissions (that is in
>>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead of just
>>>> > calling ls, Shell.java calls /bin/ls, which is usually available, but in
>>>> > certain circumstances might not be. Regardless of the reasons not to have ls in
>>>> > /bin, hardcoding the directory prevents users from using the standard mechanisms
>>>> > to decide which binaries to run on their systems (in this case, $PATH), so I
>>>> > wonder if there is a particular reason why that path has been hardcoded to
>>>> > an absolute path instead of something resolvable using $PATH.
>>>> >
>>>> > Or in other words, is this a bug or a feature?
>>>> >
>>>> > Best
>>>> >
>>>> > --
>>>> > Samuel
>>>
>>>
>>>
>>>
>>> --
>>> Samuel
>
>
>
> --
> Samuel



Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Samuel <sa...@gmail.com>.
I am not using hadoop-util directly; it is Spark code that uses it
(i.e. not directly under my control).

Regarding ls, for my particular use case it is fine if you use "ls"
instead of "/bin/ls".

However, I do agree that using ls to fetch file permissions is
incorrect, so a better solution (in terms of code quality) would be
not to use ls at all.


On 11 December 2015 at 14:56, Namikaze Minato <ll...@gmail.com> wrote:
> So what you ultimately need is a piece of java code listing the rwx
> permissions for user, group and others that is not using ls
> internally, is that correct?
> If "RawLocalFileSystem" is not HDFS, do you really need to use
> hadoop-util for that?
> Can you tell us more about your use case?
>
> Regards,
> LLoyd
>
>
>
>
> On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>>> Using ls to figure out permissions is a bad design anyway, so I would
>>> not be surprised if this hardcoded path was reported as a bug.
>>
>> Of course, I have no idea why it was implemented like this. I assume
>> it was written at some point in time when Java didn't provide the
>> needed APIs (?)
>>
>> Implementing the permission check without relying on ls at all is also
>> a solution for the problem I have :)
>>
>>> LLoyd
>>>
>>> On 11 December 2015 at 09:19, Samuel <sa...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> > I am experiencing some crashes when using spark over local files (mainly for
>>> > testing). Some operations fail with
>>> >
>>> > java.lang.RuntimeException: Error while running command to get file
>>> > permissions : java.io.IOException: Cannot run program "/bin/ls": error=2, No
>>> > such file or directory
>>> >         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>> >         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>> >         at
>>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>> >         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>> >         at
>>> > org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>> >         at
>>> > org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:51)
>>> >         at
>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:514)
>>> >         at
>>> > org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:489)
>>> >         at
>>> > org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$buildScan$1$$anon$1$$anonfun$12.apply(newParquet.scala:292)
>>> >
>>> > etcetera...
>>> >
>>> > Which seems to be related to Shell.java in org.apache.hadoop-util, which uses
>>> > ls -ld to figure out file permissions (that is in
>>> > RawLocalFileSystem.loadPermissionInfo). The problem is that instead of just
>>> > calling ls, Shell.java calls /bin/ls, which is usually available, but in
>>> > certain circumstances might not be. Regardless of the reasons not to have ls in
>>> > /bin, hardcoding the directory prevents users from using the standard mechanisms
>>> > to decide which binaries to run on their systems (in this case, $PATH), so I
>>> > wonder if there is a particular reason why that path has been hardcoded to
>>> > an absolute path instead of something resolvable using $PATH.
>>> >
>>> > Or in other words, is this a bug or a feature?
>>> >
>>> > Best
>>> >
>>> > --
>>> > Samuel
>>
>>
>>
>>
>> --
>> Samuel



-- 
Samuel



Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
So what you ultimately need is a piece of Java code that lists the rwx
permissions for user, group and others without using ls
internally, is that correct?
If "RawLocalFileSystem" is not HDFS, do you really need to use
hadoop-util for that?
Can you tell us more about your use case?

Regards,
LLoyd
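
A related note: for checking what the current process itself may do with a
file (which is not the same thing as the user/group/others bits that ls -ld
reports), the JDK has portable built-ins. A small sketch, class name
illustrative, that also works on non-POSIX file systems:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class AccessCheck {
        public static void main(String[] args) {
            Path path = Paths.get(args[0]);
            // Effective access checks for the running JVM; these do not
            // expose the mode bits, only what this process is allowed to do.
            System.out.println("readable:   " + Files.isReadable(path));
            System.out.println("writable:   " + Files.isWritable(path));
            System.out.println("executable: " + Files.isExecutable(path));
        }
    }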




On 11 December 2015 at 13:05, Samuel <sa...@gmail.com> wrote:
>> Using ls to figure out permissions is bad design anyway, so I would
>> not be surprised if this hardcoded path were reported as a bug.
>
> Of course, I have no idea why it was implemented like this. I assume
> it was written at some point in time when Java didn't provide the
> needed APIs (?)
>
> Implementing the permission check without relying on ls at all is also
> a solution for the problem I have :)

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Samuel <sa...@gmail.com>.
> Using ls to figure out permissions is bad design anyway, so I would
> not be surprised if this hardcoded path were reported as a bug.

Of course, I have no idea why it was implemented like this. I assume
it was written at some point in time when Java didn't provide the
needed APIs (?)

Implementing the permission check without relying on ls at all is also
a solution for the problem I have :)
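
The timing checks out: the relevant APIs only arrived with java.nio.file in
JDK 7 (NIO.2). As a sketch of what the check could look like today, assuming
a POSIX file system and with an illustrative class name, a single call
recovers the same owner/group/mode triple that parsing ls -ld output yields:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFileAttributes;
    import java.nio.file.attribute.PosixFilePermissions;

    public class StatWithoutLs {
        public static void main(String[] args) throws Exception {
            Path path = Paths.get(args[0]);
            // A single in-process stat-like call; throws
            // UnsupportedOperationException on non-POSIX file systems.
            PosixFileAttributes attrs =
                    Files.readAttributes(path, PosixFileAttributes.class);
            // The same owner, group and permissions that "ls -ld" encodes.
            System.out.println(attrs.owner().getName());
            System.out.println(attrs.group().getName());
            System.out.println(PosixFilePermissions.toString(attrs.permissions()));
        }
    }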


-- 
Samuel

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Dependency on absolute path /bin/ls in Shell.java

Posted by Namikaze Minato <ll...@gmail.com>.
Using ls to figure out permissions is bad design anyway, so I would
not be surprised if this hardcoded path were reported as a bug.

LLoyd
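
To make the failure mode concrete: a bare program name handed to
ProcessBuilder is resolved against $PATH, while an absolute path bypasses
that lookup entirely. A minimal sketch (illustrative code, not Hadoop's
actual implementation):

    import java.io.IOException;

    public class PathLookupDemo {
        public static void main(String[] args) throws Exception {
            // Resolved against $PATH: works wherever ls is installed,
            // be it /bin, /usr/bin, or anywhere else on the search path.
            new ProcessBuilder("ls", "-ld", "/tmp").inheritIO().start().waitFor();

            // Hardcoded absolute path: fails with error=2 ("No such file or
            // directory") on systems where ls does not live in /bin.
            try {
                new ProcessBuilder("/bin/ls", "-ld", "/tmp")
                        .inheritIO().start().waitFor();
            } catch (IOException e) {
                System.err.println(e.getMessage());
            }
        }
    }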


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org

