Posted to user@hadoop.apache.org by Oleg Zhurakousky <ol...@gmail.com> on 2012/10/05 15:19:32 UTC

Impersonating HDFS user

I am working on some samples where I want to write to HDFS running on
another machine (different OS, etc.). The identity of my client process is
just whatever my OS says it is (e.g., 'oleg'), hence:

08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from
oleg ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310
from oleg got value #2

But there is no 'oleg' user where Hadoop is running; instead there is
'hduser'.

Is there an equivalent of "RunAs" in Hadoop?

Thanks
Oleg

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Thank you guys, I got everything I needed working.

On Fri, Oct 5, 2012 at 3:29 PM, Chris Nauroth <cn...@hortonworks.com> wrote:

> BTW, additional details on impersonation are here, including information
> about a piece of configuration required to allow use of doAs.
>
> http://hadoop.apache.org/docs/r1.0.3/Secure_Impersonation.html
>
> Thank you,
> --Chris
>
>
> On Fri, Oct 5, 2012 at 7:42 AM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> sorry, clicked send too soon, but basically changing that did not produce
>> any result; still seeing the same message. So I guess my question is: what is
>> the property that is responsible for that?
>>
>> Thanks
>> Oleg
>>
>>
>> On Fri, Oct 5, 2012 at 10:40 AM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> Yes, I understand that, and I guess I am trying to find that 'right
>>> property'.
>>> I did find one reference to it in hdfs-default.xml:
>>>
>>> <name>dfs.datanode.address</name>
>>>
>>> <value>0.0.0.0:50010</value>
>>>
>>> so i changed that in my hdfs-site.xml to
>>>
>>> <name>dfs.datanode.address</name>
>>>
>>> <value>192.168.15.20:50010</value>
>>>
>>>
>>> But
>>>
>>>
>>> On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>>
>>>> Indeed, you are connecting to localhost and you said it was a remote
>>>> connection so I guess there is nothing there which is relevant for you.
>>>> The main idea is that you need to provide the configuration files. They
>>>> are read by default from the classpath. Any place where you have a
>>>> Configuration/JobConf you could also set up the right properties which
>>>> would be the location of the HDFS master (and mapred if you want to do
>>>> something about it).
>>>>
>>>> Regards
>>>>
>>>> Bertrand
>>>>
>>>>
>>>> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>
>>>>> So now I am past it and able to RunAs 'hduser', but when I attempt
>>>>> to read from FSDataInputStream I see this message in my console:
>>>>>
>>>>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to
>>>>> /127.0.0.1:50010, add to deadNodes and continue
>>>>> java.net.ConnectException: Connection refused
>>>>>
>>>>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>>>>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>>>>> live nodes contain current block. Will get new block locations from
>>>>> namenode and retry...
>>>>>
>>>>>
>>>>> I am obviously missing a configuration setting somewhere. . . any idea?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Oleg
>>>>>
>>>>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>>
>>>>>> After I clicked send I found the same link ;), but thank you anyway.
>>>>>>
>>>>>> Oleg
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> You might be looking for something like :
>>>>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>>>>
>>>>>>> see
>>>>>>>
>>>>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>>>>
>>>>>>> It is a JAAS wrapper for Hadoop.
>>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>> Bertrand
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>>>>
>>>>>>>> I am working on some samples where I want to write to HDFS running
>>>>>>>> on another machine (different OS etc.)
>>>>>>>> The identity of my client process is just whatever my OS says it
>>>>>>>> is (e.g., 'oleg') hence:
>>>>>>>>
>>>>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /
>>>>>>>> 192.168.15.20:54310 from oleg ipc.Client:803 - IPC Client (47)
>>>>>>>> connection to /192.168.15.20:54310 from oleg got value #2
>>>>>>>>
>>>>>>>> But there is no 'oleg' where the hadoop is running. Instead there
>>>>>>>> is 'hduser'.
>>>>>>>>
>>>>>>>> Is there a way or an equivalent of "RunAs" in hadoop?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>  Oleg
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Bertrand Dechoux
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Bertrand Dechoux
>>>>
>>>
>>>
>>
>
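
A minimal sketch of the UserGroupInformation doAs approach that this thread
converged on, assuming a non-secure (simple authentication) Hadoop 1.x
cluster: the namenode address comes from the logs above, while the class
name, file path, and file contents are made-up placeholders.

    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class WriteAsHduser {
        public static void main(String[] args) throws Exception {
            // Act as 'hduser' (the user that exists on the cluster side)
            // instead of the local OS user such as 'oleg'.
            UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hduser");
            ugi.doAs(new PrivilegedExceptionAction<Void>() {
                public Void run() throws Exception {
                    Configuration conf = new Configuration();
                    // Hadoop 1.x key for the HDFS master location.
                    conf.set("fs.default.name", "hdfs://192.168.15.20:54310");
                    FileSystem fs = FileSystem.get(conf);
                    // Placeholder path and contents.
                    FSDataOutputStream out = fs.create(new Path("/user/hduser/sample.txt"));
                    out.writeUTF("hello");
                    out.close();
                    return null;
                }
            });
        }
    }

Everything inside run() executes with 'hduser' as the effective HDFS user,
so the cluster-side permission checks see 'hduser' rather than the local OS
identity.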

Re: Impersonating HDFS user

Posted by Chris Nauroth <cn...@hortonworks.com>.
BTW, additional details on impersonation are here, including information
about a piece of configuration required to allow use of doAs.

http://hadoop.apache.org/docs/r1.0.3/Secure_Impersonation.html

Thank you,
--Chris

On Fri, Oct 5, 2012 at 7:42 AM, Oleg Zhurakousky <oleg.zhurakousky@gmail.com> wrote:

> sorry, clicked send too soon, but basically changing that did not produce
> any result; still seeing the same message. So I guess my question is: what is
> the property that is responsible for that?
>
> Thanks
> Oleg
>
>
> On Fri, Oct 5, 2012 at 10:40 AM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> Yes, I understand that, and I guess I am trying to find that 'right
>> property'.
>> I did find one reference to it in hdfs-default.xml:
>>
>> <name>dfs.datanode.address</name>
>>
>> <value>0.0.0.0:50010</value>
>>
>> so i changed that in my hdfs-site.xml to
>>
>> <name>dfs.datanode.address</name>
>>
>> <value>192.168.15.20:50010</value>
>>
>>
>> But
>>
>>
>> On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>
>>> Indeed, you are connecting to localhost and you said it was a remote
>>> connection so I guess there is nothing there which is relevant for you.
>>> The main idea is that you need to provide the configuration files. They
>>> are read by default from the classpath. Any place where you have a
>>> Configuration/JobConf you could also set up the right properties which
>>> would be the location of the HDFS master (and mapred if you want to do
>>> something about it).
>>>
>>> Regards
>>>
>>> Bertrand
>>>
>>>
>>> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
>>> oleg.zhurakousky@gmail.com> wrote:
>>>
>>>> So now I am past it and able to RunAs 'hduser', but when I attempt to
>>>> read from FSDataInputStream I see this message in my console:
>>>>
>>>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to
>>>> /127.0.0.1:50010, add to deadNodes and continue
>>>> java.net.ConnectException: Connection refused
>>>>
>>>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>>>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>>>> live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>>
>>>>
>>>> I am obviously missing a configuration setting somewhere. . . any idea?
>>>>
>>>> Thanks
>>>>
>>>> Oleg
>>>>
>>>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>
>>>>> After I clicked send I found the same link ;), but thank you anyway.
>>>>>
>>>>> Oleg
>>>>>
>>>>>
>>>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You might be looking for something like :
>>>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>>>
>>>>>> see
>>>>>>
>>>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>>>
>>>>>> It is a JAAS wrapper for Hadoop.
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Bertrand
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>>>
>>>>>>> I am working on some samples where I want to write to HDFS running
>>>>>>> on another machine (different OS etc.)
>>>>>>> The identity of my client process is just whatever my OS says it is
>>>>>>> (e.g., 'oleg') hence:
>>>>>>>
>>>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /
>>>>>>> 192.168.15.20:54310 from oleg ipc.Client:803 - IPC Client (47)
>>>>>>> connection to /192.168.15.20:54310 from oleg got value #2
>>>>>>>
>>>>>>> But there is no 'oleg' where the hadoop is running. Instead there is
>>>>>>> 'hduser'.
>>>>>>>
>>>>>>> Is there a way or an equivalent of "RunAs" in hadoop?
>>>>>>>
>>>>>>> Thanks
>>>>>>>  Oleg
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Bertrand Dechoux
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Bertrand Dechoux
>>>
>>
>>
>
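
For reference, the piece of configuration Chris mentions is a proxyuser
allowlist in core-site.xml on the namenode, which applies when a superuser
impersonates other users via UserGroupInformation.createProxyUser; a sketch
with placeholder names ('super', the hosts, and the group are assumptions,
not values from this thread):

    <!-- core-site.xml on the namenode -->
    <property>
      <name>hadoop.proxyuser.super.hosts</name>
      <!-- hosts from which the superuser 'super' may impersonate others -->
      <value>host1,host2</value>
    </property>
    <property>
      <name>hadoop.proxyuser.super.groups</name>
      <!-- groups whose members 'super' may impersonate -->
      <value>group1</value>
    </property>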

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Sorry, clicked send too soon, but basically changing that did not produce
any result; I am still seeing the same message. So I guess my question is:
what is the property that is responsible for that?

Thanks
Oleg

On Fri, Oct 5, 2012 at 10:40 AM, Oleg Zhurakousky <
oleg.zhurakousky@gmail.com> wrote:

> Yes, I understand that, and I guess I am trying to find that 'right
> property'.
> I did find one reference to it in hdfs-default.xml:
>
> <name>dfs.datanode.address</name>
>
> <value>0.0.0.0:50010</value>
>
> so i changed that in my hdfs-site.xml to
>
> <name>dfs.datanode.address</name>
>
> <value>192.168.15.20:50010</value>
>
>
> But
>
>
> On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>
>> Indeed, you are connecting to localhost and you said it was a remote
>> connection so I guess there is nothing there which is relevant for you.
>> The main idea is that you need to provide the configuration files. They
>> are read by default from the classpath. Any place where you have a
>> Configuration/JobConf you could also set up the right properties which
>> would be the location of the HDFS master (and mapred if you want to do
>> something about it).
>>
>> Regards
>>
>> Bertrand
>>
>>
>> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> So now I am past it and able to RunAs 'hduser', but when I attempt to
>>> read from FSDataInputStream I see this message in my console:
>>>
>>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to
>>> /127.0.0.1:50010, add to deadNodes and continue
>>> java.net.ConnectException: Connection refused
>>>
>>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>>> live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>>
>>>
>>> I am obviously missing a configuration setting somewhere. . . any idea?
>>>
>>> Thanks
>>>
>>> Oleg
>>>
>>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>>> oleg.zhurakousky@gmail.com> wrote:
>>>
>>>> After I clicked send I found the same link ;), but thank you anyway.
>>>>
>>>> Oleg
>>>>
>>>>
>>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> You might be looking for something like :
>>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>>
>>>>> see
>>>>>
>>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>>
>>>>> It is a JAAS wrapper for Hadoop.
>>>>>
>>>>> Regards
>>>>>
>>>>> Bertrand
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>>
>>>>>> I am working on some samples where I want to write to HDFS running on
>>>>>> another machine (different OS etc.)
>>>>>> The identity of my client process is just whatever my OS says it is
>>>>>> (e.g., 'oleg') hence:
>>>>>>
>>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310
>>>>>> from oleg ipc.Client:803 - IPC Client (47) connection to
>>>>>> /192.168.15.20:54310 from oleg got value #2
>>>>>>
>>>>>> But there is no 'oleg' where the hadoop is running. Instead there is
>>>>>> 'hduser'.
>>>>>>
>>>>>> Is there a way or an equivalent of "RunAs" in hadoop?
>>>>>>
>>>>>> Thanks
>>>>>>  Oleg
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Bertrand Dechoux
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Bertrand Dechoux
>>
>
>
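
The 'right property' being hunted here is the location of the HDFS master
that Bertrand describes, supplied either by putting the cluster's config
files on the client classpath or by setting it programmatically. A minimal
sketch for Hadoop 1.x (fs.default.name is the standard 1.x key; the class
name is a placeholder):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RemoteHdfs {
        // Returns a FileSystem bound to the remote cluster rather than
        // whatever defaults happen to be on the classpath.
        public static FileSystem connect() throws IOException {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://192.168.15.20:54310");
            return FileSystem.get(conf);
        }
    }

Note that the datanode address in the warning (127.0.0.1:50010) is handed
to the client by the namenode, so it most likely reflects how the datanode
registered on the cluster side rather than any client-side property.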

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Yes, I understand that, and I guess I am trying to find that 'right property'.
I did find one reference to it in hdfs-default.xml:

<name>dfs.datanode.address</name>

<value>0.0.0.0:50010</value>

so i changed that in my hdfs-site.xml to

<name>dfs.datanode.address</name>

<value>192.168.15.20:50010</value>


But


On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com> wrote:

> Indeed, you are connecting to localhost and you said it was a remote
> connection so I guess there is nothing there which is relevant for you.
> The main idea is that you need to provide the configuration files. They
> are read by default from the classpath. Any place where you have a
> Configuration/JobConf you could also set up the right properties which
> would be the location of the HDFS master (and mapred if you want to do
> something about it).
>
> Regards
>
> Bertrand
>
>
> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> So now I am past it and able to RunAs 'hduser', but when I attempt to
>> read from FSDataInputStream I see this message in my console:
>>
>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to
>> /127.0.0.1:50010, add to deadNodes and continue
>> java.net.ConnectException: Connection refused
>>
>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>> live nodes contain current block. Will get new block locations from
>> namenode and retry...
>>
>>
>> I am obviously missing a configuration setting somewhere. . . any idea?
>>
>> Thanks
>>
>> Oleg
>>
>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> After I clicked send I found the same link ;), but thank you anyway.
>>>
>>> Oleg
>>>
>>>
>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You might be looking for something like :
>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>
>>>> see
>>>>
>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>
>>>> It is a JAAS wrapper for Hadoop.
>>>>
>>>> Regards
>>>>
>>>> Bertrand
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>
>>>>> I am working on some samples where I want to write to HDFS running on
>>>>> another machine (different OS etc.)
>>>>> The identity of my client process is just whatever my OS says it is
>>>>> (e.g., 'oleg') hence:
>>>>>
>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310
>>>>> from oleg ipc.Client:803 - IPC Client (47) connection to
>>>>> /192.168.15.20:54310 from oleg got value #2
>>>>>
>>>>> But there is no 'oleg' where the hadoop is running. Instead there is
>>>>> 'hduser'.
>>>>>
>>>>> Is there a way or an equivalent of "RunAs" in hadoop?
>>>>>
>>>>> Thanks
>>>>>  Oleg
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Bertrand Dechoux
>>>>
>>>
>>>
>>
>
>
> --
> Bertrand Dechoux
>
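
As context for why changing this on the client did not help:
dfs.datanode.address is the address the datanode process binds to, so it
only takes effect in the hdfs-site.xml read by the datanode itself. In a
full config file the fragment would also carry its usual <property>
wrapper, e.g. (a sketch of the default):

    <!-- hdfs-site.xml on the datanode host, not on the client -->
    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:50010</value>
    </property>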

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Yes I understand that and I guess I am trying to find that 'right property'
I did find one reference to it in hdfs-defaul.xml

<name>dfs.datanode.address</name>

<value>0.0.0.0:50010</value>

so i changed that in my hdfs-site.xml to

<name>dfs.datanode.address</name>

<value>192.168.15.20:50010</value>


But


On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com>wrote:

> Indeed, you are connecting to localhost and you said it was a remote
> connection so I guess there is nothing there which is relevant for you.
> The main idea is that you need to provide the configuration files. They
> are read by default from the classpath. Any place where you have a
> Configuration/JobConf you could also set up the right properties which
> would be the location of the HDFS master (and mapred if you want to do
> something about it).
>
> Regards
>
> Bertrand
>
>
> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> So now I am passed it and able to RunAs 'hduser', but when I attempt to
>> read from FSDataInputStream i see this message in my console
>>
>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to /
>> 127.0.0.1:50010, add to deadNodes and continuejava.net.ConnectException:
>> Connection refused
>>
>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>> live nodes contain current block. Will get new block locations from
>> namenode and retry...
>>
>>
>> I am obviously missing a configuration setting somewhere. . . any idea?
>>
>> Thanks
>>
>> Oleg
>>
>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> After i clicked send I found the same link ;), but thank you anyway.
>>>
>>> Oleg
>>>
>>>
>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com>wrote:
>>>
>>>> Hi,
>>>>
>>>> You might be looking for something like :
>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>
>>>> see
>>>>
>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>
>>>> It is a JAAS wrapper for Hadoop.
>>>>
>>>> Regards
>>>>
>>>> Bertrand
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>
>>>>> I am working on some samples where I want to write to HDFS running on
>>>>> another machine (different OS etc.)
>>>>> The identity of my client process is just whatever my OS says it is
>>>>> (e.g., 'oleg') hence:
>>>>>
>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310from oleg ipc.Client:803 - IPC Client (47) connection to /
>>>>> 192.168.15.20:54310 from oleg got value #2
>>>>>
>>>>> But there is no 'oleg' where the hadoop is running. Instead there is
>>>>> 'hduser'.
>>>>>
>>>>> Is there a way or an equivalent of "RunAs" in hadoop?
>>>>>
>>>>> Thanks
>>>>>  Oleg
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Bertrand Dechoux
>>>>
>>>
>>>
>>
>
>
> --
> Bertrand Dechoux
>

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Yes I understand that and I guess I am trying to find that 'right property'
I did find one reference to it in hdfs-defaul.xml

<name>dfs.datanode.address</name>

<value>0.0.0.0:50010</value>

so i changed that in my hdfs-site.xml to

<name>dfs.datanode.address</name>

<value>192.168.15.20:50010</value>


But


On Fri, Oct 5, 2012 at 10:33 AM, Bertrand Dechoux <de...@gmail.com>wrote:

> Indeed, you are connecting to localhost and you said it was a remote
> connection so I guess there is nothing there which is relevant for you.
> The main idea is that you need to provide the configuration files. They
> are read by default from the classpath. Any place where you have a
> Configuration/JobConf you could also set up the right properties which
> would be the location of the HDFS master (and mapred if you want to do
> something about it).
>
> Regards
>
> Bertrand
>
>
> On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> So now I am passed it and able to RunAs 'hduser', but when I attempt to
>> read from FSDataInputStream i see this message in my console
>>
>> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to /
>> 127.0.0.1:50010, add to deadNodes and continuejava.net.ConnectException:
>> Connection refused
>>
>> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
>> blk_-4047236896256451627_1003 from any node: java.io.IOException: No
>> live nodes contain current block. Will get new block locations from
>> namenode and retry...
>>
>>
>> I am obviously missing a configuration setting somewhere. . . any idea?
>>
>> Thanks
>>
>> Oleg
>>
>> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> After i clicked send I found the same link ;), but thank you anyway.
>>>
>>> Oleg
>>>
>>>
>>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com>wrote:
>>>
>>>> Hi,
>>>>
>>>> You might be looking for something like :
>>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>>
>>>> see
>>>>
>>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>>
>>>> It is a JAAS wrapper for Hadoop.
>>>>
>>>> Regards
>>>>
>>>> Bertrand
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>>> oleg.zhurakousky@gmail.com> wrote:
>>>>
>>>>> I am working on some samples where I want to write to HDFS running on
>>>>> another machine (different OS etc.)
>>>>> The identity of my client process is just whatever my OS says it is
>>>>> (e.g., 'oleg') hence:
>>>>>
>>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from oleg
>>>>> ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310 from oleg got value #2
>>>>>
>>>>> But there is no 'oleg' where Hadoop is running. Instead there is
>>>>> 'hduser'.
>>>>>
>>>>> Is there a way or an equivalent of "RunAs" in Hadoop?
>>>>>
>>>>> Thanks
>>>>>  Oleg
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Bertrand Dechoux
>>>>
>>>
>>>
>>
>
>
> --
> Bertrand Dechoux
>

Re: Impersonating HDFS user

Posted by Bertrand Dechoux <de...@gmail.com>.
Indeed, you are connecting to localhost, and you said it was a remote
connection, so I guess nothing there is relevant for you.
The main idea is that you need to provide the configuration files; they are
read by default from the classpath. Anywhere you have a Configuration/JobConf
you can also set the right properties directly, such as the location of the
HDFS master (and the mapred master if you want to do something about
MapReduce as well); see the sketch below.
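
For example, a minimal sketch of that second option, with the master address
taken from your earlier log output (the class name and address are just for
illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RemoteHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Either put core-site.xml/hdfs-site.xml on the classpath, or set the
    // properties directly; fs.default.name names the HDFS master in Hadoop 1.x.
    conf.set("fs.default.name", "hdfs://192.168.15.20:54310");
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Connected to " + fs.getUri());
  }
}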

Regards

Bertrand

On Fri, Oct 5, 2012 at 4:15 PM, Oleg Zhurakousky <oleg.zhurakousky@gmail.com
> wrote:

> So now I am past it and able to RunAs 'hduser', but when I attempt to
> read from FSDataInputStream I see this message in my console:
>
> 10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to /127.0.0.1:50010,
> add to deadNodes and continue
> java.net.ConnectException: Connection refused
>
> 10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
> blk_-4047236896256451627_1003 from any node: java.io.IOException: No live
> nodes contain current block. Will get new block locations from namenode and
> retry...
>
>
> I am obviously missing a configuration setting somewhere. . . any idea?
>
> Thanks
>
> Oleg
>
> On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> After I clicked send I found the same link ;), but thank you anyway.
>>
>> Oleg
>>
>>
>> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> You might be looking for something like :
>>> UserGroupInformation.createRemoteUser(user).doAs(
>>>
>>> see
>>>
>>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>>
>>> It is a JAAS wrapper for Hadoop.
>>>
>>> Regards
>>>
>>> Bertrand
>>>
>>>
>>>
>>>
>>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>>> oleg.zhurakousky@gmail.com> wrote:
>>>
>>>> I am working on some samples where I want to write to HDFS running on
>>>> another machine (different OS etc.)
>>>> The identity of my client process is just whatever my OS says it is
>>>> (e.g., 'oleg') hence:
>>>>
>>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from oleg
>>>> ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310 from oleg got value #2
>>>>
>>>> But there is no 'oleg' where Hadoop is running. Instead there is
>>>> 'hduser'.
>>>>
>>>> Is there a way or an equivalent of "RunAs" in Hadoop?
>>>>
>>>> Thanks
>>>>  Oleg
>>>>
>>>
>>>
>>>
>>> --
>>> Bertrand Dechoux
>>>
>>
>>
>


-- 
Bertrand Dechoux

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
So now I am past it and able to RunAs 'hduser', but when I attempt to
read from FSDataInputStream I see this message in my console:

10:12:10,065  WARN main hdfs.DFSClient:2106 - Failed to connect to /127.0.0.1:50010,
add to deadNodes and continue
java.net.ConnectException: Connection refused

10:12:10,072  INFO main hdfs.DFSClient:2272 - Could not obtain block
blk_-4047236896256451627_1003 from any node: java.io.IOException: No live
nodes contain current block. Will get new block locations from namenode and
retry...


I am obviously missing a configuration setting somewhere... any idea?

Thanks

Oleg

On Fri, Oct 5, 2012 at 9:37 AM, Oleg Zhurakousky <oleg.zhurakousky@gmail.com
> wrote:

> After I clicked send I found the same link ;), but thank you anyway.
>
> Oleg
>
>
> On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:
>
>> Hi,
>>
>> You might be looking for something like :
>> UserGroupInformation.createRemoteUser(user).doAs(
>>
>> see
>>
>> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>>
>> It is a JAAS wrapper for Hadoop.
>>
>> Regards
>>
>> Bertrand
>>
>>
>>
>>
>> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
>> oleg.zhurakousky@gmail.com> wrote:
>>
>>> I am working on some samples where I want to write to HDFS running on
>>> another machine (different OS etc.)
>>> The identity of my client process is just whatever my OS says it is
>>> (e.g., 'oleg') hence:
>>>
>>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from oleg
>>> ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310 from oleg got value #2
>>>
>>> But there is no 'oleg' where Hadoop is running. Instead there is
>>> 'hduser'.
>>>
>>> Is there a way or an equivalent of "RunAs" in Hadoop?
>>>
>>> Thanks
>>>  Oleg
>>>
>>
>>
>>
>> --
>> Bertrand Dechoux
>>
>
>

Re: Impersonating HDFS user

Posted by Oleg Zhurakousky <ol...@gmail.com>.
After I clicked send I found the same link ;), but thank you anyway.

Oleg

On Fri, Oct 5, 2012 at 9:34 AM, Bertrand Dechoux <de...@gmail.com> wrote:

> Hi,
>
> You might be looking for something like :
> UserGroupInformation.createRemoteUser(user).doAs(
>
> see
>
> http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html
>
> It is a JAAS wrapper for Hadoop.
>
> Regards
>
> Bertrand
>
>
>
>
> On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <
> oleg.zhurakousky@gmail.com> wrote:
>
>> I am working on some samples where I want to write to HDFS running on
>> another machine (different OS etc.)
>> The identity of my client process is just whatever my OS says it is
>> (e.g., 'oleg') hence:
>>
>> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from oleg
>> ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310 from oleg got value #2
>>
>> But there is no 'oleg' where Hadoop is running. Instead there is
>> 'hduser'.
>>
>> Is there a way or an equivalent of "RunAs" in Hadoop?
>>
>> Thanks
>>  Oleg
>>
>
>
>
> --
> Bertrand Dechoux
>

Re: Impersonating HDFS user

Posted by Bertrand Dechoux <de...@gmail.com>.
Hi,

You might be looking for something like:
UserGroupInformation.createRemoteUser(user).doAs(...)

see
http://hadoop.apache.org/docs/r1.0.3/api/org/apache/hadoop/security/UserGroupInformation.html

It is a JAAS wrapper for Hadoop.
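
A fuller sketch of the call, assuming Hadoop 1.x; the user name, master
address, and path below are made up for illustration:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class RunAsExample {
  public static void main(String[] args) throws Exception {
    // Build a UGI for the remote user and run the file-system work inside
    // doAs so that Hadoop sees that identity instead of the local OS user.
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hduser");
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      public Void run() throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://192.168.15.20:54310");
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/user/hduser/sample.txt"));
        out.writeUTF("hello from a remote client");
        out.close();
        return null;
      }
    });
  }
}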

Regards

Bertrand



On Fri, Oct 5, 2012 at 3:19 PM, Oleg Zhurakousky <oleg.zhurakousky@gmail.com
> wrote:

> I am working on some samples where I want to write to HDFS running on
> another machine (different OS etc.)
> The identity of my client process is just whatever my OS says it is
> (e.g., 'oleg') hence:
>
> 08:56:49,240 DEBUG IPC Client (47) connection to /192.168.15.20:54310 from oleg
> ipc.Client:803 - IPC Client (47) connection to /192.168.15.20:54310 from oleg got value #2
>
> But there is no 'oleg' where Hadoop is running. Instead there is
> 'hduser'.
>
> Is there a way or an equivalent of "RunAs" in Hadoop?
>
> Thanks
> Oleg
>



-- 
Bertrand Dechoux
