Posted to common-user@hadoop.apache.org by Chris K Wensel <ch...@wensel.net> on 2008/06/02 19:14:35 UTC

Re: hadoop on EC2

if you use the new scripts in 0.17.0, just run

 > hadoop-ec2 proxy <cluster-name>

this starts an ssh tunnel to your cluster.
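
if you are not on 0.17.0, the same thing can be set up by hand with a
dynamic (SOCKS) forward to the master; roughly (user, local port and
hostname below are just placeholders):

 > ssh -D 6666 root@ec2-XX-XX-XX-XX.compute-1.amazonaws.com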

installing FoxyProxy in Firefox then gives you visibility into the whole cluster.

obviously this isn't the best solution if you need to let many
semi-trusted users browse your cluster.

On May 28, 2008, at 1:22 PM, Andreas Kostyrka wrote:

> Hi!
>
> I just wondered what other people use to access the Hadoop web servers
> when running on EC2?
>
> Ideas that I had:
> 1.) opening ports 50030 and so on => not good, data goes unprotected
> over the internet. Even if I could enable some form of authentication,
> it would still be plain HTTP.
>
> 2.) Some kind of tunneling solution. The problem there is that each of
> my cluster nodes is in a different subnet, plus there is the dualism
> between the internal and external addresses of the nodes.
>
> Any hints? TIA,
>
> Andreas

Chris K Wensel
chris@wensel.net
http://chris.wensel.net/
http://www.cascading.org/





Re: hadoop on EC2

Posted by Chris K Wensel <ch...@wensel.net>.
These are the FoxyProxy wildcards I use

*compute-1.amazonaws.com*
*.ec2.internal*
*.compute-1.internal*

and with Hadoop 0.17.0, just type (after booting your cluster)

hadoop-ec2 proxy <cluster-name>

to start the tunnel for that cluster
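
the wildcards are just there to catch the cluster web UIs, e.g. (hostname
below is a placeholder, the ports are the stock Hadoop ones):

http://domU-12-31-XX-XX-XX-XX.compute-1.internal:50030/   (JobTracker)
http://domU-12-31-XX-XX-XX-XX.compute-1.internal:50070/   (NameNode)

anything matching the patterns goes through the SOCKS tunnel, everything
else goes out directly.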

On Jun 3, 2008, at 11:26 PM, James Moore wrote:

> On Tue, Jun 3, 2008 at 5:04 PM, Andreas Kostyrka  
> <an...@kostyrka.org> wrote:
> >> Plus to make it even more painful, you cannot easily run it with one
> >> simple SOCKS server, because you need to defer DNS resolution to
> >> inside the cluster: the VM names resolve to external IPs, while the
> >> web servers we'd all be interested in reside on the internal 10/8 IPs.
>
> It's easy with FoxyProxy.
>
> Run ssh with the -D option:
>
> ssh -D 2324 ec2-75-101-XXX-XX.compute-1.amazonaws.com
>
> Tell FoxyProxy to "use SOCKS proxy for DNS lookups" (Tools > FoxyProxy >
> More > Global Settings > Use SOCKS proxy for DNS lookups).
>
> Configure FoxyProxy with rules for when to use local port 2324.  Use
> wildcards like http*ec2*internal*.  I put a screenshot on my blog -
> http://blog.restphone.com/2008/6/4/foxyproxy-hadoop-and-socks
>
> All the features I cared about worked when set up this way.
>
> (And of course the choice of 2324 isn't special - use any port you  
> like.)
> -- 
> James Moore | james@restphone.com
> Ruby and Ruby on Rails consulting
> blog.restphone.com

Chris K Wensel
chris@wensel.net
http://chris.wensel.net/
http://www.cascading.org/





Re: hadoop on EC2

Posted by James Moore <ja...@gmail.com>.
On Tue, Jun 3, 2008 at 5:04 PM, Andreas Kostyrka <an...@kostyrka.org> wrote:
> Plus to make it even more painful, you cannot easily run it with one simple
> SOCKS server, because you need to defer DNS resolution to inside the
> cluster: the VM names resolve to external IPs, while the web servers we'd
> all be interested in reside on the internal 10/8 IPs.

It's easy with FoxyProxy.

Run ssh with the -D option:

ssh -D 2324 ec2-75-101-XXX-XX.compute-1.amazonaws.com

Tell FoxyProxy to "use SOCKS proxy for DNS lookups" (Tools > FoxyProxy >
More > Global Settings > Use SOCKS proxy for DNS lookups).

Configure FoxyProxy with rules for when to use local port 2324.  Use
wildcards like http*ec2*internal*.  I put a screenshot on my blog -
http://blog.restphone.com/2008/6/4/foxyproxy-hadoop-and-socks

All the features I cared about worked when set up this way.

(And of course the choice of 2324 isn't special - use any port you like.)
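
A quick way to sanity-check the tunnel without a browser (the internal
hostname below is just an example):

curl --socks5-hostname localhost:2324 http://ip-10-251-XX-XX.ec2.internal:50030/

--socks5-hostname makes curl resolve the name on the far side of the
tunnel, which is the same thing the FoxyProxy DNS setting does.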
-- 
James Moore | james@restphone.com
Ruby and Ruby on Rails consulting
blog.restphone.com

Re: hadoop on EC2

Posted by Steve Loughran <st...@apache.org>.
Andreas Kostyrka wrote:
> Well, the basic "trouble" with EC2 is that clusters usually are not networks 
> in the TCP/IP sense.
> 
> This makes it painful to decide which URLs should be resolved where.
> 
> Plus to make it even more painful, you cannot easily run it with one simple
> SOCKS server, because you need to defer DNS resolution to inside the
> cluster: the VM names resolve to external IPs, while the web servers we'd
> all be interested in reside on the internal 10/8 IPs.
> 
> Another fun item is that in many situations you will have multiple islands
> inside EC2 (the contractor working for multiple customers that have EC2
> deployments comes to mind), so you cannot just route everything over one
> pipe into EC2.
> 
> My current setup relies on a very long list of -L ssh tunnel forwards plus
> iptables rules in the nat OUTPUT chain that make external-ip-of-vm1:50030
> get redirected to localhost:SOMEPORT, which is forwarded to
> name-of-vm1:50030 via ssh. (Implementation left as an exercise for the
> reader, or my ugly non-error-checking script is available on request :-P)
> 
> If one wanted a more generic solution to redirect TCP ports via an ssh
> SOCKS tunnel (aka "dynamic port forwarding"), the following components
> would be needed:
> 
> -) a list of rules for what gets forwarded where and how.
> -) a DNS resolver that issues fake IP addresses to capture the "name" of
> the connected host.
> -) a small forwarding script that checks the "real destination IP" to
> decide which IP address/port is being requested. (Hint: with current Linux
> kernels getsockname no longer gives you the pre-NAT destination; the real
> destination is carried as a socket option (SO_ORIGINAL_DST) these days.)
> 
> One of the uglier parts for which I have found no "real" solution is the
> fact that one cannot be sure that ssh will be able to listen on a given
> port.
> 
> Solutions I've found include:
> -) checking the port before issuing ssh (race condition warning: going
> through this hole, the whole Federation star fleet could get lost).
> -) using some kind of expect script to drive ssh through a pty.
> -) rolling your own ssh tunnel solution. The only lib that comes to mind
> is Twisted, in which case one could skip the SOCKS protocol altogether.
> 
> But luckily for us, the solution is easier, because we only need to tunnel
> HTTP in the Hadoop case, which has the big benefit that we do not need to
> capture the hostname, because HTTP carries the hostname inside the request
> itself.

Do you worry about, or address, the risk of someone like me bringing up a
machine in the EC2 farm that then port-scans all the near neighbours in the
address space for open HDFS datanode/namenode ports and strikes up a
conversation with your filesystem?



-- 
Steve Loughran                  http://www.1060.org/blogxter/publish/5
Author: Ant in Action           http://antbook.org/

Re: hadoop on EC2

Posted by Andreas Kostyrka <an...@kostyrka.org>.
Well, the basic "trouble" with EC2 is that clusters usually are not networks 
in the TCP/IP sense.

This makes it painful to decide which URLs should be resolved where.

Plus to make it even more painful, you cannot easily run it with one simple 
SOCKS server, because you need to defer DNS resolution to inside the 
cluster: the VM names resolve to external IPs, while the web servers we'd 
all be interested in reside on the internal 10/8 IPs.

Another fun item is that in many situations you will have multiple islands 
inside EC2 (the contractor working for multiple customers that have EC2 
deployments comes to mind), so you cannot just route everything over one 
pipe into EC2.

My current setup relies on a very long list of -L ssh tunnel forwards plus 
iptables rules in the nat OUTPUT chain that make external-ip-of-vm1:50030 
get redirected to localhost:SOMEPORT, which is forwarded to 
name-of-vm1:50030 via ssh. (Implementation left as an exercise for the 
reader, or my ugly non-error-checking script is available on request :-P)
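
Per forwarded port that boils down to something like this (names/IPs and 
the local port are placeholders):

# forward a local port to the jobtracker UI on vm1, via one reachable node
ssh -f -N -L 10030:name-of-vm1:50030 root@external-name-of-gateway-node

# and divert traffic aimed at vm1's external IP to that local forward
iptables -t nat -A OUTPUT -p tcp -d external-ip-of-vm1 --dport 50030 \
    -j REDIRECT --to-ports 10030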

If one wanted a more generic solution to redirect TCP ports via an ssh 
SOCKS tunnel (aka "dynamic port forwarding"), the following components 
would be needed:

-) a list of rules for what gets forwarded where and how.
-) a DNS resolver that issues fake IP addresses to capture the "name" of the 
connected host.
-) a small forwarding script that checks the "real destination IP" to decide 
which IP address/port is being requested. (Hint: with current Linux kernels 
getsockname no longer gives you the pre-NAT destination; the real 
destination is carried as a socket option (SO_ORIGINAL_DST) these days.)

One of the uglier parts for which I have found no "real" solution is the 
fact that one cannot be sure that ssh will be able to listen on a given port.

Solutions I've found include (a sketch follows the list):
-) checking the port before issuing ssh (race condition warning: going 
through this hole, the whole Federation star fleet could get lost).
-) using some kind of expect script to drive ssh through a pty.
-) rolling your own ssh tunnel solution. The only lib that comes to mind is 
Twisted, in which case one could skip the SOCKS protocol altogether.
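
FWIW, a fourth variant that sidesteps the race: let ssh itself fail loudly 
when it cannot bind the port (ExitOnForwardFailure) and just retry on the 
next one. A rough, untested sketch (the host is a placeholder):

# try ports until ssh manages to bind one for the SOCKS forward
for port in 2324 2325 2326 2327; do
    if ssh -o ExitOnForwardFailure=yes -f -N -D $port root@gateway-host; then
        echo "SOCKS tunnel listening on localhost:$port"
        break
    fi
done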

But luckily for us, the solution is easier, because we only need to tunnel 
HTTP in the Hadoop case, which has the big benefit that we do not need to 
capture the hostname, because HTTP carries the hostname inside the request 
itself (the Host header).

Not tested, but the following should work (a rough sketch follows below):
1.) Set up a proxy on the cluster somewhere. Make it do auth (proxy auth 
might work too, but depending upon how one makes the browser access the 
proxy this might be a bad idea).
2.) Make the client access the proxy for the needed host/port combinations. 
FoxyProxy or similar extensions for Firefox come to mind, or some 
destination NAT rules on your packet firewall should do the trick.
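
A rough, untested sketch of the DNAT variant of 2.) (addresses are 
placeholders; the proxy on the cluster side would have to accept 
intercepted requests, e.g. squid with a "transparent" http_port):

# client side: anything aimed at the jobtracker/namenode UI ports goes to the proxy
iptables -t nat -A OUTPUT -p tcp --dport 50030 -j DNAT --to-destination proxy-node-external-ip:3128
iptables -t nat -A OUTPUT -p tcp --dport 50070 -j DNAT --to-destination proxy-node-external-ip:3128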

Andreas


On Monday 02 June 2008 20:27:53 Chris K Wensel wrote:
> > obviously this isn't the best solution if you need to let many
> > semi-trusted users browse your cluster.
>
> Actually, it would be much more secure if the tunnel service ran on a
> trusted server, letting your users connect remotely via SOCKS and then
> browse the cluster. These users wouldn't need any AWS keys, etc.
>
>
> Chris K Wensel
> chris@wensel.net
> http://chris.wensel.net/
> http://www.cascading.org/



Re: hadoop on EC2

Posted by Chris K Wensel <ch...@wensel.net>.
> obviously this isn't the best solution if you need to let many
> semi-trusted users browse your cluster.


Actually, it would be much more secure if the tunnel service ran on a 
trusted server, letting your users connect remotely via SOCKS and then 
browse the cluster. These users wouldn't need any AWS keys, etc.
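
E.g. (names are placeholders): on the trusted box run something like

ssh -g -D 1080 root@ec2-master-public-name.compute-1.amazonaws.com

and have the users point FoxyProxy at trusted-box:1080. The -g flag lets 
remote machines use the dynamic forward, so only the trusted box needs the 
EC2 key pair (and you would firewall who can reach that port, of course).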


Chris K Wensel
chris@wensel.net
http://chris.wensel.net/
http://www.cascading.org/