Posted to common-user@hadoop.apache.org by Usman Waheed <us...@opera.com> on 2009/04/29 10:34:43 UTC

Can I make a node just an HDFS client to put/get data into Hadoop

Hi All,

Is it possible to make a node just a Hadoop client, so that it can 
put/get files into HDFS but does not act as a namenode or datanode?
I already have a master node and 3 datanodes, but I need to execute 
puts/gets into Hadoop in parallel from more machines than just the 
master.

Thanks,
Usman

Re: Can I make a node just an HDFS client to put/get data into Hadoop

Posted by Usman Waheed <us...@opera.com>.
Correction: it is fs.default.name, not fs.dfs.name.
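For reference, the hadoop-site.xml entry on the client looks roughly 
like this ("master" and port 9000 are from our setup; adjust for yours):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>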
Thanks,
Usman
> Thanks Steve,
> We just conducted a quick test: we took a node running the same 
> version of Hadoop as the namenode and datanodes and changed 
> fs.dfs.name on that node to point to the master on port 9000. We did 
> a put/get and it worked. All our machines (potential clients we can 
> use) are on the same LAN.
> This will give us the ability to put a multitude of files into HDFS 
> quickly.
>> Usman Waheed wrote:
>>> Hi All,
>>>
>>> Is it possible to make a node just a Hadoop client, so that it can 
>>> put/get files into HDFS but does not act as a namenode or datanode?
>>> I already have a master node and 3 datanodes, but I need to execute 
>>> puts/gets into Hadoop in parallel from more machines than just the 
>>> master.
>>>
>>
>> Anything on the LAN can be a client of the filesystem; you just need 
>> appropriate Hadoop configuration files to talk to the namenode and 
>> job tracker. I don't know how well the (custom) IPC works over long 
>> distances, and you have to keep the versions in sync for everything 
>> to work reliably.


Re: Can I make a node just an HDFS client to put/get data into Hadoop

Posted by Usman Waheed <us...@opera.com>.
Thanks Steve,
We just conducted a quick test: we took a node running the same 
version of Hadoop as the namenode and datanodes and changed 
fs.dfs.name on that node to point to the master on port 9000. We did a 
put/get and it worked. All our machines (potential clients we can use) 
are on the same LAN.
This will give us the ability to put a multitude of files into HDFS 
quickly.
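For the record, the test was along these lines (the HDFS paths here 
are illustrative, not the exact ones we used):

  bin/hadoop fs -put /tmp/testfile /user/usman/testfile        # local -> HDFS
  bin/hadoop fs -get /user/usman/testfile /tmp/testfile.copy   # HDFS -> local

Since every client talks to the namenode independently, each extra 
machine can push files at the same time. A sketch of a bulk load from 
one client box:

  # upload a directory of files, several puts in flight at once
  for f in /data/incoming/*; do
    bin/hadoop fs -put "$f" /user/usman/incoming/ &
  done
  wait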
> Usman Waheed wrote:
>> Hi All,
>>
>> Is it possible to make a node just a Hadoop client, so that it can 
>> put/get files into HDFS but does not act as a namenode or datanode?
>> I already have a master node and 3 datanodes, but I need to execute 
>> puts/gets into Hadoop in parallel from more machines than just the 
>> master.
>>
>
> Anything on the LAN can be a client of the filesystem; you just need 
> appropriate Hadoop configuration files to talk to the namenode and 
> job tracker. I don't know how well the (custom) IPC works over long 
> distances, and you have to keep the versions in sync for everything 
> to work reliably.


Re: Can I make a node just an HDFS client to put/get data into Hadoop

Posted by Steve Loughran <st...@apache.org>.
Usman Waheed wrote:
> Hi All,
> 
> Is it possible to make a node just a Hadoop client, so that it can 
> put/get files into HDFS but does not act as a namenode or datanode?
> I already have a master node and 3 datanodes, but I need to execute 
> puts/gets into Hadoop in parallel from more machines than just the 
> master.
> 

Anything on the LAN can be a client of the filesystem; you just need 
appropriate Hadoop configuration files to talk to the namenode and job 
tracker. I don't know how well the (custom) IPC works over long 
distances, and you have to keep the versions in sync for everything to 
work reliably.
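In practice a client-only node is just a Hadoop installation of the 
same version as the cluster, with the cluster's configuration copied 
over and no daemons started. Roughly (the release number and paths 
below are assumptions, not something from this thread):

  # unpack the same Hadoop release the cluster runs
  tar xzf hadoop-0.19.1.tar.gz && cd hadoop-0.19.1
  # copy the cluster's site config so the client can find the namenode
  scp master:/path/to/hadoop/conf/hadoop-site.xml conf/
  # no start-dfs.sh or start-mapred.sh here -- this box is purely a client
  bin/hadoop fs -ls /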