Posted to common-user@hadoop.apache.org by daemeon reiydelle <da...@gmail.com> on 2015/02/02 08:09:31 UTC

Re: Multiple separate Hadoop clusters on same physical machines

Make virtualization an option. Federation will NOT solve your problems.



*.......*

*“Life should not be a journey to the grave with the intention of arriving
safely in a pretty and well preserved body, but rather to skid in broadside
in a cloud of smoke, thoroughly used up, totally worn out, and loudly
proclaiming “Wow! What a Ride!” - Hunter Thompson*

*Daemeon C.M. Reiydelle*
*USA (+1) 415.501.0198*
*London (+44) (0) 20 8144 9872*

On Mon, Jan 26, 2015 at 1:34 AM, Azuryy Yu <az...@gmail.com> wrote:

> Hi,
>
> I think the best way is to deploy HDFS Federation with Hadoop 2.x.
>
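
For context: HDFS Federation in Hadoop 2.x adds extra NameNodes, each serving
its own namespace, on top of a single shared pool of DataNodes; it does not
give each developer an isolated cluster. A minimal hdfs-site.xml sketch for a
two-namespace federation could look like the following (the nameservice IDs
ns1/ns2 and the host names are placeholders, not taken from this thread):

  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <!-- NameNode serving namespace ns1 (hypothetical host) -->
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>namenode1.example.com:50070</value>
  </property>
  <!-- NameNode serving namespace ns2 (hypothetical host) -->
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>namenode2.example.com:50070</value>
  </property>

Every DataNode registers with all of the federated NameNodes, so disk space,
block reports and node failures are still shared across the namespaces, which
is why federation by itself does not isolate one team's experiments from
another's.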
> On Mon, Jan 26, 2015 at 5:18 PM, Harun Reşit Zafer <
> harun.zafer@tubitak.gov.tr> wrote:
>
>> Hi everyone,
>>
>> We have set up and been playing with Hadoop 1.2.x and its friends (HBase,
>> Pig, Hive, etc.) on 7 physical servers. We want to test Hadoop (maybe
>> different versions) and its ecosystem on physical machines (virtualization
>> is not an option) from different perspectives.
>>
>> As a group of developers, we would like to work in parallel, with every
>> team member playing with his/her own cluster. However, we have a limited
>> number of servers (though they are powerful machines).
>>
>> So the question is: by changing port numbers, environment variables and
>> other configuration parameters, is it possible to set up several
>> independent clusters on the same physical machines? Are there any
>> constraints? What difficulties should we expect to face?
>>
>> Thanks in advance
>>
>> --
>> Harun Reşit Zafer
>> TÜBİTAK BİLGEM BTE
>> Cloud Computing and Big Data Analysis Systems Department
>> T +90 262 675 3268
>> W  http://www.hrzafer.com
>>
>>
>
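
As an illustration of what the question above is asking (not a configuration
proposed anywhere in this thread), a second, fully independent Hadoop 1.x
cluster can in principle share the same machines if it gets its own
configuration directory, its own ports and its own on-disk directories. A
sketch of the properties that would have to be overridden, with all hosts,
ports and paths chosen arbitrarily here:

  <!-- core-site.xml for the second cluster -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9001</value>  <!-- any port the first cluster does not use -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/clusterB/tmp</value>
  </property>

  <!-- hdfs-site.xml: move every HDFS daemon off the default ports and dirs -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/clusterB/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/clusterB/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:51070</value>  <!-- default 50070 -->
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:51010</value>  <!-- default 50010 -->
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:51075</value>  <!-- default 50075 -->
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:51020</value>  <!-- default 50020 -->
  </property>

  <!-- mapred-site.xml -->
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9002</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>0.0.0.0:51030</value>  <!-- default 50030 -->
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>0.0.0.0:51060</value>  <!-- default 50060 -->
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/data/clusterB/mapred</value>
  </property>

Each cluster would also need its own HADOOP_LOG_DIR and HADOOP_PID_DIR in
hadoop-env.sh (otherwise the start/stop scripts of one cluster will pick up
the other's PID files), ideally its own Unix user, and its daemons started
with HADOOP_CONF_DIR pointing at that cluster's configuration directory. The
main practical constraints are memory and disk contention, since every extra
cluster adds its own NameNode, DataNode, JobTracker and TaskTracker JVMs on
the same hosts.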