Posted to dev@bigtop.apache.org by Konstantin Boudnik <co...@apache.org> on 2014/07/21 20:33:35 UTC

Re: Starting Hadoop in Distributed Mode

Sorry for being a nag - did you install Bigtop 0.7.0?

Cc'ing dev@ list as well
  Cos

On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
> I activated the bigtop yum repository, and installed the required hadoop
> packages via yum. All of the computers in the cluster are running CentOS
> 6.5.
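> 
> Roughly this, if it helps (the exact repo URL is from memory, so treat
> it as an assumption rather than gospel):
> 
>   sudo wget -O /etc/yum.repos.d/bigtop.repo \
>       http://archive.apache.org/dist/bigtop/bigtop-0.7.0/repos/centos6/bigtop.repo
>   # package split per the Bigtop repo; trim to the roles of each node
>   sudo yum install -y hadoop hadoop-hdfs-namenode hadoop-hdfs-datanode \
>       hadoop-yarn-resourcemanager hadoop-yarn-nodemanager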
> 
> -David Fryer
> 
> 
> On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <co...@apache.org> wrote:
> 
> > I see that your daemon is trying to log to /usr/lib/hadoop/logs,
> > whereas Bigtop logs under /var/log, as the good-behavior rules for
> > Linux services require.
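> >
> > (On a stock Bigtop node you would expect to find them with, e.g.:
> >
> >   ls /var/log/hadoop-hdfs/
> >
> > rather than anywhere under /usr/lib.)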
> >
> > The way the namenode recognizes DNs isn't via the slaves file:
> > the DNs register with the NN over its RPC mechanism.
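> >
> > A quick way to see which DNs have actually registered is the stock
> > hdfs CLI, e.g.:
> >
> >   sudo -u hdfs hdfs dfsadmin -report   # lists live/dead datanodes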
> >
> > How did you install Hadoop? Using Bigtop packages or via a
> > different mechanism? The fact that you are seeing an error message
> > about cygwin not being found tells me that you are using derivative
> > bits, not pure Bigtop. Is this the case?
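> >
> > An easy check of where that hdfs script came from is a plain rpm
> > query:
> >
> >   rpm -qf /usr/lib/hadoop-hdfs/bin/hdfs   # prints the owning package
> >
> > On a pure Bigtop install it should name the hadoop-hdfs package.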
> >
> > Regards
> >   Cos
> >
> > On July 21, 2014 9:32:48 AM PDT, David Fryer <df...@gmail.com> wrote:
> > >When I tried starting hadoop using the init scripts provided, the
> > >master couldn't find any of the datanodes. It is my understanding
> > >that the masters file is optional, but the slaves file is required.
> > >The scripts that reference the slaves file are named in the plural
> > >(hadoop-daemons.sh instead of hadoop-daemon.sh). I tried modifying
> > >the init scripts to run hadoop-daemons.sh, and the script attempted
> > >to spawn processes on the slaves referenced in the slaves file, but
> > >that produced the error:
> > >Starting Hadoop namenode:                                  [  OK  ]
> > >slave2: starting namenode, logging to /usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
> > >master: starting namenode, logging to /usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
> > >slave3: starting namenode, logging to /usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
> > >slave1: starting namenode, logging to /usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
> > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
> > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
> > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
> > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
> > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
> > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
> > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
> > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
> > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > >
> > >-David Fryer
> > >
> > >
> > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <co...@apache.org>
> > >wrote:
> > >
> > >> Hi David.
> > >>
> > >> Slaves files are really optional, if I remember right. In Bigtop
> > >> we usually deploy Hadoop with the provided Puppet recipes, which
> > >> have been battle-hardened over the years :)
> > >>
> > >> Cos
> > >>
> > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
> > >> > Hi Bigtop!
> > >> >
> > >> > I'm working on trying to get hadoop running in distributed
> > >> > mode, but the init scripts don't seem to be referencing the
> > >> > slaves file in /etc/hadoop/conf. Has anyone encountered this
> > >> > before?
> > >> >
> > >> > Thanks,
> > >> > David Fryer
> > >>
> >
> >

Re: Starting Hadoop in Distributed Mode

Posted by David Fryer <df...@gmail.com>.
Hey Jay! Were you looking for me?

On Fri, Jan 4, 2019 at 2:04 PM Jay Vyas <ja...@gmail.com> wrote:

> Hahah, sorry. Meant to actually send this to someone on the list :)...
> Gmail app acting wonky.
>
> > On Jan 4, 2019, at 8:52 AM, Jay Vyas <ja...@gmail.com>
> wrote:
> >
> > Hey, free to chat?
> >
> >>> On Jul 22, 2014, at 12:04 PM, Konstantin Boudnik <co...@apache.org>
> wrote:
> >>>
> >>> On July 22, 2014 6:08:11 AM PDT, jay vyas <ja...@gmail.com>
> wrote:
> >>> No prob, Cos.
> >>>
> >>> Now I'm getting curious: what is our take on passwordless SSH for
> >>> the bigtop hadoop distro?
> >>>
> >>> I never set up passwordless ssh in bigtop that I remember (but
> >>> maybe I'm forgetting).
> >>>
> >>> I think that puppet apply on each slave is sufficient for a
> >>> cluster, because it starts the slaves for you and connects them to
> >>> the master - without need of master ssh'ing anywhere.
> >>
> >> Yup. Neither Hadoop nor Bigtop relies on password-less SSH, except
> >> when you need to configure HA, in which case puppet needs to SSH
> >> between the primary and standby NNs.
> >>
> >> Cos
> >>
> >>> On Tue, Jul 22, 2014 at 12:20 AM, Konstantin Boudnik <co...@apache.org>
> >>> wrote:
> >>>
> >>>> Well done - thank you very much Jay!
> >>>>
> >>>> Cos
> >>>>
> >>>>> On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
> >>>>> Okay ! Here you go.
> >>>>>
> >>>>>
> >>>>
> >>>
> https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
> >>>>>
> >>>>> I got it working from scratch in very short order.  I think
> >>>>> you've gone and done some manual stuff unnecessarily.
> >>>>>
> >>>>> Use puppet, it's good for your health :)
> >>>>
> >>
> >> --
> >> Regards,
> >> Cos
> >>
>
-- 

Sent from my mobile device.

Re: Starting Hadoop in Distributed Mode

Posted by Jay Vyas <ja...@gmail.com>.
Hahah, sorry. Meant to actually send this to someone on the list :)... Gmail app acting wonky.

> On Jan 4, 2019, at 8:52 AM, Jay Vyas <ja...@gmail.com> wrote:
> 
> Hey, free to chat?
> 
>>> On Jul 22, 2014, at 12:04 PM, Konstantin Boudnik <co...@apache.org> wrote:
>>> 
>>> On July 22, 2014 6:08:11 AM PDT, jay vyas <ja...@gmail.com> wrote:
>>> No prob, Cos.
>>>
>>> Now I'm getting curious: what is our take on passwordless SSH for
>>> the bigtop hadoop distro?
>>> 
>>> I never set up passwordless ssh in bigtop that I remember (but
>>> maybe I'm forgetting).
>>>
>>> I think that puppet apply on each slave is sufficient for a cluster,
>>> because it starts the slaves for you and connects them to the master -
>>> without need of master ssh'ing anywhere.
>> 
>> Yup. Neither Hadoop nor Bigtop relies on password-less SSH, except
>> when you need to configure HA, in which case puppet needs to SSH
>> between the primary and standby NNs.
>> 
>> Cos
>> 
>>> On Tue, Jul 22, 2014 at 12:20 AM, Konstantin Boudnik <co...@apache.org>
>>> wrote:
>>> 
>>>> Well done - thank you very much Jay!
>>>> 
>>>> Cos
>>>> 
>>>>> On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
>>>>> Okay ! Here you go.
>>>>> 
>>>>> 
>>>> 
>>> https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
>>>>> 
>>>>> I got it working from scratch in very short order.  I think
>>>>> you've gone and done some manual stuff unnecessarily.
>>>>>
>>>>> Use puppet, it's good for your health :)
>>>> 
>> 
>> -- 
>> Regards,
>> Cos
>> 

Re: Starting Hadoop in Distributed Mode

Posted by Jay Vyas <ja...@gmail.com>.
Hey, free to chat?

> On Jul 22, 2014, at 12:04 PM, Konstantin Boudnik <co...@apache.org> wrote:
> 
>> On July 22, 2014 6:08:11 AM PDT, jay vyas <ja...@gmail.com> wrote:
>> No prob, Cos.
>>
>> Now I'm getting curious: what is our take on passwordless SSH for
>> the bigtop hadoop distro?
>> 
>> I never set up passwordless ssh in bigtop that I remember (but
>> maybe I'm forgetting).
>>
>> I think that puppet apply on each slave is sufficient for a cluster,
>> because it starts the slaves for you and connects them to the master -
>> without need of master ssh'ing anywhere.
> 
> Yup. Neither Hadoop nor Bigtop relies on password-less SSH, except
> when you need to configure HA, in which case puppet needs to SSH
> between the primary and standby NNs.
> 
> Cos
> 
>> On Tue, Jul 22, 2014 at 12:20 AM, Konstantin Boudnik <co...@apache.org>
>> wrote:
>> 
>>> Well done - thank you very much Jay!
>>> 
>>> Cos
>>> 
>>>> On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
>>>> Okay ! Here you go.
>>>> 
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
>>>> 
>>>> I got it working from scratch in very short order.  I think
>>>> you've gone and done some manual stuff unnecessarily.
>>>>
>>>> Use puppet, it's good for your health :)
>>> 
> 
> -- 
> Regards,
>  Cos
> 

Re: Starting Hadoop in Distributed Mode

Posted by Konstantin Boudnik <co...@apache.org>.
On July 22, 2014 6:08:11 AM PDT, jay vyas <ja...@gmail.com> wrote:
>No prob, Cos.
>
>Now I'm getting curious: what is our take on passwordless SSH for the
>bigtop hadoop distro?
>
>I never set up passwordless ssh in bigtop that I remember (but maybe
>I'm forgetting).
>
>I think that puppet apply on each slave is sufficient for a cluster,
>because it starts the slaves for you and connects them to the master -
>without need of master ssh'ing anywhere.

Yup. Neither Hadoop nor Bigtop relies on password-less SSH, except when
you need to configure HA, in which case puppet needs to SSH between the
primary and standby NNs.
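
Something like this on every node, that is (module path assumed from a
checked-out Bigtop tree):

  sudo puppet apply -d --modulepath=bigtop-deploy/puppet/modules \
      bigtop-deploy/puppet/manifests/site.pp

No SSH fan-out involved - each node converges on its own.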

Cos

>On Tue, Jul 22, 2014 at 12:20 AM, Konstantin Boudnik <co...@apache.org>
>wrote:
>
>> Well done - thank you very much Jay!
>>
>> Cos
>>
>> On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
>> > Okay ! Here you go.
>> >
>> >
>>
>https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
>> >
>> > I got it working from scratch in very short order.  I think
>> > you've gone and done some manual stuff unnecessarily.
>> >
>> > Use puppet, it's good for your health :)
>>

-- 
Regards,
  Cos

Re: Starting Hadoop in Distributed Mode

Posted by jay vyas <ja...@gmail.com>.
No prob, Cos.

Now I'm getting curious: what is our take on passwordless SSH for the
bigtop hadoop distro?

I never set up passwordless ssh in bigtop that I remember (but maybe
I'm forgetting).

I think that puppet apply on each slave is sufficient for a cluster,
because it starts the slaves for you and connects them to the master -
without need of master ssh'ing anywhere.

On Tue, Jul 22, 2014 at 12:20 AM, Konstantin Boudnik <co...@apache.org> wrote:

> Well done - thank you very much Jay!
>
> Cos
>
> On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
> > Okay ! Here you go.
> >
> >
> https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
> >
> > I got it working from scratch in very short order.  I think you've
> > gone and done some manual stuff unnecessarily.
> >
> > Use puppet, it's good for your health :)
>



-- 
jay vyas

Re: Starting Hadoop in Distributed Mode

Posted by Konstantin Boudnik <co...@apache.org>.
Well done - thank you very much Jay!

Cos

On Mon, Jul 21, 2014 at 10:52PM, jay vyas wrote:
> Okay! Here you go.
> 
> https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet
> 
> I got it working from scratch in very short order.  I think you've gone and
> done some manual stuff unnecessarily.
> 
> Use puppet, it's good for your health :)

Re: Starting Hadoop in Distributed Mode

Posted by jay vyas <ja...@gmail.com>.
Okay! Here you go.

https://cwiki.apache.org/confluence/display/BIGTOP/How+to+install+BigTop+0.7.0+hadoop+on+CentOS+with+puppet

I got it working from scratch in very short order.  I think you've gone and
done some manual stuff unnecessarily.

Use puppet, it's good for your health :)

Re: Starting Hadoop in Distributed Mode

Posted by jay vyas <ja...@gmail.com>.
FYI, I'm going to try what I suggested to make sure I'm not sending you
off on a wild goose chase, @david :)

Got a CentOS box spun up; will let you know if it works from scratch.
Then you can copy the recipe.

I'll create a wiki page for it.


On Mon, Jul 21, 2014 at 6:44 PM, jay vyas <ja...@gmail.com>
wrote:

> I suggest using puppet as well, way easier than doing it manually.
>
> basically I think you could
>
> - clone down the Bigtop GitHub repo and check out branch-0.7.0
> - put those puppet recipes on your bare metal nodes, and update the
> config csv file to point to the IP of the master
> - run puppet apply on each node
>
> That's it.  It should all, I think, just work automagically.
> Right?
>
>
>
> On Mon, Jul 21, 2014 at 2:44 PM, David Fryer <df...@gmail.com> wrote:
>
>> Yes, Bigtop 0.7.0 is installed.
>>
>> -David Fryer
>>
>>
>> On Mon, Jul 21, 2014 at 2:33 PM, Konstantin Boudnik <co...@apache.org>
>> wrote:
>>
>>> Sorry for being a nag - did you install Bigtop 0.7.0?
>>>
>>> Cc'ing dev@ list as well
>>>   Cos
>>>
>>> On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
>>> > I activated the bigtop yum repository, and installed the required
>>> hadoop
>>> > packages via yum. All of the computers in the cluster are running
>>> CentOS
>>> > 6.5.
>>> >
>>> > -David Fryer
>>> >
>>> >
>>> > On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <co...@apache.org>
>>> wrote:
>>> >
>>> > > I see that your daemon is trying to log to /usr/lib/hadoop/logs,
>>> > > whereas Bigtop logs under /var/log, as the good-behavior rules
>>> > > for Linux services require.
>>> > >
>>> > > The way the namenode recognizes DNs isn't via the slaves file:
>>> > > the DNs register with the NN over its RPC mechanism.
>>> > >
>>> > > How did you install Hadoop? Using Bigtop packages or via a
>>> > > different mechanism? The fact that you are seeing an error
>>> > > message about cygwin not being found tells me that you are using
>>> > > derivative bits, not pure Bigtop. Is this the case?
>>> > >
>>> > > Regards
>>> > >   Cos
>>> > >
>>> > > On July 21, 2014 9:32:48 AM PDT, David Fryer <df...@gmail.com>
>>> wrote:
>>> > > >When I tried starting hadoop using the init scripts provided, the
>>> > > >master
>>> > > >couldn't find any of the datanodes. It is my understanding that the
>>> > > >masters
>>> > > >file is optional, but the slaves file is required. The scripts that
>>> > > >reference the slaves file are named in plural (instead of
>>> > > >hadoop-daemon.sh,
>>> > > >use hadoop-daemons.sh). I tried modifying the init scripts to run
>>> > > >hadoop-daemons.sh, and the script attempted to spawn processes on
>>> the
>>> > > >slaves referenced in the slaves file, but that produced the error:
>>> > > >Starting Hadoop namenode:                                  [  OK  ]
>>> > > >slave2: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
>>> > > >master: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
>>> > > >slave3: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
>>> > > >slave1: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>>> > > >directory
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command
>>> not
>>> > > >found
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>>> > > >directory
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command
>>> not
>>> > > >found
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>>> > > >directory
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command
>>> not
>>> > > >found
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>>> > > >directory
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command
>>> not
>>> > > >found
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >
>>> > > >-David Fryer
>>> > > >
>>> > > >
>>> > > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <
>>> cos@apache.org>
>>> > > >wrote:
>>> > > >
>>> > > >> Hi David.
>>> > > >>
>>> > > >> Slaves files are really optional, if I remember right. In
>>> > > >> Bigtop we usually deploy Hadoop with the provided Puppet
>>> > > >> recipes, which have been battle-hardened over the years :)
>>> > > >>
>>> > > >> Cos
>>> > > >>
>>> > > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
>>> > > >> > Hi Bigtop!
>>> > > >> >
>>> > > >> > I'm working on trying to get hadoop running in distributed mode,
>>> > > >but the
>>> > > >> > init scripts don't seem to be referencing the slaves file in
>>> > > >> > /etc/hadoop/conf. Has anyone encountered this before?
>>> > > >> >
>>> > > >> > Thanks,
>>> > > >> > David Fryer
>>> > > >>
>>> > >
>>> > >
>>>
>>
>>
>
>
> --
> jay vyas
>



-- 
jay vyas

Re: Starting Hadoop in Distributed Mode

Posted by jay vyas <ja...@gmail.com>.
I suggest using puppet as well, way easier than doing it manually.

Basically, I think you could (rough sketch below):

- clone down the Bigtop GitHub repo and check out branch-0.7.0
- put those puppet recipes on your bare metal nodes, and update the config
csv file to point to the IP of the master
- run puppet apply on each node

That's it.  It should all, I think, just work automagically.
Right?
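
A minimal sketch of those steps (file names and locations assumed from
the branch-0.7.0 layout, so double-check them against the tree):

  sudo yum install -y git puppet
  git clone https://github.com/apache/bigtop.git
  cd bigtop && git checkout branch-0.7.0
  # point every node's roles at the master's IP; the csv location here
  # is an assumption:
  vi bigtop-deploy/puppet/config/site.csv
  sudo puppet apply -d \
      --modulepath=bigtop-deploy/puppet/modules \
      bigtop-deploy/puppet/manifests/site.pp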



On Mon, Jul 21, 2014 at 2:44 PM, David Fryer <df...@gmail.com> wrote:

> Yes, Bigtop 0.7.0 is installed.
>
> -David Fryer
>
>
> On Mon, Jul 21, 2014 at 2:33 PM, Konstantin Boudnik <co...@apache.org>
> wrote:
>
>> Sorry for being a nag - did you install Bigtop 0.7.0?
>>
>> Cc'ing dev@ list as well
>>   Cos
>>
>> On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
>> > I activated the bigtop yum repository, and installed the required hadoop
>> > packages via yum. All of the computers in the cluster are running CentOS
>> > 6.5.
>> >
>> > -David Fryer
>> >
>> >
>> > On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <co...@apache.org>
>> wrote:
>> >
>> > > I see that your daemon is trying to log to /usr/lib/hadoop/logs,
>> > > whereas Bigtop logs under /var/log, as the good-behavior rules for
>> > > Linux services require.
>> > >
>> > > The way the namenode recognizes DNs isn't via the slaves file:
>> > > the DNs register with the NN over its RPC mechanism.
>> > >
>> > > How did you install Hadoop? Using Bigtop packages or via a
>> > > different mechanism? The fact that you are seeing an error message
>> > > about cygwin not being found tells me that you are using derivative
>> > > bits, not pure Bigtop. Is this the case?
>> > >
>> > > Regards
>> > >   Cos
>> > >
>> > > On July 21, 2014 9:32:48 AM PDT, David Fryer <df...@gmail.com>
>> wrote:
>> > > >When I tried starting hadoop using the init scripts provided, the
>> > > >master
>> > > >couldn't find any of the datanodes. It is my understanding that the
>> > > >masters
>> > > >file is optional, but the slaves file is required. The scripts that
>> > > >reference the slaves file are named in plural (instead of
>> > > >hadoop-daemon.sh,
>> > > >use hadoop-daemons.sh). I tried modifying the init scripts to run
>> > > >hadoop-daemons.sh, and the script attempted to spawn processes on the
>> > > >slaves referenced in the slaves file, but that produced the error:
>> > > >Starting Hadoop namenode:                                  [  OK  ]
>> > > >slave2: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
>> > > >master: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
>> > > >slave3: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
>> > > >slave1: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >
>> > > >-David Fryer
>> > > >
>> > > >
>> > > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <cos@apache.org
>> >
>> > > >wrote:
>> > > >
>> > > >> Hi David.
>> > > >>
>> > > >> Slaves files are really optional, if I remember right. In
>> > > >> Bigtop we usually deploy Hadoop with the provided Puppet
>> > > >> recipes, which have been battle-hardened over the years :)
>> > > >>
>> > > >> Cos
>> > > >>
>> > > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
>> > > >> > Hi Bigtop!
>> > > >> >
>> > > >> > I'm working on trying to get hadoop running in distributed mode,
>> > > >but the
>> > > >> > init scripts don't seem to be referencing the slaves file in
>> > > >> > /etc/hadoop/conf. Has anyone encountered this before?
>> > > >> >
>> > > >> > Thanks,
>> > > >> > David Fryer
>> > > >>
>> > >
>> > >
>>
>
>


-- 
jay vyas

Re: Starting Hadoop in Distributed Mode

Posted by David Fryer <df...@gmail.com>.
Yes, Bigtop 0.7.0 is installed.
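
For what it's worth, a quick way to double-check what got pulled in:

  rpm -qa 'hadoop*'   # lists the installed Hadoop packages and versions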

-David Fryer


On Mon, Jul 21, 2014 at 2:33 PM, Konstantin Boudnik <co...@apache.org> wrote:

> Sorry for being a nag - did you install Bigtop 0.7.0?
>
> Cc'ing dev@ list as well
>   Cos
>
> On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
> > I activated the bigtop yum repository, and installed the required hadoop
> > packages via yum. All of the computers in the cluster are running CentOS
> > 6.5.
> >
> > -David Fryer
> >
> >
> > On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <co...@apache.org>
> wrote:
> >
> > > I see that your daemon is trying to log to /usr/lib/hadoop/logs,
> > > whereas Bigtop logs under /var/log, as the good-behavior rules for
> > > Linux services require.
> > >
> > > The way the namenode recognizes DNs isn't via the slaves file:
> > > the DNs register with the NN over its RPC mechanism.
> > >
> > > How did you install Hadoop? Using Bigtop packages or via a
> > > different mechanism? The fact that you are seeing an error message
> > > about cygwin not being found tells me that you are using derivative
> > > bits, not pure Bigtop. Is this the case?
> > >
> > > Regards
> > >   Cos
> > >
> > > On July 21, 2014 9:32:48 AM PDT, David Fryer <df...@gmail.com>
> wrote:
> > > >When I tried starting hadoop using the init scripts provided, the
> > > >master
> > > >couldn't find any of the datanodes. It is my understanding that the
> > > >masters
> > > >file is optional, but the slaves file is required. The scripts that
> > > >reference the slaves file are named in plural (instead of
> > > >hadoop-daemon.sh,
> > > >use hadoop-daemons.sh). I tried modifying the init scripts to run
> > > >hadoop-daemons.sh, and the script attempted to spawn processes on the
> > > >slaves referenced in the slaves file, but that produced the error:
> > > >Starting Hadoop namenode:                                  [  OK  ]
> > > >slave2: starting namenode, logging to
> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
> > > >master: starting namenode, logging to
> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
> > > >slave3: starting namenode, logging to
> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
> > > >slave1: starting namenode, logging to
> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> > > >directory
> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> > > >found
> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> > > >directory
> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> > > >found
> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> > > >directory
> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> > > >found
> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> > > >directory
> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> > > >found
> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> > > >
> > > >-David Fryer
> > > >
> > > >
> > > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <co...@apache.org>
> > > >wrote:
> > > >
> > > >> Hi David.
> > > >>
> > > >> Slaves files are really optional, if I remember right. In
> > > >> Bigtop we usually deploy Hadoop with the provided Puppet
> > > >> recipes, which have been battle-hardened over the years :)
> > > >>
> > > >> Cos
> > > >>
> > > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
> > > >> > Hi Bigtop!
> > > >> >
> > > >> > I'm working on trying to get hadoop running in distributed mode,
> > > >but the
> > > >> > init scripts don't seem to be referencing the slaves file in
> > > >> > /etc/hadoop/conf. Has anyone encountered this before?
> > > >> >
> > > >> > Thanks,
> > > >> > David Fryer
> > > >>
> > >
> > >
>
