Posted to user@hadoop.apache.org by Telles Nobrega <te...@gmail.com> on 2015/01/27 19:40:29 UTC
Hadoop 2.6.0 Multi Node Setup
Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
My first question: the documentation says that the configuration files are
under conf/, but I found them in etc/. Should I move them to conf/, or is
the documentation out of date?
My second question is about user permissions. When I tried installing
before, I was only able to start the daemons as root. Is that how it
should be?
Those are all the questions I have for now.
Thanks
Re: Hadoop 2.6.0 Multi Node Setup
Posted by Telles Nobrega <te...@gmail.com>.
They can resolve each other, and I have already added the names to the
slaves file. Should the ResourceManager machine be in the slaves file too?
On Tue Jan 27 2015 at 18:21:53 Ahmed Ossama <ah...@aossama.com> wrote:
> Make sure that all nodes can resolve each other.
>
> You can do this by simply adding the IPs and hostnames of the cluster to
> /etc/hosts on each node.
>
> Then add them to your /etc/hadoop/slaves file.
>
>
> On 01/27/2015 10:58 PM, Telles Nobrega wrote:
>
> I was able to start some services, but Yarn is failing with
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> java.io.IOException: Failed on local exception: java.net.SocketException:
> Unresolved address; Host Details : local host is: "telles-hadoop-two";
> destination host is: (unknown):0.
>
> Just to give an overview of my setup: I have 6 machines that can reach
> each other over passwordless SSH. One master runs the NameNode, and a
> second master will run the ResourceManager. The slaves will run the
> NodeManager and DataNode.
>
> NameNode and DataNodes are ok. ResourceManager is still failing.
>
> On Tue Jan 27 2015 at 16:49:24 Telles Nobrega <te...@gmail.com>
> wrote:
>
>> Thanks.
>>
>> On Tue Jan 27 2015 at 15:59:35 Ahmed Ossama <ah...@aossama.com> wrote:
>>
>>> Hi Telles,
>>>
>>> No, the documentation isn't out of date. Normally the Hadoop
>>> configuration files are placed under /etc/hadoop/conf; that directory
>>> is then referenced when starting the cluster with --config
>>> $HADOOP_CONF_DIR. This is how HDFS and YARN find their configuration.
>>>
>>> Second, it's not good practice to run Hadoop as root. What you want
>>> to do is something like this:
>>>
>>> # useradd hdfs
>>> # useradd yarn
>>> # groupadd hadoop
>>> # usermod -a -G hadoop hdfs
>>> # usermod -a -G hadoop yarn
>>> # mkdir -p /hdfs/{nn,dn}
>>> # chown -R hdfs:hadoop /hdfs
>>>
>>> Then start your HDFS daemons as the hdfs user and your YARN daemons as
>>> the yarn user.
>>>
>>>
>>> On 01/27/2015 08:40 PM, Telles Nobrega wrote:
>>>
>>> Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
>>> My first question: the documentation says that the configuration files
>>> are under conf/, but I found them in etc/. Should I move them to
>>> conf/, or is the documentation out of date?
>>>
>>> My second question is about user permissions. When I tried installing
>>> before, I was only able to start the daemons as root. Is that how it
>>> should be?
>>>
>>> Those are all the questions I have for now.
>>>
>>> Thanks
>>>
>>>
>>> --
>>> Regards,
>>> Ahmed Ossama
>>>
>>>
> --
> Regards,
> Ahmed Ossama
>
>
Re: Hadoop 2.6.0 Multi Node Setup
Posted by Ahmed Ossama <ah...@aossama.com>.
Make sure that all nodes can resolve each other.
You can do this by simply adding the IPs and hostnames of the cluster to
/etc/hosts on each node.
Then add them to your /etc/hadoop/slaves file.
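For example, a minimal sketch (the hostnames and addresses below are only
placeholders for your own six machines):
/etc/hosts on every node:
    192.168.1.10  hadoop-master
    192.168.1.11  hadoop-rm
    192.168.1.12  hadoop-slave1
    192.168.1.13  hadoop-slave2
    192.168.1.14  hadoop-slave3
    192.168.1.15  hadoop-slave4
/etc/hadoop/slaves, one worker hostname per line:
    hadoop-slave1
    hadoop-slave2
    hadoop-slave3
    hadoop-slave4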
On 01/27/2015 10:58 PM, Telles Nobrega wrote:
> I was able to start some services, but Yarn is failing with
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> java.io.IOException: Failed on local exception:
> java.net.SocketException: Unresolved address; Host Details : local
> host is: "telles-hadoop-two"; destination host is: (unknown):0.
>
> Just to give an overview of my setup: I have 6 machines that can reach
> each other over passwordless SSH. One master runs the NameNode, and a
> second master will run the ResourceManager. The slaves will run the
> NodeManager and DataNode.
>
> NameNode and DataNodes are ok. ResourceManager is still failing.
>
> On Tue Jan 27 2015 at 16:49:24 Telles Nobrega <tellesnobrega@gmail.com> wrote:
>
> Thanks.
>
> On Tue Jan 27 2015 at 15:59:35 Ahmed Ossama <ahmed@aossama.com> wrote:
>
> Hi Telles,
>
> No, the documentation isn't out of date. Normally the Hadoop
> configuration files are placed under /etc/hadoop/conf; that
> directory is then referenced when starting the cluster with
> --config $HADOOP_CONF_DIR. This is how HDFS and YARN find their
> configuration.
>
> Second, it's not good practice to run Hadoop as root. What
> you want to do is something like this:
>
> # useradd hdfs
> # useradd yarn
> # groupadd hadoop
> # usermod -a -G hadoop hdfs
> # usermod -a -G hadoop yarn
> # mkdir -p /hdfs/{nn,dn}
> # chown -R hdfs:hadoop /hdfs
>
> Then start your HDFS daemons as the hdfs user and your YARN
> daemons as the yarn user.
>
>
> On 01/27/2015 08:40 PM, Telles Nobrega wrote:
>> Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
>> My first question: the documentation says that the configuration
>> files are under conf/, but I found them in etc/. Should I move
>> them to conf/, or is the documentation out of date?
>>
>> My second question is about user permissions. When I tried
>> installing before, I was only able to start the daemons as root.
>> Is that how it should be?
>>
>> Those are all the questions I have for now.
>>
>> Thanks
>
> --
> Regards,
> Ahmed Ossama
>
--
Regards,
Ahmed Ossama
Re: Hadoop 2.6.0 Multi Node Setup
Posted by Telles Nobrega <te...@gmail.com>.
I was able to start some services, but Yarn is failing with
org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
java.io.IOException: Failed on local exception: java.net.SocketException:
Unresolved address; Host Details : local host is: "telles-hadoop-two";
destination host is: (unknown):0.
Just to give an overview of my setup: I have 6 machines that can reach
each other over passwordless SSH. One master runs the NameNode, and a
second master will run the ResourceManager. The slaves will run the
NodeManager and DataNode.
NameNode and DataNodes are ok. ResourceManager is still failing.
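For reference, my understanding is that the NodeManagers and clients find
the ResourceManager through yarn-site.xml, via an entry along these lines
(the hostname below is only a placeholder for my second master):
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>hadoop-rm</value>
    </property>
So I'm double-checking that this is set to a resolvable hostname on every
node.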
On Tue Jan 27 2015 at 16:49:24 Telles Nobrega <te...@gmail.com>
wrote:
> Thanks.
>
> On Tue Jan 27 2015 at 15:59:35 Ahmed Ossama <ah...@aossama.com> wrote:
>
>> Hi Telles,
>>
>> No, the documentation isn't out of date. Normally the Hadoop
>> configuration files are placed under /etc/hadoop/conf; that directory
>> is then referenced when starting the cluster with --config
>> $HADOOP_CONF_DIR. This is how HDFS and YARN find their configuration.
>>
>> Second, it's not good practice to run Hadoop as root. What you want
>> to do is something like this:
>>
>> # useradd hdfs
>> # useradd yarn
>> # groupadd hadoop
>> # usermod -a -G hadoop hdfs
>> # usermod -a -G hadoop yarn
>> # mkdir -p /hdfs/{nn,dn}
>> # chown -R hdfs:hadoop /hdfs
>>
>> Then start your HDFS daemons as the hdfs user and your YARN daemons as
>> the yarn user.
>>
>>
>> On 01/27/2015 08:40 PM, Telles Nobrega wrote:
>>
>> Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
>> My first question: the documentation says that the configuration files
>> are under conf/, but I found them in etc/. Should I move them to conf/,
>> or is the documentation out of date?
>>
>> My second question is about user permissions. When I tried installing
>> before, I was only able to start the daemons as root. Is that how it
>> should be?
>>
>> Those are all the questions I have for now.
>>
>> Thanks
>>
>>
>> --
>> Regards,
>> Ahmed Ossama
>>
>>
Re: Hadoop 2.6.0 Multi Node Setup
Posted by Telles Nobrega <te...@gmail.com>.
Thanks.
On Tue Jan 27 2015 at 15:59:35 Ahmed Ossama <ah...@aossama.com> wrote:
> Hi Telles,
>
> No, the documentation isn't out of date. Normally the Hadoop
> configuration files are placed under /etc/hadoop/conf; that directory is
> then referenced when starting the cluster with --config $HADOOP_CONF_DIR.
> This is how HDFS and YARN find their configuration.
>
> Second, it's not good practice to run Hadoop as root. What you want to
> do is something like this:
>
> # useradd hdfs
> # useradd yarn
> # groupadd hadoop
> # usermod -a -G hadoop hdfs
> # usermod -a -G hadoop yarn
> # mkdir -p /hdfs/{nn,dn}
> # chown -R hdfs:hadoop /hdfs
>
> Then start your HDFS daemons as the hdfs user and your YARN daemons as the yarn user.
>
>
> On 01/27/2015 08:40 PM, Telles Nobrega wrote:
>
> Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
> My first question: the documentation says that the configuration files
> are under conf/, but I found them in etc/. Should I move them to conf/,
> or is the documentation out of date?
>
> My second question is about user permissions. When I tried installing
> before, I was only able to start the daemons as root. Is that how it
> should be?
>
> Those are all the questions I have for now.
>
> Thanks
>
>
> --
> Regards,
> Ahmed Ossama
>
>
Re: Hadoop 2.6.0 Multi Node Setup
Posted by Ahmed Ossama <ah...@aossama.com>.
Hi Telles,
No, the documentation isn't out of date. Normally the Hadoop
configuration files are placed under /etc/hadoop/conf; that directory
is then referenced when starting the cluster with --config
$HADOOP_CONF_DIR. This is how HDFS and YARN find their configuration.
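For example, a sketch only (the install prefix below is a placeholder for
wherever the 2.6.0 tarball was unpacked):
# export HADOOP_CONF_DIR=/etc/hadoop/conf
# /opt/hadoop-2.6.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode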
Second, it's not good practice to run Hadoop as root. What you want
to do is something like this:
# useradd hdfs
# useradd yarn
# groupadd hadoop
# usermod -a -G hadoop hdfs
# usermod -a -G hadoop yarn
# mkdir -p /hdfs/{nn,dn}          # NameNode (nn) and DataNode (dn) data dirs
# chown -R hdfs:hadoop /hdfs
Then start your HDFS daemons as the hdfs user and your YARN daemons as the
yarn user.
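For instance, something like this from root (again a sketch; the install
prefix is a placeholder, and hadoop-daemon.sh/yarn-daemon.sh are the
scripts shipped in the tarball's sbin/ directory):
# sudo -u hdfs /opt/hadoop-2.6.0/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode
# sudo -u hdfs /opt/hadoop-2.6.0/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode
# sudo -u yarn /opt/hadoop-2.6.0/sbin/yarn-daemon.sh --config /etc/hadoop/conf start resourcemanager
# sudo -u yarn /opt/hadoop-2.6.0/sbin/yarn-daemon.sh --config /etc/hadoop/conf start nodemanager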
On 01/27/2015 08:40 PM, Telles Nobrega wrote:
> Hi, I'm starting to deploy a Hadoop 2.6.0 multi-node cluster.
> My first question: the documentation says that the configuration files
> are under conf/, but I found them in etc/. Should I move them to conf/,
> or is the documentation out of date?
>
> My second question is about user permissions. When I tried installing
> before, I was only able to start the daemons as root. Is that how it
> should be?
>
> Those are all the questions I have for now.
>
> Thanks
--
Regards,
Ahmed Ossama