Posted to mapreduce-user@hadoop.apache.org by Shengdi Jin <ji...@gmail.com> on 2015/03/02 18:49:40 UTC
how to check hdfs
Hi all,
I have just started learning Hadoop, and I have a naive question.
I used
hdfs dfs -ls /home/cluster
to check the contents inside.
But I get the error
ls: No FileSystem for scheme: hdfs
My configuration file core-site.xml is like
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
hdfs-site.xml is like
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/home/cluster/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/home/cluster/mydata/hdfs/datanode</value>
</property>
</configuration>
Is there anything wrong?
Thanks a lot.
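One common cause of this error is that the HDFS filesystem implementation is not registered on the client's classpath. A frequently used workaround (an assumption here, not something confirmed in this thread) is to map the scheme to its implementation class explicitly in core-site.xml:

```xml
<!-- Workaround sketch: explicitly map the hdfs:// scheme to its
     FileSystem implementation class. Only needed when the client's
     classpath is missing the hadoop-hdfs service registration. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```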
Re: how to check hdfs
Posted by Vikas Parashar <pa...@gmail.com>.
Hi Jin,
Please check your hdfs-site.xml, which is where we specify the HDFS
paths on the local machine.
Below are the parameters that will help you understand:
dfs.namenode.name.dir
dfs.datanode.data.dir
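As a sketch, those directories are configured in hdfs-site.xml with the properties dfs.namenode.name.dir and dfs.datanode.data.dir (the paths below are illustrative, reusing the layout from this thread):

```xml
<!-- Illustrative hdfs-site.xml fragment: where the namenode keeps its
     metadata and the datanode keeps its block data on local disk. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/cluster/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/cluster/mydata/hdfs/datanode</value>
</property>
```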
Rg:
Vicky
On Wed, Mar 4, 2015 at 1:34 AM, Shengdi Jin <ji...@gmail.com> wrote:
> Thanks Vikas.
>
> I ran ./hdfs dfs -ls /home/cluster on the machine running the namenode.
> Do I need to configure a client machine?
>
> In my opinion, I suspect that the local fs /home/cluster is not configured
> as hdfs.
> In core-site.xml, I set the hdfs as hdfs://master:9000.
> So I think that's why the command ./hdfs dfs -ls hdfs://master:9000/ can
> work.
>
> Please correct me if I was wrong.
>
> On Tue, Mar 3, 2015 at 1:59 PM, Vikas Parashar <pa...@gmail.com>
> wrote:
>
>> Hello,
>>
>> hdfs dfs -ls /home/cluster
>> to check the content inside.
>> But I get error
>> ls: *No FileSystem for scheme: hdfs* --> that means you don't have the
>> hdfs RPM installed on your client machine.
>>
>>
>> To answer your question:
>> ./hdfs dfs -mkdir hdfs://master:9000/directory
>>
>>
>> That *directory* will be under / in your HDFS. All data is stored on
>> the datanodes, while the namenode holds the metadata. For more
>> details, read the HDFS design document:
>> http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
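A hedged way to see that split for yourself (a command sketch, assuming a running cluster reachable as hdfs://master:9000, as in this thread):

```shell
# Sketch: inspect where HDFS actually placed the blocks of a path.
hdfs dfs -mkdir -p hdfs://master:9000/directory
hdfs dfs -put localfile.txt hdfs://master:9000/directory/
# fsck reports each block and the datanodes holding its replicas;
# the namenode only serves this metadata, it stores no file data.
hdfs fsck /directory -files -blocks -locations
```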
>>
>> On Tue, Mar 3, 2015 at 10:46 PM, Shengdi Jin <ji...@gmail.com>
>> wrote:
>>
>>> I use command
>>> ./hdfs dfs -ls hdfs://master:9000/
>>> It works, so I think hdfs://master:9000/ should be the hdfs.
>>>
>>> I have another question: if
>>> ./hdfs dfs -mkdir hdfs://master:9000/directory
>>> where should the /directory be stored?
>>> On a DataNode, on the NameNode, or in the local filesystem of the master?
>>>
>>> On Tue, Mar 3, 2015 at 8:06 AM, 杨浩 <ya...@gmail.com> wrote:
>>>
>>>> I don't think it is necessary to run a daemon on that client to use
>>>> the command; hdfs itself is not a Hadoop daemon.
>>>>
>>>> 2015-03-03 20:57 GMT+08:00 Somnath Pandeya <Somnath_Pandeya@infosys.com>:
>>>>
>>>>> Is your HDFS daemon running on the cluster?
>>>>>
>>>>>
>>>>>
>>>>> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
>>>>> *Sent:* Tuesday, March 03, 2015 10:33 AM
>>>>> *To:* user@hadoop.apache.org
>>>>> *Subject:* Re: how to check hdfs
>>>>>
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>>
>>>>> Kindly install the hadoop-hdfs RPM on your machine.
>>>>>
>>>>>
>>>>>
>>>>> Rg:
>>>>>
>>>>> Vicky
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> **************** CAUTION - Disclaimer *****************
>>>>> This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely
>>>>> for the use of the addressee(s). If you are not the intended recipient, please
>>>>> notify the sender by e-mail and delete the original message. Further, you are not
>>>>> to copy, disclose, or distribute this e-mail or its contents to any other person and
>>>>> any such actions are unlawful. This e-mail may contain viruses. Infosys has taken
>>>>> every reasonable precaution to minimize this risk, but is not liable for any damage
>>>>> you may sustain as a result of any virus in this e-mail. You should carry out your
>>>>> own virus checks before opening the e-mail or attachment. Infosys reserves the
>>>>> right to monitor and review the content of all messages sent to or from this e-mail
>>>>> address. Messages sent to or from this e-mail address may be stored on the
>>>>> Infosys e-mail system.
>>>>> ***INFOSYS******** End of Disclaimer ********INFOSYS***
>>>>>
>>>>>
>>>>
>>>
>>
>
Re: how to check hdfs
Posted by Shengdi Jin <ji...@gmail.com>.
Thanks Vikas.
I ran ./hdfs dfs -ls /home/cluster on the machine running the namenode.
Do I need to configure a client machine?
In my opinion, I suspect that the local fs /home/cluster is not configured
as hdfs.
In core-site.xml, I set the hdfs as hdfs://master:9000.
So I think that's why the command ./hdfs dfs -ls hdfs://master:9000/ can
work.
Please correct me if I was wrong.
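The "No FileSystem for scheme" error usually comes from the client's classpath rather than from core-site.xml. A quick way to check (a command sketch, assuming the hadoop binary is on the PATH) is:

```shell
# Sketch: verify that the HDFS client jars are on the Hadoop classpath.
# If no hadoop-hdfs jar shows up, the hdfs:// scheme cannot be resolved.
hadoop classpath | tr ':' '\n' | grep -i hdfs
```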
Re: how to check hdfs
Posted by Shengdi Jin <ji...@gmail.com>.
Thanks Vikas.
I ran ./hdfs dfs -ls /home/cluster on the machine running the namenode.
Do I need to configure a client machine?
In my opinion, the local filesystem path /home/cluster is simply not configured
as part of HDFS.
In core-site.xml I set the filesystem to hdfs://master:9000, which I think is
why the command ./hdfs dfs -ls hdfs://master:9000/ works.
Please correct me if I am wrong.
On Tue, Mar 3, 2015 at 1:59 PM, Vikas Parashar <pa...@gmail.com> wrote:
> Hello,
>
> hdfs dfs -ls /home/cluster
> to check the content inside.
> But I get error
> ls: *No FileSystem for scheme: hdfs --> *that means, you don't have
> hdfs rpm installed at your client machine..
>
>
> For answer of you question:-
> ./hdfs dfs -mkdir hdfs://master:9000/directory
>
>
> That *directory *will be under / in your hdfs. All data would be stored
> in data node; but namenode will have the meta data information. For more
> details; you have to read hdfs
> http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html.
>
> On Tue, Mar 3, 2015 at 10:46 PM, Shengdi Jin <ji...@gmail.com> wrote:
>
>> I use command
>> ./hdfs dfs -ls hdfs://master:9000/
>> It works. So i think hdfs://master:9000/ should be the hdfs.
>>
>> I have another questions, if
>> ./hdfs dfs -mkdir hdfs://master:9000/directory
>> where should the /directory be stored?
>> In DataNode or in NameNode? or in the local system of master?
>>
>> On Tue, Mar 3, 2015 at 8:06 AM, 杨浩 <ya...@gmail.com> wrote:
>>
>>> I don't think it nessary to run the command with daemon in that client,
>>> and hdfs is not a daemon for hadoop。
>>>
>>> 2015-03-03 20:57 GMT+08:00 Somnath Pandeya <So...@infosys.com>
>>> :
>>>
>>>> Is your hdfs daemon running on cluster. ? ?
>>>>
>>>>
>>>>
>>>> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
>>>> *Sent:* Tuesday, March 03, 2015 10:33 AM
>>>> *To:* user@hadoop.apache.org
>>>> *Subject:* Re: how to check hdfs
>>>>
>>>>
>>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> Kindly install hadoop-hdfs rpm in your machine..
>>>>
>>>>
>>>>
>>>> Rg:
>>>>
>>>> Vicky
>>>>
>>>>
>>>>
>>>> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com>
>>>> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I just start to learn hadoop, I have a naive question
>>>>
>>>> I used
>>>>
>>>> hdfs dfs -ls /home/cluster
>>>>
>>>> to check the content inside.
>>>>
>>>> But I get error
>>>> ls: No FileSystem for scheme: hdfs
>>>>
>>>> My configuration file core-site.xml is like
>>>> <configuration>
>>>> <property>
>>>> <name>fs.defaultFS</name>
>>>> <value>hdfs://master:9000</value>
>>>> </property>
>>>> </configuration>
>>>>
>>>>
>>>> hdfs-site.xml is like
>>>> <configuration>
>>>> <property>
>>>> <name>dfs.replication</name>
>>>> <value>2</value>
>>>> </property>
>>>> <property>
>>>> <name>dfs.name.dir</name>
>>>> <value>file:/home/cluster/mydata/hdfs/namenode</value>
>>>> </property>
>>>> <property>
>>>> <name>dfs.data.dir</name>
>>>> <value>file:/home/cluster/mydata/hdfs/datanode</value>
>>>> </property>
>>>> </configuration>
>>>>
>>>> is there any thing wrong ?
>>>>
>>>> Thanks a lot.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
Re: how to check hdfs
Posted by Vikas Parashar <pa...@gmail.com>.
Hello,
hdfs dfs -ls /home/cluster
to check the content inside.
But I get error
ls: *No FileSystem for scheme: hdfs --> *that means you don't have the
hdfs rpm installed on your client machine.
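The missing-jar diagnosis above can be checked directly. A minimal sketch, with the caveat that the sample classpath string below is made up for illustration and should be replaced by the real output of `hadoop classpath`:

```shell
# Split the client classpath one entry per line and count hadoop-hdfs jars.
# Zero matches is the classic cause of "No FileSystem for scheme: hdfs".
# Replace the sample value with: CP="$(hadoop classpath)"
CP="/opt/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar"
echo "$CP" | tr ':' '\n' | grep -c 'hadoop-hdfs'
```

If the count is zero, install the hadoop-hdfs package (or fix the classpath) before retrying `hdfs dfs -ls`.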
To answer your question:
./hdfs dfs -mkdir hdfs://master:9000/directory

That *directory* will be created under / in your HDFS. The file data is stored
on the datanodes, while the namenode keeps the metadata. For more details,
read the HDFS design documentation:
http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
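One more note on the original scheme error: when the hdfs jars are present but the error still appears (sometimes seen with repackaged clients whose FileSystem service metadata got lost), a workaround that is sometimes used is to pin the implementation class explicitly in core-site.xml. This is an illustrative fragment, not something verified against the poster's setup:

```xml
<!-- Illustrative workaround: map the hdfs:// scheme to its
     implementation class explicitly. Only needed when the normal
     service-loader lookup fails. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```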
On Tue, Mar 3, 2015 at 10:46 PM, Shengdi Jin <ji...@gmail.com> wrote:
> I use command
> ./hdfs dfs -ls hdfs://master:9000/
> It works. So i think hdfs://master:9000/ should be the hdfs.
>
> I have another questions, if
> ./hdfs dfs -mkdir hdfs://master:9000/directory
> where should the /directory be stored?
> In DataNode or in NameNode? or in the local system of master?
>
> On Tue, Mar 3, 2015 at 8:06 AM, 杨浩 <ya...@gmail.com> wrote:
>
>> I don't think it nessary to run the command with daemon in that client,
>> and hdfs is not a daemon for hadoop。
>>
>> 2015-03-03 20:57 GMT+08:00 Somnath Pandeya <So...@infosys.com>:
>>
>>> Is your hdfs daemon running on cluster. ? ?
>>>
>>>
>>>
>>> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
>>> *Sent:* Tuesday, March 03, 2015 10:33 AM
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Re: how to check hdfs
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> Kindly install hadoop-hdfs rpm in your machine..
>>>
>>>
>>>
>>> Rg:
>>>
>>> Vicky
>>>
>>>
>>>
>>> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com>
>>> wrote:
>>>
>>> Hi all,
>>>
>>> I just start to learn hadoop, I have a naive question
>>>
>>> I used
>>>
>>> hdfs dfs -ls /home/cluster
>>>
>>> to check the content inside.
>>>
>>> But I get error
>>> ls: No FileSystem for scheme: hdfs
>>>
>>> My configuration file core-site.xml is like
>>> <configuration>
>>> <property>
>>> <name>fs.defaultFS</name>
>>> <value>hdfs://master:9000</value>
>>> </property>
>>> </configuration>
>>>
>>>
>>> hdfs-site.xml is like
>>> <configuration>
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>> <property>
>>> <name>dfs.name.dir</name>
>>> <value>file:/home/cluster/mydata/hdfs/namenode</value>
>>> </property>
>>> <property>
>>> <name>dfs.data.dir</name>
>>> <value>file:/home/cluster/mydata/hdfs/datanode</value>
>>> </property>
>>> </configuration>
>>>
>>> is there any thing wrong ?
>>>
>>> Thanks a lot.
>>>
>>>
>>>
>>>
>>>
>>
>
Re: how to check hdfs
Posted by Shengdi Jin <ji...@gmail.com>.
I used the command
./hdfs dfs -ls hdfs://master:9000/
and it works, so I think hdfs://master:9000/ is the HDFS root.
I have another question: if I run
./hdfs dfs -mkdir hdfs://master:9000/directory
where will /directory be stored?
On a DataNode, on the NameNode, or in the local filesystem of master?
On Tue, Mar 3, 2015 at 8:06 AM, 杨浩 <ya...@gmail.com> wrote:
> I don't think it nessary to run the command with daemon in that client,
> and hdfs is not a daemon for hadoop。
>
> 2015-03-03 20:57 GMT+08:00 Somnath Pandeya <So...@infosys.com>:
>
>> Is your hdfs daemon running on cluster. ? ?
>>
>>
>>
>> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
>> *Sent:* Tuesday, March 03, 2015 10:33 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: how to check hdfs
>>
>>
>>
>> Hi,
>>
>>
>>
>> Kindly install hadoop-hdfs rpm in your machine..
>>
>>
>>
>> Rg:
>>
>> Vicky
>>
>>
>>
>> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com>
>> wrote:
>>
>> Hi all,
>>
>> I just start to learn hadoop, I have a naive question
>>
>> I used
>>
>> hdfs dfs -ls /home/cluster
>>
>> to check the content inside.
>>
>> But I get error
>> ls: No FileSystem for scheme: hdfs
>>
>> My configuration file core-site.xml is like
>> <configuration>
>> <property>
>> <name>fs.defaultFS</name>
>> <value>hdfs://master:9000</value>
>> </property>
>> </configuration>
>>
>>
>> hdfs-site.xml is like
>> <configuration>
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>> <property>
>> <name>dfs.name.dir</name>
>> <value>file:/home/cluster/mydata/hdfs/namenode</value>
>> </property>
>> <property>
>> <name>dfs.data.dir</name>
>> <value>file:/home/cluster/mydata/hdfs/datanode</value>
>> </property>
>> </configuration>
>>
>> is there any thing wrong ?
>>
>> Thanks a lot.
>>
>>
>>
>>
>>
>
Re: how to check hdfs
Posted by 杨浩 <ya...@gmail.com>.
I don't think it is necessary to run a daemon on that client just to use the
command, and hdfs itself is not a daemon of Hadoop.
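For completeness, whether the namenode/datanode processes are actually up can still be checked, since they run as separate JVM processes that `jps` lists by class name. A hedged sketch; the hard-coded sample output below is fabricated for illustration and should be replaced by the real output of `jps`:

```shell
# jps lists running JVMs by main class; NameNode/DataNode should appear
# on the machines that host them. Replace the sample with: JPS_OUT="$(jps)"
JPS_OUT="4242 NameNode
4243 DataNode
4244 Jps"
echo "$JPS_OUT" | grep -E -c 'NameNode|DataNode'
```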
2015-03-03 20:57 GMT+08:00 Somnath Pandeya <So...@infosys.com>:
> Is your hdfs daemon running on cluster. ? ?
>
>
>
> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
> *Sent:* Tuesday, March 03, 2015 10:33 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: how to check hdfs
>
>
>
> Hi,
>
>
>
> Kindly install hadoop-hdfs rpm in your machine..
>
>
>
> Rg:
>
> Vicky
>
>
>
> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com> wrote:
>
> Hi all,
>
> I just start to learn hadoop, I have a naive question
>
> I used
>
> hdfs dfs -ls /home/cluster
>
> to check the content inside.
>
> But I get error
> ls: No FileSystem for scheme: hdfs
>
> My configuration file core-site.xml is like
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://master:9000</value>
> </property>
> </configuration>
>
>
> hdfs-site.xml is like
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> <property>
> <name>dfs.name.dir</name>
> <value>file:/home/cluster/mydata/hdfs/namenode</value>
> </property>
> <property>
> <name>dfs.data.dir</name>
> <value>file:/home/cluster/mydata/hdfs/datanode</value>
> </property>
> </configuration>
>
> is there any thing wrong ?
>
> Thanks a lot.
>
>
>
>
>
Re: how to check hdfs
Posted by 杨浩 <ya...@gmail.com>.
I don't think it is necessary to run a daemon on that client just to issue the
command; `hdfs` itself is not a Hadoop daemon.
2015-03-03 20:57 GMT+08:00 Somnath Pandeya <So...@infosys.com>:
> Is your HDFS daemon running on the cluster?
>
>
>
> *From:* Vikas Parashar [mailto:para.vikas@gmail.com]
> *Sent:* Tuesday, March 03, 2015 10:33 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: how to check hdfs
>
>
>
> Hi,
>
>
>
> Kindly install the hadoop-hdfs RPM on your machine.
>
>
>
> Rg:
>
> Vicky
>
>
>
> On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com> wrote:
>
> Hi all,
>
> I just started to learn Hadoop, and I have a naive question.
>
> I used
>
> hdfs dfs -ls /home/cluster
>
> to check the content inside.
>
> But I get error
> ls: No FileSystem for scheme: hdfs
>
> My configuration file core-site.xml is like
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://master:9000</value>
> </property>
> </configuration>
>
>
> hdfs-site.xml is like
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> <property>
> <name>dfs.name.dir</name>
> <value>file:/home/cluster/mydata/hdfs/namenode</value>
> </property>
> <property>
> <name>dfs.data.dir</name>
> <value>file:/home/cluster/mydata/hdfs/datanode</value>
> </property>
> </configuration>
>
> Is there anything wrong?
>
> Thanks a lot.
>
>
>
>
>
RE: how to check hdfs
Posted by Somnath Pandeya <So...@infosys.com>.
Is your HDFS daemon running on the cluster?
From: Vikas Parashar [mailto:para.vikas@gmail.com]
Sent: Tuesday, March 03, 2015 10:33 AM
To: user@hadoop.apache.org
Subject: Re: how to check hdfs
Hi,
Kindly install the hadoop-hdfs RPM on your machine.
Rg:
Vicky
On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com>> wrote:
Hi all,
I just started to learn Hadoop, and I have a naive question.
I used
hdfs dfs -ls /home/cluster
to check the content inside.
But I get error
ls: No FileSystem for scheme: hdfs
My configuration file core-site.xml is like
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
hdfs-site.xml is like
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/home/cluster/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/home/cluster/mydata/hdfs/datanode</value>
</property>
</configuration>
Is there anything wrong?
Thanks a lot.
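[Editor's note: a quick way to act on the daemon/classpath questions raised above is to inspect the client's Hadoop classpath. On a real node you would pipe `hadoop classpath` into the filter; the sample classpath below is illustrative only, and the scheme can only resolve if a hadoop-hdfs jar appears in the output.]

```shell
# Illustrative check (sample classpath shown; on a real node, replace the
# echo with:  hadoop classpath). An empty result from the grep would explain
# "No FileSystem for scheme: hdfs".
sample_classpath="/opt/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar"
echo "$sample_classpath" | tr ':' '\n' | grep hdfs
```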
Re: how to check hdfs
Posted by Vikas Parashar <pa...@gmail.com>.
Hi,
Kindly install the hadoop-hdfs RPM on your machine.
Rg:
Vicky
On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin <ji...@gmail.com> wrote:
> Hi all,
> I just started to learn Hadoop, and I have a naive question.
>
> I used
> hdfs dfs -ls /home/cluster
> to check the content inside.
> But I get error
> ls: No FileSystem for scheme: hdfs
>
> My configuration file core-site.xml is like
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://master:9000</value>
> </property>
> </configuration>
>
> hdfs-site.xml is like
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> <property>
> <name>dfs.name.dir</name>
> <value>file:/home/cluster/mydata/hdfs/namenode</value>
> </property>
> <property>
> <name>dfs.data.dir</name>
> <value>file:/home/cluster/mydata/hdfs/datanode</value>
> </property>
> </configuration>
>
> Is there anything wrong?
>
> Thanks a lot.
>
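[Editor's note: the quoted hdfs-site.xml uses `file:` URIs for `dfs.name.dir` and `dfs.data.dir`, so those are local directories, distinct from the HDFS path `/home/cluster` being listed. A minimal sketch of preparing them, demonstrated in a temporary directory here; on the real cluster you would create the paths directly under `/` and then format the NameNode once with `hdfs namenode -format`.]

```shell
# Sketch: the local directories behind dfs.name.dir and dfs.data.dir must
# exist and be writable before formatting the NameNode. Paths mirror the
# poster's config, rooted in a temp dir for illustration.
base="$(mktemp -d)/home/cluster/mydata/hdfs"
mkdir -p "$base/namenode" "$base/datanode"
ls "$base"
```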