Posted to hdfs-user@hadoop.apache.org by Mahmood Naderan <nt...@yahoo.com> on 2014/04/04 18:53:05 UTC

IOException when using "dfs -put"

Hi,
I want to put a file from the local FS into HDFS, but the command fails with an IOException and the copied file ends up with zero size. Can someone help with an idea?

$ cat urlsdir/urllist.txt
http://lucene.apache.org

$ hadoop dfs -put urlsdir/urllist.txt urlsdir
put: java.io.IOException: File /user/mahmood/urlsdir could only be replicated to 0 nodes, instead of 1

$ hadoop dfs -ls
Found 1 items
-rw-r--r--   1 mahmood supergroup          0 2014-04-04 21:20 /user/mahmood/urlsdir



Regards,
Mahmood

Re: IOException when using "dfs -put"

Posted by Nitin Pawar <ni...@gmail.com>.
Datanodes available: 0 (0 total, 0 dead)

See this: it means no datanode is connected to the namenode.

Can you check the namenode log and confirm it is up and running?
After that, check the datanode log and confirm it is up and running as well.
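
For a single-node setup like this one, a quick first pass might look like the
following (a sketch; the log paths are the ones that appear later in this
thread, so adjust them to your own install directory and hostname):

$ jps                # a healthy node should list NameNode and DataNode
$ tail -n 50 /data/mahmood/nutch-test/search/logs/hadoop.log
$ tail -n 50 /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.log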



On Sat, Apr 5, 2014 at 12:28 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Use jps and check what all processes are running, is this a single node
> cluster?
>
>
> On Fri, Apr 4, 2014 at 11:54 AM, Mahmood Naderan <nt...@yahoo.com>wrote:
>
>> Strange! See the output
>>
>> $ ./search/bin/hadoop namenode -format
>> Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or
>> N) Y
>>
>> $ ./search/bin/start-all.sh
>> starting namenode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
>> localhost: starting datanode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
>> localhost: starting secondarynamenode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
>> starting jobtracker, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
>> localhost: starting tasktracker, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out
>>
>>
>> $ ./search/bin/hadoop dfsadmin -report
>> Configured Capacity: 0 (0 KB)
>> Present Capacity: 0 (0 KB)
>> DFS Remaining: 0 (0 KB)
>> DFS Used: 0 (0 KB)
>> DFS Used%: �%
>> Under replicated blocks: 0
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>>
>> -------------------------------------------------
>> Datanodes available: 0 (0 total, 0 dead)
>>
>>
>> Regards,
>> Mahmood
>>   On Friday, April 4, 2014 11:09 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>   Yes, run that command and check whether you have any live datanode.
>>
>> Thanks
>> jitendra
>>
>>
>> On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <nt...@yahoo.com>wrote:
>>
>> Jitendra,
>> For the first part, can explain how?
>> For the second part, do you mean "hadoop dfsadmin -report"?
>>
>> Regards,
>> Mahmood
>>
>>
>>   On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>  >Can you check total running datanodes in your cluster and also
>> free hdfs  space?
>> >
>> >
>> >Thanks
>> >Jitendra
>>
>>
>>
>>
>>
>>
>>
>>
>


-- 
Nitin Pawar

Re: IOException when using "dfs -put"

Posted by Mahmood Naderan <nt...@yahoo.com>.
Yes, thank you very much. It is OK now.

$ ./bin/stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode

$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
12576 Jps

$ rm -rf /data/mahmood/nutch-test/filesystem/name/*
$ rm -rf /data/mahmood/nutch-test/filesystem/data/*

$ ./bin/hadoop namenode -format
Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or N) Y

$ ./bin/start-all.sh
starting namenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
localhost: starting datanode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
localhost: starting secondarynamenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
starting jobtracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
localhost: starting tasktracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out

$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
13490 JobTracker
13810 Jps
13074 DataNode
12801 NameNode
13396 SecondaryNameNode
13740 TaskTracker

$ ./bin/hadoop dfs -put urlsdir/urllist.txt urlsdir

$ ./bin/hadoop dfs -ls

Found 1 items
-rw-r--r--   1 mahmood supergroup         25 2014-04-05 08:23 /user/mahmood/urlsdir
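
As one further check, catting the file back out of HDFS should print the
single URL that was in the local urllist.txt:

$ ./bin/hadoop dfs -cat urlsdir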


 
Regards,
Mahmood


On Saturday, April 5, 2014 8:14 AM, Jitendra Yadav <je...@gmail.com> wrote:
 
>Shut down all the hadoop processes and then remove everything from
>/data/mahmood/nutch-test/filesystem/name/ and /data/mahmood/nutch-test/filesystem/data/,
>then format the namenode; now you can start the cluster as normal.
>
>Note: Make sure you back up any critical data before cleaning these directories.
>
>Thanks
>Jitendra

Re: IOException when using "dfs -put"

Posted by Jitendra Yadav <je...@gmail.com>.
Shut down all the hadoop processes and then remove everything from
/data/mahmood/nutch-test/filesystem/name/ and
/data/mahmood/nutch-test/filesystem/data/,
then format the namenode; now you can start the cluster as normal.

Note: Make sure you back up any critical data before cleaning these
directories.
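
As a concrete command sequence for the layout in this thread (a sketch;
substitute your own dfs.name.dir and dfs.data.dir locations if they differ):

$ ./bin/stop-all.sh
$ rm -rf /data/mahmood/nutch-test/filesystem/name/*
$ rm -rf /data/mahmood/nutch-test/filesystem/data/*
$ ./bin/hadoop namenode -format
$ ./bin/start-all.sh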

Thanks
Jitendra


On Fri, Apr 4, 2014 at 8:37 PM, Mahmood Naderan <nt...@yahoo.com>wrote:

> I am trying to setup Nutch+Hadoop from this tutorial http://bit.ly/1iluBPI
> until the section "deploying nutch on multiple machines". So yes, currently
> I am working with a single node.
>
>
>
> >This could be because you re - formatted the Name Node and the
> >versions are not matching. Your Data Mode would then be rejected
> >by the Name Node.
> What do you mean by version mismatch as a result of reformatting?
>
>
>
> >can you check the namenode log and see its up and running
>
> $ cat /data/mahmood/nutch-test/search/logs/hadoop.log
> 2014-04-05 08:02:05,693 ERROR namenode.NameNode - java.io.IOException:
> Cannot lock storage /data/mahmood/nutch-test/filesystem/name. The directory
> is already locked.
>
> $ cat /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.log
> 2014-04-05 08:01:50,787 ERROR datanode.DataNode - java.io.IOException:
> Incompatible namespaceIDs in /data/mahmood/nutch-test/filesystem/data:
> namenode namespaceID = 251177003; datanode namespaceID = 865186086
>
> I don't see any namenode.log at the moment
>
>
>
> >Use jps and check what all processes are running
> $ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
> 19563 JobTracker
> 19463 SecondaryNameNode
> 18955 NameNode
> 10644 Jps
> 19814 TaskTracker
>
>
> Regards,
> Mahmood
>
>
> On Saturday, April 5, 2014 3:39 AM, Chris Mawata <ch...@gmail.com>
> wrote:
>  >How many machines do you have? This could be because you re - formatted
> the Name Node and the >versions are not matching. Your Data Mode would then
> be rejected by the Name Node.
> >Chris
> >On Apr 4, 2014 2:58 PM, "Jitendra Yadav" <je...@gmail.com>
> wrote:
>
> >Use jps and check what all processes are running, is this a single node
> cluster?
>
>
>
>

Re: IOException when using "dfs -put"

Posted by Mahmood Naderan <nt...@yahoo.com>.
I am trying to set up Nutch+Hadoop from this tutorial http://bit.ly/1iluBPI, up to the section "deploying nutch on multiple machines". So yes, currently I am working with a single node.



>This could be because you re-formatted the Name Node and the versions are
>not matching. Your Data Node would then be rejected by the Name Node.
What do you mean by version mismatch as a result of reformatting?



>can you check the namenode log and see its up and running

$ cat /data/mahmood/nutch-test/search/logs/hadoop.log
2014-04-05 08:02:05,693 ERROR namenode.NameNode - java.io.IOException: Cannot lock storage /data/mahmood/nutch-test/filesystem/name. The directory is already locked.


$ cat /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.log
2014-04-05 08:01:50,787 ERROR datanode.DataNode - java.io.IOException: Incompatible namespaceIDs in /data/mahmood/nutch-test/filesystem/data: namenode namespaceID = 251177003; datanode namespaceID = 865186086


I don't see any namenode.log at the moment



>Use jps and check what all processes are running
$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
19563 JobTracker
19463 SecondaryNameNode
18955 NameNode
10644 Jps
19814 TaskTracker


 
Regards,
Mahmood


On Saturday, April 5, 2014 3:39 AM, Chris Mawata <ch...@gmail.com> wrote:
 
>How many machines do you have? This could be because you re - formatted the Name Node and the >versions are not matching. Your Data Mode would then be rejected by the Name Node.
>Chris
>On Apr 4, 2014 2:58 PM, "Jitendra Yadav" <je...@gmail.com> wrote:

>Use jps and check what all processes are running, is this a single node cluster? 
>
>
>

Re: IOException when using "dfs -put"

Posted by Chris Mawata <ch...@gmail.com>.
How many machines do you have? This could be because you re-formatted the
Name Node and the versions are not matching. Your Data Node would then be
rejected by the Name Node.
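One way to see such a mismatch directly (a sketch assuming the storage layout
from this thread, where each storage directory records its IDs in a
current/VERSION file):

$ grep namespaceID /data/mahmood/nutch-test/filesystem/name/current/VERSION
$ grep namespaceID /data/mahmood/nutch-test/filesystem/data/current/VERSION

If the two namespaceIDs differ, the Name Node will reject the Data Node when
it tries to register.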
Chris
On Apr 4, 2014 2:58 PM, "Jitendra Yadav" <je...@gmail.com> wrote:

> Use jps and check what all processes are running, is this a single node
> cluster?
>
>
> On Fri, Apr 4, 2014 at 11:54 AM, Mahmood Naderan <nt...@yahoo.com>wrote:
>
>> Strange! See the output
>>
>> $ ./search/bin/hadoop namenode -format
>> Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or
>> N) Y
>>
>> $ ./search/bin/start-all.sh
>> starting namenode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
>> localhost: starting datanode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
>> localhost: starting secondarynamenode, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
>> starting jobtracker, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
>> localhost: starting tasktracker, logging to
>> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out
>>
>>
>> $ ./search/bin/hadoop dfsadmin -report
>> Configured Capacity: 0 (0 KB)
>> Present Capacity: 0 (0 KB)
>> DFS Remaining: 0 (0 KB)
>> DFS Used: 0 (0 KB)
>> DFS Used%: �%
>> Under replicated blocks: 0
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>>
>> -------------------------------------------------
>> Datanodes available: 0 (0 total, 0 dead)
>>
>>
>> Regards,
>> Mahmood
>>   On Friday, April 4, 2014 11:09 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>   Yes, run that command and check whether you have any live datanode.
>>
>> Thanks
>> jitendra
>>
>>
>> On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <nt...@yahoo.com>wrote:
>>
>> Jitendra,
>> For the first part, can explain how?
>> For the second part, do you mean "hadoop dfsadmin -report"?
>>
>> Regards,
>> Mahmood
>>
>>
>>   On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>  >Can you check total running datanodes in your cluster and also
>> free hdfs  space?
>> >
>> >
>> >Thanks
>> >Jitendra
>>
>>
>>
>>
>>
>>
>>
>>
>

Re: IOException when using "dfs -put"

Posted by Jitendra Yadav <je...@gmail.com>.
Use jps and check which processes are running. Is this a single-node
cluster?


On Fri, Apr 4, 2014 at 11:54 AM, Mahmood Naderan <nt...@yahoo.com>wrote:

> Strange! See the output
>
> $ ./search/bin/hadoop namenode -format
> Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or
> N) Y
>
> $ ./search/bin/start-all.sh
> starting namenode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
> localhost: starting datanode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
> localhost: starting secondarynamenode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
> starting jobtracker, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
> localhost: starting tasktracker, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out
>
>
> $ ./search/bin/hadoop dfsadmin -report
> Configured Capacity: 0 (0 KB)
> Present Capacity: 0 (0 KB)
> DFS Remaining: 0 (0 KB)
> DFS Used: 0 (0 KB)
> DFS Used%: �%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 0 (0 total, 0 dead)
>
>
> Regards,
> Mahmood
>   On Friday, April 4, 2014 11:09 PM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>  Yes, run that command and check whether you have any live datanode.
>
> Thanks
> jitendra
>
>
> On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <nt...@yahoo.com>wrote:
>
> Jitendra,
> For the first part, can explain how?
> For the second part, do you mean "hadoop dfsadmin -report"?
>
> Regards,
> Mahmood
>
>
>   On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>  >Can you check total running datanodes in your cluster and also
> free hdfs  space?
> >
> >
> >Thanks
> >Jitendra
>
>
>
>
>
>
>
>

Re: IOException when using "dfs -put"

Posted by Mahmood Naderan <nt...@yahoo.com>.
Strange! See the output

$ ./search/bin/hadoop namenode -format
Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or N) Y


$ ./search/bin/start-all.sh
starting namenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
localhost: starting datanode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
localhost: starting secondarynamenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
starting jobtracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
localhost: starting tasktracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out



$ ./search/bin/hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: �%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)



 
Regards,
Mahmood
On Friday, April 4, 2014 11:09 PM, Jitendra Yadav <je...@gmail.com> wrote:
 
Yes, run that command and check whether you have any live datanode.

Thanks
jitendra



On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <nt...@yahoo.com> wrote:

Jitendra,
>For the first part, can explain how?
>For the second part, do you mean "hadoop dfsadmin -report"?
>
> 
>Regards,
>Mahmood
>
>
>
>
>On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <je...@gmail.com> wrote:
> 
>>Can you check total running datanodes in your cluster and also free hdfs  space?
>>
>
>>
>
>>Thanks
>>Jitendra
>
>
>
>
>

Re: IOException when using "dfs -put"

Posted by Jitendra Yadav <je...@gmail.com>.
Yes, run that command and check whether you have any live datanode.

Thanks
jitendra


On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <nt...@yahoo.com> wrote:

> Jitendra,
> For the first part, can explain how?
> For the second part, do you mean "hadoop dfsadmin -report"?
>
> Regards,
> Mahmood
>
>
>   On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>  >Can you check total running datanodes in your cluster and also
> free hdfs  space?
> >
> >
> >Thanks
> >Jitendra
>
>
>
>
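
Live datanodes are listed at the bottom of that report, and the summary line alone answers the question; a quick sketch, assuming the same ./search layout used earlier in the thread:

$ ./search/bin/hadoop dfsadmin -report | grep "Datanodes available"
Datanodes available: 0 (0 total, 0 dead)

A healthy single-node setup reports 1 total here; 0 means no datanode has registered with the namenode.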
>

Re: IOException when using "dfs -put"

Posted by Mahmood Naderan <nt...@yahoo.com>.
Jitendra,
For the first part, can you explain how?
For the second part, do you mean "hadoop dfsadmin -report"?

 
Regards,
Mahmood


On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <je...@gmail.com> wrote:
 
>Can you check total running datanodes in your cluster and also free hdfs  space?
>

>

>Thanks
>Jitendra
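
For reference, both checks can be read off the same report; a sketch of how each figure might be pulled out (the grep patterns assume the report format shown elsewhere in the thread):

$ hadoop dfsadmin -report | grep "Datanodes available"   # running datanodes
$ hadoop dfsadmin -report | grep "DFS Remaining"         # free HDFS space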

Re: IOException when using "dfs -put"

Posted by Jitendra Yadav <je...@gmail.com>.
Can you check total running datanodes in your cluster and also free hdfs space?


Thanks
Jitendra
On Fri, Apr 4, 2014 at 9:53 AM, Mahmood Naderan <nt...@yahoo.com> wrote:

> Hi,
> I want to put a file from local FS to HDFS but at the end I get an error
> message and the copied file has zero size. Can someone help with an idea?
>
>
> $ cat urlsdir/urllist.txt
> http://lucene.apache.org
>
> $ hadoop dfs -put urlsdir/urllist.txt urlsdir
> put: java.io.IOException: File /user/mahmood/urlsdir could only be
> replicated to 0 nodes, instead of 1
>
> $ hadoop dfs -ls
> Found 1 items
> -rw-r--r--   1 mahmood supergroup          0 2014-04-04 21:20
> /user/mahmood/urlsdir
>
>
> Regards,
> Mahmood
>
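
The "could only be replicated to 0 nodes" IOException means the namenode accepted the file's metadata (hence the zero-byte entry in the listing) but found no datanode on which to place its first block. Once dfsadmin -report shows a live datanode, a plausible retry is to remove the empty file and repeat the copy:

$ hadoop dfs -rm /user/mahmood/urlsdir
$ hadoop dfs -put urlsdir/urllist.txt urlsdir
$ hadoop dfs -ls urlsdir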
