Posted to hdfs-user@hadoop.apache.org by Arindam Choudhury <ar...@gmail.com> on 2013/03/04 15:42:47 UTC

jobtracker.info could only be replicated to 0 nodes, instead of 1

Hi,

I think it's a well-known and old problem, but the reformatting trick
failed to work for me.

I can start the namenode and datanode properly.

hadoop dfsadmin -report
Configured Capacity: 21740363776 (20.25 GB)
Present Capacity: 14260768768 (13.28 GB)
DFS Remaining: 14260740096 (13.28 GB)
DFS Used: 28672 (28 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 192.168.122.32:50010
Decommission Status : Normal
Configured Capacity: 21740363776 (20.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 7479595008 (6.97 GB)
DFS Remaining: 14260740096(13.28 GB)
DFS Used%: 0%
DFS Remaining%: 65.6%
Last contact: Mon Mar 04 15:36:47 CET 2013

But when I try to start the jobtracker, I get the following error:

13/03/04 15:39:20 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/home/hadoop/hadoop-dir/system-dir/jobtracker.info could only be replicated
to 0 nodes, instead of 1
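
A minimal set of checks for this symptom, assuming a stock Hadoop 1.x tarball
install with configuration under $HADOOP_HOME/conf (the paths and file names
below are illustrative, not taken from this setup):

# Confirm the configured HDFS block size is sane (the 1.x default is 64 MB):
grep -A 1 "dfs.block.size" $HADOOP_HOME/conf/hdfs-site.xml
# Look at the DataNode log for the reason it refused the block:
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
# Check overall HDFS health:
hadoop fsck /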

Re: jobtracker.info could only be replicated to 0 nodes, instead of 1

Posted by Arindam Choudhury <ar...@gmail.com>.
Solved.
I have a virtual machine with 2 GB of memory, so I cannot use a block size
of 4 GB.
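
For reference, a minimal hdfs-site.xml sketch that sets the block size back to
the Hadoop 1.x default of 64 MB (the property name and value below are the
stock defaults, not copied from this cluster's configuration):

<property>
  <name>dfs.block.size</name>
  <!-- 67108864 bytes = 64 MB, the 1.x default; keep it well below what a
       single DataNode can actually hold -->
  <value>67108864</value>
</property>

The setting is read by the writing client at file-creation time, so the daemon
doing the write (here, the JobTracker) needs to be restarted with the corrected
value; files already in HDFS keep the block size they were written with.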


On Mon, Mar 4, 2013 at 3:42 PM, Arindam Choudhury <
arindamchoudhury0@gmail.com> wrote:

> Hi,
>
> I think it's a well-known and old problem, but the reformatting trick
> failed to work for me.
>
> I can start the namenode and datanode properly.
>
> hadoop dfsadmin -report
> Configured Capacity: 21740363776 (20.25 GB)
> Present Capacity: 14260768768 (13.28 GB)
> DFS Remaining: 14260740096 (13.28 GB)
> DFS Used: 28672 (28 KB)
> DFS Used%: 0%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
>
> Name: 192.168.122.32:50010
> Decommission Status : Normal
> Configured Capacity: 21740363776 (20.25 GB)
> DFS Used: 28672 (28 KB)
> Non DFS Used: 7479595008 (6.97 GB)
> DFS Remaining: 14260740096(13.28 GB)
> DFS Used%: 0%
> DFS Remaining%: 65.6%
> Last contact: Mon Mar 04 15:36:47 CET 2013
>
> But when I try to start the jobtracker, I get the following error:
>
> 13/03/04 15:39:20 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /home/hadoop/hadoop-dir/system-dir/jobtracker.info could only be
> replicated to 0 nodes, instead of 1
>
