Posted to common-user@hadoop.apache.org by Jean-Daniel Cryans <jd...@apache.org> on 2010/01/13 18:37:32 UTC

Re: problem regarding hadoop

This is probably a question better suited to common-user than to hbase.

But to answer your problem: your JobTracker is able to talk to your
Namenode, but there is something wrong with the Datanode; you should
grep its log for any exceptions.
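
For example, something like this (the log path below is illustrative;
adjust it to wherever your installation writes its logs):

    grep -i exception /path/to/hadoop/logs/hadoop-*-datanode-*.log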

J-D

On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar <mu...@gmail.com> wrote:
> Hi, I am running Hadoop 0.20.1 on a single node and I am getting a problem.
> My hdfs-site configurations are:
> <configuration>
> <property>
>    <name>dfs.replication</name>
>    <value>1</value>
>  </property>
> <property>
>  <name>hadoop.tmp.dir</name>
>  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>  <description>A base for other temporary directories.</description>
> </property>
> </configuration>
>
>
> and my core-site configurations are:
> <configuration>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://localhost:54310</value>
>  </property>
> <property>
>  <name>hadoop.tmp.dir</name>
>  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>  <description>A base for other temporary directories.</description>
> </property>
> </configuration>
>
>
> The problem is that the JobTracker log file says:
>
> 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
> configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=JobTracker, port=54311
> 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50030
> 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50030
> webServer.getConnectors()[0].getLocalPort() returned 50030
> 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty bound
> to port 50030
> 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
> 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50030
> 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=JobTracker, sessionId=
> 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
> up at: 54311
> 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
> webserver: 50030
> 2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
> up the system directory
> 2010-01-13 16:00:51,643 INFO
> org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
> inactive
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
> Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> jobtracker.info could only be replicated to 0 nodes, instead of 1
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>    at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>    at org.apache.hadoop.ipc.Client.call(Client.java:739)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>    at $Proxy4.addBlock(Unknown Source)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy4.addBlock(Unknown Source)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block null bad datanode[0] nodes == null
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Could not get
> block locations. Source file
> "/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> jobtracker.info" - Aborting...
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: Writing to
> file
> hdfs://localhost:54310/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> jobtracker.info failed!
> 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: FileSystem
> is not ready yet!
> 2010-01-13 16:00:51,679 WARN org.apache.hadoop.mapred.JobTracker: Failed to
> initialize recovery manager.
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> jobtracker.info could only be replicated to 0 nodes, instead of 1
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>    at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>    at org.apache.hadoop.ipc.Client.call(Client.java:739)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>    at $Proxy4.addBlock(Unknown Source)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy4.addBlock(Unknown Source)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>
>
>
> I checked with jps and it says these processes are running:
>
> 15030 SecondaryNameNode
> 14904 DataNode
> 15129 JobTracker
> 15231 TaskTracker
> 14787 NameNode
>
> but the log file has errors. Can anyone tell me what the problem is?
>

Re: problem regarding hadoop

Posted by Jean-Daniel Cryans <jd...@apache.org>.
There seems to be a mismatch between the HBase versions you are using.
In particular, there is a known bug when using HBase 0.20.0 with
0.20.1 and 0.20.2. The best fix is to just upgrade to 0.20.2.

J-D

On Thu, Jan 14, 2010 at 12:11 AM, Muhammad Mudassar
<mu...@gmail.com> wrote:
> Basically I am trying to create a table in HBase using *HBaseAdmin* from a
> Java program, but I am running into trouble: the table is created, but it
> does not store anything. When I use *batchUpdate.put* to insert anything,
> the exception shown in the IDE is:
>
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
>        at $Proxy1.getRegionInfo(Unknown Source)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:795)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:465)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:474)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:478)
>        at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
>        at
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:159)
>
> Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> java.lang.NoSuchMethodException:
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRow([B)
>        at java.lang.Class.getMethod(Class.java:1605)
>        at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:627)
>        at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)
>
>        at
> org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:701)
>        at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:321)
>        ... 14 more
> Java Result: 1
>
> When I checked the logs of the HBase master...
>
On Wed, Jan 13, 2010 at 10:37 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
>
>> This is probably a question better suited to common-user than to hbase.
>>
>> But to answer your problem: your JobTracker is able to talk to your
>> Namenode, but there is something wrong with the Datanode; you should
>> grep its log for any exceptions.
>>
>> J-D
>>
>> On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar <mu...@gmail.com>
>> wrote:
>> > Hi, I am running Hadoop 0.20.1 on a single node and I am getting a
>> > problem.
>> > My hdfs-site configurations are:
>> > <configuration>
>> > <property>
>> >    <name>dfs.replication</name>
>> >    <value>1</value>
>> >  </property>
>> > <property>
>> >  <name>hadoop.tmp.dir</name>
>> >  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>> >  <description>A base for other temporary directories.</description>
>> > </property>
>> > </configuration>
>> >
>> >
>> > and my core-site configurations are:
>> > <configuration>
>> >  <property>
>> >    <name>fs.default.name</name>
>> >    <value>hdfs://localhost:54310</value>
>> >  </property>
>> > <property>
>> >  <name>hadoop.tmp.dir</name>
>> >  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
>> >  <description>A base for other temporary directories.</description>
>> > </property>
>> > </configuration>
>> >
>> >
>> > The problem is that the JobTracker log file says:
>> >
>> > 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker:
>> Scheduler
>> > configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>> > limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>> > 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> > Initializing RPC Metrics with hostName=JobTracker, port=54311
>> > 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port
>> > returned by webServer.getConnectors()[0].getLocalPort() before open() is
>> -1.
>> > Opening the listener on 50030
>> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer:
>> > listener.getLocalPort() returned 50030
>> > webServer.getConnectors()[0].getLocalPort() returned 50030
>> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty
>> bound
>> > to port 50030
>> > 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
>> > 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:50030
>> > 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> > Initializing JVM Metrics with processName=JobTracker, sessionId=
>> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker:
>> JobTracker
>> > up at: 54311
>> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker:
>> JobTracker
>> > webserver: 50030
>> > 2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker:
>> Cleaning
>> > up the system directory
>> > 2010-01-13 16:00:51,643 INFO
>> > org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
>> > inactive
>> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient:
>> DataStreamer
>> > Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> File
>> > /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
>> > jobtracker.info could only be replicated to 0 nodes, instead of 1
>> >    at
>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>> >    at
>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> >
>> >    at org.apache.hadoop.ipc.Client.call(Client.java:739)
>> >    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> >    at $Proxy4.addBlock(Unknown Source)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>> >    at
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>> >    at $Proxy4.addBlock(Unknown Source)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>> >
>> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Error
>> > Recovery for block null bad datanode[0] nodes == null
>> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Could not
>> get
>> > block locations. Source file
>> > "/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
>> > jobtracker.info" - Aborting...
>> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: Writing
>> to
>> > file
>> >
>> hdfs://localhost:54310/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
>> > jobtracker.info failed!
>> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker:
>> FileSystem
>> > is not ready yet!
>> > 2010-01-13 16:00:51,679 WARN org.apache.hadoop.mapred.JobTracker: Failed
>> to
>> > initialize recovery manager.
>> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> > /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
>> > jobtracker.info could only be replicated to 0 nodes, instead of 1
>> >    at
>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>> >    at
>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> >
>> >    at org.apache.hadoop.ipc.Client.call(Client.java:739)
>> >    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> >    at $Proxy4.addBlock(Unknown Source)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>> >    at
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>> >    at $Proxy4.addBlock(Unknown Source)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>> >    at
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>> >
>> >
>> >
>> > I checked with jps and it says these processes are running:
>> >
>> > 15030 SecondaryNameNode
>> > 14904 DataNode
>> > 15129 JobTracker
>> > 15231 TaskTracker
>> > 14787 NameNode
>> >
>> > but the log file has errors. Can anyone tell me what the problem is?
>> >
>>
>

Re: problem regarding hadoop

Posted by Muhammad Mudassar <mu...@gmail.com>.
Basically I am trying to create a table in HBase using *HBaseAdmin* from a
Java program, but I am running into trouble: the table is created, but it
does not store anything. When I use *batchUpdate.put* to insert anything,
the exception shown in the IDE is:

Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at $Proxy1.getRegionInfo(Unknown Source)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:795)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:465)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:474)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:515)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:478)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:440)
        at
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:159)

Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
java.lang.NoSuchMethodException:
org.apache.hadoop.hbase.regionserver.HRegionServer.getRow([B)
        at java.lang.Class.getMethod(Class.java:1605)
        at
org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:627)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)

        at
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:701)
        at
org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:321)
        ... 14 more
Java Result: 1

When I checked the logs of the HBase master...
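
For reference, here is a minimal sketch of the kind of client code being
described above, assuming the 0.20-era HBase API; the table name, column
family, row key, and value are all hypothetical:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.BatchUpdate;

    public class HBasePutSketch {
        public static void main(String[] args) throws Exception {
            HBaseConfiguration conf = new HBaseConfiguration();

            // Create the table through HBaseAdmin if it does not exist yet.
            HBaseAdmin admin = new HBaseAdmin(conf);
            if (!admin.tableExists("mytable")) {
                HTableDescriptor desc = new HTableDescriptor("mytable");
                desc.addFamily(new HColumnDescriptor("cf"));
                admin.createTable(desc);
            }

            // Insert one cell with the old BatchUpdate API.
            HTable table = new HTable(conf, "mytable");
            BatchUpdate update = new BatchUpdate("row1");
            update.put("cf:col1", "value1".getBytes());
            table.commit(update);
        }
    }

If the client and server jars come from different HBase versions, calls
like these can fail over RPC with a NoSuchMethodException, which matches
the trace above.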

On Wed, Jan 13, 2010 at 10:37 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:

> This is probably a question better suited to common-user than to hbase.
>
> But to answer your problem: your JobTracker is able to talk to your
> Namenode, but there is something wrong with the Datanode; you should
> grep its log for any exceptions.
>
> J-D
>
> On Wed, Jan 13, 2010 at 3:11 AM, Muhammad Mudassar <mu...@gmail.com>
> wrote:
> > Hi, I am running Hadoop 0.20.1 on a single node and I am getting a
> > problem.
> > My hdfs-site configurations are:
> > <configuration>
> > <property>
> >    <name>dfs.replication</name>
> >    <value>1</value>
> >  </property>
> > <property>
> >  <name>hadoop.tmp.dir</name>
> >  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
> >  <description>A base for other temporary directories.</description>
> > </property>
> > </configuration>
> >
> >
> > and my core-site configurations are:
> > <configuration>
> >  <property>
> >    <name>fs.default.name</name>
> >    <value>hdfs://localhost:54310</value>
> >  </property>
> > <property>
> >  <name>hadoop.tmp.dir</name>
> >  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
> >  <description>A base for other temporary directories.</description>
> > </property>
> > </configuration>
> >
> >
> > The problem is that the JobTracker log file says:
> >
> > 2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker:
> Scheduler
> > configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> > limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > 2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> > Initializing RPC Metrics with hostName=JobTracker, port=54311
> > 2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port
> > returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1.
> > Opening the listener on 50030
> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer:
> > listener.getLocalPort() returned 50030
> > webServer.getConnectors()[0].getLocalPort() returned 50030
> > 2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound
> > to port 50030
> > 2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
> > 2010-01-13 16:00:51,429 INFO org.mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:50030
> > 2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> > Initializing JVM Metrics with processName=JobTracker, sessionId=
> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker:
> JobTracker
> > up at: 54311
> > 2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker:
> JobTracker
> > webserver: 50030
> > 2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker:
> Cleaning
> > up the system directory
> > 2010-01-13 16:00:51,643 INFO
> > org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
> > inactive
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient:
> DataStreamer
> > Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> File
> > /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> > jobtracker.info could only be replicated to 0 nodes, instead of 1
> >    at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
> >    at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >    at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >    at java.security.AccessController.doPrivileged(Native Method)
> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> >    at org.apache.hadoop.ipc.Client.call(Client.java:739)
> >    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >    at $Proxy4.addBlock(Unknown Source)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >    at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >    at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >    at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >    at $Proxy4.addBlock(Unknown Source)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> >
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Error
> > Recovery for block null bad datanode[0] nodes == null
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Could not
> get
> > block locations. Source file
> > "/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> > jobtracker.info" - Aborting...
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: Writing
> to
> > file
> >
> hdfs://localhost:54310/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> > jobtracker.info failed!
> > 2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker:
> FileSystem
> > is not ready yet!
> > 2010-01-13 16:00:51,679 WARN org.apache.hadoop.mapred.JobTracker: Failed
> to
> > initialize recovery manager.
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > /home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
> > jobtracker.info could only be replicated to 0 nodes, instead of 1
> >    at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
> >    at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >    at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >    at java.security.AccessController.doPrivileged(Native Method)
> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> >    at org.apache.hadoop.ipc.Client.call(Client.java:739)
> >    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >    at $Proxy4.addBlock(Unknown Source)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >    at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >    at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >    at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >    at $Proxy4.addBlock(Unknown Source)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
> >    at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> >
> >
> >
> > I checked with jps and it says these processes are running:
> >
> > 15030 SecondaryNameNode
> > 14904 DataNode
> > 15129 JobTracker
> > 15231 TaskTracker
> > 14787 NameNode
> >
> > but the log file has errors. Can anyone tell me what the problem is?
> >
>

Re: memory for map task

Posted by Todd Lipcon <to...@cloudera.com>.
More or less correct, minus some wiggle room for the JVM itself, accounting
structures, thread stacks, etc.
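
To make the arithmetic concrete, here is a sketch of the two settings with
the numbers from the question (the property names are the real 0.20-era
ones; the values are just the example figures):

    <!-- mapred-site.xml, illustrative values only -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx200m</value>  <!-- heap for each task JVM -->
    </property>
    <property>
      <name>io.sort.mb</name>
      <value>100</value>  <!-- MB of that heap used by the map-side sort buffer -->
    </property>

With these values, roughly 200 - 100 = 100 MB of heap remains for the map
function itself, minus the overhead mentioned above.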

-Todd

2010/1/13 Gang Luo <lg...@yahoo.com.cn>

> Hi all,
> the parameter mapred.child.java.opts defines the memory for each task (e.g.
> 200 MB), while io.sort.mb defines the memory size for sorting (e.g. 100 MB).
> So, for each map task, the task memory minus the sort memory is the maximum
> memory the map function can use. Is that correct?
>
> -Gang
>

memory for map task

Posted by Gang Luo <lg...@yahoo.com.cn>.
Hi all,
the parameter mapred.child.java.opts defines the memory for each task (e.g. 200 MB), while io.sort.mb defines the memory size for sorting (e.g. 100 MB). So, for each map task, the task memory minus the sort memory is the maximum memory the map function can use. Is that correct?

-Gang


