Posted to user@hbase.apache.org by Fei Dong <do...@gmail.com> on 2012/01/24 02:32:40 UTC

NoClassDefFoundError when running Hadoop with HBase

Hello guys,

I set up Hadoop and HBase on EC2. My settings are as follows:
Apache Official Version
Hadoop 0.20.203.0
HBase 0.90.4
1 master node for Hadoop and HBase, 1 tasktracker/regionserver node for
Hadoop/HBase.

I have already set HADOOP_CLASSPATH in hadoop-env.sh:

export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
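(For reference: since the unversioned jar names above rely on symlinks, a hedged alternative is to glob the real, versioned jar names into HADOOP_CLASSPATH. The paths and versions below are illustrative assumptions based on this thread, not the poster's actual layout.)

```shell
# hadoop-env.sh sketch: pick up the versioned jars directly.
# HBASE_HOME and the jar file names are assumptions for this setup.
HBASE_HOME=${HBASE_HOME:-/usr/local/hbase-0.90.4}
for jar in "$HBASE_HOME"/hbase-*.jar "$HBASE_HOME"/lib/zookeeper-*.jar; do
  if [ -e "$jar" ]; then
    HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$jar"
  fi
done
export HADOOP_CLASSPATH
```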

Then I tested that HBase can create tables through the Java client and
that the Hadoop framework works (I successfully ran a MapReduce program
to generate data).

The problems I am hitting:

1) A MapReduce job failed.

JobTracker shows:
"""
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
connect to ZooKeeper but the connection closes immediately. This could
be a sign that the server has too many connections (30 is the
default). Consider inspecting your ZK server logs for that error and
then make sure you are reusing HBaseConfiguration as often as you can.
See HTable's javadoc for more information.
       at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
       at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
"""

2) When running another MapReduce job:

/usr/local/hadoop-0.20.203.0/bin/hadoop jar
./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
-filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
-alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
-allLookup -clean -cleanResultsTable

JobTracker shows error:
""
12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
attempt_201201212243_0009_m_000174_0, Status : FAILED
java.lang.Throwable: Child Error
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
"""

TaskTracker log:
"""
Could not find the main class: .  Program will exit.
Exception in thread "main" java.lang.NoClassDefFoundError:
Caused by: java.lang.ClassNotFoundException:
       at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
       at java.security.AccessController.doPrivileged(Native Method)
       at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
       at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
       at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: .  Program will exit.
"""

The real entry point is main() in SmartRunner.class:
 jar tf ./bin/../dist/xxxxxx.jar|grep SmartRunner
pMapReduce/SmartRunner.class
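(For reference, one common fix for task-side classpath problems: Hadoop adds any jars found under lib/ inside the submitted job jar to each task's classpath, so the HBase and ZooKeeper jars can be bundled there when building the job jar. The staging path, HBASE_HOME, and jar versions below are assumptions, not details from this thread.)

```shell
# Sketch: stage dependency jars under lib/ before repackaging the job jar.
HBASE_HOME=${HBASE_HOME:-/usr/local/hbase-0.90.4}
STAGE=build/jobjar
mkdir -p "$STAGE/lib"
cp "$HBASE_HOME"/hbase-*.jar "$HBASE_HOME"/lib/zookeeper-*.jar "$STAGE/lib/" 2>/dev/null || true
# then repackage with the JDK's jar tool, e.g.:
#   (cd "$STAGE" && jar cf ../../dist/xxxxxx.jar .)
```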

Can anyone help? Thanks a lot.
-- 
Best Regards,
--
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Fei Dong <do...@gmail.com>.
Hi Stack,

I tried ./zkCli.sh -server 10.114.45.186:2181 and it works. I will list
hbase-site.xml, the ZooKeeper log, and the Hadoop logs below. Could you
take a look? Thanks.

 hbase-site.xml:
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.114.45.186</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/mnt/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>1000</value>
  </property>

ZooKeeper log shows:
2012-01-24 16:17:35,665 INFO
org.apache.zookeeper.server.NIOServerCnxn: Accepted socket connection
from /127.0.0.1:41807
2012-01-24 16:17:35,668 INFO
org.apache.zookeeper.server.NIOServerCnxn: Client attempting to
establish new session at /127.0.0.1:41807
2012-01-24 16:17:35,670 INFO
org.apache.zookeeper.server.NIOServerCnxn: Established session
0x13510f94d2d002c with negotiated timeout 180000 for client
/127.0.0.1:41807
2012-01-24 16:19:12,313 WARN
org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException:
Unable to read additional data from client sessionid
0x13510f94d2d002c, likely client has closed socket
2012-01-24 16:19:12,315 INFO
org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection
for client /127.0.0.1:41807 which had sessionid 0x13510f94d2d002c


Hadoop Information and Errors:
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/usr/local/hadoop-0.20.205.0/libexec/../lib
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=<NA>
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client
environment:os.version=2.6.35.6-48.fc14.x86_64
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client environment:user.name=root
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/root/test/Alidade2
12/01/24 16:17:35 INFO zookeeper.ZooKeeper: Initiating client
connection, connectString=localhost:2181 sessionTimeout=180000
watcher=hconnection
12/01/24 16:17:35 INFO zookeeper.ClientCnxn: Opening socket connection
to server localhost/127.0.0.1:2181
12/01/24 16:17:35 INFO zookeeper.ClientCnxn: Socket connection
established to localhost/127.0.0.1:2181, initiating session
12/01/24 16:17:35 INFO zookeeper.ClientCnxn: Session establishment
complete on server localhost/127.0.0.1:2181, sessionid =
0x13510f94d2d002c, negotiated timeout = 180000
Submitting job
12/01/24 16:17:37 INFO mapred.JobClient: Running job: job_201201241325_0027
12/01/24 16:17:39 INFO mapred.JobClient:  map 0% reduce 0%
12/01/24 16:18:03 INFO mapred.JobClient: Task Id :
attempt_201201241325_0027_m_000000_0, Status : FAILED
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
connect to ZooKeeper but the connection closes immediately. This could
be a sign that the server has too many connections (30 is the
default). Consider inspecting your ZK server logs for that error and
then make sure you are reusing HBaseConfiguration as often as you can.
See HTable's javadoc for more information.
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
	at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:147)
	at Common.DB.LandmarkDB.<init>(LandmarkDB.java:73)
	at Common.DB.LandmarkDB.getInstance(LandmarkDB.java:54)
	at Common.Data.ObservationRecord.init(ObservationRecord.java:301)
	at pMapReduce.Map.RawInputMapper.setup(RawInputMapper.java:45)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:837)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:903)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:133)
	... 18 more

attempt_201201241325_0027_m_000000_0:  INIT: We are starting with
85602712 bytes free, 416284672 bytes total
attempt_201201241325_0027_m_000000_0: Subnet Bit len is 0
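(For reference: the client log above shows connectString=localhost:2181, while the configured quorum is 10.114.45.186. A plausible cause, though not confirmed in this thread, is that hbase-site.xml is not on the map task's classpath, so the HBase client falls back to the default of localhost. The tasks would need to see the quorum setting, with the value from this thread:)

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>10.114.45.186</value>
</property>
```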

On Tue, Jan 24, 2012 at 3:10 PM, Fei Dong <do...@gmail.com> wrote:
> Hi Stack
>
> On Tue, Jan 24, 2012 at 12:11 PM, Stack <st...@duboce.net> wrote:
>> On Tue, Jan 24, 2012 at 8:11 AM, Fei Dong <do...@gmail.com> wrote:
>>> It says only Hadoop 0.20.205.x can match?
>>>
>>
>> No.  Also includes 1.0.0 and other hadoop offerings (read through that section)
>>
>>
>>
>>> I did not run any application before, so it should not have concurrent
>>> problem. Then I set it in hbase-site.xml, it still reports such error.
>>>    <name>hbase.zookeeper.property.maxClientCnxns</name>
>>>    <value>1000</value>
>>>
>>
>> Anything in zk logs?  If you connect to it w/ zkcli does it say > 1000
>> connections?
>>
> I test the hbase shell on another machine, which can "put", "get"
> record successfully. So I guess Zookeeper is running.
>
>>
>
> It is weird that it does not mention any path or class name behind
> "NoClassDefFoundError"
> It seems some error occurs when copying jar from JobTracker to
> TaskTracker, or it does not copy.
>
> The task tracker error log:
>
> Exception in thread "main" java.lang.NoClassDefFoundError:
> Caused by: java.lang.ClassNotFoundException:
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> Could not find the main class: .  Program will exit.
>>
>> St.Ack
>
>
>
> --
> Best Regards,
> --
> Fei Dong



-- 
Best Regards,
--
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Fei Dong <do...@gmail.com>.
On Tue, Jan 24, 2012 at 5:01 PM, Stack <st...@duboce.net> wrote:

> On Tue, Jan 24, 2012 at 12:10 PM, Fei Dong <do...@gmail.com> wrote:
> > I test the hbase shell on another machine, which can "put", "get"
> > record successfully. So I guess Zookeeper is running.
> >
>
> Whats difference between two machines?
>
Oh, I launched one master and one slave on EC2, which have the same config.
I ran zkCli.sh on one slave machine and tested the hbase shell, which shows
it can connect to the HBase master node and do put/get operations.


> > It is weird that it does not mention any path or class name behind
> > "NoClassDefFoundError"
> > It seems some error occurs when copying jar from JobTracker to
> > TaskTracker, or it does not copy.
> >
> > The task tracker error log:
> >
> > Exception in thread "main" java.lang.NoClassDefFoundError:
> > Caused by: java.lang.ClassNotFoundException:
>
>
> This says that you likely have mangled CLASSPATH:
>
> http://stackoverflow.com/questions/2159006/noclassdeffounderror-without-any-class-name
>
> Is that possible?
>
Thanks. I commented out the following:

/*
String std_child_opts = "-server " +
    "-XX:+HeapDumpOnOutOfMemoryError " +
    "-XX:+UseConcMarkSweepGC " +
    "-XX:+UseParNewGC ";
    //"-XX:ParallelGCThreads=8";

conf.set("mapred.map.child.java.opts", "-Xmx500m " + std_child_opts);
conf.set("mapred.reduce.child.java.opts", "-Xmx1000m " + std_child_opts);
conf.set("mapred.map.output.compression.codec",
    "org.apache.hadoop.io.compress.SnappyCodec");
conf.set("mapred.output.compression.codec",
    "org.apache.hadoop.io.compress.SnappyCodec");
*/
Then the NoClassDefFoundError disappears; instead it shows:

"HBase is able to
connect to ZooKeeper but the connection closes immediately. This could
be a sign that the server has too many connections (30 is the
default)."

> St.Ack
>
> >        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> >        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> > Could not find the main class: .  Program will exit.
> >>
> >> St.Ack
> >
> >
> >
> > --
> > Best Regards,
> > --
> > Fei Dong
>



-- 
Best Regards,
-- 
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Stack <st...@duboce.net>.
On Tue, Jan 24, 2012 at 12:10 PM, Fei Dong <do...@gmail.com> wrote:
> I test the hbase shell on another machine, which can "put", "get"
> record successfully. So I guess Zookeeper is running.
>

What's the difference between the two machines?

> It is weird that it does not mention any path or class name behind
> "NoClassDefFoundError"
> It seems some error occurs when copying jar from JobTracker to
> TaskTracker, or it does not copy.
>
> The task tracker error log:
>
> Exception in thread "main" java.lang.NoClassDefFoundError:
> Caused by: java.lang.ClassNotFoundException:


This says that you likely have a mangled CLASSPATH:
http://stackoverflow.com/questions/2159006/noclassdeffounderror-without-any-class-name

Is that possible?
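(For reference: the linked post describes how an empty argv token makes the JVM look up an empty class name, hence a NoClassDefFoundError with no name. As a sketch, assuming the child JVM opts string ended up with a doubled space, a Java-style split on single spaces, mimicked here with awk's literal-space field separator, yields an empty token:)

```shell
# '[ ]' forces awk to split on each single space (the bare " " default
# would collapse runs of whitespace), so the doubled space produces an
# empty field, just as splitting the opts string would produce an empty
# argv entry for the child JVM.
printf '%s' '-Xmx500m  -server' |
  awk -F'[ ]' '{for (i = 1; i <= NF; i++) printf "[%s]\n", $i}'
```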

St.Ack

>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> Could not find the main class: .  Program will exit.
>>
>> St.Ack
>
>
>
> --
> Best Regards,
> --
> Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Fei Dong <do...@gmail.com>.
Hi Stack

On Tue, Jan 24, 2012 at 12:11 PM, Stack <st...@duboce.net> wrote:
> On Tue, Jan 24, 2012 at 8:11 AM, Fei Dong <do...@gmail.com> wrote:
>> It says only Hadoop 0.20.205.x can match?
>>
>
> No.  Also includes 1.0.0 and other hadoop offerings (read through that section)
>
>
>
>> I did not run any application before, so it should not have concurrent
>> problem. Then I set it in hbase-site.xml, it still reports such error.
>>    <name>hbase.zookeeper.property.maxClientCnxns</name>
>>    <value>1000</value>
>>
>
> Anything in zk logs?  If you connect to it w/ zkcli does it say > 1000
> connections?
>
I tested the hbase shell on another machine, which can "put" and "get"
records successfully. So I guess ZooKeeper is running.

>

It is weird that it does not mention any path or class name after
"NoClassDefFoundError".
It seems some error occurs when copying the jar from the JobTracker to
the TaskTracker, or it is not copied at all.

The TaskTracker error log:

Exception in thread "main" java.lang.NoClassDefFoundError:
Caused by: java.lang.ClassNotFoundException:
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: .  Program will exit.
>
> St.Ack



-- 
Best Regards,
--
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Stack <st...@duboce.net>.
On Tue, Jan 24, 2012 at 8:11 AM, Fei Dong <do...@gmail.com> wrote:
> It says only Hadoop 0.20.205.x can match?
>

No.  Also includes 1.0.0 and other hadoop offerings (read through that section)



> I did not run any application before, so it should not have concurrent
> problem. Then I set it in hbase-site.xml, it still reports such error.
>    <name>hbase.zookeeper.property.maxClientCnxns</name>
>    <value>1000</value>
>

Anything in zk logs?  If you connect to it w/ zkcli does it say > 1000
connections?



St.Ack

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Harsh J <ha...@cloudera.com>.
Try starting it as a regular, non-root user. This is an issue with the
0.20.205 scripts, perhaps fixed in more recent releases, but you do not
want to be running Hadoop as root anyway.

On Tue, Jan 24, 2012 at 10:22 PM, Fei Dong <do...@gmail.com> wrote:
> When I install hadoop-0.20.205.0 , the namenode can not start.
>
> [root@ip-10-114-45-186 logs]# /usr/local/hadoop-0.20.205.0/bin/start-dfs.sh
> starting namenode, logging to
> /mnt/hadoop/logs/hadoop-root-namenode-ip-10-114-45-186.out
> ip-10-12-55-242.ec2.internal: starting datanode, logging to
> /mnt/hadoop/logs/hadoop-root-datanode-ip-10-12-55-242.out
> ip-10-12-55-242.ec2.internal: Unrecognized option: -jvm
> ip-10-12-55-242.ec2.internal: Could not create the Java virtual machine.
>
> I did not find the place of "-jvm" in config file. Do you misconfig something?
>
> On Tue, Jan 24, 2012 at 11:11 AM, Fei Dong <do...@gmail.com> wrote:
>> Thanks Stack,
>>
>> On Tue, Jan 24, 2012 at 1:07 AM, Stack <st...@duboce.net> wrote:
>>> On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <do...@gmail.com> wrote:
>>>> Hello guys,
>>>>
>>>> I setup a Hadoop and HBase in EC2. My Settings as follows:
>>>> Apache Official Version
>>>> Hadoop 0.20.203.0
>>>
>>> HBase won't work on this version of hadoop.  See
>>> http://hbase.apache.org/book.html#hadoop
>>>
>> It says only Hadoop 0.20.205.x can match?
>>
>>>
>>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
>>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>>>>
>>>
>>> The jars are not normally named as you have them above.  Usually there
>>> is a version on the jar name.
>>>
>> I soft-linked the zookeeper.jar and hbase to the version ones.
>>
>>>
>>>> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
>>>> connect to ZooKeeper but the connection closes immediately. This could
>>>> be a sign that the server has too many connections (30 is the
>>>> default). Consider inspecting your ZK server logs for that error and
>>>> then make sure you are reusing HBaseConfiguration as often as you can.
>>>> See HTable's javadoc for more information.
>>>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>>>        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>>>> """
>>>>
>>>
>>> Search this mailing list archive for similar reports to above.    Up
>>> your maximum count of concurrent zookeeper connections as work around.
>>>
>> I did not run any application before, so it should not have concurrent
>> problem. Then I set it in hbase-site.xml, it still reports such error.
>>    <name>hbase.zookeeper.property.maxClientCnxns</name>
>>    <value>1000</value>
>>
>>>
>>>> 2)
>>>> When another mapreduce job:
>>>>
>>>> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
>>>> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
>>>> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
>>>> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
>>>> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
>>>> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
>>>> -allLookup -clean -cleanResultsTable
>>>>
>>>> JobTracker shows error:
>>>> ""
>>>> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
>>>> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
>>>> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
>>>> attempt_201201212243_0009_m_000174_0, Status : FAILED
>>>> java.lang.Throwable: Child Error
>>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>>> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>> """
>>>>
>>>> TaskTracker log:
>>>> """
>>>> Could not find the main class: .  Program will exit.
>>>> Exception in thread "main" java.lang.NoClassDefFoundError:
>>>> Caused by: java.lang.ClassNotFoundException:
>>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>>>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>>> Could not find the main class: .  Program will exit.
>>>> """
>>>
>>> Thats a pretty basic failure; it couldn't find basic class java class
>>> in classpath.  Can you dig in more on this?  You've seen this:
>>> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
>>>
>> I will first install Hadoop0.20.205 and try again.
>>
>>> St.Ack
>>>
>>>>
>>>> The real entry is the main() in SmartRunner.class
>>>>  jar tf ./bin/../dist/xxxxxx.jar|grep SmartRunner
>>>> pMapReduce/SmartRunner.class
>>>>
>>>> Can anyone help me, thanks a lot.
>>>> --
>>>> Best Regards,
>>>> --
>>>> Fei Dong
>>
>>
>>
>> --
>> Best Regards,
>> --
>> Fei Dong
>
>
>
> --
> Best Regards,
> --
> Fei Dong



-- 
Harsh J
Customer Ops. Engineer, Cloudera

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Fei Dong <do...@gmail.com>.
When I installed hadoop-0.20.205.0, the NameNode could not start.

[root@ip-10-114-45-186 logs]# /usr/local/hadoop-0.20.205.0/bin/start-dfs.sh
starting namenode, logging to
/mnt/hadoop/logs/hadoop-root-namenode-ip-10-114-45-186.out
ip-10-12-55-242.ec2.internal: starting datanode, logging to
/mnt/hadoop/logs/hadoop-root-datanode-ip-10-12-55-242.out
ip-10-12-55-242.ec2.internal: Unrecognized option: -jvm
ip-10-12-55-242.ec2.internal: Could not create the Java virtual machine.

I could not find "-jvm" anywhere in the config files. Did I misconfigure something?

On Tue, Jan 24, 2012 at 11:11 AM, Fei Dong <do...@gmail.com> wrote:
> Thanks Stack,
>
> On Tue, Jan 24, 2012 at 1:07 AM, Stack <st...@duboce.net> wrote:
>> On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <do...@gmail.com> wrote:
>>> Hello guys,
>>>
>>> I setup a Hadoop and HBase in EC2. My Settings as follows:
>>> Apache Official Version
>>> Hadoop 0.20.203.0
>>
>> HBase won't work on this version of hadoop.  See
>> http://hbase.apache.org/book.html#hadoop
>>
> It says only Hadoop 0.20.205.x can match?
>
>>
>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>>>
>>
>> The jars are not normally named as you have them above.  Usually there
>> is a version on the jar name.
>>
> I soft-linked the zookeeper.jar and hbase to the version ones.
>
>>
>>> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
>>> connect to ZooKeeper but the connection closes immediately. This could
>>> be a sign that the server has too many connections (30 is the
>>> default). Consider inspecting your ZK server logs for that error and
>>> then make sure you are reusing HBaseConfiguration as often as you can.
>>> See HTable's javadoc for more information.
>>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>>        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>>> """
>>>
>>
>> Search this mailing list archive for similar reports to above.    Up
>> your maximum count of concurrent zookeeper connections as work around.
>>
> I did not run any application before, so it should not have concurrent
> problem. Then I set it in hbase-site.xml, it still reports such error.
>    <name>hbase.zookeeper.property.maxClientCnxns</name>
>    <value>1000</value>
>
>>
>>> 2)
>>> When another mapreduce job:
>>>
>>> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
>>> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
>>> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
>>> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
>>> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
>>> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
>>> -allLookup -clean -cleanResultsTable
>>>
>>> JobTracker shows error:
>>> ""
>>> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
>>> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
>>> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
>>> attempt_201201212243_0009_m_000174_0, Status : FAILED
>>> java.lang.Throwable: Child Error
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>> """
>>>
>>> TaskTracker log:
>>> """
>>> Could not find the main class: .  Program will exit.
>>> Exception in thread "main" java.lang.NoClassDefFoundError:
>>> Caused by: java.lang.ClassNotFoundException:
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>> Could not find the main class: .  Program will exit.
>>> """
>>
>> Thats a pretty basic failure; it couldn't find basic class java class
>> in classpath.  Can you dig in more on this?  You've seen this:
>> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
>>
> I will first install Hadoop0.20.205 and try again.
>
>> St.Ack
>>
>>>
>>> The real entry is the main() in SmartRunner.class
>>>  jar tf ./bin/../dist/xxxxxx.jar|grep SmartRunner
>>> pMapReduce/SmartRunner.class
>>>
>>> Can anyone help me, thanks a lot.
>>> --
>>> Best Regards,
>>> --
>>> Fei Dong
>
>
>
> --
> Best Regards,
> --
> Fei Dong



-- 
Best Regards,
--
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Fei Dong <do...@gmail.com>.
Thanks Stack,

On Tue, Jan 24, 2012 at 1:07 AM, Stack <st...@duboce.net> wrote:
> On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <do...@gmail.com> wrote:
>> Hello guys,
>>
>> I setup a Hadoop and HBase in EC2. My Settings as follows:
>> Apache Official Version
>> Hadoop 0.20.203.0
>
> HBase won't work on this version of hadoop.  See
> http://hbase.apache.org/book.html#hadoop
>
Does it say only Hadoop 0.20.205.x is compatible?

>
>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>>
>
> The jars are not normally named as you have them above.  Usually there
> is a version on the jar name.
>
I soft-linked zookeeper.jar and hbase.jar to the versioned ones.

>
>> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
>> connect to ZooKeeper but the connection closes immediately. This could
>> be a sign that the server has too many connections (30 is the
>> default). Consider inspecting your ZK server logs for that error and
>> then make sure you are reusing HBaseConfiguration as often as you can.
>> See HTable's javadoc for more information.
>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>> """
>>
>
> Search this mailing list archive for similar reports to above.    Up
> your maximum count of concurrent zookeeper connections as work around.
>
I had not run any application before, so there should not be a concurrency
problem. I then set the following in hbase-site.xml, but it still reports the error:
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>1000</value>

>
>> 2)
>> When another mapreduce job:
>>
>> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
>> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
>> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
>> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
>> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
>> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
>> -allLookup -clean -cleanResultsTable
>>
>> JobTracker shows error:
>> ""
>> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
>> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
>> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
>> attempt_201201212243_0009_m_000174_0, Status : FAILED
>> java.lang.Throwable: Child Error
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>> """
>>
>> TaskTracker log:
>> """
>> Could not find the main class: .  Program will exit.
>> Exception in thread "main" java.lang.NoClassDefFoundError:
>> Caused by: java.lang.ClassNotFoundException:
>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>> Could not find the main class: .  Program will exit.
>> """
>
> That's a pretty basic failure; it couldn't find a basic Java class on
> the classpath.  Can you dig in more on this?  You've seen this:
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
>
I will first install Hadoop 0.20.205 and try again.

> St.Ack
>
>>
>> The real entry is the main() in SmartRunner.class
>>  jar tf ./bin/../dist/xxxxxx.jar|grep SmartRunner
>> pMapReduce/SmartRunner.class
>>
>> Can anyone help me, thanks a lot.
>> --
>> Best Regards,
>> --
>> Fei Dong



-- 
Best Regards,
--
Fei Dong

Re: NoClassDefFoundError when running Hadoop with HBase

Posted by Stack <st...@duboce.net>.
On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <do...@gmail.com> wrote:
> Hello guys,
>
> I setup a Hadoop and HBase in EC2. My Settings as follows:
> Apache Official Version
> Hadoop 0.20.203.0

HBase won't work on this version of Hadoop.  See
http://hbase.apache.org/book.html#hadoop


> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>

The jars are not normally named as you have them above; usually the jar
file name includes a version.
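As a sketch of what Stack means, the export in hadoop-env.sh could glob for
the versioned jar names instead of hard-coding unversioned ones (the path
and version below match the thread's setup, but check your own install --
hbase.jar and zookeeper.jar typically do not exist as such):

```shell
# Sketch for hadoop-env.sh: append whatever versioned jars are actually
# present (e.g. hbase-0.90.4.jar, lib/zookeeper-3.3.2.jar) rather than
# assuming unversioned names.
HBASE_HOME=${HBASE_HOME:-/usr/local/hbase-0.90.4}
for jar in "$HBASE_HOME"/hbase-*.jar "$HBASE_HOME"/lib/zookeeper-*.jar; do
  HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}$jar"
done
export HADOOP_CLASSPATH
```

Listing the jars via a glob also keeps the classpath correct after an
HBase upgrade changes the version in the file name.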


> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
> connect to ZooKeeper but the connection closes immediately. This could
> be a sign that the server has too many connections (30 is the
> default). Consider inspecting your ZK server logs for that error and
> then make sure you are reusing HBaseConfiguration as often as you can.
> See HTable's javadoc for more information.
>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
> """
>

Search this mailing list archive for similar reports to the above.  Up
your maximum count of concurrent ZooKeeper connections as a workaround.
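As a concrete sketch, the knob Stack is referring to lives in
hbase-site.xml (the property name is from the 0.90.x-era defaults; 1000 is
just an illustrative value):

```xml
<!-- hbase-site.xml: raise the per-client ZooKeeper connection cap
     (the default was 30 in this era); restart HBase afterwards. -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>1000</value>
</property>
```

Note this only masks the symptom; the underlying fix is reusing one
HBaseConfiguration instance, as the exception message itself suggests.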


> 2)
> When another mapreduce job:
>
> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
> -allLookup -clean -cleanResultsTable
>
> JobTracker shows error:
> """
> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
> attempt_201201212243_0009_m_000174_0, Status : FAILED
> java.lang.Throwable: Child Error
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> """
>
> TaskTracker log:
> """
> Could not find the main class: .  Program will exit.
> Exception in thread "main" java.lang.NoClassDefFoundError:
> Caused by: java.lang.ClassNotFoundException:
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> Could not find the main class: .  Program will exit.
> """

That's a pretty basic failure; it couldn't find a basic Java class on
the classpath.  Can you dig in more on this?  You've seen this:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
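The usual remedy from that linked doc can be sketched as below (it assumes
an HBase install whose bin/hbase supports the `classpath` subcommand, which
0.90.x does; the jar and class names are the ones from this thread):

```shell
# Sketch: let HBase compute its full dependency classpath for the job
# client instead of listing jars by hand, then submit the job as before.
HBASE_HOME=${HBASE_HOME:-/usr/local/hbase-0.90.4}
if [ -x "$HBASE_HOME/bin/hbase" ]; then
  export HADOOP_CLASSPATH="$("$HBASE_HOME/bin/hbase" classpath)"
  # hadoop jar dist/xxxxxx.jar pMapReduce.SmartRunner ...
else
  echo "no hbase at $HBASE_HOME; set HBASE_HOME first" >&2
fi
```

This avoids the empty-main-class symptom above, which typically means the
child JVM's classpath was assembled incorrectly rather than that the jar
itself is broken.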

St.Ack

>
> The real entry is the main() in SmartRunner.class
>  jar tf ./bin/../dist/xxxxxx.jar|grep SmartRunner
> pMapReduce/SmartRunner.class
>
> Can anyone help me, thanks a lot.
> --
> Best Regards,
> --
> Fei Dong