Posted to user@hbase.apache.org by llpind <so...@hotmail.com> on 2009/06/22 19:45:13 UTC

Running programs under HBase 0.20.0 alpha

Hey All,

I've got HBase 0.20.0 alpha installed and running.  When I write a class
and try to run it, I get:

09/06/22 10:39:48 FATAL zookeeper.ZooKeeperWrapper: Fail to read properties from zoo.cfg
java.io.IOException: zoo.cfg not found
        at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.parseZooKeeperConfig(HQuorumPeer.java:86)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.loadZooKeeperConfig(ZooKeeperWrapper.java:188)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.<clinit>(ZooKeeperWrapper.java:61)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getZooKeeperWrapper(HConnectionManager.java:818)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:835)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
        at HBaseLoaderEntity.populateEntities(HBaseLoaderEntity.java:49)
        at HBaseLoaderEntity.main(HBaseLoaderEntity.java:80)

=======================================================================


I receive this exception whether I package a jar or run a standalone .class file.
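The 0.20 client reads its ZooKeeper settings from zoo.cfg / hbase-site.xml found on the classpath, so a common fix for this exception is to put the HBase conf directory on the classpath when launching. A sketch of such a launch line (the jar name and paths are assumptions, not taken from the thread):

```shell
# Hypothetical invocation: include the HBase conf directory (which holds
# zoo.cfg and hbase-site.xml) alongside the application jar and HBase jar.
java -cp myapp.jar:$HBASE_HOME/hbase-0.20.0-alpha.jar:$HBASE_HOME/conf \
    HBaseLoaderEntity
```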

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24152144.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by Ryan Rawson <ry...@gmail.com>.
hey,

Right now, for TableInputFormat (TIF) you have to specify concrete column
names, not just family specs like 'link:'.  Hopefully a coming iteration will
let you do so.
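Concretely, that would mean naming a full "family:qualifier" column in the job's COLUMN_LIST setting. A minimal sketch, assuming the qualifier is "count" as written by the reducer later in the thread (the actual qualifier is an assumption), with `c` being the JobConf from the posted code:

```java
// Assumed example: name a concrete column ("family:qualifier") rather
// than the bare family spec "link:"; "count" is inferred from the
// reducer's Put, not stated by Ryan.
c.set(TableInputFormat.COLUMN_LIST, "link:count");
```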

-ryan

On Tue, Jun 23, 2009 at 12:04 PM, llpind <so...@hotmail.com> wrote:

>
> Okay thanks.
>
> Still unable to do a map/reduce job.  I changed the API to use Put instead
> of BatchUpdate, along with some of the moved classes.  This is a map/reduce
> program that reads from an input table and outputs to another HBase table.
> Still using JobConf here:
>
> ======================================================================
>                JobConf c = new JobConf(getConf(), getClass());
>                c.setJobName(getClass().getName());
>
>                String inputTableName = args[0];
>
>                System.out.println(" STARTING... 7");
>
>                c.setInputFormat(org.apache.hadoop.hbase.mapreduce.TableInputFormat.class);
>                c.setOutputFormat(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);
>
>                c.setInputFormat(TableInputFormat.class);
>                c.setMapOutputKeyClass(ImmutableBytesWritable.class);
>                c.setMapOutputValueClass(IntWritable.class);
>                c.setMapperClass(MyMapper.class);
>
>                FileInputFormat.addInputPaths(c, inputTableName);
>                c.set(TableInputFormat.COLUMN_LIST, "link:");
>
>                c.setOutputFormat(TableOutputFormat.class);
>                c.setReducerClass(MyReducer.class);
>                c.set(TableOutputFormat.OUTPUT_TABLE, outputTableName);
>                c.setOutputKeyClass(ImmutableBytesWritable.class);
>                c.setOutputValueClass(Put.class);
>
>                JobClient.runJob(c);
>
> =====================================================================
>
>                @Override
>                public void map(
>                                ImmutableBytesWritable key,
>                                Result row,
>                                OutputCollector<ImmutableBytesWritable, IntWritable> collector,
>                                Reporter r) throws IOException {
>
>                        collector.collect(new ImmutableBytesWritable(extractEntity(key.get())), one);
>                }
>
>
> =============================================================================
>                private Put put = new Put();
>                @Override
>                public void reduce(ImmutableBytesWritable k,
>                                Iterator<IntWritable> v,
>                                OutputCollector<ImmutableBytesWritable, Put> c,
>                                Reporter r) throws IOException {
>
>                        put = new Put(k.get());
>                        int sum = 0;
>                        while (v.hasNext()) {
>                          sum += v.next().get();
>                        }
>                        put.add(Bytes.toBytes("link"), Bytes.toBytes("count"), Bytes.toBytes(sum));
>                        c.collect(k, put);
>                }
> ====================================================================
> 2009-06-23 11:37:30,083 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
> 2009-06-23 11:37:30,410 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 6
> 2009-06-23 11:37:30,419 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
> 2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
> 2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
> 2009-06-23 11:44:20,251 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row '', but failed after 10 attempts.
> Exceptions:
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
>
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:935)
>        at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1797)
>        at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1745)
>        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:369)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:118)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:219)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:87)
>        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:191)
>        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:175)
>        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
>        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
> 2009-06-23 11:44:20,253 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
>
>
>
>
> ============================================================================
>
>
> Looks like the ScannerCallable is coming into getRegionServerWithRetries as
> Null?
>
>
>
>
> stack-3 wrote:
> >
> > On Tue, Jun 23, 2009 at 10:07 AM, llpind <so...@hotmail.com> wrote:
> >
> >>
> >> 1 other question.  Do I list all my servers in the zoo.cfg?  Not sure
> >> what
> >> role zookeeper plays in map/reduce, please explain.
> >
> >
> >
> > It plays no role in MR.
> >
> > Please read the 'Getting Started' document.  Has pointers to what zk is
> > and
> > its role in hbase.
> >
> > St.Ack
> >
> >
> >
> >>
> >> --
> >> View this message in context:
> >>
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
> >> Sent from the HBase User mailing list archive at Nabble.com.
> >>
> >>
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24171189.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>

Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
The map/reduce job finished.

The input table has ~50 million records with row keys of the form
(TYPE|VALUE|ID); the output table holds row keys of the form (TYPE|VALUE)
with a single column family storing a count (the number of IDs per
TYPE|VALUE).
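The thread never shows the body of extractEntity, but given the key layout above it presumably strips the trailing |ID segment from each row key. A hypothetical sketch (the class name and the behavior for keys without a separator are illustrative assumptions, not from the original program):

```java
import java.nio.charset.StandardCharsets;

// Illustrative helper: reduce a TYPE|VALUE|ID row key to its TYPE|VALUE prefix.
public class ExtractEntity {
    static byte[] extractEntity(byte[] rowKey) {
        String key = new String(rowKey, StandardCharsets.UTF_8);
        int lastSep = key.lastIndexOf('|');
        // Drop the trailing |ID segment; pass keys without a separator through.
        String prefix = (lastSep >= 0) ? key.substring(0, lastSep) : key;
        return prefix.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(new String(
                extractEntity("PERSON|john|42".getBytes(StandardCharsets.UTF_8)),
                StandardCharsets.UTF_8));  // PERSON|john
    }
}
```

With keys collapsed this way in the mapper, the reducer's sum is exactly the number of IDs sharing each TYPE|VALUE prefix.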

Here is the output for the job:

09/06/24 09:10:44 INFO mapred.JobClient: Running job: job_200906240907_0001
09/06/24 09:10:45 INFO mapred.JobClient:  map 0% reduce 0%
09/06/24 09:11:19 WARN zookeeper.ClientCnxn: Exception closing session 0x12212fd59fe000f to sun.nio.ch.SelectionKeyImpl@654df764
java.io.IOException: TIMED OUT
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
09/06/24 09:11:21 INFO zookeeper.ClientCnxn: Attempting connection to server server55/192.168.0.55:2181
09/06/24 09:11:21 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.0.55:59551 remote=server55/192.168.0.55:2181]
09/06/24 09:11:21 INFO zookeeper.ClientCnxn: Server connection successful
09/06/24 09:11:28 WARN zookeeper.ClientCnxn: Exception closing session 0x12212fd59fe000f to sun.nio.ch.SelectionKeyImpl@74c6eff5
java.io.IOException: TIMED OUT
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
09/06/24 09:11:29 INFO zookeeper.ClientCnxn: Attempting connection to server server55/192.168.0.55:2181
09/06/24 09:11:29 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.0.55:59558 remote=server55/192.168.0.55:2181]
09/06/24 09:11:29 INFO zookeeper.ClientCnxn: Server connection successful
09/06/24 09:11:36 WARN zookeeper.ClientCnxn: Exception closing session 0x12212fd59fe000f to sun.nio.ch.SelectionKeyImpl@5567d7fb
java.io.IOException: TIMED OUT
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
09/06/24 09:11:38 INFO zookeeper.ClientCnxn: Attempting connection to server server55/192.168.0.55:2181
09/06/24 09:11:38 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.0.55:59571 remote=server55/192.168.0.55:2181]
09/06/24 09:11:38 INFO zookeeper.ClientCnxn: Server connection successful
09/06/24 09:11:51 WARN zookeeper.ClientCnxn: Exception closing session 0x12212fd59fe000f to sun.nio.ch.SelectionKeyImpl@51701bdc
java.io.IOException: TIMED OUT
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
09/06/24 09:11:52 INFO zookeeper.ClientCnxn: Attempting connection to server server55/192.168.0.55:2181
09/06/24 09:11:52 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.0.55:59575 remote=server55/192.168.0.55:2181]
09/06/24 09:11:52 INFO zookeeper.ClientCnxn: Server connection successful
09/06/24 09:25:15 INFO mapred.JobClient:  map 3% reduce 0%
09/06/24 09:27:13 INFO mapred.JobClient:  map 7% reduce 0%
09/06/24 09:27:32 INFO mapred.JobClient:  map 7% reduce 1%
09/06/24 09:28:17 INFO mapred.JobClient:  map 7% reduce 2%
09/06/24 09:28:35 INFO mapred.JobClient:  map 10% reduce 2%
09/06/24 09:28:48 INFO mapred.JobClient:  map 10% reduce 3%
09/06/24 09:31:30 INFO mapred.JobClient:  map 14% reduce 3%
09/06/24 09:31:40 INFO mapred.JobClient:  map 14% reduce 4%
09/06/24 09:39:54 INFO mapred.JobClient:  map 17% reduce 4%
09/06/24 09:40:03 INFO mapred.JobClient:  map 17% reduce 5%
09/06/24 09:42:12 INFO mapred.JobClient:  map 25% reduce 5%
09/06/24 09:42:21 INFO mapred.JobClient:  map 25% reduce 6%
09/06/24 09:42:23 INFO mapred.JobClient:  map 25% reduce 7%
09/06/24 09:42:25 INFO mapred.JobClient:  map 25% reduce 8%
09/06/24 09:42:52 INFO mapred.JobClient:  map 28% reduce 8%
09/06/24 09:43:04 INFO mapred.JobClient:  map 28% reduce 9%
09/06/24 09:43:10 INFO mapred.JobClient:  map 32% reduce 9%
09/06/24 09:43:21 INFO mapred.JobClient:  map 32% reduce 10%
09/06/24 09:48:43 INFO mapred.JobClient:  map 35% reduce 10%
09/06/24 09:48:49 INFO mapred.JobClient:  map 35% reduce 11%
09/06/24 09:51:34 INFO mapred.JobClient:  map 39% reduce 11%
09/06/24 09:51:43 INFO mapred.JobClient:  map 39% reduce 12%
09/06/24 09:51:46 INFO mapred.JobClient:  map 39% reduce 13%
09/06/24 09:54:56 INFO mapred.JobClient:  map 42% reduce 13%
09/06/24 09:54:59 INFO mapred.JobClient:  map 46% reduce 13%
09/06/24 09:55:05 INFO mapred.JobClient:  map 46% reduce 15%
09/06/24 09:57:42 INFO mapred.JobClient:  map 50% reduce 15%
09/06/24 09:57:54 INFO mapred.JobClient:  map 50% reduce 16%
09/06/24 09:59:03 INFO mapred.JobClient:  map 53% reduce 16%
09/06/24 09:59:14 INFO mapred.JobClient:  map 53% reduce 17%
09/06/24 10:04:46 INFO mapred.JobClient:  map 57% reduce 17%
09/06/24 10:04:51 INFO mapred.JobClient:  map 57% reduce 18%
09/06/24 10:04:58 INFO mapred.JobClient:  map 57% reduce 19%
09/06/24 10:06:00 INFO mapred.JobClient:  map 60% reduce 19%
09/06/24 10:06:13 INFO mapred.JobClient:  map 60% reduce 20%
09/06/24 10:06:29 INFO mapred.JobClient:  map 64% reduce 20%
09/06/24 10:06:43 INFO mapred.JobClient:  map 64% reduce 21%
09/06/24 10:08:07 INFO mapred.JobClient:  map 67% reduce 21%
09/06/24 10:08:16 INFO mapred.JobClient:  map 67% reduce 22%
09/06/24 10:10:24 INFO mapred.JobClient:  map 71% reduce 22%
09/06/24 10:10:30 INFO mapred.JobClient:  map 71% reduce 23%
09/06/24 10:10:40 INFO mapred.JobClient:  map 74% reduce 23%
09/06/24 10:10:43 INFO mapred.JobClient:  map 74% reduce 24%
09/06/24 10:14:17 INFO mapred.JobClient:  map 78% reduce 24%
09/06/24 10:14:21 INFO mapred.JobClient:  map 78% reduce 25%
09/06/24 10:14:28 INFO mapred.JobClient:  map 78% reduce 26%
09/06/24 10:15:30 INFO mapred.JobClient:  map 82% reduce 26%
09/06/24 10:15:41 INFO mapred.JobClient:  map 82% reduce 27%
09/06/24 10:17:43 INFO mapred.JobClient:  map 85% reduce 27%
09/06/24 10:17:52 INFO mapred.JobClient:  map 85% reduce 28%
09/06/24 10:25:20 INFO mapred.JobClient:  map 89% reduce 28%
09/06/24 10:25:31 INFO mapred.JobClient:  map 89% reduce 29%
09/06/24 10:25:58 INFO mapred.JobClient:  map 92% reduce 29%
09/06/24 10:26:06 INFO mapred.JobClient:  map 92% reduce 30%
09/06/24 10:29:52 INFO mapred.JobClient:  map 96% reduce 30%
09/06/24 10:30:00 INFO mapred.JobClient:  map 96% reduce 31%
09/06/24 10:30:04 INFO mapred.JobClient:  map 96% reduce 32%
09/06/24 10:32:04 INFO mapred.JobClient:  map 100% reduce 32%
09/06/24 10:32:16 INFO mapred.JobClient:  map 100% reduce 33%
09/06/24 10:32:24 INFO mapred.JobClient:  map 100% reduce 39%
09/06/24 10:32:26 INFO mapred.JobClient:  map 100% reduce 45%
09/06/24 10:32:28 INFO mapred.JobClient:  map 100% reduce 51%
09/06/24 10:32:29 INFO mapred.JobClient:  map 100% reduce 57%
09/06/24 10:32:32 INFO mapred.JobClient:  map 100% reduce 69%
09/06/24 10:32:35 INFO mapred.JobClient:  map 100% reduce 70%
09/06/24 10:32:38 INFO mapred.JobClient:  map 100% reduce 71%
09/06/24 10:32:49 INFO mapred.JobClient:  map 100% reduce 72%
09/06/24 10:32:56 INFO mapred.JobClient:  map 100% reduce 74%
09/06/24 10:32:58 INFO mapred.JobClient:  map 100% reduce 75%
09/06/24 10:33:10 INFO mapred.JobClient:  map 100% reduce 76%
09/06/24 10:33:20 INFO mapred.JobClient:  map 100% reduce 77%
09/06/24 10:33:31 INFO mapred.JobClient:  map 100% reduce 78%
09/06/24 10:34:06 INFO mapred.JobClient:  map 100% reduce 79%
09/06/24 10:34:21 INFO mapred.JobClient:  map 100% reduce 80%
09/06/24 10:34:25 INFO mapred.JobClient:  map 100% reduce 82%
09/06/24 10:34:27 INFO mapred.JobClient:  map 100% reduce 83%
09/06/24 10:34:36 INFO mapred.JobClient:  map 100% reduce 84%
09/06/24 10:34:37 INFO mapred.JobClient:  map 100% reduce 85%
09/06/24 10:34:57 INFO mapred.JobClient:  map 100% reduce 86%
09/06/24 10:35:43 INFO mapred.JobClient:  map 100% reduce 87%
09/06/24 10:36:29 INFO mapred.JobClient:  map 100% reduce 88%
09/06/24 10:37:02 INFO mapred.JobClient:  map 100% reduce 89%
09/06/24 10:37:10 INFO mapred.JobClient:  map 100% reduce 90%
09/06/24 10:37:19 INFO mapred.JobClient:  map 100% reduce 91%
09/06/24 10:37:24 INFO mapred.JobClient:  map 100% reduce 92%
09/06/24 10:37:27 INFO mapred.JobClient:  map 100% reduce 93%
09/06/24 10:37:30 INFO mapred.JobClient:  map 100% reduce 94%
09/06/24 10:37:36 INFO mapred.JobClient:  map 100% reduce 95%
09/06/24 10:37:39 INFO mapred.JobClient:  map 100% reduce 96%
09/06/24 10:37:41 INFO mapred.JobClient:  map 100% reduce 97%
09/06/24 10:37:43 INFO mapred.JobClient:  map 100% reduce 98%
09/06/24 10:37:44 INFO mapred.JobClient:  map 100% reduce 100%
09/06/24 10:37:46 INFO mapred.JobClient: Job complete: job_200906240907_0001
09/06/24 10:37:46 INFO mapred.JobClient: Counters: 17
09/06/24 10:37:46 INFO mapred.JobClient:   Job Counters
09/06/24 10:37:46 INFO mapred.JobClient:     Launched reduce tasks=8
09/06/24 10:37:46 INFO mapred.JobClient:     Rack-local map tasks=9
09/06/24 10:37:46 INFO mapred.JobClient:     Launched map tasks=35
09/06/24 10:37:46 INFO mapred.JobClient:     Data-local map tasks=26
09/06/24 10:37:46 INFO mapred.JobClient:   FileSystemCounters
09/06/24 10:37:46 INFO mapred.JobClient:     FILE_BYTES_READ=3482158781
09/06/24 10:37:46 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=5117707908
09/06/24 10:37:46 INFO mapred.JobClient:   Map-Reduce Framework
09/06/24 10:37:46 INFO mapred.JobClient:     Reduce input groups=4324900
09/06/24 10:37:46 INFO mapred.JobClient:     Combine output records=0
09/06/24 10:37:46 INFO mapred.JobClient:     Map input records=58443908
09/06/24 10:37:46 INFO mapred.JobClient:     Reduce shuffle bytes=1592197606
09/06/24 10:37:46 INFO mapred.JobClient:     Reduce output records=4324900
09/06/24 10:37:46 INFO mapred.JobClient:     Spilled Records=183879083
09/06/24 10:37:46 INFO mapred.JobClient:     Map output bytes=1521158516
09/06/24 10:37:46 INFO mapred.JobClient:     Map input bytes=0
09/06/24 10:37:46 INFO mapred.JobClient:     Combine input records=0
09/06/24 10:37:46 INFO mapred.JobClient:     Map output records=58443908
09/06/24 10:37:46 INFO mapred.JobClient:     Reduce input records=58443908
[hadoop@server55 ~]$

=======================================================================


The job took a while, as you can see.  When I run the row counter it takes
about the same time.
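For reference, the bundled row counter can be launched like this (the table name is a placeholder, and the exact class name for the 0.20 alpha may differ):

```shell
# Assumed invocation of the stock RowCounter MapReduce job;
# <tablename> is a placeholder for the table to count.
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
```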

job output:




Job Setup: Successful
Status: Succeeded
Started at: Wed Jun 24 09:10:44 PDT 2009
Finished at: Wed Jun 24 10:37:46 PDT 2009
Finished in: 1hrs, 27mins, 1sec
Job Cleanup: Successful

--------------------------------------------------------------------------------
Kind      % Complete   NumTasks   Pending   Running   Complete   Killed   Failed/Killed Task Attempts
map       100.00%      28         0         0         28         0        0 / 7
reduce    100.00%      6          0         0         6          0        0 / 2


Counter                        Map             Reduce          Total
Job Counters
  Launched reduce tasks        0               0               8
  Rack-local map tasks         0               0               9
  Launched map tasks           0               0               35
  Data-local map tasks         0               0               26
FileSystemCounters
  FILE_BYTES_READ              1,844,112,413   1,638,046,368   3,482,158,781
  FILE_BYTES_WRITTEN           3,479,661,540   1,638,046,368   5,117,707,908
Map-Reduce Framework
  Reduce input groups          0               4,324,900       4,324,900
  Combine output records       0               0               0
  Map input records            58,443,908      0               58,443,908
  Reduce shuffle bytes         0               1,592,197,606   1,592,197,606
  Reduce output records        0               4,324,900       4,324,900
  Spilled Records              125,435,175     58,443,908      183,879,083
  Map output bytes             1,521,158,516   0               1,521,158,516
  Map input bytes              0               0               0
  Map output records           58,443,908      0               58,443,908
  Combine input records        0               0               0
  Reduce input records         0               58,443,908      58,443,908


=======================================================================

Also, I don't think the output table is working correctly.  Some gets work
from the shell and some don't.  Here is what I get when they don't:


NativeException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row 'ROW|IM TRYING TO GET', but failed after 5 attempts.
Exceptions:
java.io.IOException: HRegionInfo was null or empty in .META.
java.io.IOException: HRegionInfo was null or empty in .META.
java.io.IOException: HRegionInfo was null or empty in .META.
java.io.IOException: HRegionInfo was null or empty in .META.
java.io.IOException: HRegionInfo was null or empty in .META.

        from org/apache/hadoop/hbase/client/HConnectionManager.java:935:in `getRegionServerWithRetries'
        from org/apache/hadoop/hbase/client/HTable.java:410:in `get'
        from org/apache/hadoop/hbase/client/HTable.java:995:in `getRow'
        from org/apache/hadoop/hbase/client/HTable.java:953:in `getRow'
        from org/apache/hadoop/hbase/client/HTable.java:841:in `getRow'
        from org/apache/hadoop/hbase/client/HTable.java:785:in `getRow'
        from sun/reflect/NativeMethodAccessorImpl.java:-2:in `invoke0'
        from sun/reflect/NativeMethodAccessorImpl.java:39:in `invoke'
        from sun/reflect/DelegatingMethodAccessorImpl.java:25:in `invoke'
        from java/lang/reflect/Method.java:597:in `invoke'
        from org/jruby/javasupport/JavaMethod.java:298:in `invokeWithExceptionHandling'
        from org/jruby/javasupport/JavaMethod.java:259:in `invoke'
        from org/jruby/java/invokers/InstanceMethodInvoker.java:44:in `call'
        from org/jruby/runtime/callsite/CachingCallSite.java:110:in `call'
        from org/jruby/ast/CallOneArgNode.java:57:in `interpret'
        from org/jruby/ast/LocalAsgnNode.java:123:in `interpret'
... 114 levels...
        from home/hadoop/hbase_minus_0_dot_20_dot_0_minus_alpha/bin/$_dot_dot_/bin/hirb#start:-1:in `call'
        from org/jruby/internal/runtime/methods/DynamicMethod.java:226:in `call'
        from org/jruby/internal/runtime/methods/CompiledMethod.java:211:in `call'
        from org/jruby/internal/runtime/methods/CompiledMethod.java:71:in `call'
        from org/jruby/runtime/callsite/CachingCallSite.java:253:in `cacheAndCall'
        from org/jruby/runtime/callsite/CachingCallSite.java:72:in `call'
        from home/hadoop/hbase_minus_0_dot_20_dot_0_minus_alpha/bin/$_dot_dot_/bin/hirb.rb:474:in `__file__'
        from home/hadoop/hbase_minus_0_dot_20_dot_0_minus_alpha/bin/$_dot_dot_/bin/hirb.rb:-1:in `load'
        from org/jruby/Ruby.java:577:in `runScript'
        from org/jruby/Ruby.java:480:in `runNormally'
        from org/jruby/Ruby.java:354:in `runFromMain'
        from org/jruby/Main.java:229:in `run'
        from org/jruby/Main.java:110:in `run'
        from org/jruby/Main.java:94:in `main'
        from /home/hadoop/hbase-0.20.0-alpha/bin/../bin/hirb.rb:371:in `get'

========================================================================


-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24190083.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
I will try it out today and let you know.


Ryan Rawson wrote:
> 
> You are exactly correct.
> 
> A solution is in scan, I sent an off list patch to him, seems to help?
> 
> It needs tests however.
> 
> On Jun 23, 2009 3:02 PM, "Andrew Purtell" <ap...@apache.org> wrote:
> 
> This looks like HBASE-1560:
> https://issues.apache.org/jira/browse/HBASE-1560
> 
> - Andy ________________________________ From: llpind
> <so...@hotmail.com>
> To: hbase-user...
> Sent: Tuesday, June 23, 2009 12:04:59 PM
> Subject: Re: Running programs under HBase 0.20.0 alpha
> 
> Okay thanks. Still unable to do a map/reduce job. I changed the API to use
> Put instead of BatchU...
> 
> 

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24187522.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by Ryan Rawson <ry...@gmail.com>.
You are exactly correct.

A solution is in scan, I sent an off list patch to him, seems to help?

It needs tests however.

On Jun 23, 2009 3:02 PM, "Andrew Purtell" <ap...@apache.org> wrote:

This looks like HBASE-1560: https://issues.apache.org/jira/browse/HBASE-1560

- Andy ________________________________ From: llpind <so...@hotmail.com>
To: hbase-user...
Sent: Tuesday, June 23, 2009 12:04:59 PM
Subject: Re: Running programs under HBase 0.20.0 alpha

Okay thanks. Still unable to do a map/reduce job. I changed the API to use
Put instead of BatchU...

Re: Running programs under HBase 0.20.0 alpha

Posted by Andrew Purtell <ap...@apache.org>.
This looks like HBASE-1560: https://issues.apache.org/jira/browse/HBASE-1560

   - Andy




________________________________
From: llpind <so...@hotmail.com>
To: hbase-user@hadoop.apache.org
Sent: Tuesday, June 23, 2009 12:04:59 PM
Subject: Re: Running programs under HBase 0.20.0 alpha


Okay thanks.

Still unable to do a map/reduce job.  I changed the API to use Put instead
of BatchUpdate, along with some of the moved classes.  This is a map/reduce
program that reads from an input table and outputs to another HBase table.
Still using JobConf here:

======================================================================
        JobConf c = new JobConf(getConf(), getClass());
        c.setJobName(getClass().getName());

        String inputTableName = args[0];

        System.out.println(" STARTING... 7");

        c.setInputFormat(org.apache.hadoop.hbase.mapreduce.TableInputFormat.class);
        c.setOutputFormat(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);

        c.setInputFormat(TableInputFormat.class);
        c.setMapOutputKeyClass(ImmutableBytesWritable.class);
        c.setMapOutputValueClass(IntWritable.class);
        c.setMapperClass(MyMapper.class);

        FileInputFormat.addInputPaths(c, inputTableName);
        c.set(TableInputFormat.COLUMN_LIST, "link:");

        c.setOutputFormat(TableOutputFormat.class);
        c.setReducerClass(MyReducer.class);
        c.set(TableOutputFormat.OUTPUT_TABLE, outputTableName);
        c.setOutputKeyClass(ImmutableBytesWritable.class);
        c.setOutputValueClass(Put.class);

        JobClient.runJob(c);

=====================================================================

        @Override
        public void map(
                ImmutableBytesWritable key, 
                Result row,
                OutputCollector<ImmutableBytesWritable, IntWritable> collector,
                Reporter r) throws IOException {

            collector.collect(new ImmutableBytesWritable(extractEntity(key.get())), one);
        }

=============================================================================
        private Put put = new Put();
        @Override
        public void reduce(ImmutableBytesWritable k,
                Iterator<IntWritable> v,
                OutputCollector<ImmutableBytesWritable, Put> c,
                Reporter r) throws IOException {

            put = new Put(k.get());
            int sum = 0;
            while (v.hasNext()) {
              sum += v.next().get();
            }
            put.add(Bytes.toBytes("link"), Bytes.toBytes("count"), Bytes.toBytes(sum));
            c.collect(k, put);
        }
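One detail worth noting about the reducer: Bytes.toBytes(sum) stores the count as a 4-byte big-endian int, so reading it back requires Bytes.toInt rather than interpreting the cell as a string (in the shell the value will display as raw bytes). A plain-Java illustration of that encoding, using java.nio.ByteBuffer in place of HBase's Bytes purely for demonstration:

```java
import java.nio.ByteBuffer;

// Demonstrates the 4-byte big-endian int encoding that HBase's
// Bytes.toBytes(int) produces; ByteBuffer defaults to big-endian too.
public class IntBytesDemo {
    public static void main(String[] args) {
        byte[] encoded = ByteBuffer.allocate(4).putInt(42).array();
        int decoded = ByteBuffer.wrap(encoded).getInt();
        System.out.println(encoded.length + " " + decoded);  // 4 42
    }
}
```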
====================================================================
2009-06-23 11:37:30,083 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
2009-06-23 11:37:30,410 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 6
2009-06-23 11:37:30,419 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2009-06-23 11:44:20,251 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row '', but failed after 10 attempts.
Exceptions:
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException

    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:935)
    at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1797)
    at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1745)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:369)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:118)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:219)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:87)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:191)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:175)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)
2009-06-23 11:44:20,253 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task



============================================================================


Looks like the ScannerCallable is coming into getRegionServerWithRetries as
Null?




stack-3 wrote:
> 
> On Tue, Jun 23, 2009 at 10:07 AM, llpind <so...@hotmail.com> wrote:
> 
>>
>> 1 other question.  Do I list all my servers in the zoo.cfg?  Not sure
>> what
>> role zookeeper plays in map/reduce, please explain.
> 
> 
> 
> It plays no role in MR.
> 
> Please read the 'Getting Started' document.  Has pointers to what zk is
> and
> its role in hbase.
> 
> St.Ack
> 
> 
> 
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24171189.html
Sent from the HBase User mailing list archive at Nabble.com.


      



Re: Running programs under HBase 0.20.0 alpha

Posted by stack <st...@duboce.net>.
On Tue, Jun 23, 2009 at 10:07 AM, llpind <so...@hotmail.com> wrote:

>
> 1 other question.  Do I list all my servers in the zoo.cfg?  Not sure what
> role zookeeper plays in map/reduce, please explain.



It plays no role in MR.

Please read the 'Getting Started' document.  It has pointers to what ZK is
and its role in HBase.

St.Ack



>
> --
> View this message in context:
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>

Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
One other question: do I list all my servers in the zoo.cfg?  I'm not sure
what role ZooKeeper plays in map/reduce; please explain. 
-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
Is there an example of map/reduce using .20 yet?

A Map/Reduce program that used to work now gives me the following error: 

09/06/23 09:31:28 INFO mapred.JobClient: Running job: job_200906221630_0003
09/06/23 09:31:29 INFO mapred.JobClient:  map 0% reduce 0%
09/06/23 09:37:57 INFO mapred.JobClient: Task Id :
attempt_200906221630_0003_m_000013_0, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
region server null for region , row '', but failed after 10 attempts.
Exceptions:
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException

        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:935)
        at
org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1797)
        at
org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1745)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:369)
        at
org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:118)
        at
org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:219)
        at
org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:87)
        at
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:191)
        at
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:175)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)

09/06/23 09:38:10 INFO mapred.JobClient:  map 3% reduce 0%
-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167386.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
That was the issue.  I had it pointing directly to zoo.cfg.  Thanks, all.



stack-3 wrote:
> 
> You have your conf directory on your CLASSPATH?
> St.Ack
> 
> On Mon, Jun 22, 2009 at 11:11 AM, llpind <so...@hotmail.com> wrote:
> 
>>
>> Yea I tried putting it in the executable jar, still same exception. 
>> Might
>> be
>> a java issue. I've got it in the src folder.  I tried putting it in the
>> same
>> package as main class as well.
>>
>> Also, when running the .class file I tried pointing to it from the Hadoop
>> classpath env var.  Kind of weird that it still complains it can't find it.
>> What am I doing wrong?
>>
>>
>>
>> Erik Holstad wrote:
>> >
>> > Hi Ilpind.
>> >
>> > The jar that you are running, does it have access to the zoo.cfg? If
>> > not
>> > you probably need to
>> > add it to the jar or to your classpath.
>> >
>> > Erik
>> >
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24152534.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24154281.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by stack <st...@duboce.net>.
You have your conf directory on your CLASSPATH?
St.Ack

On Mon, Jun 22, 2009 at 11:11 AM, llpind <so...@hotmail.com> wrote:

>
> Yea I tried putting it in the executable jar, still same exception.  Might
> be
> a java issue. I've got it in the src folder.  I tried putting it in the
> same
> package as main class as well.
>
> Also, when running the .class file I tried pointing to it from the Hadoop
> classpath env var.  Kind of weird that it still complains it can't find it.
> What am I doing wrong?
>
>
>
> Erik Holstad wrote:
> >
> > Hi Ilpind.
> >
> > The jar that you are running, does it have access to the zoo.cfg? If
> > not
> > you probably need to
> > add it to the jar or to your classpath.
> >
> > Erik
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24152534.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>

Re: Running programs under HBase 0.20.0 alpha

Posted by llpind <so...@hotmail.com>.
Yeah, I tried putting it in the executable jar; still the same exception.  Might
be a Java issue.  I've got it in the src folder.  I tried putting it in the same
package as the main class as well.



Erik Holstad wrote:
> 
> Hi Ilpind.
> 
> The jar that you are running, does it have access to the zoo.cfg? If
> not
> you probably need to
> add it to the jar or to your classpath.
> 
> Erik
> 
> 

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24152534.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Running programs under HBase 0.20.0 alpha

Posted by Erik Holstad <er...@gmail.com>.
Hi llpind.

The jar that you are running, does it have access to the zoo.cfg? If not,
you probably need to add it to the jar or to your classpath.

Erik
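Erik's point can be sketched outside HBase entirely: a classloader only resolves a resource such as zoo.cfg if the directory (or jar) holding it is on the classpath. The following is a minimal, self-contained Java demo (the class name ZooCfgLookup and the temp directory are made up for illustration; it does not touch the HBase API):

```java
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;

// Hypothetical standalone sketch of the classpath lookup that fails in this
// thread: zoo.cfg is only visible if its directory (or jar) is on the classpath.
public class ZooCfgLookup {
    public static void main(String[] args) throws Exception {
        // Nothing has put zoo.cfg on the default classpath, so this lookup
        // fails -- the same situation behind "java.io.IOException: zoo.cfg not found".
        URL viaDefault = ZooCfgLookup.class.getResource("/zoo.cfg");
        System.out.println("default classpath: " + (viaDefault == null ? "missing" : "found"));

        // Simulate adding a conf directory to the classpath.
        File confDir = Files.createTempDirectory("conf").toFile();
        try (FileWriter w = new FileWriter(new File(confDir, "zoo.cfg"))) {
            w.write("clientPort=2181\n");  // placeholder content
        }
        // A classloader rooted at the conf directory resolves the resource.
        try (URLClassLoader cl = new URLClassLoader(new URL[] { confDir.toURI().toURL() })) {
            URL viaConfDir = cl.getResource("zoo.cfg");
            System.out.println("with conf dir on classpath: " + (viaConfDir == null ? "missing" : "found"));
        }
    }
}
```

The practical equivalent when launching a client is something like `java -cp myjob.jar:$HBASE_HOME/conf MyClass` (where `$HBASE_HOME` is a hypothetical path to your install), so the conf directory itself, not the zoo.cfg file, goes on the classpath.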

Re: Running programs under HBase 0.20.0 alpha

Posted by Andrew Purtell <ap...@apache.org>.
Hi,

You need to package a valid zoo.cfg on the classpath: either as part of the jar, or (as I do it) placed everywhere on the cluster at /path/to/hadoop/conf/zoo.cfg. 

However, my understanding is that for the 0.20 release there will be a default zoo.cfg packaged in the HBase jar; it will assume there is a locally running quorum peer and attempt to contact it on localhost at the standard port. 
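For reference, a minimal zoo.cfg for a small quorum might look like the following sketch (hostnames and the data directory are hypothetical; adjust for your cluster):

```
# Port clients connect on (standard ZooKeeper port)
clientPort=2181
# Where snapshots are stored (hypothetical path)
dataDir=/tmp/zookeeper
# Basic timing parameters, in ticks of 2000 ms
tickTime=2000
initLimit=10
syncLimit=5
# Quorum members: list every peer here (hypothetical hosts)
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```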

   - Andy




________________________________
From: llpind <so...@hotmail.com>
To: hbase-user@hadoop.apache.org
Sent: Monday, June 22, 2009 10:45:13 AM
Subject: Running programs under HBase 0.20.0 alpha


Hey All,

I've got HBase 0.20.0 Alpha installed and running.  When i write a class,
and try to run it, I get:

09/06/22 10:39:48 FATAL zookeeper.ZooKeeperWrapper: Fail to read properties
from zoo.cfg
java.io.IOException: zoo.cfg not found
        at
org.apache.hadoop.hbase.zookeeper.HQuorumPeer.parseZooKeeperConfig(HQuorumPeer.java:86)
        at
org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.loadZooKeeperConfig(ZooKeeperWrapper.java:188)
        at
org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.<clinit>(ZooKeeperWrapper.java:61)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getZooKeeperWrapper(HConnectionManager.java:818)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:835)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
        at
org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
        at HBaseLoaderEntity.populateEntities(HBaseLoaderEntity.java:49)
        at HBaseLoaderEntity.main(HBaseLoaderEntity.java:80)

=======================================================================


I receive this exception when i package a jar or standalone .class file.

-- 
View this message in context: http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24152144.html
Sent from the HBase User mailing list archive at Nabble.com.