Posted to user@hbase.apache.org by stack <st...@duboce.net> on 2009/07/07 06:23:13 UTC

Re: ConnectionLoss error with latest hbase trunk (r791199) in map reduce program

What version of hbase?

"Failed to create /hbase" indicates issue with ZooKeeper initialization.
This needs to be addressed first else no subsequent hbasing will succeed.
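If you want to rule ZooKeeper in or out on its own, here is a quick
standalone check against the same connect string the failing client used.
A sketch only -- the class name and the sleep are mine, not from your job;
it uses nothing beyond the plain ZooKeeper client API:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical smoke test: connect to the same quorum the failing job
    // used and list the root znode.  If this fails too, the problem is
    // ZooKeeper itself, not hbase.
    public class ZkSmokeTest {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) { /* ignore events */ }
            });
            Thread.sleep(2000);  // give the session a moment to establish
            System.out.println("session state: " + zk.getState());
            List<String> children = zk.getChildren("/", false);
            System.out.println("children of /: " + children);
            zk.close();
        }
    }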

Where is the 127.0.0.1 coming from?  From a zoo.cfg?  Does hbase work at
all?  Have you tried verifying it works using hbase shell?
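For a quick sanity check from the shell -- table name and values below are
just examples -- something like:

    $ ${HBASE_HOME}/bin/hbase shell
    hbase> create 'smoketest', 'f'
    hbase> put 'smoketest', 'row1', 'f:c', 'somevalue'
    hbase> scan 'smoketest'
    hbase> disable 'smoketest'
    hbase> drop 'smoketest'

If the create hangs or throws the same ConnectionLoss, the problem is in
your hbase/ZooKeeper setup, not in the MR job.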

I would suggest you make the HTable in the configure step of your MR job and
save it as a data member; that should run faster than constructing a new
HTable on every map() call. Roughly like the sketch below:
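(A sketch only, reusing your class names; the IOException handling here is
mine.)

    public static class PageRankMap extends MapReduceBase
            implements Mapper<LongWritable, EdgeRecord, LongWritable, DoubleWritable> {
        // One HTable per task, created once in configure(), instead of a
        // new HTable on every map() invocation.
        private HTable table;

        public void configure(JobConf job) {
            try {
                table = new HTable(new HBaseConfiguration(job), "PageRank");
            } catch (IOException e) {
                throw new RuntimeException("Failed to open table PageRank", e);
            }
        }

        public void map(LongWritable key, EdgeRecord value,
                OutputCollector<LongWritable, DoubleWritable> output,
                Reporter reporter) throws IOException {
            // ... same logic as before, but against the shared 'table' ...
        }
    }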

St.Ack


On Sun, Jul 5, 2009 at 7:33 PM, Naresh Rapolu <na...@gmail.com> wrote:

>
> We get the following error using the latest hbase trunk (r791199). We are
> running it in standalone mode.
>
>
> 09/07/05 21:56:42 INFO zookeeper.ZooKeeper: Initiating client connection, host=localhost:2181 sessionTimeout=30000 watcher=org.apache.hadoop.hbase.zookeeper.WatcherWrapper@42c4d04d
> 09/07/05 21:56:42 INFO zookeeper.ClientCnxn: Attempting connection to server localhost/127.0.0.1:2181
> 09/07/05 21:56:42 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/127.0.0.1:37809 remote=localhost/127.0.0.1:2181]
> 09/07/05 21:56:42 INFO zookeeper.ClientCnxn: Server connection successful
> 09/07/05 21:56:42 WARN zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@4e46b90a
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
>     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 09/07/05 21:56:42 WARN zookeeper.ZooKeeperWrapper: Failed to create /hbase:
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
>     at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:522)
>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:371)
>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:392)
>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.checkOutOfSafeMode(ZooKeeperWrapper.java:505)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:844)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:564)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:84)
>     at PageRank$PageRankMap.map(PageRank.java:75)
>     at PageRank$PageRankMap.map(PageRank.java:47)
>     at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:363)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:312)
>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:178)
> 09/07/05 21:56:42 INFO mapred.JobClient:  map 0% reduce 0%
> 09/07/05 21:56:43 INFO zookeeper.ClientCnxn: Attempting connection to server localhost/127.0.0.1:2181
> 09/07/05 21:56:43 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/127.0.0.1:37810 remote=localhost/127.0.0.1:2181]
> 09/07/05 21:56:43 INFO zookeeper.ClientCnxn: Server connection successful
> 09/07/05 21:56:43 WARN zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@2e257f1b
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
>     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
>
>
>
> public class PageRank extends Configured implements Tool {
>
>     public static class PageRankMap extends MapReduceBase
>             implements Mapper<LongWritable, EdgeRecord, LongWritable, DoubleWritable> {
>         public void map(LongWritable key, EdgeRecord value,
>                 OutputCollector<LongWritable, DoubleWritable> output,
>                 Reporter reporter) throws IOException {
>             // Pagerank of the page
>             double pagerank = 1;
>
>             HBaseConfiguration hbc = new HBaseConfiguration();
>             HTable ht = new HTable(hbc, "PageRank");
>             Get g = new Get(Bytes.toBytes(value.getNode1()));
>             Result r = ht.get(g);
>
>             byte[] read_value;
>             if (r.containsColumn(Bytes.toBytes("pagerank"), Bytes.toBytes("0"))) {
>                 read_value = r.getValue(Bytes.toBytes("pagerank"), Bytes.toBytes("0"));
>                 pagerank = Double.parseDouble(new String(read_value));
>             }
>
>             System.out.println(value.getNode1() + " --> " + pagerank);
>             int numoutlinks = 10;
>
>             // Each outlink receives an equal share of this page's rank.
>             output.collect(new LongWritable(value.getNode2()),
>                     new DoubleWritable(pagerank / numoutlinks));
>         }
>     }
>
>     public static class PageRankReduce extends MapReduceBase
>             implements Reducer<LongWritable, DoubleWritable, LongWritable, DoubleWritable> {
>         // The pagerank damping factor
>         private final double damping = 0.85;
>
>         public void reduce(LongWritable key, Iterator<DoubleWritable> values,
>                 OutputCollector<LongWritable, DoubleWritable> output,
>                 Reporter reporter) throws IOException {
>             double pagerank = 0;
>
>             while (values.hasNext()) {
>                 pagerank += values.next().get();
>             }
>
>             // Now apply the damping factor
>             pagerank = (1 - damping) + damping * pagerank;
>
>             HTable ht = new HTable("PageRank");
>             Put p = new Put(Bytes.toBytes(key.get()));
>             p.add(Bytes.toBytes("pagerank"), Bytes.toBytes("0"),
>                     Double.toString(pagerank).getBytes());
>             ht.put(p);
>
>             output.collect(key, new DoubleWritable(pagerank));
>         }
>     }
>
>     public int run(String[] args) throws Exception {
>         JobConf conf = new JobConf(getConf(), PageRank.class);
>         HBaseConfiguration hbc = new HBaseConfiguration(getConf());
>
>         HBaseAdmin.checkHBaseAvailable(hbc);
>
>         // Recreate the PageRank table from scratch on every run.
>         HBaseAdmin hba = new HBaseAdmin(hbc);
>         if (hba.tableExists("PageRank")) {
>             hba.disableTable("PageRank");
>             hba.deleteTable("PageRank");
>         }
>         HTableDescriptor htd = new HTableDescriptor("PageRank".getBytes());
>         htd.addFamily(new HColumnDescriptor("pagerank"));
>         hba.createTable(htd);
>
>         conf.setInputFormat(EdgeRecordInputFormat.class);
>         conf.setOutputFormat(TextOutputFormat.class);
>
>         conf.setMapOutputKeyClass(LongWritable.class);
>         conf.setMapOutputValueClass(DoubleWritable.class);
>
>         conf.setOutputKeyClass(LongWritable.class);
>         conf.setOutputValueClass(DoubleWritable.class);
>
>         conf.setMapperClass(PageRankMap.class);
>         conf.setReducerClass(PageRankReduce.class);
>
>         try {
>             FileInputFormat.setInputPaths(conf, args[0]);
>             TextOutputFormat.setOutputPath(conf, new Path(args[1]));
>
>             conf.setJobName("PageRank computation");
>             JobClient.runJob(conf);
>         }
>         catch (Exception e) {
>             e.printStackTrace();
>             System.out.println("Incorrect argument format!");
>         }
>
>         System.out.println("Outputs are:");
>
>         HTable ht = new HTable("PageRank");
>
>         Scan s = new Scan();
>         s.addColumn(Bytes.toBytes("pagerank"), Bytes.toBytes("0"));
>         ResultScanner scanner = ht.getScanner(s);
>
>         try {
>             for (Result rr : scanner) {
>                 System.out.println("Found row: " + rr);
>             }
>         } finally {
>             scanner.close();
>         }
>         System.out.println("Done!");
>         return 0;
>     }
>
>     public static void main(String[] args) throws Exception {
>         System.exit(ToolRunner.run(new PageRank(), args));
>     }
> }
>
> - Naresh Rapolu
>