Posted to user@hbase.apache.org by Hari Sreekumar <hs...@clickable.com> on 2011/03/28 15:29:27 UTC

Unable to create table

Hi,

I am trying to create table in hbase v0.90.1 and I get the following error:

11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket connection to
server hadoop2/192.168.1.111:2181
11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection established
to hadoop2/192.168.1.111:2181, initiating session
11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment complete
on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000b,
negotiated timeout = 180000
11/03/28 18:39:52 INFO client.HConnectionManager$HConnectionImplementation:
Closed zookeeper sessionid=0x12efc946d66000b
11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session: 0x12efc946d66000b
closed
11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException: No
server address listed in .META. for region
AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while creating
table: Table1

This is the code I am using:
.....
HTableDescriptor desc = CreateTableByXML.convertSchemaToDescriptor(schema);
try {
    hbaseAdmin.createTable(desc);
} catch (IOException e) {
    CreateTableByXML.LOG.error("Caught IOException: " + e.getMessage()
            + " while creating table: " + tableName);
}
.....


It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it started
giving this error. Any ideas?

Hari

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
The LZO compression test works when I run it through Hadoop, but not when I
run it through HBase:

[hadoop@hadoop1 opt]$ $HADOOP_HOME/bin/hadoop
org.apache.hadoop.hbase.util.CompressionTest hdfs://hadoop1:54310 lzo
11/03/30 10:18:26 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
11/03/30 10:18:26 INFO lzo.LzoCodec: Successfully loaded & initialized
native-lzo library
11/03/30 10:18:26 INFO compress.CodecPool: Got brand-new compressor
[hadoop@hadoop1 opt]$ $HBASE_HOME/bin/hbase
org.apache.hadoop.hbase.util.CompressionTest hdfs://hadoop1:54310 lzo
11/03/30 10:19:48 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl
library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at
com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:31)
        at
com.hadoop.compression.lzo.LzoCodec.isNativeLzoLoaded(LzoCodec.java:69)
        at
com.hadoop.compression.lzo.LzoCodec.getCompressorType(LzoCodec.java:146)
        at
org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:98)
        at
org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:200)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.getCompressingStream(HFile.java:397)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.newBlock(HFile.java:383)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkBlockBoundary(HFile.java:354)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:536)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:515)
        at
org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:126)
11/03/30 10:19:48 ERROR lzo.LzoCodec: Cannot load native-lzo without
native-hadoop
java.lang.RuntimeException: native-lzo library not available
        at
com.hadoop.compression.lzo.LzoCodec.getCompressorType(LzoCodec.java:147)
        at
org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:98)
        at
org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:200)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.getCompressingStream(HFile.java:397)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.newBlock(HFile.java:383)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkBlockBoundary(HFile.java:354)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:536)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:515)
        at
org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:126)
FAILED

I had the native libs in $HADOOP_HOME/lib only, not in $HBASE_HOME/lib,
because I had not created the directory $HBASE_HOME/lib/native. Now I have
them in HBase too. Also, I was previously building the native libs on the
master node only and rsyncing $HADOOP_HOME/lib to all nodes; now I build
them individually on every node (not sure if that made a difference). It
works now.
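For anyone hitting the same UnsatisfiedLinkError: the fix above amounts to giving HBase its own copy of the native GPL compression libs, in the same layout Hadoop uses. A minimal sketch follows — the paths and the platform directory name (e.g. Linux-amd64-64) are assumptions, so substitute your own install locations:

```shell
# Sketch only: mirror Hadoop's native-lib layout under HBase.
# HBASE_DEMO and PLATFORM below are illustrative; use your real
# $HBASE_HOME and the platform string your JVM reports.
HBASE_DEMO=/tmp/hbase-demo
PLATFORM=Linux-amd64-64
mkdir -p "$HBASE_DEMO/lib/native/$PLATFORM"
# On a real cluster, copy the libgplcompression libs built on *this*
# node (not a binary rsync'd from another machine), e.g.:
#   cp "$HADOOP_HOME/lib/native/$PLATFORM"/libgplcompression.* \
#      "$HBASE_DEMO/lib/native/$PLATFORM/"
# ...and repeat on every regionserver.
ls -d "$HBASE_DEMO/lib/native/$PLATFORM"
```

Once the directory is populated on each node, re-running the CompressionTest through the hbase script (as shown above) is a quick way to confirm the library now loads.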

Thanks a lot for the help,
Hari


On Wed, Mar 30, 2011 at 10:12 AM, Hari Sreekumar
<hs...@clickable.com>wrote:

> Ah, yea. I have this error in the regionserver log:
>
> 2011-03-30 10:09:26,534 DEBUG
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=1.63 MB,
> free=196.71 MB, max=198.34 MB, blocks=0, accesses=0, hits=0, hitRatio=NaN%,
> cachingAccesses=0, cachingHits=0, cachingHitsRatio=NaN%, evictions=0,
> evicted=0, evictedPerRun=NaN
> 2011-03-30 10:09:37,328 DEBUG
> org.apache.hadoop.hbase.regionserver.LogRoller: Hlog roll period 3600000ms
> elapsed
> 2011-03-30 10:09:46,369 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open
> region: AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
> 2011-03-30 10:09:46,369 DEBUG
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Processing
> open of AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
> 2011-03-30 10:09:46,370 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
> regionserver:60020-0x12f02e9bdec0007 Attempting to transition node
> 95652cfd0dab6dcf3be0c837c539f8d3 from M_ZK_REGION_OFFLINE to
> RS_ZK_REGION_OPENING
> 2011-03-30 10:09:46,372 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
> regionserver:60020-0x12f02e9bdec0007 Successfully transitioned node
> 95652cfd0dab6dcf3be0c837c539f8d3 from M_ZK_REGION_OFFLINE to
> RS_ZK_REGION_OPENING
> 2011-03-30 10:09:46,372 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
> Opening region: REGION => {NAME =>
> 'AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.', STARTKEY =>
> '', ENDKEY => '', ENCODED => 95652cfd0dab6dcf3be0c837c539f8d3, TABLE =>
> {{NAME => 'AcContact', FAMILIES => [{NAME => 'Data', BLOOMFILTER => 'NONE',
> REPLICATION_SCOPE => '0', COMPRESSION => 'LZO', VERSIONS => '1000', TTL =>
> '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE =>
> 'true'}]}}
> 2011-03-30 10:09:46,373 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
> Instantiated AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
> 2011-03-30 10:09:46,373 ERROR
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open
> of region=AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
> java.io.IOException: Compression algorithm 'lzo' previously failed test.
>         at
> org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:77)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:2555)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2544)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2532)
>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:262)
>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:94)
>         at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
>
> Will try to fix LZO here and let you know..
>
> On Wed, Mar 30, 2011 at 1:24 AM, Jean-Daniel Cryans <jd...@apache.org>wrote:
>
>> hadoop3 was trying to open it, but it seems like it's not able to. It
>> really looks like: https://issues.apache.org/jira/browse/HBASE-3669
>>
>> You should also check what's going on on that slave.
>>
>> J-D
>>
>> On Tue, Mar 29, 2011 at 12:12 PM, Hari Sreekumar
>> <hs...@clickable.com> wrote:
>> > Yep I know, I think I was also on the mailing list thread that inspired
>> > HBASE-3557 :) But what can I do in the current version better than that?
>> >
>> > In any case, I do that only when I catch IOException. So why am I
>> getting
>> > this IOException? I deleted the hbase folder in HDFS and tried doing
>> > everything from start. Here is the HMaster log before the IOException
>> was
>> > thrown: http://pastebin.com/x1BUuPpQ
>> >
>> > thanks,
>> > Hari
>> >
>> > On Wed, Mar 30, 2011 at 12:21 AM, Jean-Daniel Cryans <
>> jdcryans@apache.org>wrote:
>> >
>> >> There's a reason why disabling takes time, if you delete rows from
>> >> .META. you might end up in an inconsistent situation and we'll have a
>> >> hard time helping you :)
>> >>
>> >> So HBASE-3557 is what you want.
>> >>
>> >> Regarding your current issue, RIT is "region in transition" meaning
>> >> that the region is in a state recognized by the master as either
>> >> moving from one region server to another, or just closing. Normally
>> >> after disabling a table, its regions have all left their in-transition
>> >> state, but by deleting the rows directly in .META. who knows exactly
>> >> what's the state of your table?
>> >>
>> >> J-D
>> >>
>> >> On Tue, Mar 29, 2011 at 11:29 AM, Hari Sreekumar
>> >> <hs...@clickable.com> wrote:
>> >> > Hi J-D,
>> >> >
>> >> > Here is the tail of HMaster log:
>> >> >
>> >> > 2011-03-29 23:48:51,155 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:52,158 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:53,161 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:54,163 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:55,164 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:56,166 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:57,168 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:58,170 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:48:59,172 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:49:00,174 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:49:01,176 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:49:02,178 DEBUG
>> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >> >  region to clear regions in transition;
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
>> state=OPENING,
>> >> > ts=1301422405271
>> >> > 2011-03-29 23:49:02,178 ERROR
>> >> > org.apache.hadoop.hbase.master.handler.TableEventHandler: Error
>> >> manipulating
>> >> > table AcContact
>> >> > java.io.IOException: Waited hbase.master.wait.on.region (300000ms)
>> for
>> >> > region to leave region
>> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in
>> transitions
>> >> >        at
>> >> >
>> >>
>> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
>> >> >        at
>> >> >
>> >>
>> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
>> >> >        at
>> >> >
>> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>> >> >        at
>> >> >
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >> >        at
>> >> >
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >> >        at java.lang.Thread.run(Thread.java:662)
>> >> >
>> >> > What is "clearing regions in transition"? What could be the issue?
>> >> >
>> >> > I was told before too that deleting from META isn't recommended. But
>> we
>> >> were
>> >> > facing the disable problem way too often. So we thought we'd make it
>> >> > in-built. What other alternatives do I have?
>> >> >
>> >> > Thx,
>> >> > Hari
>> >> >
>> >> > On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <
>> >> jdcryans@apache.org>wrote:
>> >> >
>> >> >> The 60 secs timeout means that the client was waiting on the master
>> >> >> for some operation but the master took longer than 60 secs to do it,
>> >> >> so its log should be the next place to look for something whack.
>> >> >>
>> >> >> BTW deleting the rows from .META. directly is probably the worst
>> thing
>> >> >> you can do.
>> >> >>
>> >> >> J-D
>> >> >>
>> >> >> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
>> >> >> <hs...@clickable.com> wrote:
>> >> >> > Here is the stack trace:
>> >> >> >
>> >> >> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client
>> >> connection,
>> >> >> > connectString=hadoop2:2181 sessionTimeout=180000
>> watcher=hconnection
>> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket
>> connection
>> >> to
>> >> >> > server hadoop2/192.168.1.111:2181
>> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
>> >> >> established
>> >> >> > to hadoop2/192.168.1.111:2181, initiating session
>> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
>> >> >> complete
>> >> >> > on server hadoop2/192.168.1.111:2181, sessionid =
>> 0x12efc946d66000c,
>> >> >> > negotiated timeout = 180000
>> >> >> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
>> >> >> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
>> >> >> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table
>> entries
>> >> >> from
>> >> >> > META. Retrying to create table
>> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create
>> >> table
>> >> >> even
>> >> >> > after Cleaning Meta entries
>> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> >> >> > *************************************************************
>> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
>> >> >> > 192.168.1.57:60000 failed on socket timeout exception:
>> >> >> > java.net.SocketTimeoutException: 60000 millis timeout while
>> waiting
>> >> for
>> >> >> > channel to be ready for read. ch :
>> >> >> java.nio.channels.SocketChannel[connected
>> >> >> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
>> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> >> >> >
>> >> >>
>> >>
>> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
>> >> >> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
>> >> >> >
>> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
>> >> >> > $Proxy4.createTable(Unknown Source)
>> >> >> >
>> >> >>
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
>> >> >> >
>> >> >>
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
>> >> >> >
>> >> >>
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
>> >> >> >
>> >> >>
>> >>
>> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
>> >> >> > Source)
>> >> >> >
>> com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
>> >> >> Source)
>> >> >> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >
>> >> >>
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >
>> >> >>
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> > java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >> >
>> >> >> > The "CleanFromMeta" function catches IOException and deletes all
>> rows
>> >> >> from
>> >> >> > .META. We had added this in the exception catch block because we
>> used
>> >> to
>> >> >> > face the "Table taking too long to be disabled" exception often.
>> It
>> >> seems
>> >> >> > the rows in META already get created when the IOException is
>> thrown.
>> >> >> > CleanFromMeta cleans .META. and then I try again to create the
>> table,
>> >> >> after
>> >> >> > which I get the socket timeout exception.
>> >> >> >
>> >> >> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this
>> >> error?
>> >> >> I
>> >> >> > get the message "You are currently running the HMaster without
>> HDFS
>> >> >> append
>> >> >> > support enabled. This may result in data loss. Please see the
>> HBase
>> >> >> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
>> >> >> > details." on the HBase Master UI.
>> >> >> >
>> >> >> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
>> >> >> > <hs...@clickable.com>wrote:
>> >> >> >
>> >> >> >> Hi Stack
>> >> >> >>
>> >> >> >> yes the tablename is AcContact. The tableName variable was wrong.
>> >> >> >> Fixed it now but I still get the same error. Schema is just
>> something
>> >> >> >> created by parsing an XML file which has stuff like column family
>> >> >> >> name, compression type etc so I guess it doesn't have much to do
>> with
>> >> >> >> version. Except that I had to change the bloom filter variable to
>> >> >> >> String (used to be boolean in 0.20.6). I will paste the stack
>> trace
>> >> >> >> asap.
>> >> >> >>
>> >> >> >> hari
>> >> >> >>
>> >> >> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
>> >> >> >> > Can I see more of the stack track please Hari and is AcContact
>> the
>> >> >> >> > table you are creating?  Is the schema you've saved aside one
>> you
>> >> >> >> > created with 0.20 hbase?  I don't think it matters but asking
>> just
>> >> in
>> >> >> >> > case.
>> >> >> >> > St.Ack
>> >> >> >> >
>> >> >> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
>> >> >> >> > <hs...@clickable.com> wrote:
>> >> >> >> >> Hi,
>> >> >> >> >>
>> >> >> >> >> I am trying to create table in hbase v0.90.1 and I get the
>> >> following
>> >> >> >> error:
>> >> >> >> >>
>> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
>> >> >> connection
>> >> >> >> to
>> >> >> >> >> server hadoop2/192.168.1.111:2181
>> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
>> >> >> >> established
>> >> >> >> >> to hadoop2/192.168.1.111:2181, initiating s
>> >> >> >> >> ession
>> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session
>> establishment
>> >> >> >> complete
>> >> >> >> >> on server hadoop2/192.168.1.111:2181, sess
>> >> >> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
>> >> >> >> >> 11/03/28 18:39:52 INFO
>> >> >> >> client.HConnectionManager$HConnectionImplementation:
>> >> >> >> >> Closed zookeeper sessionid=0x12efc946d6600
>> >> >> >> >> 0b
>> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
>> >> >> 0x12efc946d66000b
>> >> >> >> >> closed
>> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut
>> down
>> >> >> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught
>> >> IOException:
>> >> >> No
>> >> >> >> >> server address listed in .META. for region
>> >> >> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
>> while
>> >> >> >> creating
>> >> >> >> >> table: Table1
>> >> >> >> >>
>> >> >> >> >> This is the code I am using:
>> >> >> >> >> .....
>> >> >> >> >> HTableDescriptor desc =
>> >> >> >> CreateTableByXML.convertSchemaToDescriptor(schema);
>> >> >> >> >>    try {
>> >> >> >> >>      hbaseAdmin.createTable(desc);
>> >> >> >> >>    } catch (IOException e) {
>> >> >> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
>> >> >> e.getMessage()
>> >> >> >> >>              + " while creating table: " + tableName);
>> >> >> >> >> ....
>> >> >> >> >> .....
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1
>> and
>> >> it
>> >> >> >> started
>> >> >> >> >> giving this error. Any ideas?
>> >> >> >> >>
>> >> >> >> >> Hari
>> >> >> >> >>
>> >> >> >> >
>> >> >> >>
>> >> >> >
>> >> >>
>> >> >
>> >>
>> >
>>
>
>

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
Ah, yea. I have this error in the regionserver log:

2011-03-30 10:09:26,534 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=1.63 MB,
free=196.71 MB, max=198.34 MB, blocks=0, accesses=0, hits=0, hitRatio=�%,
cachingAccesses=0, cachingHits=0, cachingHitsRatio=�%, evictions=0,
evicted=0, evictedPerRun=NaN
2011-03-30 10:09:37,328 DEBUG
org.apache.hadoop.hbase.regionserver.LogRoller: Hlog roll period 3600000ms
elapsed
2011-03-30 10:09:46,369 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open
region: AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
2011-03-30 10:09:46,369 DEBUG
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Processing
open of AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
2011-03-30 10:09:46,370 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
regionserver:60020-0x12f02e9bdec0007 Attempting to transition node
95652cfd0dab6dcf3be0c837c539f8d3 from M_ZK_REGION_OFFLINE to
RS_ZK_REGION_OPENING
2011-03-30 10:09:46,372 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
regionserver:60020-0x12f02e9bdec0007 Successfully transitioned node
95652cfd0dab6dcf3be0c837c539f8d3 from M_ZK_REGION_OFFLINE to
RS_ZK_REGION_OPENING
2011-03-30 10:09:46,372 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
Opening region: REGION => {NAME =>
'AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.', STARTKEY =>
'', ENDKEY => '', ENCODED => 95652cfd0dab6dcf3be0c837c539f8d3, TABLE =>
{{NAME => 'AcContact', FAMILIES => [{NAME => 'Data', BLOOMFILTER => 'NONE',
REPLICATION_SCOPE => '0', COMPRESSION => 'LZO', VERSIONS => '1000', TTL =>
'2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE =>
'true'}]}}
2011-03-30 10:09:46,373 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
Instantiated AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
2011-03-30 10:09:46,373 ERROR
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open
of region=AcContact,,1301424009632.95652cfd0dab6dcf3be0c837c539f8d3.
java.io.IOException: Compression algorithm 'lzo' previously failed test.
        at
org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:77)
        at
org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:2555)
        at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2544)
        at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2532)
        at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:262)
        at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:94)
        at
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

Will try to fix LZO here and let you know..

On Wed, Mar 30, 2011 at 1:24 AM, Jean-Daniel Cryans <jd...@apache.org>wrote:

> hadoop3 was trying to open it, but it seems like it's not able to. It
> really looks like: https://issues.apache.org/jira/browse/HBASE-3669
>
> You should also check what's going on on that slave.
>
> J-D
>
> On Tue, Mar 29, 2011 at 12:12 PM, Hari Sreekumar
> <hs...@clickable.com> wrote:
> > Yep I know, I think I was also on the mailing list thread that inspired
> > HBASE-3557 :) But what can I do in the current version better than that?
> >
> > In any case, I do that only when I catch IOException. So why am I getting
> > this IOException. I deleted the hbase folder in HDFS and tried doing
> > everything from start. Here is the HMaster log before the IOException was
> > thrown: http://pastebin.com/x1BUuPpQ
> >
> > thanks,
> > Hari
> >
> > On Wed, Mar 30, 2011 at 12:21 AM, Jean-Daniel Cryans <
> jdcryans@apache.org>wrote:
> >
> >> There's a reason why disabling takes time, if you delete rows from
> >> .META. you might end up in an inconsistent situation and we'll have a
> >> hard time helping you :)
> >>
> >> So HBASE-3557 is what you want.
> >>
> >> Regarding your current issue, RIT is "region in transition" meaning
> >> that the region is in a state recognized by the master as either
> >> moving from one region server to another, or just closing. Normally
> >> after disabling a table its regions all left their in transition
> >> state, but by deleting the rows directly in .META. who knows exactly
> >> what's the state of your table?
> >>
> >> J-D
> >>
> >> On Tue, Mar 29, 2011 at 11:29 AM, Hari Sreekumar
> >> <hs...@clickable.com> wrote:
> >> > Hi J-D,
> >> >
> >> > Here is the tail of HMaster log:
> >> >
> >> > 2011-03-29 23:48:51,155 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:52,158 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:53,161 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:54,163 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:55,164 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:56,166 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:57,168 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:58,170 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:48:59,172 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:49:00,174 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:49:01,176 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:49:02,178 DEBUG
> >> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >> >  region to clear regions in transition;
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215.
> state=OPENING,
> >> > ts=1301422405271
> >> > 2011-03-29 23:49:02,178 ERROR
> >> > org.apache.hadoop.hbase.master.handler.TableEventHandler: Error
> >> manipulating
> >> > table AcContact
> >> > java.io.IOException: Waited hbase.master.wait.on.region (300000ms) for
> >> > region to leave region
> >> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in
> transitions
> >> >        at
> >> >
> >>
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
> >> >        at
> >> >
> >>
> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
> >> >        at
> >> >
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
> >> >        at
> >> >
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >> >        at
> >> >
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> >        at java.lang.Thread.run(Thread.java:662)
> >> >
> >> > What is "clearing regions in transition"? What could be the issue?
> >> >
> >> > I was told before too that deleting from META isn't recommended. But
> we
> >> were
> >> > facing the disable problem way too often. So we thought we'd make it
> >> > in-built. What other alternatives do I have?
> >> >
> >> > Thx,
> >> > Hari
> >> >
> >> > On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <
> >> jdcryans@apache.org>wrote:
> >> >
> >> >> The 60 secs timeout means that the client was waiting on the master
> >> >> for some operation but the master took longer than 60 secs to do it,
> >> >> so its log should be the next place to look for something whack.
> >> >>
> >> >> BTW deleting the rows from .META. directly is probably the worst
> thing
> >> >> you can do.
> >> >>
> >> >> J-D
> >> >>
> >> >> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
> >> >> <hs...@clickable.com> wrote:
> >> >> > Here is the stack trace:
> >> >> >
> >> >> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client
> >> connection,
> >> >> > connectString=hadoop2:2181 sessionTimeout=180000
> watcher=hconnection
> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket
> connection
> >> to
> >> >> > server hadoop2/192.168.1.111:2181
> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
> >> >> established
> >> >> > to hadoop2/192.168.1.111:2181, initiating session
> >> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
> >> >> complete
> >> >> > on server hadoop2/192.168.1.111:2181, sessionid =
> 0x12efc946d66000c,
> >> >> > negotiated timeout = 180000
> >> >> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
> >> >> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
> >> >> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table
> entries
> >> >> from
> >> >> > META. Retrying to create table
> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create
> >> table
> >> >> even
> >> >> > after Cleaning Meta entries
> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> >> >> > *************************************************************
> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
> >> >> > 192.168.1.57:60000 failed on socket timeout exception:
> >> >> > java.net.SocketTimeoutException: 60000 millis timeout while waiting
> >> for
> >> >> > channel to be ready for read. ch :
> >> >> java.nio.channels.SocketChannel[connected
> >> >> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
> >> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> >> >> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> >> >> >
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> >> >> > $Proxy4.createTable(Unknown Source)
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
> >> >> >
> >> >>
> >>
> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
> >> >> > Source)
> >> >> > com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
> >> >> Source)
> >> >> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >> >
> >> >>
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >
> >> >>
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> > java.lang.reflect.Method.invoke(Method.java:597)
> >> >> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >
> >> >> > The "CleanFromMeta" function catches IOException and deletes all
> rows
> >> >> from
> >> >> > .META. We had added this in the exception catch block because we
> used
> >> to
> >> >> > face the "Table taking too long to be disabled" exception often. It
> >> seems
> >> >> > the rows in META already get created when the IOException is
> thrown.
> >> >> > CleanFromMeta cleans .META. and then I try again to create the
> table,
> >> >> after
> >> >> > which I get the socket timeout exception.
> >> >> >
> >> >> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this
> >> error?
> >> >> I
> >> >> > get the message "You are currently running the HMaster without HDFS
> >> >> append
> >> >> > support enabled. This may result in data loss. Please see the HBase
> >> >> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
> >> >> > details." on the HBase Master UI.
> >> >> >
> >> >> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
> >> >> > <hs...@clickable.com>wrote:
> >> >> >
> >> >> >> Hi Stack
> >> >> >>
> >> >> >> yes the tablename is AcContact. The tableName variable was wrong.
> >> >> >> Fixed it now but I still get the same error. Schema is just
> something
> >> >> >> created by parsing an XML file which has stuff like column family
> >> >> >> name, compression type etc so I guess it doesn't have much to do
> with
> >> >> >> version. Except that I had to change the bloom filter variable to
> >> >> >> String ( used to be boolean in 0.20.6). I will paste the stack
> trace
> >> >> >> asap
> >> >> >>
> >> >> >> hari
> >> >> >>
> >> >> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
> >> >> > Can I see more of the stack trace please Hari and is AcContact
> the
> >> >> >> > table you are creating?  Is the schema you've saved aside one
> you
> >> >> >> > created with 0.20 hbase?  I don't think it matters but asking
> just
> >> in
> >> >> >> > case.
> >> >> >> > St.Ack
> >> >> >> >
> >> >> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
> >> >> >> > <hs...@clickable.com> wrote:
> >> >> >> >> Hi,
> >> >> >> >>
> >> >> >> >> I am trying to create a table in HBase v0.90.1 and I get the
> >> following
> >> >> >> error:
> >> >> >> >>
> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
> >> >> connection
> >> >> >> to
> >> >> >> >> server hadoop2/192.168.1.111:2181
> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
> >> >> >> established
> >> >> >> >> to hadoop2/192.168.1.111:2181, initiating s
> >> >> >> >> ession
> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session
> establishment
> >> >> >> complete
> >> >> >> >> on server hadoop2/192.168.1.111:2181, sess
> >> >> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
> >> >> >> >> 11/03/28 18:39:52 INFO
> >> >> >> client.HConnectionManager$HConnectionImplementation:
> >> >> >> >> Closed zookeeper sessionid=0x12efc946d6600
> >> >> >> >> 0b
> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
> >> >> 0x12efc946d66000b
> >> >> >> >> closed
> >> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut
> down
> >> >> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught
> >> IOException:
> >> >> No
> >> >> >> >> server address listed in .META. for region
> >> >> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
> while
> >> >> >> creating
> >> >> >> >> table: Table1
> >> >> >> >>
> >> >> >> >> This is the code I am using:
> >> >> >> >> .....
> >> >> >> >> HTableDescriptor desc =
> >> >> >> CreateTableByXML.convertSchemaToDescriptor(schema);
> >> >> >> >>    try {
> >> >> >> >>      hbaseAdmin.createTable(desc);
> >> >> >> >>    } catch (IOException e) {
> >> >> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
> >> >> e.getMessage()
> >> >> >> >>              + " while creating table: " + tableName);
> >> >> >> >> ....
> >> >> >> >> .....
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1
> and
> >> it
> >> >> >> started
> >> >> >> >> giving this error. Any ideas?
> >> >> >> >>
> >> >> >> >> Hari
> >> >> >> >>
> >> >> >> >
> >> >> >>
> >> >> >
> >> >>
> >> >
> >>
> >
>

Re: Unable to create table

Posted by Jean-Daniel Cryans <jd...@apache.org>.
hadoop3 was trying to open it, but it seems like it's not able to. It
really looks like: https://issues.apache.org/jira/browse/HBASE-3669

You should also check what's going on on that slave.

J-D

On Tue, Mar 29, 2011 at 12:12 PM, Hari Sreekumar
<hs...@clickable.com> wrote:
> Yep I know, I think I was also on the mailing list thread that inspired
> HBASE-3557 :) But what can I do in the current version better than that?
>
> In any case, I do that only when I catch an IOException. So why am I getting
> this IOException? I deleted the hbase folder in HDFS and tried doing
> everything from the start. Here is the HMaster log before the IOException was
> thrown: http://pastebin.com/x1BUuPpQ
>
> thanks,
> Hari
>
> On Wed, Mar 30, 2011 at 12:21 AM, Jean-Daniel Cryans <jd...@apache.org>wrote:
>
>> There's a reason why disabling takes time; if you delete rows from
>> .META. you might end up in an inconsistent situation and we'll have a
>> hard time helping you :)
>>
>> So HBASE-3557 is what you want.
>>
>> Regarding your current issue, RIT is "region in transition" meaning
>> that the region is in a state recognized by the master as either
>> moving from one region server to another, or just closing. Normally
>> after disabling a table, all of its regions leave the in-transition
>> state; but by deleting the rows directly in .META., who knows exactly
>> what state your table is in?
>>
>> J-D
>>
>> On Tue, Mar 29, 2011 at 11:29 AM, Hari Sreekumar
>> <hs...@clickable.com> wrote:
>> > Hi J-D,
>> >
>> > Here is the tail of HMaster log:
>> >
>> > 2011-03-29 23:48:51,155 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:52,158 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:53,161 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:54,163 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:55,164 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:56,166 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:57,168 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:58,170 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:48:59,172 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:49:00,174 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:49:01,176 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:49:02,178 DEBUG
>> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>> >  region to clear regions in transition;
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
>> > ts=1301422405271
>> > 2011-03-29 23:49:02,178 ERROR
>> > org.apache.hadoop.hbase.master.handler.TableEventHandler: Error
>> manipulating
>> > table AcContact
>> > java.io.IOException: Waited hbase.master.wait.on.region (300000ms) for
>> > region to leave region
>> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in transitions
>> >        at
>> >
>> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
>> >        at
>> >
>> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
>> >        at
>> > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>> >        at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >        at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >        at java.lang.Thread.run(Thread.java:662)
>> >
>> > What is "clearing regions in transition"? What could be the issue?
>> >
>> > I was told before too that deleting from META isn't recommended. But we
>> were
>> > facing the disable problem way too often. So we thought we'd make it
>> > in-built. What other alternatives do I have?
>> >
>> > Thx,
>> > Hari
>> >
>> > On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <
>> jdcryans@apache.org>wrote:
>> >
>> >> The 60 secs timeout means that the client was waiting on the master
>> >> for some operation but the master took longer than 60 secs to do it,
>> >> so its log should be the next place to look for something whack.
>> >>
>> >> BTW deleting the rows from .META. directly is probably the worst thing
>> >> you can do.
>> >>
>> >> J-D
>> >>
>> >> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
>> >> <hs...@clickable.com> wrote:
>> >> > Here is the stack trace:
>> >> >
>> >> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client
>> connection,
>> >> > connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
>> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection
>> to
>> >> > server hadoop2/192.168.1.111:2181
>> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
>> >> established
>> >> > to hadoop2/192.168.1.111:2181, initiating session
>> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
>> >> complete
>> >> > on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
>> >> > negotiated timeout = 180000
>> >> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
>> >> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
>> >> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries
>> >> from
>> >> > META. Retrying to create table
>> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create
>> table
>> >> even
>> >> > after Cleaning Meta entries
>> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> >> > *************************************************************
>> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
>> >> > 192.168.1.57:60000 failed on socket timeout exception:
>> >> > java.net.SocketTimeoutException: 60000 millis timeout while waiting
>> for
>> >> > channel to be ready for read. ch :
>> >> java.nio.channels.SocketChannel[connected
>> >> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
>> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> >> >
>> >>
>> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
>> >> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
>> >> > org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
>> >> > $Proxy4.createTable(Unknown Source)
>> >> >
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
>> >> >
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
>> >> >
>> >>
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
>> >> >
>> >>
>> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
>> >> > Source)
>> >> > com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
>> >> Source)
>> >> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> > java.lang.reflect.Method.invoke(Method.java:597)
>> >> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >
>> >> > The "CleanFromMeta" function catches IOException and deletes all rows
>> >> from
>> >> > .META. We had added this in the exception catch block because we used
>> to
>> >> > face the "Table taking too long to be disabled" exception often. It
>> seems
>> >> > the rows in META already get created when the IOException is thrown.
>> >> > CleanFromMeta cleans .META. and then I try again to create the table,
>> >> after
>> >> > which I get the socket timeout exception.
>> >> >
>> >> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this
>> error?
>> >> I
>> >> > get the message "You are currently running the HMaster without HDFS
>> >> append
>> >> > support enabled. This may result in data loss. Please see the HBase
>> >> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
>> >> > details." on the HBase Master UI.
>> >> >
>> >> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
>> >> > <hs...@clickable.com>wrote:
>> >> >
>> >> >> Hi Stack
>> >> >>
>> >> >> yes the tablename is AcContact. The tableName variable was wrong.
>> >> >> Fixed it now but I still get the same error. Schema is just something
>> >> >> created by parsing an XML file which has stuff like column family
>> >> >> name, compression type etc so I guess it doesn't have much to do with
>> >> >> version. Except that I had to change the bloom filter variable to
>> >> >> String ( used to be boolean in 0.20.6). I will paste the stack trace
>> >> >> asap
>> >> >>
>> >> >> hari
>> >> >>
>> >> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
>> >> >> > Can I see more of the stack trace please Hari and is AcContact the
>> >> >> > table you are creating?  Is the schema you've saved aside one you
>> >> >> > created with 0.20 hbase?  I don't think it matters but asking just
>> in
>> >> >> > case.
>> >> >> > St.Ack
>> >> >> >
>> >> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
>> >> >> > <hs...@clickable.com> wrote:
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >> I am trying to create a table in HBase v0.90.1 and I get the
>> following
>> >> >> error:
>> >> >> >>
>> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
>> >> connection
>> >> >> to
>> >> >> >> server hadoop2/192.168.1.111:2181
>> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
>> >> >> established
>> >> >> >> to hadoop2/192.168.1.111:2181, initiating s
>> >> >> >> ession
>> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
>> >> >> complete
>> >> >> >> on server hadoop2/192.168.1.111:2181, sess
>> >> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
>> >> >> >> 11/03/28 18:39:52 INFO
>> >> >> client.HConnectionManager$HConnectionImplementation:
>> >> >> >> Closed zookeeper sessionid=0x12efc946d6600
>> >> >> >> 0b
>> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
>> >> 0x12efc946d66000b
>> >> >> >> closed
>> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
>> >> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught
>> IOException:
>> >> No
>> >> >> >> server address listed in .META. for region
>> >> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
>> >> >> creating
>> >> >> >> table: Table1
>> >> >> >>
>> >> >> >> This is the code I am using:
>> >> >> >> .....
>> >> >> >> HTableDescriptor desc =
>> >> >> CreateTableByXML.convertSchemaToDescriptor(schema);
>> >> >> >>    try {
>> >> >> >>      hbaseAdmin.createTable(desc);
>> >> >> >>    } catch (IOException e) {
>> >> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
>> >> e.getMessage()
>> >> >> >>              + " while creating table: " + tableName);
>> >> >> >> ....
>> >> >> >> .....
>> >> >> >>
>> >> >> >>
>> >> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and
>> it
>> >> >> started
>> >> >> >> giving this error. Any ideas?
>> >> >> >>
>> >> >> >> Hari
>> >> >> >>
>> >> >> >
>> >> >>
>> >> >
>> >>
>> >
>>
>

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
Yep I know, I think I was also on the mailing list thread that inspired
HBASE-3557 :) But what can I do better than that in the current version?

In any case, I do that only when I catch an IOException. So why am I getting
this IOException? I deleted the hbase folder in HDFS and tried doing
everything from the start. Here is the HMaster log before the IOException was
thrown: http://pastebin.com/x1BUuPpQ
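
For reference, here's the kind of retry loop we're planning to use instead of
touching .META. directly. It's only a sketch: the HBase call is stubbed out (a
real version would call HBaseAdmin.disableTable()/deleteTable() there), and the
attempt/backoff numbers are made-up placeholders:

```java
import java.io.IOException;

// Bounded retry around a disable/delete operation, instead of deleting
// .META. rows directly. doDisable() is a stand-in: a real version would
// call HBaseAdmin.disableTable(tableName) / deleteTable(tableName).
public class RetryDisable {
    static int attempts = 0;

    // Stub that fails twice with "regions in transition", then succeeds,
    // to simulate a table that takes a while to disable.
    static void doDisable() throws IOException {
        attempts++;
        if (attempts < 3) {
            throw new IOException("regions still in transition");
        }
    }

    static boolean disableWithRetries(int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int i = 1; i <= maxAttempts; i++) {
            try {
                doDisable();
                return true;                 // disabled cleanly
            } catch (IOException e) {
                if (i == maxAttempts) {
                    return false;            // give up; leave .META. alone
                }
                Thread.sleep(backoffMs * i); // linear backoff between tries
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(disableWithRetries(5, 10)); // prints "true"
    }
}
```

We'd tune the attempt count and backoff against hbase.master.wait.on.region
rather than hardcoding them like this.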

thanks,
Hari

On Wed, Mar 30, 2011 at 12:21 AM, Jean-Daniel Cryans <jd...@apache.org>wrote:

> There's a reason why disabling takes time; if you delete rows from
> .META. you might end up in an inconsistent situation and we'll have a
> hard time helping you :)
>
> So HBASE-3557 is what you want.
>
> Regarding your current issue, RIT is "region in transition" meaning
> that the region is in a state recognized by the master as either
> moving from one region server to another, or just closing. Normally
> after disabling a table, all of its regions leave the in-transition
> state; but by deleting the rows directly in .META., who knows exactly
> what state your table is in?
>
> J-D
>
> On Tue, Mar 29, 2011 at 11:29 AM, Hari Sreekumar
> <hs...@clickable.com> wrote:
> > Hi J-D,
> >
> > Here is the tail of HMaster log:
> >
> > 2011-03-29 23:48:51,155 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:52,158 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:53,161 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:54,163 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:55,164 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:56,166 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:57,168 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:58,170 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:48:59,172 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:49:00,174 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:49:01,176 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:49:02,178 DEBUG
> > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
> >  region to clear regions in transition;
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> > ts=1301422405271
> > 2011-03-29 23:49:02,178 ERROR
> > org.apache.hadoop.hbase.master.handler.TableEventHandler: Error
> manipulating
> > table AcContact
> > java.io.IOException: Waited hbase.master.wait.on.region (300000ms) for
> > region to leave region
> > AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in transitions
> >        at
> >
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
> >        at
> >
> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
> >        at
> > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
> >        at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >        at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >        at java.lang.Thread.run(Thread.java:662)
> >
> > What is "clearing regions in transition"? What could be the issue?
> >
> > I was told before too that deleting from META isn't recommended. But we
> were
> > facing the disable problem way too often. So we thought we'd make it
> > in-built. What other alternatives do I have?
> >
> > Thx,
> > Hari
> >
> > On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <
> jdcryans@apache.org>wrote:
> >
> >> The 60 secs timeout means that the client was waiting on the master
> >> for some operation but the master took longer than 60 secs to do it,
> >> so its log should be the next place to look for something whack.
> >>
> >> BTW deleting the rows from .META. directly is probably the worst thing
> >> you can do.
> >>
> >> J-D
> >>
> >> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
> >> <hs...@clickable.com> wrote:
> >> > Here is the stack trace:
> >> >
> >> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client
> connection,
> >> > connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection
> to
> >> > server hadoop2/192.168.1.111:2181
> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
> >> established
> >> > to hadoop2/192.168.1.111:2181, initiating session
> >> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
> >> complete
> >> > on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
> >> > negotiated timeout = 180000
> >> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
> >> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
> >> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries
> >> from
> >> > META. Retrying to create table
> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create
> table
> >> even
> >> > after Cleaning Meta entries
> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> >> > *************************************************************
> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
> >> > 192.168.1.57:60000 failed on socket timeout exception:
> >> > java.net.SocketTimeoutException: 60000 millis timeout while waiting
> for
> >> > channel to be ready for read. ch :
> >> java.nio.channels.SocketChannel[connected
> >> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
> >> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> >> >
> >>
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> >> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> >> > org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> >> > $Proxy4.createTable(Unknown Source)
> >> >
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
> >> >
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
> >> >
> >>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
> >> >
> >>
> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
> >> > Source)
> >> > com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
> >> Source)
> >> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> > java.lang.reflect.Method.invoke(Method.java:597)
> >> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >
> >> > The "CleanFromMeta" function catches IOException and deletes all rows
> >> from
> >> > .META. We had added this in the exception catch block because we used
> to
> >> > face the "Table taking too long to be disabled" exception often. It
> seems
> >> > the rows in META already get created when the IOException is thrown.
> >> > CleanFromMeta cleans .META. and then I try again to create the table,
> >> after
> >> > which I get the socket timeout exception.
> >> >
> >> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this
> error?
> >> I
> >> > get the message "You are currently running the HMaster without HDFS
> >> append
> >> > support enabled. This may result in data loss. Please see the HBase
> >> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
> >> > details." on the HBase Master UI.
> >> >
> >> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
> >> > <hs...@clickable.com>wrote:
> >> >
> >> >> Hi Stack
> >> >>
> >> >> yes the tablename is AcContact. The tableName variable was wrong.
> >> >> Fixed it now but I still get the same error. Schema is just something
> >> >> created by parsing an XML file which has stuff like column family
> >> >> name, compression type etc so I guess it doesn't have much to do with
> >> >> version. Except that I had to change the bloom filter variable to
> >> >> String ( used to be boolean in 0.20.6). I will paste the stack trace
> >> >> asap
> >> >>
> >> >> hari
> >> >>
> >> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
> >> >> > Can I see more of the stack trace please Hari and is AcContact the
> >> >> > table you are creating?  Is the schema you've saved aside one you
> >> >> > created with 0.20 hbase?  I don't think it matters but asking just
> in
> >> >> > case.
> >> >> > St.Ack
> >> >> >
> >> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
> >> >> > <hs...@clickable.com> wrote:
> >> >> >> Hi,
> >> >> >>
> >> >> >> I am trying to create a table in HBase v0.90.1 and I get the
> following
> >> >> error:
> >> >> >>
> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
> >> connection
> >> >> to
> >> >> >> server hadoop2/192.168.1.111:2181
> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
> >> >> established
> >> >> >> to hadoop2/192.168.1.111:2181, initiating s
> >> >> >> ession
> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
> >> >> complete
> >> >> >> on server hadoop2/192.168.1.111:2181, sess
> >> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
> >> >> >> 11/03/28 18:39:52 INFO
> >> >> client.HConnectionManager$HConnectionImplementation:
> >> >> >> Closed zookeeper sessionid=0x12efc946d6600
> >> >> >> 0b
> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
> >> 0x12efc946d66000b
> >> >> >> closed
> >> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
> >> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught
> IOException:
> >> No
> >> >> >> server address listed in .META. for region
> >> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
> >> >> creating
> >> >> >> table: Table1
> >> >> >>
> >> >> >> This is the code I am using:
> >> >> >> .....
> >> >> >> HTableDescriptor desc =
> >> >> CreateTableByXML.convertSchemaToDescriptor(schema);
> >> >> >>    try {
> >> >> >>      hbaseAdmin.createTable(desc);
> >> >> >>    } catch (IOException e) {
> >> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
> >> e.getMessage()
> >> >> >>              + " while creating table: " + tableName);
> >> >> >> ....
> >> >> >> .....
> >> >> >>
> >> >> >>
> >> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and
> it
> >> >> started
> >> >> >> giving this error. Any ideas?
> >> >> >>
> >> >> >> Hari
> >> >> >>
> >> >> >
> >> >>
> >> >
> >>
> >
>

Re: Unable to create table

Posted by Jean-Daniel Cryans <jd...@apache.org>.
There's a reason why disabling takes time; if you delete rows from
.META. you might end up in an inconsistent situation, and we'll have a
hard time helping you :)

So HBASE-3557 is what you want.

Regarding your current issue, RIT means "region in transition": the
master sees the region as either moving from one region server to
another, or closing. Normally, after a table is disabled, all of its
regions have left the in-transition state, but once you delete rows
directly from .META., who knows what state your table is in?

J-D
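As a sketch of the supported alternative (what HBASE-3557 automates), disabling and then deleting through the client API keeps .META. and the HDFS files consistent. This assumes the 0.90-era HBaseAdmin API and a reachable cluster; the table name is taken from the thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SafeDrop {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        String table = "AcContact";
        if (admin.tableExists(table)) {
            // disableTable blocks until the table's regions have closed
            if (admin.isTableEnabled(table)) {
                admin.disableTable(table);
            }
            // deleteTable removes both the .META. rows and the files on HDFS
            admin.deleteTable(table);
        }
    }
}
```

This is the same sequence the shell's `disable`/`drop` commands run, so either path avoids editing .META. by hand.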

On Tue, Mar 29, 2011 at 11:29 AM, Hari Sreekumar
<hs...@clickable.com> wrote:
> Hi J-D,
>
> Here is the tail of HMaster log:
>
> 2011-03-29 23:48:51,155 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:52,158 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:53,161 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:54,163 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:55,164 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:56,166 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:57,168 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:58,170 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:48:59,172 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:49:00,174 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:49:01,176 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:49:02,178 DEBUG
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
>  region to clear regions in transition;
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
> ts=1301422405271
> 2011-03-29 23:49:02,178 ERROR
> org.apache.hadoop.hbase.master.handler.TableEventHandler: Error manipulating
> table AcContact
> java.io.IOException: Waited hbase.master.wait.on.region (300000ms) for
> region to leave region
> AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in transitions
>        at
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
>        at
> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
>        at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>        at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
>
> What is "clearing regions in transition"? What could be the issue?
>
> I was told before too that deleting from META isn't recommended. But we were
> facing the disable problem way too often. So we thought we'd make it
> in-built. What other alternatives do I have?
>
> Thx,
> Hari
>
> On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <jd...@apache.org>wrote:
>
>> The 60 secs timeout means that the client was waiting on the master
>> for some operation but the master took longer than 60 secs to do it,
> >> so its log should be the next place to look for something whack.
>>
>> BTW deleting the rows from .META. directly is probably the worst thing
>> you can do.
>>
>> J-D
>>
>> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
>> <hs...@clickable.com> wrote:
>> > Here is the stack trace:
>> >
>> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client connection,
>> > connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
>> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to
>> > server hadoop2/192.168.1.111:2181
>> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
>> established
>> > to hadoop2/192.168.1.111:2181, initiating session
>> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
>> complete
>> > on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
>> > negotiated timeout = 180000
>> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
>> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
>> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries
>> from
>> > META. Retrying to create table
>> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create table
>> even
>> > after Cleaning Meta entries
>> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> > *************************************************************
>> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
>> > 192.168.1.57:60000 failed on socket timeout exception:
>> > java.net.SocketTimeoutException: 60000 millis timeout while waiting for
>> > channel to be ready for read. ch :
>> java.nio.channels.SocketChannel[connected
>> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
>> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
>> >
>> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
>> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
>> > org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
>> > $Proxy4.createTable(Unknown Source)
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
>> >
>> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
>> > Source)
>> > com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
>> Source)
>> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > java.lang.reflect.Method.invoke(Method.java:597)
>> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >
>> > The "CleanFromMeta" function catches IOException and deletes all rows
>> from
>> > .META. We had added this in the exception catch block because we used to
>> > face the "Table taking too long to be disabled" exception often. It seems
>> > the rows in META already get created when the IOException is thrown.
>> > CleanFromMeta cleans .META. and then I try again to create the table,
>> after
>> > which I get the socket timeout exception.
>> >
>> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this error?
>> I
>> > get the message "You are currently running the HMaster without HDFS
>> append
>> > support enabled. This may result in data loss. Please see the HBase
>> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
>> > details." on the HBase Master UI.
>> >
>> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
>> > <hs...@clickable.com>wrote:
>> >
>> >> Hi Stack
>> >>
>> >> yes the tablename is AcContact. The tableName variable was wrong.
>> >> Fixed it now but I still get the same error. Schema is just something
>> >> created by parsing an XML file which has stuff like column family
>> >> name, compression type etc so I guess it doesn't have much to do with
>> >> version. Except that I had to change the bloom filter variable to
>> >> String ( used to be boolean in 0.20.6). I will paste the stack trace
>> >> asap
>> >>
>> >> hari
>> >>
>> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
>> >> > Can I see more of the stack trace please Hari and is AcContact the
>> >> > table you are creating?  Is the schema you've saved aside one you
>> >> > created with 0.20 hbase?  I don't think it matters but asking just in
>> >> > case.
>> >> > St.Ack
>> >> >
>> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
>> >> > <hs...@clickable.com> wrote:
>> >> >> Hi,
>> >> >>
>> >> >> I am trying to create table in hbase v0.90.1 and I get the following
>> >> error:
>> >> >>
>> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
>> connection
>> >> to
>> >> >> server hadoop2/192.168.1.111:2181
>> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
>> >> established
>> >> >> to hadoop2/192.168.1.111:2181, initiating s
>> >> >> ession
>> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
>> >> complete
>> >> >> on server hadoop2/192.168.1.111:2181, sess
>> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
>> >> >> 11/03/28 18:39:52 INFO
>> >> client.HConnectionManager$HConnectionImplementation:
>> >> >> Closed zookeeper sessionid=0x12efc946d6600
>> >> >> 0b
>> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
>> 0x12efc946d66000b
>> >> >> closed
>> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
>> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException:
>> No
>> >> >> server address listed in .META. for region
>> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
>> >> creating
>> >> >> table: Table1
>> >> >>
>> >> >> This is the code I am using:
>> >> >> .....
>> >> >> HTableDescriptor desc =
>> >> CreateTableByXML.convertSchemaToDescriptor(schema);
>> >> >>    try {
>> >> >>      hbaseAdmin.createTable(desc);
>> >> >>    } catch (IOException e) {
>> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
>> e.getMessage()
>> >> >>              + " while creating table: " + tableName);
>> >> >> ....
>> >> >> .....
>> >> >>
>> >> >>
>> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it
>> >> started
>> >> >> giving this error. Any ideas?
>> >> >>
>> >> >> Hari
>> >> >>
>> >> >
>> >>
>> >
>>
>

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
Hi J-D,

Here is the tail of HMaster log:

2011-03-29 23:48:51,155 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:52,158 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:53,161 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:54,163 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:55,164 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:56,166 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:57,168 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:58,170 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:48:59,172 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:49:00,174 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:49:01,176 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:49:02,178 DEBUG
org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Waiting on
 region to clear regions in transition;
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. state=OPENING,
ts=1301422405271
2011-03-29 23:49:02,178 ERROR
org.apache.hadoop.hbase.master.handler.TableEventHandler: Error manipulating
table AcContact
java.io.IOException: Waited hbase.master.wait.on.region (300000ms) for
region to leave region
AcContact,,1301416789483.0cd6d132b2f367f21e88f00778349215. in transitions
        at
org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:60)
        at
org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:66)
        at
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

What is "clearing regions in transition"? What could be the issue?

I was told before, too, that deleting from .META. isn't recommended. But we
were facing the disable problem so often that we decided to build the
cleanup in. What other alternatives do I have?

Thx,
Hari
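The DEBUG lines above are the master's DeleteTableHandler polling about once a second until the region leaves transition or hbase.master.wait.on.region (300000 ms) expires, at which point it gives up with the IOException shown. A self-contained sketch of that bounded wait (hypothetical names, not the actual HBase code):

```java
import java.util.function.BooleanSupplier;

public class WaitOnRegionDemo {
    // Poll until the region leaves transition or the timeout expires;
    // returning false corresponds to the IOException the master throws
    // after hbase.master.wait.on.region runs out.
    static boolean waitOnRegion(long timeoutMs, BooleanSupplier inTransition)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (inTransition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                return false;
            }
            Thread.sleep(5);
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // A region stuck in OPENING never leaves transition, so this times out.
        System.out.println(waitOnRegion(50, () -> true));
        // A settled region returns immediately.
        System.out.println(waitOnRegion(50, () -> false));
    }
}
```

In the log above the region is stuck in state=OPENING, so the poll never succeeds and the delete fails after the full 300 s.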

On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans <jd...@apache.org>wrote:

> The 60 secs timeout means that the client was waiting on the master
> for some operation but the master took longer than 60 secs to do it,
> so its log should be the next place to look for something whack.
>
> BTW deleting the rows from .META. directly is probably the worst thing
> you can do.
>
> J-D
>
> On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
> <hs...@clickable.com> wrote:
> > Here is the stack trace:
> >
> > 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client connection,
> > connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to
> > server hadoop2/192.168.1.111:2181
> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection
> established
> > to hadoop2/192.168.1.111:2181, initiating session
> > 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment
> complete
> > on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
> > negotiated timeout = 180000
> > 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
> > AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
> > 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries
> from
> > META. Retrying to create table
> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create table
> even
> > after Cleaning Meta entries
> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> > *************************************************************
> > 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
> > 192.168.1.57:60000 failed on socket timeout exception:
> > java.net.SocketTimeoutException: 60000 millis timeout while waiting for
> > channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected
> > local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
> > 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> >
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> > $Proxy4.createTable(Unknown Source)
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
> >
> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
> > Source)
> > com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown
> Source)
> > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > java.lang.reflect.Method.invoke(Method.java:597)
> > org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >
> > The "CleanFromMeta" function catches IOException and deletes all rows
> from
> > .META. We had added this in the exception catch block because we used to
> > face the "Table taking too long to be disabled" exception often. It seems
> > the rows in META already get created when the IOException is thrown.
> > CleanFromMeta cleans .META. and then I try again to create the table,
> after
> > which I get the socket timeout exception.
> >
> > I am using Hadoop v0.20.2, r911707. Can this be a reason for this error?
> I
> > get the message "You are currently running the HMaster without HDFS
> append
> > support enabled. This may result in data loss. Please see the HBase
> > wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
> > details." on the HBase Master UI.
> >
> > On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
> > <hs...@clickable.com>wrote:
> >
> >> Hi Stack
> >>
> >> yes the tablename is AcContact. The tableName variable was wrong.
> >> Fixed it now but I still get the same error. Schema is just something
> >> created by parsing an XML file which has stuff like column family
> >> name, compression type etc so I guess it doesn't have much to do with
> >> version. Except that I had to change the bloom filter variable to
> >> String ( used to be boolean in 0.20.6). I will paste the stack trace
> >> asap
> >>
> >> hari
> >>
> >> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
> >> > Can I see more of the stack trace please Hari and is AcContact the
> >> > table you are creating?  Is the schema you've saved aside one you
> >> > created with 0.20 hbase?  I don't think it matters but asking just in
> >> > case.
> >> > St.Ack
> >> >
> >> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
> >> > <hs...@clickable.com> wrote:
> >> >> Hi,
> >> >>
> >> >> I am trying to create table in hbase v0.90.1 and I get the following
> >> error:
> >> >>
> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket
> connection
> >> to
> >> >> server hadoop2/192.168.1.111:2181
> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
> >> established
> >> >> to hadoop2/192.168.1.111:2181, initiating s
> >> >> ession
> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
> >> complete
> >> >> on server hadoop2/192.168.1.111:2181, sess
> >> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
> >> >> 11/03/28 18:39:52 INFO
> >> client.HConnectionManager$HConnectionImplementation:
> >> >> Closed zookeeper sessionid=0x12efc946d6600
> >> >> 0b
> >> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session:
> 0x12efc946d66000b
> >> >> closed
> >> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
> >> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException:
> No
> >> >> server address listed in .META. for region
> >> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
> >> creating
> >> >> table: Table1
> >> >>
> >> >> This is the code I am using:
> >> >> .....
> >> >> HTableDescriptor desc =
> >> CreateTableByXML.convertSchemaToDescriptor(schema);
> >> >>    try {
> >> >>      hbaseAdmin.createTable(desc);
> >> >>    } catch (IOException e) {
> >> >>      CreateTableByXML.LOG.error("Caught IOException: " +
> e.getMessage()
> >> >>              + " while creating table: " + tableName);
> >> >> ....
> >> >> .....
> >> >>
> >> >>
> >> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it
> >> started
> >> >> giving this error. Any ideas?
> >> >>
> >> >> Hari
> >> >>
> >> >
> >>
> >
>

Re: Unable to create table

Posted by Jean-Daniel Cryans <jd...@apache.org>.
The 60-second timeout means the client was waiting on the master for
some operation, but the master took longer than 60 seconds to do it,
so its log should be the next place to look for something whack.

BTW deleting the rows from .META. directly is probably the worst thing
you can do.

J-D

On Tue, Mar 29, 2011 at 12:17 AM, Hari Sreekumar
<hs...@clickable.com> wrote:
> Here is the stack trace:
>
> 11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
> 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to
> server hadoop2/192.168.1.111:2181
> 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection established
> to hadoop2/192.168.1.111:2181, initiating session
> 11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment complete
> on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
> negotiated timeout = 180000
> 11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
> 11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries from
> META. Retrying to create table
> 11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create table even
> after Cleaning Meta entries
> 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> *************************************************************
> 11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
> 192.168.1.57:60000 failed on socket timeout exception:
> java.net.SocketTimeoutException: 60000 millis timeout while waiting for
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
> 11/03/28 18:48:02 FATAL create.CreateTableByXML:
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> $Proxy4.createTable(Unknown Source)
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
> com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
> Source)
> com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown Source)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> java.lang.reflect.Method.invoke(Method.java:597)
> org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> The "CleanFromMeta" function catches IOException and deletes all rows from
> .META. We had added this in the exception catch block because we used to
> face the "Table taking too long to be disabled" exception often. It seems
> the rows in META already get created when the IOException is thrown.
> CleanFromMeta cleans .META. and then I try again to create the table, after
> which I get the socket timeout exception.
>
> I am using Hadoop v0.20.2, r911707. Can this be a reason for this error? I
> get the message "You are currently running the HMaster without HDFS append
> support enabled. This may result in data loss. Please see the HBase
> wiki<http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport> for
> details." on the HBase Master UI.
>
> On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
> <hs...@clickable.com>wrote:
>
>> Hi Stack
>>
>> yes the tablename is AcContact. The tableName variable was wrong.
>> Fixed it now but I still get the same error. Schema is just something
>> created by parsing an XML file which has stuff like column family
>> name, compression type etc so I guess it doesn't have much to do with
>> version. Except that I had to change the bloom filter variable to
>> String ( used to be boolean in 0.20.6). I will paste the stack trace
>> asap
>>
>> hari
>>
>> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
>> > Can I see more of the stack trace please Hari and is AcContact the
>> > table you are creating?  Is the schema you've saved aside one you
>> > created with 0.20 hbase?  I don't think it matters but asking just in
>> > case.
>> > St.Ack
>> >
>> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
>> > <hs...@clickable.com> wrote:
>> >> Hi,
>> >>
>> >> I am trying to create table in hbase v0.90.1 and I get the following
>> error:
>> >>
>> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket connection
>> to
>> >> server hadoop2/192.168.1.111:2181
>> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
>> established
>> >> to hadoop2/192.168.1.111:2181, initiating s
>> >> ession
>> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
>> complete
>> >> on server hadoop2/192.168.1.111:2181, sess
>> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
>> >> 11/03/28 18:39:52 INFO
>> client.HConnectionManager$HConnectionImplementation:
>> >> Closed zookeeper sessionid=0x12efc946d6600
>> >> 0b
>> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session: 0x12efc946d66000b
>> >> closed
>> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
>> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException: No
>> >> server address listed in .META. for region
>> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
>> creating
>> >> table: Table1
>> >>
>> >> This is the code I am using:
>> >> .....
>> >> HTableDescriptor desc =
>> CreateTableByXML.convertSchemaToDescriptor(schema);
>> >>    try {
>> >>      hbaseAdmin.createTable(desc);
>> >>    } catch (IOException e) {
>> >>      CreateTableByXML.LOG.error("Caught IOException: " + e.getMessage()
>> >>              + " while creating table: " + tableName);
>> >> ....
>> >> .....
>> >>
>> >>
>> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it
>> started
>> >> giving this error. Any ideas?
>> >>
>> >> Hari
>> >>
>> >
>>
>

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
Here is the stack trace:

11/03/28 18:47:02 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=hadoop2:2181 sessionTimeout=180000 watcher=hconnection
11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to
server hadoop2/192.168.1.111:2181
11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Socket connection established
to hadoop2/192.168.1.111:2181, initiating session
11/03/28 18:47:02 INFO zookeeper.ClientCnxn: Session establishment complete
on server hadoop2/192.168.1.111:2181, sessionid = 0x12efc946d66000c,
negotiated timeout = 180000
11/03/28 18:47:02 INFO tools.CleanFromMeta: Deleting row
AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5.
11/03/28 18:47:02 INFO create.CreateTableByXML: Cleaned table entries from
META. Retrying to create table
11/03/28 18:48:02 FATAL create.CreateTableByXML: Could not create table even
after Cleaning Meta entries
11/03/28 18:48:02 FATAL create.CreateTableByXML:
*************************************************************
11/03/28 18:48:02 FATAL create.CreateTableByXML: Call to hadoop3/
192.168.1.57:60000 failed on socket timeout exception:
java.net.SocketTimeoutException: 60000 millis timeout while waiting for
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
local=/192.168.0.233:51525 remote=hadoop3/192.168.1.57:60000]
11/03/28 18:48:02 FATAL create.CreateTableByXML:
org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
$Proxy4.createTable(Unknown Source)
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:341)
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:303)
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:227)
com.clickable.dataengine.hbase.create.CreateTableByXML.createTable(Unknown
Source)
com.clickable.dataengine.hbase.create.CreateTableByXML.main(Unknown Source)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.hadoop.util.RunJar.main(RunJar.java:156)

The "CleanFromMeta" function catches the IOException and deletes the
table's rows from .META. We added this to the exception catch block
because we often used to hit the "Table taking too long to be disabled"
exception. It appears the rows in .META. have already been created by
the time the IOException is thrown. CleanFromMeta cleans .META., then I
retry the create, after which I get the socket timeout exception above.
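To make the flow concrete, here is a self-contained sketch of the
create / clean-META / retry-once control flow described above. The class
and interface names, the cleanup stub, and the messages are mine, not
from the real CreateTableByXML; the actual HBase calls are replaced by
placeholders so only the control flow is shown:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the create-then-clean-then-retry flow. TableOp stands in
// for the real hbaseAdmin.createTable(desc) call; cleanMeta stands in
// for CleanFromMeta. Names are hypothetical.
public class CreateRetrySketch {
    interface TableOp {
        void run() throws IOException;
    }

    // Try the create once; on IOException, clean stale .META. rows and
    // retry exactly once. Returns true if either attempt succeeds.
    static boolean createWithCleanup(TableOp create, Runnable cleanMeta) {
        try {
            create.run();
            return true;
        } catch (IOException first) {
            System.out.println("Caught IOException: " + first.getMessage());
            cleanMeta.run();          // CleanFromMeta equivalent
            try {
                create.run();         // single retry
                return true;
            } catch (IOException second) {
                System.out.println("FATAL: could not create table even after cleaning META");
                return false;
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger attempts = new AtomicInteger();
        // Simulated create that fails the first time, succeeds on retry.
        boolean ok = createWithCleanup(
            () -> {
                if (attempts.incrementAndGet() == 1) {
                    throw new IOException("No server address listed in .META.");
                }
            },
            () -> System.out.println("Cleaning stale rows from .META."));
        if (!ok || attempts.get() != 2) {
            throw new AssertionError("retry flow broke");
        }
        System.out.println("created");
    }
}
```

In my real code the second attempt is where the 60-second socket
timeout shows up, so the retry itself never completes.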

I am using Hadoop v0.20.2, r911707. Could that be the reason for this
error? I also get the message "You are currently running the HMaster
without HDFS append support enabled. This may result in data loss.
Please see the HBase wiki <http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport>
for details." on the HBase Master UI.
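For reference, that warning only goes away when HBase runs on an
append-capable Hadoop build (for example the branch-0.20-append line,
or a distribution that ships it); stock Hadoop 0.20.2 does not have a
working sync. On such a build, the usual step was to enable the flag in
hdfs-site.xml on every node and restart HDFS (a sketch; the property
has no effect on builds that lack append support):

```xml
<!-- hdfs-site.xml: only effective on an append-capable Hadoop build
     (e.g. branch-0.20-append); plain 0.20.2 cannot honor it -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```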

On Tue, Mar 29, 2011 at 12:25 AM, Hari Sreekumar
<hs...@clickable.com>wrote:

> Hi Stack
>
> yes the tablename is AcContact. The tableName variable was wrong.
> Fixed it now but I still get the same error. Schema is just something
> created by parsing an XML file which has stuff like column family
> name, compression type etc so I guess it doesn't have much to do with
> version. Except that I had to change the bloom filter variable to
> String ( used to be boolean in 0.20.6). I will paste the stack trace
> asap
>
> hari
>
> On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
> > Can I see more of the stack track please Hari and is AcContact the
> > table you are creating?  Is the schema you've saved aside one you
> > created with 0.20 hbase?  I don't think it matters but asking just in
> > case.
> > St.Ack
> >
> > On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
> > <hs...@clickable.com> wrote:
> >> Hi,
> >>
> >> I am trying to create table in hbase v0.90.1 and I get the following
> error:
> >>
> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket connection
> to
> >> server hadoop2/192.168.1.111:2181
> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection
> established
> >> to hadoop2/192.168.1.111:2181, initiating s
> >> ession
> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment
> complete
> >> on server hadoop2/192.168.1.111:2181, sess
> >> ionid = 0x12efc946d66000b, negotiated timeout = 180000
> >> 11/03/28 18:39:52 INFO
> client.HConnectionManager$HConnectionImplementation:
> >> Closed zookeeper sessionid=0x12efc946d6600
> >> 0b
> >> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session: 0x12efc946d66000b
> >> closed
> >> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
> >> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException: No
> >> server address listed in .META. for region
> >> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while
> creating
> >> table: Table1
> >>
> >> This is the code I am using:
> >> .....
> >> HTableDescriptor desc =
> CreateTableByXML.convertSchemaToDescriptor(schema);
> >>    try {
> >>      hbaseAdmin.createTable(desc);
> >>    } catch (IOException e) {
> >>      CreateTableByXML.LOG.error("Caught IOException: " + e.getMessage()
> >>              + " while creating table: " + tableName);
> >> ....
> >> .....
> >>
> >>
> >> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it
> started
> >> giving this error. Any ideas?
> >>
> >> Hari
> >>
> >
>

Re: Unable to create table

Posted by Hari Sreekumar <hs...@clickable.com>.
Hi Stack,

Yes, the table name is AcContact; the tableName variable was wrong. I
have fixed it, but I still get the same error. The schema is just
something created by parsing an XML file that holds things like the
column family name, compression type, etc., so I guess it doesn't have
much to do with the version, except that I had to change the bloom
filter variable to a String (it used to be a boolean in 0.20.6). I
will paste the stack trace ASAP.

hari

On Monday, March 28, 2011, Stack <st...@duboce.net> wrote:
> Can I see more of the stack track please Hari and is AcContact the
> table you are creating?  Is the schema you've saved aside one you
> created with 0.20 hbase?  I don't think it matters but asking just in
> case.
> St.Ack
>
> On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
> <hs...@clickable.com> wrote:
>> Hi,
>>
>> I am trying to create table in hbase v0.90.1 and I get the following error:
>>
>> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket connection to
>> server hadoop2/192.168.1.111:2181
>> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection established
>> to hadoop2/192.168.1.111:2181, initiating s
>> ession
>> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment complete
>> on server hadoop2/192.168.1.111:2181, sess
>> ionid = 0x12efc946d66000b, negotiated timeout = 180000
>> 11/03/28 18:39:52 INFO client.HConnectionManager$HConnectionImplementation:
>> Closed zookeeper sessionid=0x12efc946d6600
>> 0b
>> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session: 0x12efc946d66000b
>> closed
>> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
>> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException: No
>> server address listed in .META. for region
>> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while creating
>> table: Table1
>>
>> This is the code I am using:
>> .....
>> HTableDescriptor desc = CreateTableByXML.convertSchemaToDescriptor(schema);
>>    try {
>>      hbaseAdmin.createTable(desc);
>>    } catch (IOException e) {
>>      CreateTableByXML.LOG.error("Caught IOException: " + e.getMessage()
>>              + " while creating table: " + tableName);
>> ....
>> .....
>>
>>
>> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it started
>> giving this error. Any ideas?
>>
>> Hari
>>
>

Re: Unable to create table

Posted by Stack <st...@duboce.net>.
Can I see more of the stack trace please Hari, and is AcContact the
table you are creating?  Is the schema you've saved aside one you
created with 0.20 hbase?  I don't think it matters but asking just in
case.
St.Ack

On Mon, Mar 28, 2011 at 6:29 AM, Hari Sreekumar
<hs...@clickable.com> wrote:
> Hi,
>
> I am trying to create table in hbase v0.90.1 and I get the following error:
>
> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Opening socket connection to
> server hadoop2/192.168.1.111:2181
> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Socket connection established
> to hadoop2/192.168.1.111:2181, initiating s
> ession
> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: Session establishment complete
> on server hadoop2/192.168.1.111:2181, sess
> ionid = 0x12efc946d66000b, negotiated timeout = 180000
> 11/03/28 18:39:52 INFO client.HConnectionManager$HConnectionImplementation:
> Closed zookeeper sessionid=0x12efc946d6600
> 0b
> 11/03/28 18:39:52 INFO zookeeper.ZooKeeper: Session: 0x12efc946d66000b
> closed
> 11/03/28 18:39:52 INFO zookeeper.ClientCnxn: EventThread shut down
> 11/03/28 18:47:02 ERROR create.CreateTableByXML: Caught IOException: No
> server address listed in .META. for region
> AcContact,,1301317792604.16d1f5fd49478f79002e89ce02cf37b5. while creating
> table: Table1
>
> This is the code I am using:
> .....
> HTableDescriptor desc = CreateTableByXML.convertSchemaToDescriptor(schema);
>    try {
>      hbaseAdmin.createTable(desc);
>    } catch (IOException e) {
>      CreateTableByXML.LOG.error("Caught IOException: " + e.getMessage()
>              + " while creating table: " + tableName);
> ....
> .....
>
>
> It used to work fine in v0.20.6. I upgraded today to v0.90.1 and it started
> giving this error. Any ideas?
>
> Hari
>