Posted to user@hbase.apache.org by Marcin Cylke <mc...@touk.pl> on 2012/04/16 15:10:48 UTC

hbase coprocessor unit testing

Hi

I'm trying to write a unit test for HBase coprocessor. However it seems
I'm doing something horribly wrong. The code I'm using to test my
coprocessor class is in the attachment.

As you can see, I'm using HBaseTestingUtility, and running a
mini-cluster with it. The error I keep getting is:

2012-04-12 13:00:39,924 [6,1334228432020] WARN  RecoverableZooKeeper
      :117 - Node /hbase/root-region-server already deleted, and this is
not a retry
2012-04-12 13:00:39,995 [6,1334228432020] INFO  HBaseRPC
      :240 - Server at localhost/127.0.0.1:45664 could not be reached
after 1 tries, giving up.
2012-04-12 13:00:39,995 [6,1334228432020] WARN  AssignmentManager
      :1493 - Failed assignment of -ROOT-,,0.70236052 to
localhost,45664,1334228432229, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:45664 after attempts=1
    at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:242)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
    at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:496)
    at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:429)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1453)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1200)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1175)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1170)
    at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1918)
    at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:557)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:491)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1026)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:878)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    at $Proxy22.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:303)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:280)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:332)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
    ... 14 more
2012-04-12 13:00:39,998 [6,1334228432020] WARN  AssignmentManager
      :1504 - Unable to find a viable location to assign region
-ROOT-,,0.70236052
2012-04-12 13:00:44,138 [.timeoutMonitor] INFO  AssignmentManager
      :2570 - Regions in transition timed out:  -ROOT-,,0.70236052
state=OFFLINE, ts=1334228439998, server=null
2012-04-12 13:00:44,141 [.timeoutMonitor] INFO  AssignmentManager
      :2581 - Region has been OFFLINE for too long, reassigning
-ROOT-,,0.70236052 to a random server
2012-04-12 13:00:44,158 [pool-6-thread-1] INFO  HBaseRPC
      :240 - Server at localhost/127.0.0.1:45664 could not be reached
after 1 tries, giving up.

This may be related to my use of the initHRegion() function; perhaps
that region cannot connect to the newly created HBase cluster?


Re: hbase coprocessor unit testing

Posted by Marcin Cylke <mc...@touk.pl>.
On 17/04/12 18:45, Alex Baranau wrote:
> I don't think that your error is related to CPs stuff. What lib versions do
> you use? Can you compare with those of the HBaseHUT pom?

Ok, I've managed to track down the source of my error. If I only modify
the incoming Put in my prePut/postPut method, everything works fine. The
error occurs when I try to issue another Put request from within the
prePut/postPut method.

I'm trying to do this in the following manner:

HTableInterface table = c.getEnvironment().getTable(tableName);
table.put(createPutRequest(columnFamily, qualifier, "", rowCount));
table.close();

And this gives me the error:

2012-04-19 08:43:24,738 [localhost:2222)] INFO  ClientCnxn
      :933 - Opening socket connection to server localhost/127.0.0.1:2222
2012-04-19 08:43:24,739 [localhost:2222)] WARN  ClientCnxn
      :1063 - Session 0x0 for server null, unexpected error, closing
socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)


What I'm trying to achieve by adding another Put in my coprocessor is
to silently update some other row in my HBase table when specific
conditions on the incoming Put are met. Am I misunderstanding the idea
here, and should I use some other facility to accomplish the task?
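
For reference, a conditional secondary update along these lines can be
sketched as below, against the 0.92-era RegionObserver API used in the
snippet above. This is only a sketch, not the actual coprocessor from the
thread: shouldUpdateStats(), buildStatsPut(), and the table/family names
are hypothetical placeholders.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: after a Put that matches some condition, silently issue a
// second Put through the coprocessor environment. Table/family names
// and the two helper methods are hypothetical.
public class AuxDataCalculator extends BaseRegionObserver {

  private static final byte[] STATS_TABLE = Bytes.toBytes("stats");
  private static final byte[] STATS_CF = Bytes.toBytes("s");

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c,
                      Put put, WALEdit edit, boolean writeToWAL)
      throws IOException {
    if (!shouldUpdateStats(put)) {
      return;                       // only act on matching Puts
    }
    // getTable() goes through the regular cluster client machinery, so
    // it needs a working ZooKeeper quorum configuration to succeed.
    HTableInterface table = c.getEnvironment().getTable(STATS_TABLE);
    try {
      table.put(buildStatsPut(put));
    } finally {
      table.close();
    }
  }

  // Hypothetical condition: the incoming Put touches family/qualifier "A".
  private boolean shouldUpdateStats(Put put) {
    return put.has(Bytes.toBytes("A"), Bytes.toBytes("A"));
  }

  // Hypothetical derived update for the stats row.
  private Put buildStatsPut(Put incoming) {
    Put p = new Put(incoming.getRow());
    p.add(STATS_CF, Bytes.toBytes("updated"), Bytes.toBytes(true));
    return p;
  }
}
```

One thing to watch: a secondary Put issued against the same table passes
through the coprocessor again, so the condition must never match the
secondary Put itself, or the hook recurses without bound.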

> Re 127.0.1.1 vs 127.0.0.1 - what your hosts file looked like before and
> now? I think it's just the issue with resolving IP - at one place it
> resolves using localhost, at other - your hostname. Since (I suppose) those
> two didn't match - you got error.

Thanks, that sounds reasonable :)


Marcin

Re: hbase coprocessor unit testing

Posted by Alex Baranau <al...@gmail.com>.
Are you sure you need to do table.close() after each put? Looks incorrect.

Alex Baranau
------
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase

On Thu, Apr 19, 2012 at 2:48 AM, Marcin Cylke <mc...@touk.pl> wrote:

> On 17/04/12 18:45, Alex Baranau wrote:
> > I don't think that your error is related to CPs stuff. What lib versions
> do
> > you use? Can you compare with those of the HBaseHUT pom?
>
> Ok, I've managed to track down the source of my error. If I do normal
> Put modifications in my prePut/postPut method everything works ok. The
> error occurs when I try to make another Put request while in the
> prePut/postPut method.
>
> I'm trying to do this in the following manner:
>
> HTableInterface table = c.getEnvironment().getTable(tableName);
>        table.put(createPutRequest(columnFamily, qualifier, "", rowCount));
>        table.close();
>
> And this gives me the error:
>
> 2012-04-19 08:43:24,738 [localhost:2222)] INFO  ClientCnxn
>       :933 - Opening socket connection to server localhost/127.0.0.1:2222
> 2012-04-19 08:43:24,739 [localhost:2222)] WARN  ClientCnxn
>       :1063 - Session 0x0 for server null, unexpected error, closing
> socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>        at
>
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)
>
>
> What I'm trying to achieve with adding another Put to my Coprocessor is
> to silently update some other row in my hbase table when I need to (when
> specific conditions on incoming Put are met) - based on incoming Put
> request. Am I misunderstanding the idea here, and should I use some other
> facility to accomplish the task?
>
> > Re 127.0.1.1 vs 127.0.0.1 - what your hosts file looked like before and
> > now? I think it's just the issue with resolving IP - at one place it
> > resolves using localhost, at other - your hostname. Since (I suppose)
> those
> > two didn't match - you got error.
>
> Thanks, that sounds reasonable :)
>
>
> Marcin
>

Re: hbase coprocessor unit testing

Posted by Alex Baranau <al...@gmail.com>.
I don't think that your error is related to CPs stuff. What lib versions do
you use? Can you compare with those of the HBaseHUT pom?

Re 127.0.1.1 vs 127.0.0.1 - what your hosts file looked like before and
now? I think it's just the issue with resolving IP - at one place it
resolves using localhost, at other - your hostname. Since (I suppose) those
two didn't match - you got error.
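
For illustration, the problematic pattern on many Debian/Ubuntu installs
looks like the sketch below (the hostname is hypothetical):

```
127.0.0.1   localhost
127.0.1.1   myhostname    # <- removing or commenting this line makes the
                          #    hostname resolve to 127.0.0.1, so the address
                          #    the server binds matches what clients dial
```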

Alex Baranau
------
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase

On Tue, Apr 17, 2012 at 9:34 AM, Marcin Cylke <mc...@touk.pl> wrote:

> On 17/04/12 15:15, Alex Baranau wrote:
>
> Hi
>
> > Some sanity checks:
> > 1) make sure you don't have 127.0.1.1 in your /etc/hosts (only 127.0.0.1)
>
> I've removed this entry and it worked right away :) Could you explain
> why it made such a big difference?
>
> Now the test from HBaseHUT works fine, but my code is still failing:
>
> #v+
> 2012-04-17 15:26:27,870 [localhost:2222)] WARN  ClientCnxn
>      :1063 - Session 0x0 for server null, unexpected error, closing
> socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>        at
>
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)
> 2012-04-17 15:26:27,871 [dler 2 on 35003] INFO  RecoverableZooKeeper
>      :89 - The identifier of this process is 2032@correspondence
> 2012-04-17 15:26:27,973 [dler 2 on 35003] WARN  RecoverableZooKeeper
>      :159 - Possibly transient ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-04-17 15:26:27,974 [dler 2 on 35003] INFO  RetryCounter
>      :53 - The 1 times to retry  after sleeping 2000 ms
> 2012-04-17 15:26:28,973 [localhost:2222)] INFO  ClientCnxn
>      :933 - Opening socket connection to server localhost/127.0.0.1:2222
> #v-
>
> My whole test is something like this:
>
> #v+
>  testingUtility.getConfiguration().setStrings(
>           CoprocessorHost.USER_REGION_COPROCESSOR_CONF_KEY,
>           AuxDataCalculator.class.getName());
>  testingUtility.startMiniCluster();
>
> byte[] TABLE = Bytes.toBytes(getClass().getName());
> byte[] A = Bytes.toBytes("A");
> byte[] STATS = Bytes.toBytes("stats");
> byte[] CONTENT = Bytes.toBytes("content");
> byte[][] FAMILIES = new byte[][] { A, STATS, CONTENT } ;
>
> HTable hTable = testingUtility.createTable(TABLE, FAMILIES);
> Put put = new Put(ROW);
> put.add(A, A, A);
>
> hTable.put(put);
>
> Get get = new Get(ROW);
> Result result = hTable.get(get);
> #v-
>
>
> As I don't see any particular differences between your unit test and
> mine, could you look into this a bit more?
>
> Regards
> Marcin
>

Re: hbase coprocessor unit testing

Posted by Marcin Cylke <mc...@touk.pl>.
On 17/04/12 15:15, Alex Baranau wrote:

Hi

> Some sanity checks:
> 1) make sure you don't have 127.0.1.1 in your /etc/hosts (only 127.0.0.1)

I've removed this entry and it worked right away :) Could you explain
why it made such a big difference?

Now the test from HBaseHUT works fine, but my code is still failing:

#v+
2012-04-17 15:26:27,870 [localhost:2222)] WARN  ClientCnxn
      :1063 - Session 0x0 for server null, unexpected error, closing
socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)
2012-04-17 15:26:27,871 [dler 2 on 35003] INFO  RecoverableZooKeeper
      :89 - The identifier of this process is 2032@correspondence
2012-04-17 15:26:27,973 [dler 2 on 35003] WARN  RecoverableZooKeeper
      :159 - Possibly transient ZooKeeper exception:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
2012-04-17 15:26:27,974 [dler 2 on 35003] INFO  RetryCounter
      :53 - The 1 times to retry  after sleeping 2000 ms
2012-04-17 15:26:28,973 [localhost:2222)] INFO  ClientCnxn
      :933 - Opening socket connection to server localhost/127.0.0.1:2222
#v-

My whole test is something like this:

#v+
testingUtility.getConfiguration().setStrings(
        CoprocessorHost.USER_REGION_COPROCESSOR_CONF_KEY,
        AuxDataCalculator.class.getName());
testingUtility.startMiniCluster();

byte[] TABLE = Bytes.toBytes(getClass().getName());
byte[] A = Bytes.toBytes("A");
byte[] STATS = Bytes.toBytes("stats");
byte[] CONTENT = Bytes.toBytes("content");
byte[][] FAMILIES = new byte[][] { A, STATS, CONTENT };

HTable hTable = testingUtility.createTable(TABLE, FAMILIES);
Put put = new Put(ROW);
put.add(A, A, A);

hTable.put(put);

Get get = new Get(ROW);
Result result = hTable.get(get);
#v-


As I don't see any particular differences between your unit test and
mine, could you look into this a bit more?

Regards
Marcin

Re: [ hbase ] Re: hbase coprocessor unit testing

Posted by Alex Baranau <al...@gmail.com>.
Just tried to do a clean clone, then

$ mvn -Dtest=TestHBaseHutCps test

went well [1].

How long does it take for the test to fail when you run it?

Some sanity checks:
1) make sure you don't have 127.0.1.1 in your /etc/hosts (only 127.0.0.1)
2) make sure there are no hbase/hadoop processes running on your machine
(sudo jps)
3) cleanup your /tmp dir

I see "java.net.ConnectException: Connection refused", which may indicate
some of your cluster parts failed to start. Bigger log should be more
helpful.

Alex Baranau
------
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase

[1]
got this for sure:

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[...]
[INFO]
------------------------------------------------------------------------
[INFO] BUILD SUCCESS


On Tue, Apr 17, 2012 at 5:01 AM, Marcin Cylke <mc...@touk.pl> wrote:

> On 16/04/12 16:49, Alex Baranau wrote:
> > Here's some code that worked for me [1]. You may also find useful to look
> > at the pom's dependencies [2].
>
> Thanks, your cluster initialization is certainly more elegant than what
> I had. However, it still gives me the same error as I reported. Moreover,
> I've cloned the repository you linked to (branch CP) and tried running
> its tests, and am also getting the same error.
>
> Do those tests pass for you?
>
> Regards
> Marcin
>
>

Re: [ hbase ] Re: hbase coprocessor unit testing

Posted by Marcin Cylke <mc...@touk.pl>.
On 16/04/12 16:49, Alex Baranau wrote:
> Here's some code that worked for me [1]. You may also find useful to look
> at the pom's dependencies [2].

Thanks, your cluster initialization is certainly more elegant than what
I had. However, it still gives me the same error as I reported. Moreover,
I've cloned the repository you linked to (branch CP) and tried running
its tests, and am also getting the same error.

Do those tests pass for you?

Regards
Marcin


Re: hbase coprocessor unit testing

Posted by Alex Baranau <al...@gmail.com>.
Here's some code that worked for me [1]. You may also find useful to look
at the pom's dependencies [2].

Alex Baranau
------
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase

[1]

From
https://github.com/sematext/HBaseHUT/blob/CPs/src/test/java/com/sematext/hbase/hut/cp/TestHBaseHutCps.java:

  private HBaseTestingUtility testingUtility = new HBaseTestingUtility();
  private HTable hTable;

  @Before
  public void before() throws Exception {
    testingUtility.getConfiguration().setStrings(
            CoprocessorHost.USER_REGION_COPROCESSOR_CONF_KEY,
            HutReadEndpoint.class.getName());
    testingUtility.startMiniCluster();
    hTable = testingUtility.createTable(Bytes.toBytes(TABLE_NAME), SALE_CF);
  }

  @After
  public void after() throws Exception {
    hTable = null;
    testingUtility.shutdownMiniCluster();
    testingUtility = null;
  }

  [... unit-tests that make use of deployed CP ...]

[2]

Full version: https://github.com/sematext/HBaseHUT/blob/CPs/pom.xml

    <hadoop.version>1.0.0</hadoop.version>
    <hbase.version>0.92.1</hbase.version>

[...]

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>${hadoop.version}</version>
      <scope>provided</scope>
      <exclusions>
        <exclusion>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-mapper-asl</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.codehaus.jackson</groupId>
          <artifactId>jackson-core-asl</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase</artifactId>
      <version>${hbase.version}</version>
      <scope>provided</scope>
    </dependency>

    <!-- Tests dependencies -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-test</artifactId>
      <version>${hadoop.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase</artifactId>
      <version>${hbase.version}</version>
      <classifier>tests</classifier>
      <scope>test</scope>
    </dependency>

On Mon, Apr 16, 2012 at 9:10 AM, Marcin Cylke <mc...@touk.pl> wrote:

> Hi
>
> I'm trying to write a unit test for HBase coprocessor. However it seems
> I'm doing something horribly wrong. The code I'm using to test my
> coprocessor class is in the attachment.
>
> As you can see, I'm using HBaseTestingUtility, and running a
> mini-cluster with it. The error I keep getting is:
>
> 2012-04-12 13:00:39,924 [6,1334228432020] WARN  RecoverableZooKeeper
>      :117 - Node /hbase/root-region-server already deleted, and this is
> not a retry
> 2012-04-12 13:00:39,995 [6,1334228432020] INFO  HBaseRPC
>      :240 - Server at localhost/127.0.0.1:45664 could not be reached
> after 1 tries, giving up.
> 2012-04-12 13:00:39,995 [6,1334228432020] WARN  AssignmentManager
>      :1493 - Failed assignment of -ROOT-,,0.70236052 to
> localhost,45664,1334228432229, trying to assign elsewhere instead; retry=0
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting
> up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to
> localhost/127.0.0.1:45664 after attempts=1
>    at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:242)
>    at
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
>    at
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
>    at
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
>    at
>
> org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:496)
>    at
>
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:429)
>    at
>
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1453)
>    at
>
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1200)
>    at
>
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1175)
>    at
>
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1170)
>    at
>
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1918)
>    at
> org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:557)
>    at
>
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:491)
>    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.net.ConnectException: Connection refused
>    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>    at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>    at
>
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
>    at
>
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>    at
>
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>    at
>
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1026)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:878)
>    at
>
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    at $Proxy22.getProtocolVersion(Unknown Source)
>    at
>
> org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:303)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:280)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:332)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
>    ... 14 more
> 2012-04-12 13:00:39,998 [6,1334228432020] WARN  AssignmentManager
>      :1504 - Unable to find a viable location to assign region
> -ROOT-,,0.70236052
> 2012-04-12 13:00:44,138 [.timeoutMonitor] INFO  AssignmentManager
>      :2570 - Regions in transition timed out:  -ROOT-,,0.70236052
> state=OFFLINE, ts=1334228439998, server=null
> 2012-04-12 13:00:44,141 [.timeoutMonitor] INFO  AssignmentManager
>      :2581 - Region has been OFFLINE for too long, reassigning
> -ROOT-,,0.70236052 to a random server
> 2012-04-12 13:00:44,158 [pool-6-thread-1] INFO  HBaseRPC
>      :240 - Server at localhost/127.0.0.1:45664 could not be reached
> after 1 tries, giving up.
>
> This may be related to my use of the initHRegion() function; perhaps
> that region cannot connect to the newly created HBase cluster?
>
>