Posted to hdfs-user@hadoop.apache.org by lei liu <li...@gmail.com> on 2013/08/28 10:36:37 UTC

hadoop2 and Hbase0.94

I am using Hadoop 2 and HBase 0.94, but I am getting the exception below:

2013-08-28 11:36:12,922 ERROR [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0] executor.EventHandler(172): Caught throwable while processing event C_M_DELETE_TABLE
java.lang.IllegalArgumentException: Wrong FS: file:/tmp/hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62, expected: hdfs://localhost.localdomain:35974
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
        at org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
        at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
        at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
        at org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
        at org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
        at org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
2013-08-28 11:37:05,653 INFO [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner] util.FSUtils(1055): hdfs://localhost.localdomain:35974/use
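
The "Wrong FS" message shows a table path being resolved against the local filesystem (file:/tmp/hbase-shenxiu.cx/hbase/...) while the master expects HDFS (hdfs://localhost.localdomain:35974). The /tmp/hbase-<user>/hbase layout matches HBase's default local-filesystem root, so an error like this usually means the filesystem settings in hbase-site.xml and core-site.xml do not agree, or that some code path is picking up a Configuration without the HDFS URI. As a minimal sketch (placeholder host and port, not values from this thread), an HDFS-backed setup normally carries properties like:

<!-- hbase-site.xml: point the HBase root at HDFS instead of the default
     local /tmp/hbase-<user>/hbase path. Host and port are placeholders. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>

<!-- core-site.xml: the default filesystem should use the same HDFS URI
     (fs.default.name on Hadoop 1, fs.defaultFS on Hadoop 2). -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>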

Re: hadoop2 and Hbase0.94

Posted by Ted Yu <yu...@gmail.com>.
What command did you use?

I used the following commands and the test passed:

mvn clean package -Dhadoop.profile=2.0 -DskipTests
mvn test -Dhadoop.profile=2.0 -PrunAllTests -DfailIfNoTests=false -Dtest=TestMasterObserver



On Wed, Aug 28, 2013 at 1:49 AM, lei liu <li...@gmail.com> wrote:

> In org.apache.hadoop.hbase.coprocessor.TestMasterObserver unit test.
>
>
> 2013/8/28 lei liu <li...@gmail.com>
>
>> When I run hbase unit test, there is the exception.
>>
>>
>> 2013/8/28 Harsh J <ha...@cloudera.com>
>>
>>> Moving to user@hbase.apache.org.
>>>
>>> Please share your hbase-site.xml and core-site.xml. Was this HBase
>>> cluster previously running on a standalone local filesystem mode?
>>>
>>> On Wed, Aug 28, 2013 at 2:06 PM, lei liu <li...@gmail.com> wrote:
>>> > I use hadoop2 and hbase0.94, but there is below exception:
>>> >
>>> > 2013-08-28 11:36:12,922 ERROR
>>> > [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0]
>>> > executor.EventHandler(172): Caught throwable while processing
>>> > event C_M_DELETE_TABLE
>>> > java.lang.IllegalArgumentException: Wrong FS:
>>> > file:/tmp/
>>> hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62,
>>> > expected: hdfs://localhost.lo
>>> > caldomain:35974
>>> >         at
>>> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
>>> >         at
>>> >
>>> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
>>> >         at
>>> >
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
>>> >         at
>>> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
>>> >         at
>>> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
>>> >         at
>>> > org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
>>> >         at
>>> >
>>> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>> >         at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>> >         at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>> >         at java.lang.Thread.run(Thread.java:662)
>>> > 2013-08-28 11:37:05,653 INFO
>>> > [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner]
>>> > util.FSUtils(1055): hdfs://localhost.localdomain:35974/use
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>

Re: hadoop2 and Hbase0.94

Posted by lei liu <li...@gmail.com>.
It happens in the org.apache.hadoop.hbase.coprocessor.TestMasterObserver unit test.


2013/8/28 lei liu <li...@gmail.com>

> When I run hbase unit test, there is the exception.
>
>
> 2013/8/28 Harsh J <ha...@cloudera.com>
>
>> Moving to user@hbase.apache.org.
>>
>> Please share your hbase-site.xml and core-site.xml. Was this HBase
>> cluster previously running on a standalone local filesystem mode?
>>
>> On Wed, Aug 28, 2013 at 2:06 PM, lei liu <li...@gmail.com> wrote:
>> > I use hadoop2 and hbase0.94, but there is below exception:
>> >
>> > 2013-08-28 11:36:12,922 ERROR
>> > [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0]
>> > executor.EventHandler(172): Caught throwable while processing
>> > event C_M_DELETE_TABLE
>> > java.lang.IllegalArgumentException: Wrong FS:
>> > file:/tmp/
>> hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62,
>> > expected: hdfs://localhost.lo
>> > caldomain:35974
>> >         at
>> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
>> >         at
>> >
>> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
>> >         at
>> >
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
>> >         at
>> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
>> >         at
>> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
>> >         at
>> > org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
>> >         at
>> >
>> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
>> >         at
>> >
>> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
>> >         at
>> >
>> org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
>> >         at
>> >
>> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
>> >         at
>> >
>> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
>> >         at
>> > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>> >         at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > 2013-08-28 11:37:05,653 INFO
>> > [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner]
>> > util.FSUtils(1055): hdfs://localhost.localdomain:35974/use
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: hadoop2 and Hbase0.94

Posted by lei liu <li...@gmail.com>.
The exception occurs when I run the HBase unit test.


2013/8/28 Harsh J <ha...@cloudera.com>

> Moving to user@hbase.apache.org.
>
> Please share your hbase-site.xml and core-site.xml. Was this HBase
> cluster previously running on a standalone local filesystem mode?
>
> On Wed, Aug 28, 2013 at 2:06 PM, lei liu <li...@gmail.com> wrote:
> > I use hadoop2 and hbase0.94, but there is below exception:
> >
> > 2013-08-28 11:36:12,922 ERROR
> > [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0]
> > executor.EventHandler(172): Caught throwable while processing
> > event C_M_DELETE_TABLE
> > java.lang.IllegalArgumentException: Wrong FS:
> > file:/tmp/
> hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62,
> > expected: hdfs://localhost.lo
> > caldomain:35974
> >         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
> >         at
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
> >         at
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
> >         at
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
> >         at
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
> >         at
> > org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
> >         at
> >
> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
> >         at
> >
> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
> >         at
> >
> org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
> >         at
> >
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
> >         at
> >
> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
> >         at
> > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2013-08-28 11:37:05,653 INFO
> > [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner]
> > util.FSUtils(1055): hdfs://localhost.localdomain:35974/use
>
>
>
> --
> Harsh J
>

Re: hadoop2 and Hbase0.94

Posted by Harsh J <ha...@cloudera.com>.
Moving to user@hbase.apache.org.

Please share your hbase-site.xml and core-site.xml. Was this HBase
cluster previously running on a standalone local filesystem mode?

On Wed, Aug 28, 2013 at 2:06 PM, lei liu <li...@gmail.com> wrote:
> I use hadoop2 and hbase0.94, but there is below exception:
>
> 2013-08-28 11:36:12,922 ERROR
> [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0]
> executor.EventHandler(172): Caught throwable while processing
> event C_M_DELETE_TABLE
> java.lang.IllegalArgumentException: Wrong FS:
> file:/tmp/hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62,
> expected: hdfs://localhost.lo
> caldomain:35974
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
>         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
>         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
>         at
> org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
>         at
> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
>         at
> org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
>         at
> org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
>         at
> org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
>         at
> org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
>         at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> 2013-08-28 11:37:05,653 INFO
> [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner]
> util.FSUtils(1055): hdfs://localhost.localdomain:35974/use



-- 
Harsh J
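
For contrast with the HDFS-backed settings sketched earlier, the standalone (local filesystem) mode Harsh J mentions keeps the HBase root on file:. An illustrative hbase-site.xml value (not taken from this thread, and generated automatically when a test spins up a mini-cluster):

<!-- hbase-site.xml for standalone mode: HBase stores its data on the local
     filesystem; the path below is only an example. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///tmp/hbase-${user.name}/hbase</value>
</property>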
