Posted to user@hbase.apache.org by Bharat Shetty <bh...@gmail.com> on 2013/11/22 07:50:07 UTC

HMaster daemon is not starting up

Hi all,

I've been running into an error of late whose root cause I'm not able to
decipher. Before this I was able to run HBase on top of HDFS without
any issues. One day HMaster suddenly shut down, and now I'm unable to
restart the HMaster daemon.

Could you please advise if there is something I might be missing?

From the logs:
vim hbase-iouser-master-naples.log


2013-11-22 09:50:27,096 ERROR [main] master.HMasterCommandLine: Master
exiting
java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMaster
        at
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2773)
        at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:184)
        at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2787)
Caused by: java.lang.UnsupportedOperationException: Not implemented by the
DistributedFileSystem FileSystem implementation
        at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:209)
        at
org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2397)
        at
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2407)
        at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2424)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2463)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2445)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:275)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:884)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:455)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
        at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
        at
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2768)

From: hbase-iouser-regionserver-naples.log

2013-11-22 09:50:28,150 ERROR [main] regionserver.HRegionServerCommandLine:
Region server exiting
java.lang.UnsupportedOperationException: Not implemented by the
DistributedFileSystem FileSystem implementation
        at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:209)
        at
org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2397)
        at
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2407)
        at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2424)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2463)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2445)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:165)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2276)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2260)
        at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:62)
        at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2311)
Regards,

- Bharat

Re: HMaster daemon is not starting up

Posted by Ted Yu <yu...@gmail.com>.
For your first question, please take a look at
http://hbase.apache.org/book.html#zookeeper
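
In short, running an externally managed ensemble amounts to setting
HBASE_MANAGES_ZK=false in conf/hbase-env.sh so HBase does not start
ZooKeeper itself, and then pointing HBase at the existing ensemble in
conf/hbase-site.xml. A sketch (the hostnames are illustrative):

```xml
<!-- conf/hbase-site.xml: point HBase at an externally managed
     ZooKeeper ensemble instead of the one HBase would start itself -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
```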


On Sat, Nov 23, 2013 at 10:11 AM, Bharat Shetty <bh...@gmail.com> wrote:

> [quoted message trimmed]

Re: HMaster daemon is not starting up

Posted by Bharat Shetty <bh...@gmail.com>.
Stack, Ted,

Thanks a lot for the pointers. I'm amazed at how quickly responses come
in on this mailing list!

I had changed hbase.rootdir from /tmp to another directory long ago,
but had missed doing the same for ZooKeeper in conf/hbase-site.xml. I
had assumed that setting HBASE_MANAGES_ZK=true in hbase-env.sh, which
makes HBase manage ZooKeeper internally, also meant the ZooKeeper data
directory could not be controlled. After reading the pointers you gave,
I moved the ZooKeeper data directory out of /tmp as well.
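
Concretely, the override in conf/hbase-site.xml looks roughly like this
(the paths and hostname below are illustrative):

```xml
<!-- conf/hbase-site.xml: keep HBase and ZooKeeper data off /tmp,
     which most operating systems clear on reboot -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/lib/zookeeper</value>
</property>
```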

So when the /tmp filesystem on the master node in our cluster somehow
became corrupt, the HQuorumPeer daemon went kaput, forcing the HMaster
daemon to shut down.

Thanks once again.

Best,
- Bharat





On Sat, Nov 23, 2013 at 10:52 AM, Stack <st...@duboce.net> wrote:

> [quoted message trimmed]

Re: HMaster daemon is not starting up

Posted by Stack <st...@duboce.net>.
On Fri, Nov 22, 2013 at 6:11 PM, Bharat Shetty <bh...@gmail.com> wrote:

> [quoted message trimmed]

See the first section in the refguide, the quickstart section:
http://hbase.apache.org/book.html#quickstart  The first thing it tells you
to do is edit hbase-site.xml and change where zk is writing its data.  "By
default, hbase.rootdir is set to /tmp/hbase-${user.name} and similarly so
for the default ZooKeeper data location which means you'll lose all your
data whenever your server reboots unless you change it (Most operating
systems clear /tmp on restart)."

St.Ack

Re: HMaster daemon is not starting up

Posted by Bharat Shetty <bh...@gmail.com>.
Yeah, the error is related to HDFS.

I'm using HBase 0.96.0 with Hadoop 2.1.0-beta.

On further digging, it appears that the /tmp directory on the master
node had some problems. ZooKeeper for this deployment is managed
internally by HBase and appears to store its data under /tmp
(/tmp/hbase-iouser/zookeeper/). Since this region of the filesystem was
unstable owing to filesystem problems in /tmp on the master node,
ZooKeeper communication between the Master and the slaves failed, and
HMaster appears to have shut down as a result. I was able to get the
master running on another node with the same configuration used
previously.

There were no changes to the lib directory of HBase during deployment.
The setup was working fine, and I was able to run map-reduce programs
for importing, exporting and filtering millions of records in HBase
prior to the HMaster failure.

In a production scenario, which is preferable: ZooKeeper managed by
ourselves, or ZooKeeper managed internally by HBase?

Also, is there any documentation anywhere on production-level
configuration for HBase running on top of HDFS (Hadoop)?

Best,
Bharat


On Fri, Nov 22, 2013 at 1:17 PM, Ted Yu <yu...@gmail.com> wrote:

> [quoted message trimmed]

Re: HMaster daemon is not starting up

Posted by Ted Yu <yu...@gmail.com>.
The error seems to be related to HDFS.

Which versions of HBase and Hadoop are you using?
Was there any change in the lib directory of the HBase deployment?
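
A quick way to check the latter is to list the Hadoop jars bundled
under HBase's lib directory; they should all come from the same Hadoop
release. A sketch (the lib directory is simulated here so the commands
run anywhere; on a real install, set HBASE_LIB to $HBASE_HOME/lib
instead):

```shell
# Inspect the Hadoop jars on HBase's classpath. Jars from mismatched
# Hadoop versions under lib/ are a common cause of startup errors like
# the UnsupportedOperationException above.
# Simulated lib/ directory for illustration only:
HBASE_LIB=$(mktemp -d)
touch "$HBASE_LIB/hadoop-common-2.1.0-beta.jar"
touch "$HBASE_LIB/hadoop-hdfs-2.1.0-beta.jar"

# Every version string printed here should match:
ls "$HBASE_LIB" | grep '^hadoop-' | sort
```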

Cheers


On Fri, Nov 22, 2013 at 2:50 PM, Bharat Shetty <bh...@gmail.com> wrote:

> [quoted message trimmed]