Posted to user@accumulo.apache.org by Drew Farris <dr...@apache.org> on 2012/10/23 16:15:29 UTC

Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page

For what it's worth, I encountered this when trying to set up a system
where Accumulo is run by a different user from the one used to run
HDFS.

This will likely become more prevalent as people move towards Hadoop
1+, where separate users are used for hdfs and mapred; the hadoop
user becomes a less obvious choice for running Accumulo.

In addition to the Namenode permission denied message, it seems that
the monitor is unable to connect to the master when the accumulo user
is not in the Hadoop supergroup (in 1.4.1).

I observed the same error messages David recorded above, but didn't
see anything that seemed specific to the master issue.

I haven't had the chance to dig much further; has anyone looked into
this? Any thoughts on whether it might be possible for things to work
without adding the accumulo user to the HDFS supergroup?
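One possibility, sketched here as an untested workaround rather than a
confirmed fix, is to give the accumulo user its own directory tree in HDFS
so supergroup membership is never needed. The /accumulo path below is an
assumption and should match instance.dfs.dir in accumulo-site.xml:

```shell
# Run as the HDFS superuser (e.g. the hadoop or hdfs user).
# Pre-create Accumulo's root directory and chown it so everything
# Accumulo writes is owned by the accumulo user, rather than granting
# accumulo supergroup membership. The /accumulo path is an assumption;
# match it to instance.dfs.dir in accumulo-site.xml.
hadoop fs -mkdir /accumulo
hadoop fs -chown -R accumulo:accumulo /accumulo
```

If Accumulo was already initialized as the hadoop user (as the
hadoop:supergroup owner in the quoted trace suggests), the existing
directory would need the same chown.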

Perhaps a discussion of running Accumulo as a particular user could be
added to the installation manual; I don't think the current manual
covers anything related to user accounts at all.

Drew

On Thu, Jul 26, 2012 at 6:16 PM, David Medinets
<da...@gmail.com> wrote:
> On Mon, Jul 23, 2012 at 8:35 PM, Josh Elser <jo...@gmail.com> wrote:
>> Out of curiosity, what is the actual exception/stack-trace printed in the
>> monitor's log?
>
> 26 21:43:37,848 [servlets.BasicServlet] DEBUG:
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=accumulo, access=READ_EXECUTE,
> inode="system":hadoop:supergroup:rwx-wx-wx
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=accumulo, access=READ_EXECUTE,
> inode="system":hadoop:supergroup:rwx-wx-wx
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>         at org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:924)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:232)
>         at org.apache.accumulo.server.trace.TraceFileSystem.getContentSummary(TraceFileSystem.java:312)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doAccumuloTable(DefaultServlet.java:312)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.pageBody(DefaultServlet.java:243)
>         at org.apache.accumulo.server.monitor.servlets.BasicServlet.doGet(BasicServlet.java:61)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doGet(DefaultServlet.java:161)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:324)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
>         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
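Reading the inode spec in that trace explains the denial:
"system":hadoop:supergroup:rwx-wx-wx names the owner (hadoop), the group
(supergroup), and a nine-character mode made of three permission triplets.
A small shell sketch, purely illustrative and not a Hadoop tool, splits
them out:

```shell
# Split the mode string from the inode spec in the log above into the
# owner/group/other permission triplets.
mode=rwx-wx-wx
owner=$(echo "$mode" | cut -c1-3)   # rwx: the hadoop user has full access
group=$(echo "$mode" | cut -c4-6)   # -wx: even the group lacks the read bit
other=$(echo "$mode" | cut -c7-9)   # -wx: everyone else, including accumulo
echo "owner=$owner group=$group other=$other"
```

The accumulo user falls into the "other" triplet, which has no r, so
access=READ_EXECUTE fails. Members of the HDFS supergroup are superusers
and bypass permission checks entirely, which is why adding accumulo to
that group makes the message go away.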

Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page

Posted by Eric Newton <er...@gmail.com>.
If bulk imports are performed by a different user than the accumulo
user, the accumulo user may not be able to read or move the files to be
imported.
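A hedged sketch of the usual remedy: before importing, make the files
readable by the accumulo user. The /tmp/bulk staging directory below is
hypothetical; substitute the directory your import job writes to:

```shell
# Hypothetical staging directory written by another user (e.g. by a
# MapReduce job); adjust the path to your own setup. Open up the tree
# so the accumulo user can read the files and traverse the directories
# when it moves them during the bulk import.
hadoop fs -chmod -R 755 /tmp/bulk
# Alternatively, as the HDFS superuser, hand the whole tree to accumulo:
hadoop fs -chown -R accumulo /tmp/bulk
```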

-Eric

On Tue, Oct 23, 2012 at 11:04 AM, Drew Farris <dr...@apache.org> wrote:

> Eric,
>
> Thanks for the insight. You're correct of course, the monitor can
> connect to the master. I saw that the 'Accumulo Master' box on the
> overview page was empty and assumed the worst.
>
> I'll open a jira ticket against documentation for the user/group
> issue. What are the issues with bulk imports? (I haven't made it that
> far yet).
>
> Drew
>
> On Tue, Oct 23, 2012 at 10:20 AM, Eric Newton <er...@gmail.com>
> wrote:
> > Actually, I run accumulo as myself on a test cluster.  The NameNode
> > monitoring doesn't work, but the monitor can talk to the master just
> fine.
> >
> > The TaskTracker monitoring won't work in Hadoop 2+, so I'm probably
> going to
> > remove hadoop monitoring from accumulo 1.5.
> >
> > Documenting users and permissions would be a good idea, especially with
> > regard to bulk imports.
> >
> > -Eric

Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page

Posted by Drew Farris <dr...@apache.org>.
Eric,

Thanks for the insight. You're correct, of course: the monitor can
connect to the master. I saw that the 'Accumulo Master' box on the
overview page was empty and assumed the worst.

I'll open a JIRA ticket against the documentation for the user/group
issue. What are the issues with bulk imports? (I haven't made it that
far yet.)

Drew

On Tue, Oct 23, 2012 at 10:20 AM, Eric Newton <er...@gmail.com> wrote:
> Actually, I run accumulo as myself on a test cluster.  The NameNode
> monitoring doesn't work, but the monitor can talk to the master just fine.
>
> The TaskTracker monitoring won't work in Hadoop 2+, so I'm probably going to
> remove hadoop monitoring from accumulo 1.5.
>
> Documenting users and permissions would be a good idea, especially with
> regard to bulk imports.
>
> -Eric

Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page

Posted by Drew Farris <dr...@apache.org>.
So, to be sure I understand: the "permission denied" messages are mostly a
cosmetic issue? Other than the monitoring page, overall Accumulo
functionality is not impacted in any way?

On Tue, Oct 23, 2012 at 10:20 AM, Eric Newton <er...@gmail.com> wrote:

> Actually, I run accumulo as myself on a test cluster.  The NameNode
> monitoring doesn't work, but the monitor can talk to the master just fine.
>
> The TaskTracker monitoring won't work in Hadoop 2+, so I'm probably going
> to remove hadoop monitoring from accumulo 1.5.
>
> Documenting users and permissions would be a good idea, especially with
> regard to bulk imports.
>
> -Eric

Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page

Posted by Eric Newton <er...@gmail.com>.
Actually, I run accumulo as myself on a test cluster.  The NameNode
monitoring doesn't work, but the monitor can talk to the master just fine.

The TaskTracker monitoring won't work in Hadoop 2+, so I'm probably going
to remove hadoop monitoring from accumulo 1.5.

Documenting users and permissions would be a good idea, especially with
regard to bulk imports.

-Eric

On Tue, Oct 23, 2012 at 10:15 AM, Drew Farris <dr...@apache.org> wrote:

> For what it's worth, I encountered this when trying to set up a system
> where Accumulo is run by a different user from the one used to run
> HDFS.
>
> This will likely become more prevalent as people move towards Hadoop
> 1+, where separate users are used for hdfs and mapred; the hadoop
> user becomes a less obvious choice for running Accumulo.
>
> In addition to the Namenode permission denied message, it seems that
> the monitor is unable to connect to the master when the accumulo user
> is not in the Hadoop supergroup (in 1.4.1).
>
> I observed the same error messages David recorded above, but didn't
> see anything that seemed specific to the master issue.
>
> I haven't had the chance to dig much further; has anyone looked into
> this? Any thoughts on whether it might be possible for things to work
> without adding the accumulo user to the HDFS supergroup?
>
> Perhaps a discussion of running Accumulo as a particular user could be
> added to the installation manual; I don't think the current manual
> covers anything related to user accounts at all.
>
> Drew