Posted to user@accumulo.apache.org by Takashi Sasaki <ts...@gmail.com> on 2016/12/04 07:35:02 UTC

Master server throws AccessControlException

I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.

The Master server suddenly threw an AccessControlException.

java.io.IOException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=accumulo, access=EXECUTE,
inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
 at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
 at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)


How can I solve this Exception?


Thank you,
Takashi.

Re: Master server throws AccessControlException

Posted by Takashi Sasaki <ts...@gmail.com>.
Hello,

It has not recurred so far.
If it recurs, I will edit hdfs-site.xml:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
My Hadoop cluster is used only by Accumulo, so permission checking isn't necessary.
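
For reference, a less drastic alternative (assuming the only damage is
missing execute bits on directories) would be to restore the standard
bits instead of disabling permission checking entirely; on Hadoop 2.x
the property above is also known by its newer name,
dfs.permissions.enabled. A minimal sketch:

# capital X should set the execute bit on directories only, leaving files alone
hdfs dfs -chmod -R u=rwX,g=rX,o=rX /accumulo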

Thank you,
Takashi

2016-12-06 11:00 GMT+09:00 Takashi Sasaki <ts...@gmail.com>:
> Hello, Josh
>
>
> I asked my project members whether anyone ran `hdfs dfs -chmod -R 644
> /accumulo`, but no one did it...
>
>
> Thank you for reply,
>
> Takashi
>
> 2016-12-06 5:50 GMT+09:00 Josh Elser <jo...@gmail.com>:
>> 2016-12-02 19:30:16,170 [tserver.TabletServer] WARN : exception trying to
>> assign tablet +r<< hdfs://10.24.83.112:8020/accumulo/tables/+r/root_tablet
>> java.lang.RuntimeException: java.io.IOException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=accumulo, access=EXECUTE,
>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/data":accumulo:accumulo:-rw-r--r--
>>
>> at org.apache.accumulo.tserver.log.MultiReader.<init>(MultiReader.java:113)
>> at org.apache.accumulo.tserver.log.SortedLogRecovery.recover(SortedLogRecovery.java:105)
>> at org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:483)
>>
>>
>> It looks like Accumulo doesn't have permission to list the contents of this
>> directory as the directory lacks the execute bit. Did someone do a `hdfs dfs
>> -chmod -R 644 /accumulo`? I don't know how Accumulo would have a directory
>> which is chmod 644 instead of 755...
>>
>>
>> Takashi Sasaki wrote:
>>>
>>> Oops, I didn't know attachments are stripped.
>>>
>>> I'm hosting the files on my Google Drive.
>>>
>>> tserver.log
>>> https://drive.google.com/open?id=0B0ffj_ngVZxuaHJQYUtDY3doYm8
>>>
>>> master.log
>>> https://drive.google.com/open?id=0B0ffj_ngVZxuMEk4MHJVQzVWZXc
>>>
>>>
>>> Thank you for advice,
>>>
>>> Takashi
>>>
>>> 2016-12-05 22:00 GMT+09:00 Josh Elser<el...@apache.org>:
>>>>
>>>> Apache mailing lists strip attachments. Please host the files somewhere
>>>> and
>>>> provide a link to them.
>>>>
>>>> On Dec 4, 2016 20:54, "Takashi Sasaki"<ts...@gmail.com>  wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I'm sorry, I gave some wrong information in my first post.
>>>>>
>>>>> I asked the project members again about the problem.
>>>>> Master server did not throw AccessControlException.
>>>>>
>>>>> Actually, the TabletServer threw the AccessControlException.
>>>>> Also, the stack trace was missing words and had the wrong path.
>>>>>
>>>>> The correct full stack trace is at line 52 of the attached file "tserver.log".
>>>>> I also attach "master.log" for your reference.
>>>>>
>>>>> Unfortunately, I still could not get a debug log.
>>>>>
>>>>> Thank you for your support,
>>>>> Takashi
>>>>>
>>>>>
>>>>> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki<ts...@gmail.com>:
>>>>>>
>>>>>> Hello, Christopher
>>>>>>
>>>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>>>> clear where in the Accumulo code this occurred. Do you have the full
>>>>>>> stack
>>>>>>> trace?
>>>>>>
>>>>>> Yes, I understand the stack trace doesn't include anything from Accumulo.
>>>>>> I don't have the full stack trace now, but I will try to find it.
>>>>>>
>>>>>> In addition, I run Accumulo on an AWS EMR cluster for an enterprise
>>>>>> production system, so the log level isn't DEBUG because of disk capacity
>>>>>> constraints.
>>>>>> I will try to reproduce the problem at the DEBUG log level.
>>>>>>
>>>>>> Thank you for your reply,
>>>>>> Takashi
>>>>>>
>>>>>> 2016-12-04 18:00 GMT+09:00 Christopher<ct...@apache.org>:
>>>>>>>
>>>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>>>> clear
>>>>>>> where in the Accumulo code this occurred. Do you have the full stack
>>>>>>> trace?
>>>>>>>
>>>>>>> In particular, it's not clear to me that there should be a directory
>>>>>>> called
>>>>>>> failed/da at that location, nor is it clear why Accumulo would be
>>>>>>> trying to
>>>>>>> check for the execute permission on it, unless it's trying to recurse
>>>>>>> into a
>>>>>>> directory. There is one part of the code where, if the directory
>>>>>>> exists
>>>>>>> when
>>>>>>> log recovery begins, it may try to do a recursive delete, but I can't
>>>>>>> see
>>>>>>> how this location would have been created by Accumulo. If that is the
>>>>>>> case,
>>>>>>> then it should be safe to manually delete this directory and its
>>>>>>> contents.
>>>>>>> The failed marker should be a regular file, though, and should not be
>>>>>>> a
>>>>>>> directory with another directory called "da" in it. So, I can't see
>>>>>>> how
>>>>>>> this
>>>>>>> was even created, unless by an older version or another program.
>>>>>>>
>>>>>>> The only way I can see this occurring is if you recently did an
>>>>>>> upgrade,
>>>>>>> while Accumulo had not yet finished outstanding log recoveries from a
>>>>>>> previous shutdown, AND the previous version did something different
>>>>>>> than
>>>>>>> 1.7.2. If that was the case, then perhaps the older version could have
>>>>>>> created this problematic directory. It seems unlikely, though...
>>>>>>> because
>>>>>>> directories are usually not created without the execute bit... and the
>>>>>>> error
>>>>>>> message looks like a directory missing that bit.
>>>>>>>
>>>>>>> It's hard to know more without seeing the full stack trace with the
>>>>>>> relevant
>>>>>>> accumulo methods included. It might also help to see the master debug
>>>>>>> logs
>>>>>>> leading up to the error.
>>>>>>>
>>>>>>> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki<ts...@gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>>>>>>>>
>>>>>>>> The Master server suddenly threw an AccessControlException.
>>>>>>>>
>>>>>>>> java.io.IOException:
>>>>>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>>>>>> user=accumulo, access=EXECUTE,
>>>>>>>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>>>>>>>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>>>>>>>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>>>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>>>>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>>>>>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>>>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>>>>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>>>>>>
>>>>>>>>
>>>>>>>> How can I solve this Exception?
>>>>>>>>
>>>>>>>>
>>>>>>>> Thank you,
>>>>>>>> Takashi.

Re: Master server throws AccessControlException

Posted by Takashi Sasaki <ts...@gmail.com>.
Hello, Josh


I asked my project members whether anyone ran `hdfs dfs -chmod -R 644
/accumulo`, but no one did it...


Thank you for reply,

Takashi

2016-12-06 5:50 GMT+09:00 Josh Elser <jo...@gmail.com>:
> 2016-12-02 19:30:16,170 [tserver.TabletServer] WARN : exception trying to
> assign tablet +r<< hdfs://10.24.83.112:8020/accumulo/tables/+r/root_tablet
> java.lang.RuntimeException: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=accumulo, access=EXECUTE,
> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/data":accumulo:accumulo:-rw-r--r--
>
> at org.apache.accumulo.tserver.log.MultiReader.<init>(MultiReader.java:113)
> at org.apache.accumulo.tserver.log.SortedLogRecovery.recover(SortedLogRecovery.java:105)
> at org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:483)
>
>
> It looks like Accumulo doesn't have permission to list the contents of this
> directory as the directory lacks the execute bit. Did someone do a `hdfs dfs
> -chmod -R 644 /accumulo`? I don't know how Accumulo would have a directory
> which is chmod 644 instead of 755...
>
>
> Takashi Sasaki wrote:
>>
>> Oops, I didn't know attachments are stripped.
>>
>> I'm hosting the files on my Google Drive.
>>
>> tserver.log
>> https://drive.google.com/open?id=0B0ffj_ngVZxuaHJQYUtDY3doYm8
>>
>> master.log
>> https://drive.google.com/open?id=0B0ffj_ngVZxuMEk4MHJVQzVWZXc
>>
>>
>> Thank you for advice,
>>
>> Takashi
>>
>> 2016-12-05 22:00 GMT+09:00 Josh Elser<el...@apache.org>:
>>>
>>> Apache mailing lists strip attachments. Please host the files somewhere
>>> and
>>> provide a link to them.
>>>
>>> On Dec 4, 2016 20:54, "Takashi Sasaki"<ts...@gmail.com>  wrote:
>>>>
>>>> Hello,
>>>>
>>>> I'm sorry, I gave some wrong information in my first post.
>>>>
>>>> I asked the project members again about the problem.
>>>> Master server did not throw AccessControlException.
>>>>
>>>> Actually, the TabletServer threw the AccessControlException.
>>>> Also, the stack trace was missing words and had the wrong path.
>>>>
>>>> The correct full stack trace is at line 52 of the attached file "tserver.log".
>>>> I also attach "master.log" for your reference.
>>>>
>>>> Unfortunately, I still could not get a debug log.
>>>>
>>>> Thank you for your support,
>>>> Takashi
>>>>
>>>>
>>>> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki<ts...@gmail.com>:
>>>>>
>>>>> Hello, Christopher
>>>>>
>>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>>> clear where in the Accumulo code this occurred. Do you have the full
>>>>>> stack
>>>>>> trace?
>>>>>
>>>>> Yes, I understand the stack trace doesn't include anything from Accumulo.
>>>>> I don't have the full stack trace now, but I will try to find it.
>>>>>
>>>>> In addition, I run Accumulo on an AWS EMR cluster for an enterprise
>>>>> production system, so the log level isn't DEBUG because of disk capacity
>>>>> constraints.
>>>>> I will try to reproduce the problem at the DEBUG log level.
>>>>>
>>>>> Thank you for your reply,
>>>>> Takashi
>>>>>
>>>>> 2016-12-04 18:00 GMT+09:00 Christopher<ct...@apache.org>:
>>>>>>
>>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>>> clear
>>>>>> where in the Accumulo code this occurred. Do you have the full stack
>>>>>> trace?
>>>>>>
>>>>>> In particular, it's not clear to me that there should be a directory
>>>>>> called
>>>>>> failed/da at that location, nor is it clear why Accumulo would be
>>>>>> trying to
>>>>>> check for the execute permission on it, unless it's trying to recurse
>>>>>> into a
>>>>>> directory. There is one part of the code where, if the directory
>>>>>> exists
>>>>>> when
>>>>>> log recovery begins, it may try to do a recursive delete, but I can't
>>>>>> see
>>>>>> how this location would have been created by Accumulo. If that is the
>>>>>> case,
>>>>>> then it should be safe to manually delete this directory and its
>>>>>> contents.
>>>>>> The failed marker should be a regular file, though, and should not be
>>>>>> a
>>>>>> directory with another directory called "da" in it. So, I can't see
>>>>>> how
>>>>>> this
>>>>>> was even created, unless by an older version or another program.
>>>>>>
>>>>>> The only way I can see this occurring is if you recently did an
>>>>>> upgrade,
>>>>>> while Accumulo had not yet finished outstanding log recoveries from a
>>>>>> previous shutdown, AND the previous version did something different
>>>>>> than
>>>>>> 1.7.2. If that was the case, then perhaps the older version could have
>>>>>> created this problematic directory. It seems unlikely, though...
>>>>>> because
>>>>>> directories are usually not created without the execute bit... and the
>>>>>> error
>>>>>> message looks like a directory missing that bit.
>>>>>>
>>>>>> It's hard to know more without seeing the full stack trace with the
>>>>>> relevant
>>>>>> accumulo methods included. It might also help to see the master debug
>>>>>> logs
>>>>>> leading up to the error.
>>>>>>
>>>>>> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki<ts...@gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>>>>>>>
>>>>>>> The Master server suddenly threw an AccessControlException.
>>>>>>>
>>>>>>> java.io.IOException:
>>>>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>>>>> user=accumulo, access=EXECUTE,
>>>>>>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>>>>>>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>>>>>>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>>>>>>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>>>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>>>>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>>>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>>>>>
>>>>>>>
>>>>>>> How can I solve this Exception?
>>>>>>>
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Takashi.

Re: Master server throws AccessControlException

Posted by Josh Elser <jo...@gmail.com>.
2016-12-02 19:30:16,170 [tserver.TabletServer] WARN : exception trying 
to assign tablet +r<< 
hdfs://10.24.83.112:8020/accumulo/tables/+r/root_tablet
java.lang.RuntimeException: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=accumulo, access=EXECUTE, 
inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/data":accumulo:accumulo:-rw-r--r--

at org.apache.accumulo.tserver.log.MultiReader.<init>(MultiReader.java:113)
at org.apache.accumulo.tserver.log.SortedLogRecovery.recover(SortedLogRecovery.java:105)
at org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:483)


It looks like Accumulo doesn't have permission to list the contents of 
this directory as the directory lacks the execute bit. Did someone do a 
`hdfs dfs -chmod -R 644 /accumulo`? I don't know how Accumulo would have 
a directory which is chmod 644 instead of 755...
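
If that is what happened, a quick check-and-repair sketch (the recovery
path is copied from the trace above; verify with ls before changing
anything):

# -d lists the inode itself instead of its children
hdfs dfs -ls -d /accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed
# restore the normal directory bits if it really shows mode 644
hdfs dfs -chmod 755 /accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed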

Takashi Sasaki wrote:
> Oops, I didn't know attachments are stripped.
>
> I'm hosting the files on my Google Drive.
>
> tserver.log
> https://drive.google.com/open?id=0B0ffj_ngVZxuaHJQYUtDY3doYm8
>
> master.log
> https://drive.google.com/open?id=0B0ffj_ngVZxuMEk4MHJVQzVWZXc
>
>
> Thank you for advice,
>
> Takashi
>
> 2016-12-05 22:00 GMT+09:00 Josh Elser<el...@apache.org>:
>> Apache mailing lists strip attachments. Please host the files somewhere and
>> provide a link to them.
>>
>> On Dec 4, 2016 20:54, "Takashi Sasaki"<ts...@gmail.com>  wrote:
>>> Hello,
>>>
>>> I'm sorry, I gave some wrong information in my first post.
>>>
>>> I asked the project members again about the problem.
>>> Master server did not throw AccessControlException.
>>>
>>> Actually, the TabletServer threw the AccessControlException.
>>> Also, the stack trace was missing words and had the wrong path.
>>>
>>> The correct full stack trace is at line 52 of the attached file "tserver.log".
>>> I also attach "master.log" for your reference.
>>>
>>> Unfortunately, I still could not get a debug log.
>>>
>>> Thank you for your support,
>>> Takashi
>>>
>>>
>>> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki<ts...@gmail.com>:
>>>> Hello, Christopher
>>>>
>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>> clear where in the Accumulo code this occurred. Do you have the full stack
>>>>> trace?
>>>> Yes, I understand the stack trace doesn't include anything from Accumulo.
>>>> I don't have the full stack trace now, but I will try to find it.
>>>>
>>>> In addition, I run Accumulo on an AWS EMR cluster for an enterprise
>>>> production system, so the log level isn't DEBUG because of disk capacity
>>>> constraints.
>>>> I will try to reproduce the problem at the DEBUG log level.
>>>>
>>>> Thank you for your reply,
>>>> Takashi
>>>>
>>>> 2016-12-04 18:00 GMT+09:00 Christopher<ct...@apache.org>:
>>>>> The stack trace doesn't include anything from Accumulo, so it's not
>>>>> clear
>>>>> where in the Accumulo code this occurred. Do you have the full stack
>>>>> trace?
>>>>>
>>>>> In particular, it's not clear to me that there should be a directory
>>>>> called
>>>>> failed/da at that location, nor is it clear why Accumulo would be
>>>>> trying to
>>>>> check for the execute permission on it, unless it's trying to recurse
>>>>> into a
>>>>> directory. There is one part of the code where, if the directory exists
>>>>> when
>>>>> log recovery begins, it may try to do a recursive delete, but I can't
>>>>> see
>>>>> how this location would have been created by Accumulo. If that is the
>>>>> case,
>>>>> then it should be safe to manually delete this directory and its
>>>>> contents.
>>>>> The failed marker should be a regular file, though, and should not be a
>>>>> directory with another directory called "da" in it. So, I can't see how
>>>>> this
>>>>> was even created, unless by an older version or another program.
>>>>>
>>>>> The only way I can see this occurring is if you recently did an
>>>>> upgrade,
>>>>> while Accumulo had not yet finished outstanding log recoveries from a
>>>>> previous shutdown, AND the previous version did something different
>>>>> than
>>>>> 1.7.2. If that was the case, then perhaps the older version could have
>>>>> created this problematic directory. It seems unlikely, though...
>>>>> because
>>>>> directories are usually not created without the execute bit... and the
>>>>> error
>>>>> message looks like a directory missing that bit.
>>>>>
>>>>> It's hard to know more without seeing the full stack trace with the
>>>>> relevant
>>>>> accumulo methods included. It might also help to see the master debug
>>>>> logs
>>>>> leading up to the error.
>>>>>
>>>>> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki<ts...@gmail.com>
>>>>> wrote:
>>>>>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>>>>>>
>>>>>> The Master server suddenly threw an AccessControlException.
>>>>>>
>>>>>> java.io.IOException:
>>>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>>>> user=accumulo, access=EXECUTE,
>>>>>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>>>>>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>>>>>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>>>>>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>>>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>>>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>>>>
>>>>>>
>>>>>> How can I solve this Exception?
>>>>>>
>>>>>>
>>>>>> Thank you,
>>>>>> Takashi.

Re: Master server throws AccessControlException

Posted by Takashi Sasaki <ts...@gmail.com>.
Oops, I didn't know attachments are stripped.

I'm hosting the files on my Google Drive.

tserver.log
https://drive.google.com/open?id=0B0ffj_ngVZxuaHJQYUtDY3doYm8

master.log
https://drive.google.com/open?id=0B0ffj_ngVZxuMEk4MHJVQzVWZXc


Thank you for advice,

Takashi

2016-12-05 22:00 GMT+09:00 Josh Elser <el...@apache.org>:
> Apache mailing lists strip attachments. Please host the files somewhere and
> provide a link to them.
>
> On Dec 4, 2016 20:54, "Takashi Sasaki" <ts...@gmail.com> wrote:
>>
>> Hello,
>>
>> I'm sorry, I gave some wrong information in my first post.
>>
>> I asked the project members again about the problem.
>> Master server did not throw AccessControlException.
>>
>> Actually, the TabletServer threw the AccessControlException.
>> Also, the stack trace was missing words and had the wrong path.
>>
>> The correct full stack trace is at line 52 of the attached file "tserver.log".
>> I also attach "master.log" for your reference.
>>
>> Unfortunately, I still could not get a debug log.
>>
>> Thank you for your support,
>> Takashi
>>
>>
>> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki <ts...@gmail.com>:
>> > Hello, Christopher
>> >
>> >>The stack trace doesn't include anything from Accumulo, so it's not
>> >> clear where in the Accumulo code this occurred. Do you have the full stack
>> >> trace?
>> > Yes, I understand the stack trace doesn't include anything from Accumulo.
>> > I don't have the full stack trace now, but I will try to find it.
>> >
>> > In addition, I run Accumulo on an AWS EMR cluster for an enterprise
>> > production system, so the log level isn't DEBUG because of disk capacity
>> > constraints.
>> > I will try to reproduce the problem at the DEBUG log level.
>> >
>> > Thank you for your reply,
>> > Takashi
>> >
>> > 2016-12-04 18:00 GMT+09:00 Christopher <ct...@apache.org>:
>> >> The stack trace doesn't include anything from Accumulo, so it's not
>> >> clear
>> >> where in the Accumulo code this occurred. Do you have the full stack
>> >> trace?
>> >>
>> >> In particular, it's not clear to me that there should be a directory
>> >> called
>> >> failed/da at that location, nor is it clear why Accumulo would be
>> >> trying to
>> >> check for the execute permission on it, unless it's trying to recurse
>> >> into a
>> >> directory. There is one part of the code where, if the directory exists
>> >> when
>> >> log recovery begins, it may try to do a recursive delete, but I can't
>> >> see
>> >> how this location would have been created by Accumulo. If that is the
>> >> case,
>> >> then it should be safe to manually delete this directory and its
>> >> contents.
>> >> The failed marker should be a regular file, though, and should not be a
>> >> directory with another directory called "da" in it. So, I can't see how
>> >> this
>> >> was even created, unless by an older version or another program.
>> >>
>> >> The only way I can see this occurring is if you recently did an
>> >> upgrade,
>> >> while Accumulo had not yet finished outstanding log recoveries from a
>> >> previous shutdown, AND the previous version did something different
>> >> than
>> >> 1.7.2. If that was the case, then perhaps the older version could have
>> >> created this problematic directory. It seems unlikely, though...
>> >> because
>> >> directories are usually not created without the execute bit... and the
>> >> error
>> >> message looks like a directory missing that bit.
>> >>
>> >> It's hard to know more without seeing the full stack trace with the
>> >> relevant
>> >> accumulo methods included. It might also help to see the master debug
>> >> logs
>> >> leading up to the error.
>> >>
>> >> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <ts...@gmail.com>
>> >> wrote:
>> >>>
>> >>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>> >>>
>> >>> The Master server suddenly threw an AccessControlException.
>> >>>
>> >>> java.io.IOException:
>> >>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >>> user=accumulo, access=EXECUTE,
>> >>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>> >>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>> >>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> >>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>> >>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>> >>>  at java.security.AccessController.doPrivileged(Native Method)
>> >>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>> >>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>> >>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>> >>>
>> >>>
>> >>> How can I solve this Exception?
>> >>>
>> >>>
>> >>> Thank you,
>> >>> Takashi.

Re: Master server throws AccessControlException

Posted by Josh Elser <el...@apache.org>.
Apache mailing lists strip attachments. Please host the files somewhere and
provide a link to them.

On Dec 4, 2016 20:54, "Takashi Sasaki" <ts...@gmail.com> wrote:

> Hello,
>
> I'm sorry, I gave some wrong information in my first post.
>
> I asked the project members again about the problem.
> Master server did not throw AccessControlException.
>
> Actually, the TabletServer threw the AccessControlException.
> Also, the stack trace was missing words and had the wrong path.
>
> The correct full stack trace is at line 52 of the attached file "tserver.log".
> I also attach "master.log" for your reference.
>
> Unfortunately, I still could not get a debug log.
>
> Thank you for your support,
> Takashi
>
>
> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki <ts...@gmail.com>:
> > Hello, Christopher
> >
> >>The stack trace doesn't include anything from Accumulo, so it's not
> clear where in the Accumulo code this occurred. Do you have the full stack
> trace?
> > Yes, I understand the stack trace doesn't include anything from Accumulo.
> > I don't have the full stack trace now, but I will try to find it.
> >
> > In addition, I run Accumulo on an AWS EMR cluster for an enterprise
> > production system, so the log level isn't DEBUG because of disk capacity
> > constraints.
> > I will try to reproduce the problem at the DEBUG log level.
> >
> > Thank you for your reply,
> > Takashi
> >
> > 2016-12-04 18:00 GMT+09:00 Christopher <ct...@apache.org>:
> >> The stack trace doesn't include anything from Accumulo, so it's not
> clear
> >> where in the Accumulo code this occurred. Do you have the full stack
> trace?
> >>
> >> In particular, it's not clear to me that there should be a directory
> called
> >> failed/da at that location, nor is it clear why Accumulo would be
> trying to
> >> check for the execute permission on it, unless it's trying to recurse
> into a
> >> directory. There is one part of the code where, if the directory exists
> when
> >> log recovery begins, it may try to do a recursive delete, but I can't
> see
> >> how this location would have been created by Accumulo. If that is the
> case,
> >> then it should be safe to manually delete this directory and its
> contents.
> >> The failed marker should be a regular file, though, and should not be a
> >> directory with another directory called "da" in it. So, I can't see how
> this
> >> was even created, unless by an older version or another program.
> >>
> >> The only way I can see this occurring is if you recently did an upgrade,
> >> while Accumulo had not yet finished outstanding log recoveries from a
> >> previous shutdown, AND the previous version did something different than
> >> 1.7.2. If that was the case, then perhaps the older version could have
> >> created this problematic directory. It seems unlikely, though... because
> >> directories are usually not created without the execute bit... and the
> error
> >> message looks like a directory missing that bit.
> >>
> >> It's hard to know more without seeing the full stack trace with the
> relevant
> >> accumulo methods included. It might also help to see the master debug
> logs
> >> leading up to the error.
> >>
> >> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <ts...@gmail.com>
> wrote:
> >>>
> >>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
> >>>
> >>> The Master server suddenly threw an AccessControlException.
> >>>
> >>> java.io.IOException:
> >>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>> user=accumulo, access=EXECUTE,
> >>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
> >>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
> >>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
> >>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
> >>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >>>  at java.security.AccessController.doPrivileged(Native Method)
> >>>  at javax.security.auth.Subject.doAs(Subject.java:422)
> >>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >>>
> >>>
> >>> How can I solve this Exception?
> >>>
> >>>
> >>> Thank you,
> >>> Takashi.
>

Re: Master server throws AccessControlException

Posted by Takashi Sasaki <ts...@gmail.com>.
Hello,

I'm sorry, I gave some wrong information in my first post.

I asked the project members again about the problem.
Master server did not throw AccessControlException.

Actually, the TabletServer threw the AccessControlException.
Also, the stack trace was missing words and had the wrong path.

The correct full stack trace is at line 52 of the attached file "tserver.log".
I also attach "master.log" for your reference.

Unfortunately, I still could not get a debug log.

Thank you for your support,
Takashi


2016-12-04 18:33 GMT+09:00 Takashi Sasaki <ts...@gmail.com>:
> Hello, Christopher
>
>>The stack trace doesn't include anything from Accumulo, so it's not clear where in the Accumulo code this occurred. Do you have the full stack trace?
> Yes, I understand the stack trace doesn't include anything from Accumulo.
> I don't have the full stack trace now, but I will try to find it.
>
> In addition, I run Accumulo on an AWS EMR cluster for an enterprise
> production system, so the log level isn't DEBUG because of disk capacity
> constraints.
> I will try to reproduce the problem at the DEBUG log level.
>
> Thank you for your reply,
> Takashi
>
> 2016-12-04 18:00 GMT+09:00 Christopher <ct...@apache.org>:
>> The stack trace doesn't include anything from Accumulo, so it's not clear
>> where in the Accumulo code this occurred. Do you have the full stack trace?
>>
>> In particular, it's not clear to me that there should be a directory called
>> failed/da at that location, nor is it clear why Accumulo would be trying to
>> check for the execute permission on it, unless it's trying to recurse into a
>> directory. There is one part of the code where, if the directory exists when
>> log recovery begins, it may try to do a recursive delete, but I can't see
>> how this location would have been created by Accumulo. If that is the case,
>> then it should be safe to manually delete this directory and its contents.
>> The failed marker should be a regular file, though, and should not be a
>> directory with another directory called "da" in it. So, I can't see how this
>> was even created, unless by an older version or another program.
>>
>> The only way I can see this occurring is if you recently did an upgrade,
>> while Accumulo had not yet finished outstanding log recoveries from a
>> previous shutdown, AND the previous version did something different than
>> 1.7.2. If that was the case, then perhaps the older version could have
>> created this problematic directory. It seems unlikely, though... because
>> directories are usually not created without the execute bit... and the error
>> message looks like a directory missing that bit.
>>
>> It's hard to know more without seeing the full stack trace with the relevant
>> accumulo methods included. It might also help to see the master debug logs
>> leading up to the error.
>>
>> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <ts...@gmail.com> wrote:
>>>
>>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>>>
>>> The Master server suddenly threw an AccessControlException.
>>>
>>> java.io.IOException:
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=accumulo, access=EXECUTE,
>>>
>>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>
>>>
>>> How can I solve this Exception?
>>>
>>>
>>> Thank you,
>>> Takashi.

Re: Master server throws AccessControlException

Posted by Takashi Sasaki <ts...@gmail.com>.
Hello, Christopher

>The stack trace doesn't include anything from Accumulo, so it's not clear where in the Accumulo code this occurred. Do you have the full stack trace?
Yes, I understand the stack trace doesn't include anything from Accumulo.
I don't have the full stack trace now, but I will try to find it.

In addition, I run Accumulo on an AWS EMR cluster for an enterprise
production system, so the log level isn't DEBUG because of disk capacity
constraints.
I will try to reproduce the problem at the DEBUG log level.

Thank you for your reply,
Takashi

2016-12-04 18:00 GMT+09:00 Christopher <ct...@apache.org>:
> The stack trace doesn't include anything from Accumulo, so it's not clear
> where in the Accumulo code this occurred. Do you have the full stack trace?
>
> In particular, it's not clear to me that there should be a directory called
> failed/da at that location, nor is it clear why Accumulo would be trying to
> check for the execute permission on it, unless it's trying to recurse into a
> directory. There is one part of the code where, if the directory exists when
> log recovery begins, it may try to do a recursive delete, but I can't see
> how this location would have been created by Accumulo. If that is the case,
> then it should be safe to manually delete this directory and its contents.
> The failed marker should be a regular file, though, and should not be a
> directory with another directory called "da" in it. So, I can't see how this
> was even created, unless by an older version or another program.
>
> The only way I can see this occurring is if you recently did an upgrade,
> while Accumulo had not yet finished outstanding log recoveries from a
> previous shutdown, AND the previous version did something different than
> 1.7.2. If that was the case, then perhaps the older version could have
> created this problematic directory. It seems unlikely, though... because
> directories are usually not created without the execute bit... and the error
> message looks like a directory missing that bit.
>
> It's hard to know more without seeing the full stack trace with the relevant
> accumulo methods included. It might also help to see the master debug logs
> leading up to the error.
>
> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <ts...@gmail.com> wrote:
>>
>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>>
>> The Master server suddenly threw an AccessControlException.
>>
>> java.io.IOException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=accumulo, access=EXECUTE,
>>
>> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>  at java.security.AccessController.doPrivileged(Native Method)
>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>
>>
>> How can I solve this Exception?
>>
>>
>> Thank you,
>> Takashi.

Re: Master server throws AccessControlException

Posted by Christopher <ct...@apache.org>.
The stack trace doesn't include anything from Accumulo, so it's not clear
where in the Accumulo code this occurred. Do you have the full stack trace?

In particular, it's not clear to me that there should be a directory called
failed/da at that location, nor is it clear why Accumulo would be trying to
check for the execute permission on it, unless it's trying to recurse into
a directory. There is one part of the code where, if the directory exists
when log recovery begins, it may try to do a recursive delete, but I can't
see how this location would have been created by Accumulo. If that is the
case, then it should be safe to manually delete this directory and its
contents. The failed marker should be a regular file, though, and should
not be a directory with another directory called "da" in it. So, I can't
see how this was even created, unless by an older version or another
program.
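
If it is that recursive-delete case, a minimal cleanup sketch (the UUID
path comes from the error message; make sure no log recovery is in
flight before deleting anything):

hdfs dfs -ls -R /accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952
hdfs dfs -rm -r /accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed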

The only way I can see this occurring is if you recently did an upgrade,
while Accumulo had not yet finished outstanding log recoveries from a
previous shutdown, AND the previous version did something different than
1.7.2. If that was the case, then perhaps the older version could have
created this problematic directory. It seems unlikely, though... because
directories are usually not created without the execute bit... and the
error message looks like a directory missing that bit.

It's hard to know more without seeing the full stack trace with the
relevant accumulo methods included. It might also help to see the master
debug logs leading up to the error.
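
If disk capacity is what blocks DEBUG logging, one option is to raise
only the Accumulo loggers and cap log growth; a hypothetical log4j 1.x
snippet (the appender name A1 and the file path are placeholders, not
the stock Accumulo configuration):

log4j.logger.org.apache.accumulo=DEBUG
# RollingFileAppender bounds disk usage at MaxFileSize * (MaxBackupIndex + 1)
log4j.appender.A1=org.apache.log4j.RollingFileAppender
log4j.appender.A1.File=/var/log/accumulo/debug.log
log4j.appender.A1.MaxFileSize=100MB
log4j.appender.A1.MaxBackupIndex=5
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d{ISO8601} [%c{2}] %-5p: %m%n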

On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <ts...@gmail.com> wrote:

> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>
> The Master server suddenly threw an AccessControlException.
>
> java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=accumulo, access=EXECUTE,
>
> inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>
> How can I solve this Exception?
>
>
> Thank you,
> Takashi.
>