Posted to dev@ranger.apache.org by "Zsombor Gegesy (JIRA)" <ji...@apache.org> on 2018/03/06 16:48:00 UTC

[jira] [Resolved] (RANGER-1368) different services on the same machine conflict over the shared local audit buffer directory when using HDFS to store audit logs

     [ https://issues.apache.org/jira/browse/RANGER-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zsombor Gegesy resolved RANGER-1368.
------------------------------------
    Resolution: Won't Fix

> different services on the same machine conflict over the shared local audit buffer directory when using HDFS to store audit logs
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: RANGER-1368
>                 URL: https://issues.apache.org/jira/browse/RANGER-1368
>             Project: Ranger
>          Issue Type: Bug
>          Components: audit
>    Affects Versions: 0.6.0, 0.7.0
>         Environment: Linux CentOS 6, Ranger 0.6.0
>            Reporter: zhangxiong
>            Priority: Major
>         Attachments: 0002-make-the-common-dir-created-for-audit-log-local-file.patch
>
>
> When HDFS is configured to store Ranger audit logs, different services on the same machine conflict over access to the shared local audit buffer directory.
> Details:
> Suppose we have a server S with an HDFS NameNode (NN) and a YARN ResourceManager (RM) deployed on it, and Ranger is configured with HDFS as the audit log storage backend and a local log buffer directory /tmp/ranger/log/ set via 'xasecure.audit.hdfs.config.local.buffer.directory'.
> First, the NameNode starts, finds that the directory /tmp/ranger/log/ does not exist, creates it with access mode 700, and continues its own work. Then the ResourceManager starts and needs to store its audit log in the already existing directory; unfortunately, the RM has no access to that directory and cannot work properly.
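
To make the failure mode concrete, here is a minimal sketch of what the two services effectively do when they share the buffer directory described above. This is not the attached patch; the class name, method names, and literal paths are illustrative only.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    public class AuditBufferDirConflictSketch {

        // What the first service (the NameNode) effectively does: create the
        // shared buffer directory with owner-only permissions (mode 700).
        static void createBufferDirAsFirstService(Path bufferDir) throws IOException {
            if (!Files.exists(bufferDir)) {
                Set<PosixFilePermission> ownerOnly =
                        PosixFilePermissions.fromString("rwx------");
                Files.createDirectories(bufferDir,
                        PosixFilePermissions.asFileAttribute(ownerOnly));
            }
        }

        // What the second service (the ResourceManager) effectively does: spool
        // its own audit file into the same directory. When this runs under a
        // different OS user than the one that created the directory, the write
        // is rejected with AccessDeniedException, because mode 700 grants no
        // group or other access.
        static void spoolAuditAsSecondService(Path bufferDir) throws IOException {
            Path spoolFile = bufferDir.resolve("yarn_audit_spool.log");
            Files.write(spoolFile, "audit event".getBytes());
        }

        public static void main(String[] args) throws IOException {
            Path bufferDir = Paths.get("/tmp/ranger/log");
            createBufferDirAsFirstService(bufferDir);
            spoolAuditAsSecondService(bufferDir);
        }
    }

One way to avoid the clash, assuming per-service paths are acceptable in your deployment, is to point each service at its own local buffer directory (for example /tmp/ranger/hdfs/log and /tmp/ranger/yarn/log) instead of a single shared one.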



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)