Posted to issues@ozone.apache.org by "Attila Doroszlai (Jira)" <ji...@apache.org> on 2020/06/03 15:02:00 UTC

[jira] [Resolved] (HDDS-3694) Reduce dn-audit log

     [ https://issues.apache.org/jira/browse/HDDS-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Attila Doroszlai resolved HDDS-3694.
------------------------------------
    Fix Version/s: 0.6.0
       Resolution: Fixed

> Reduce dn-audit log
> -------------------
>
>                 Key: HDDS-3694
>                 URL: https://issues.apache.org/jira/browse/HDDS-3694
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Rajesh Balamohan
>            Assignee: Dinesh Chitlangia
>            Priority: Critical
>              Labels: Triaged, performance, pull-request-available
>             Fix For: 0.6.0
>
>         Attachments: write_to_dn_audit_causing_high_disk_util.png
>
>
> Do we really need such a fine-grained audit log? It ends up creating one entry per chunk, which results in far too many entries (one possible way to reduce this is sketched after the graph below).
> {noformat}
> 2020-05-31 23:31:48,477 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,482 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323565871437 bcsId: 93940} | ret=SUCCESS |
> 2020-05-31 23:31:48,487 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,497 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324172472725 bcsId: 93934} | ret=SUCCESS |
> 2020-05-31 23:31:48,501 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323675906396 bcsId: 93958} | ret=SUCCESS |
> 2020-05-31 23:31:48,504 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,509 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323685343583 bcsId: 93974} | ret=SUCCESS |
> 2020-05-31 23:31:48,512 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324172472725 bcsId: 93934} | ret=SUCCESS |
> 2020-05-31 23:31:48,516 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324332380586 bcsId: 0} | ret=SUCCESS |
> 2020-05-31 23:31:48,726 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324232634780 bcsId: 93964} | ret=SUCCESS |
> 2020-05-31 23:31:48,733 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323976323460 bcsId: 93967} | ret=SUCCESS |
> 2020-05-31 23:31:48,740 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324131512723 bcsId: 93952} | ret=SUCCESS |
> 2020-05-31 23:31:48,752 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,760 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323675906396 bcsId: 93958} | ret=SUCCESS |
> 2020-05-31 23:31:48,772 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323685343583 bcsId: 93974} | ret=SUCCESS |
> 2020-05-31 23:31:48,780 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267324304724389 bcsId: 0} | ret=SUCCESS |
> 2020-05-31 23:31:48,787 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267323991724421 bcsId: 93970} | ret=SUCCESS |
> 2020-05-31 23:31:48,794 | INFO  | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267323725189479 bcsId: 93963} | ret=SUCCESS |
>  {noformat}
> This also ends up saturating disk utilization while delivering lower write throughput (MB/sec).
> Refer to the attached graph: 100+ writes/sec are saturating the entire disk at only 0.52 MB/sec.
> !write_to_dn_audit_causing_high_disk_util.png|width=726,height=300!
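> 
> A minimal sketch of one way to reduce this volume (not the actual patch; class and method names here are illustrative assumptions): demote per-chunk audit entries such as WRITE_CHUNK to DEBUG, so the default INFO-level audit configuration suppresses them while block-level operations keep their INFO entries.
> {code:java}
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> 
> // Illustrative sketch only: per-chunk operations go to DEBUG, block-level
> // operations stay at INFO, so the default configuration drops the chunk noise.
> public class DatanodeAuditSketch {
>   private static final Logger AUDIT = LogManager.getLogger("DNAudit");
> 
>   // Per-chunk operation: emitted only when DEBUG is enabled for DNAudit.
>   void auditWriteChunk(long containerId, long localId, long bcsId) {
>     if (AUDIT.isDebugEnabled()) {
>       AUDIT.debug("op=WRITE_CHUNK {blockData=conID: {} locID: {} bcsId: {}} | ret=SUCCESS",
>           containerId, localId, bcsId);
>     }
>   }
> 
>   // Block-level operation: still audited at INFO.
>   void auditPutBlock(long containerId, long localId, long bcsId) {
>     AUDIT.info("op=PUT_BLOCK {blockData=conID: {} locID: {} bcsId: {}} | ret=SUCCESS",
>         containerId, localId, bcsId);
>   }
> }
> {code}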
>  
> Also, the username and IP are currently logged as null. These should be populated from the gRPC connection details.
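> A minimal sketch of how the peer IP could be obtained from gRPC (an assumption, not the Ozone implementation; wiring it into the audit message builder is omitted):
> {code:java}
> import io.grpc.Grpc;
> import io.grpc.Metadata;
> import io.grpc.ServerCall;
> import io.grpc.ServerCallHandler;
> import io.grpc.ServerInterceptor;
> 
> import java.net.SocketAddress;
> 
> // Illustrative interceptor: reads the remote peer address so the audit logger
> // can record a real IP instead of null.
> public class RemoteAddressInterceptor implements ServerInterceptor {
>   @Override
>   public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
>       ServerCall<ReqT, RespT> call, Metadata headers,
>       ServerCallHandler<ReqT, RespT> next) {
>     // TRANSPORT_ATTR_REMOTE_ADDR is populated by the gRPC transport layer.
>     SocketAddress remote = call.getAttributes().get(Grpc.TRANSPORT_ATTR_REMOTE_ADDR);
>     String ip = (remote == null) ? "unknown" : remote.toString();
>     // A real implementation would stash this in a gRPC Context or pass it to
>     // the audit message builder; printing it is just for illustration.
>     System.out.println("audit ip=" + ip);
>     return next.startCall(call, headers);
>   }
> }
> {code}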



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
