Posted to notifications@accumulo.apache.org by "Adam Fuchs (JIRA)" <ji...@apache.org> on 2014/11/07 23:44:33 UTC

[jira] [Resolved] (ACCUMULO-3303) funky performance with large WAL

     [ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Fuchs resolved ACCUMULO-3303.
----------------------------------
    Resolution: Won't Fix

The issue exists in HDFS code; filed HDFS-7380 upstream. Workaround: set tserver.wal.blocksize=2G whenever tserver.walog.max.size is larger than 2G, as sketched below.
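For anyone applying the workaround, a minimal sketch of the corresponding accumulo-site.xml entries follows. Only the two property names come from this issue; the 4G value for tserver.walog.max.size is an illustrative "bigger than 2G" setting, not a recommendation.

{code}
<!-- accumulo-site.xml (sketch): cap the WAL's HDFS block size at 2G
     while still allowing a walog larger than 2G -->
<property>
  <name>tserver.walog.max.size</name>
  <value>4G</value>  <!-- example value; anything over 2G is what triggers the workaround -->
</property>
<property>
  <name>tserver.wal.blocksize</name>
  <value>2G</value>  <!-- keep at or below 2G to avoid the HDFS-7380 behavior -->
</property>
{code}

The same override can also be applied to a running system from the Accumulo shell, e.g. config -s tserver.wal.blocksize=2G, assuming the standard config command.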

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large write-ahead log. I ran some continuous ingest tests varying tserver.walog.max.size over {512M, 1G, 2G, 4G, 8G} and got results that I have yet to understand. I was expecting to see the effects of walog metadata management as described in ACCUMULO-2889, but I also found an additional behavior: ingest slows down for long periods when using a large walog size.
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)