Posted to dev@ambari.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2015/12/04 09:28:10 UTC
[jira] [Commented] (AMBARI-14140) Test and Adopt FIFO compaction policy for AMS high load tables
[ https://issues.apache.org/jira/browse/AMBARI-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041272#comment-15041272 ]
Hadoop QA commented on AMBARI-14140:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12775655/AMBARI-14140.patch
against trunk revision .
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/4482//console
This message is automatically generated.
> Test and Adopt FIFO compaction policy for AMS high load tables
> --------------------------------------------------------------
>
> Key: AMBARI-14140
> URL: https://issues.apache.org/jira/browse/AMBARI-14140
> Project: Ambari
> Issue Type: Bug
> Components: ambari-metrics
> Reporter: Aravindan Vijayan
> Assignee: Aravindan Vijayan
> Priority: Critical
> Fix For: 2.2.0
>
> Attachments: AMBARI-14140.patch
>
>
> FIFO compaction policy selects only store files in which all cells have expired. The column family MUST have a non-default TTL.
> Essentially, the FIFO compactor does only one job: it collects expired store files.
> Because no real compaction work is performed, we use no extra CPU or I/O (disk and network) and do not evict hot data from the block cache. The result: improved throughput and latency for both writes and reads.
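The selection rule described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not AMS or HBase code): a store file is collectible only when its *newest* cell is older than the TTL, which guarantees that every cell in the file has expired. The file representation and function name here are assumptions for the sketch.

```python
import time

def select_expired_files(store_files, ttl_seconds, now=None):
    """FIFO-style selection sketch: pick only files whose newest cell is past TTL.

    store_files: list of dicts with 'name' and 'max_timestamp' (epoch seconds).
    A file qualifies only when ALL of its cells are expired, i.e. the newest
    cell timestamp falls before now - ttl_seconds. No rewriting is done;
    qualifying files can simply be deleted.
    """
    if now is None:
        now = time.time()
    cutoff = now - ttl_seconds
    return [f["name"] for f in store_files if f["max_timestamp"] < cutoff]

# Example: with a 300s TTL at time 1000, only the file whose newest
# cell is at t=100 is fully expired; the t=900 file still holds live data.
files = [
    {"name": "hfile-a", "max_timestamp": 100},
    {"name": "hfile-b", "max_timestamp": 900},
]
print(select_expired_files(files, ttl_seconds=300, now=1000))  # -> ['hfile-a']
```

Because nothing is merged or rewritten, the hot (unexpired) files are never touched, which is where the CPU, I/O, and block-cache savings come from.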
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)