Posted to issues@hawq.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/01/04 05:27:39 UTC
[jira] [Commented] (HAWQ-295) New metadata flush strategy removes only 1
entry per flush due to a wrong flush condition.
[ https://issues.apache.org/jira/browse/HAWQ-295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080684#comment-15080684 ]
ASF GitHub Bot commented on HAWQ-295:
-------------------------------------
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-hawq/pull/231
> New metadata flush strategy removes only 1 entry per flush due to a wrong flush condition.
> --------------------------------------------------------------------------------------------
>
> Key: HAWQ-295
> URL: https://issues.apache.org/jira/browse/HAWQ-295
> Project: Apache HAWQ
> Issue Type: Bug
> Reporter: Xiang Sheng
> Assignee: Lei Chang
>
> In the new metadata flush strategy, the cache removes only 1 entry on every metadata check, because the cache entry ratio was calculated with the wrong data type.
> Besides, it's better to change max_hdfs_file_num to 32k rather than 128k.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)