Posted to issues@hawq.apache.org by "Lei Chang (JIRA)" <ji...@apache.org> on 2016/01/24 02:18:40 UTC
[jira] [Updated] (HAWQ-295) New metadata flush strategy remove 1
entry every time flush due to flush condition wrong.
[ https://issues.apache.org/jira/browse/HAWQ-295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lei Chang updated HAWQ-295:
---------------------------
Fix Version/s: 2.0.0
> New metadata flush strategy remove 1 entry every time flush due to flush condition wrong.
> -----------------------------------------------------------------------------------------
>
> Key: HAWQ-295
> URL: https://issues.apache.org/jira/browse/HAWQ-295
> Project: Apache HAWQ
> Issue Type: Bug
> Reporter: Xiang Sheng
> Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In the new metadata flush strategy, the metadata cache removes only 1 entry at each metadata check, because the cache entry ratio was calculated with the wrong data type.
> Besides, it is better to change max_hdfs_file_num to 32K rather than 128K.
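> The report does not show the HAWQ source, but the class of bug it describes (a ratio computed with an integer type, truncating to 0, so the flush loop falls back to evicting a single entry) can be sketched as follows. All names here (entries_to_flush_buggy, entries_to_flush_fixed, and the parameters) are hypothetical illustrations, not HAWQ code:
>
> ```c
> #include <stdio.h>
>
> /* Buggy variant: free_entries / total_entries is integer division,
>  * which truncates to 0 whenever free_entries < total_entries, so the
>  * computed batch size collapses and only the 1-entry fallback runs. */
> static int entries_to_flush_buggy(int free_entries, int total_entries)
> {
>     int ratio = free_entries / total_entries;  /* 0 for any partial ratio */
>     int n = ratio * total_entries;             /* therefore 0 */
>     return n > 0 ? n : 1;                      /* falls back to 1 per flush */
> }
>
> /* Fixed variant: compute the ratio in floating point before scaling. */
> static int entries_to_flush_fixed(int free_entries, int total_entries)
> {
>     double ratio = (double)free_entries / (double)total_entries;
>     int n = (int)(ratio * total_entries);
>     return n > 0 ? n : 1;
> }
>
> int main(void)
> {
>     /* Wanting 1000 slots freed out of 4096 cached entries: */
>     printf("buggy: %d\n", entries_to_flush_buggy(1000, 4096));  /* 1 */
>     printf("fixed: %d\n", entries_to_flush_fixed(1000, 4096));  /* 1000 */
>     return 0;
> }
> ```
>
> With the truncated ratio, each flush frees one entry regardless of pressure, which matches the symptom in the summary.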
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)