Posted to issues@kylin.apache.org by "Shaofeng SHI (JIRA)" <ji...@apache.org> on 2016/12/29 05:37:58 UTC

[jira] [Commented] (KYLIN-2328) Reduce the size of metadata uploaded to distributed cache

    [ https://issues.apache.org/jira/browse/KYLIN-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15784572#comment-15784572 ] 

Shaofeng SHI commented on KYLIN-2328:
-------------------------------------

Hi Dayue, you're correct: KafkaFlatTableJob doesn't need cube metadata; it just persists the messages from Kafka to HDFS.

> Reduce the size of metadata uploaded to distributed cache
> ---------------------------------------------------------
>
>                 Key: KYLIN-2328
>                 URL: https://issues.apache.org/jira/browse/KYLIN-2328
>             Project: Kylin
>          Issue Type: Improvement
>          Components: Job Engine
>    Affects Versions: all
>            Reporter: Dayue Gao
>            Assignee: Dayue Gao
>             Fix For: v2.0.0
>
>         Attachments: KYLIN-2328.patch
>
>
> Currently, each MR job uploads all the metadata belonging to a cube to distributed cache. When the total size of metadata increases, the submission time ("MapReduce Waiting" at Monitor UI) also increases and could become a significant problem.
> We could actually optimize the amount of metadata uploaded according to the type of job, for example:
> * CuboidJob only needs dictionary of the building segment
> * CubeHFileJob doesn't need any dictionary
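
The per-job selection described in the issue (plus the KafkaFlatTableJob case from the comment above) could be sketched roughly as follows. All names here (MetadataSelector, JobType, requiredMetadata, the path strings) are hypothetical illustrations, not Kylin's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: decide which metadata entries a given MR job type
// actually needs to upload to the distributed cache, instead of uploading
// everything belonging to the cube.
public class MetadataSelector {
    enum JobType { CUBOID, CUBE_HFILE, KAFKA_FLAT_TABLE }

    // Returns the (illustrative) metadata resource paths a job should upload.
    static List<String> requiredMetadata(JobType job, String buildingSegment) {
        List<String> paths = new ArrayList<>();
        switch (job) {
            case CUBOID:
                // CuboidJob needs the cube descriptor plus only the
                // dictionaries of the segment currently being built.
                paths.add("cube_desc");
                paths.add("dict/" + buildingSegment);
                break;
            case CUBE_HFILE:
                // CubeHFileJob converts cuboid output to HFiles and
                // needs no dictionary at all.
                paths.add("cube_desc");
                break;
            case KAFKA_FLAT_TABLE:
                // KafkaFlatTableJob only persists Kafka messages to HDFS,
                // so no cube metadata is required.
                break;
        }
        return paths;
    }

    public static void main(String[] args) {
        System.out.println(requiredMetadata(JobType.CUBOID, "seg_1"));
        System.out.println(requiredMetadata(JobType.CUBE_HFILE, "seg_1"));
        System.out.println(requiredMetadata(JobType.KAFKA_FLAT_TABLE, "seg_1"));
    }
}
```

Uploading only these paths keeps the distributed-cache payload proportional to what each job reads, which is what shrinks the "MapReduce Waiting" time as overall metadata grows.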



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)