Posted to issues@kylin.apache.org by "hailin.huang (Jira)" <ji...@apache.org> on 2019/10/21 03:08:00 UTC
[jira] [Created] (KYLIN-4200) SparkCubingByLayer is not robust when using CubeStatsReader to estimateLayerPartitionNum
hailin.huang created KYLIN-4200:
-----------------------------------
Summary: SparkCubingByLayer is not robust when using CubeStatsReader to estimateLayerPartitionNum
Key: KYLIN-4200
URL: https://issues.apache.org/jira/browse/KYLIN-4200
Project: Kylin
Issue Type: Bug
Components: Job Engine
Affects Versions: v2.6.4
Reporter: hailin.huang
In our production environment, I observed the following scenario:
if a cube has a Bitmap (count distinct) measure, the Spark tasks often get stuck on jobs 0 and 1, which compute layers 0 and 1.
After analyzing the Spark logs, I found that Spark uses CubeStatsReader to estimate the partition number for each layer. If layer 0 contains cuboid 255 with an estimated size of 10 MB, the default configuration parameters yield a partition count of 1, which is far too small for the actual amount of data.
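To illustrate the arithmetic (a minimal sketch, not Kylin's actual code: the constant name and method signature below are assumptions standing in for the size-based estimate that CubeStatsReader performs):

```java
public class LayerPartitionEstimate {
    // Assumed cut size per partition in MB, mirroring the default
    // kylin.engine.spark.rdd-partition-cut-mb style of configuration.
    static final double PARTITION_CUT_MB = 10.0;

    // Hypothetical helper: one partition per PARTITION_CUT_MB of the
    // *estimated* layer size, with a floor of 1 partition.
    static int estimatePartitionNum(double estimatedLayerSizeMB) {
        return Math.max((int) Math.ceil(estimatedLayerSizeMB / PARTITION_CUT_MB), 1);
    }

    public static void main(String[] args) {
        // A 10 MB estimate for layer 0 yields just 1 partition, even though
        // Bitmap measures can make the real data far larger than the estimate.
        System.out.println(estimatePartitionNum(10.0));
    }
}
```

Under this sketch, any layer whose size is underestimated (as happens with Bitmap measures) is squeezed into too few partitions, which matches the blocked jobs 0 and 1 described above.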
--
This message was sent by Atlassian Jira
(v8.3.4#803005)