Posted to issues@kylin.apache.org by "Shaofeng SHI (JIRA)" <ji...@apache.org> on 2018/07/23 05:53:00 UTC
[jira] [Commented] (KYLIN-3462) "dfs.replication=2" and compression not work in Spark cube engine
[ https://issues.apache.org/jira/browse/KYLIN-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552355#comment-16552355 ]
Shaofeng SHI commented on KYLIN-3462:
-------------------------------------
Adding the following configurations to kylin.properties will solve it; you can use them as a temporary workaround:
{code:java}
kylin.engine.spark-conf.spark.hadoop.dfs.replication=2
kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress=true
kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
{code}
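For background (this is not from the original comment): the properties above work because Kylin strips the `kylin.engine.spark-conf.` prefix before passing each entry to the Spark job, and Spark in turn copies any `spark.hadoop.*` property into the job's Hadoop Configuration. A minimal sketch of that two-step prefix stripping, with illustrative class and method names that are not Kylin's actual code:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SparkConfPrefixDemo {
    static final String KYLIN_PREFIX = "kylin.engine.spark-conf.";
    static final String HADOOP_PREFIX = "spark.hadoop.";

    // Step 1: drop the Kylin prefix to obtain the Spark property name.
    // Step 2: drop the "spark.hadoop." prefix to obtain the key that
    // ends up in the job's Hadoop Configuration.
    static Map<String, String> toHadoopConf(Map<String, String> kylinProps) {
        Map<String, String> hadoopConf = new HashMap<>();
        for (Map.Entry<String, String> e : kylinProps.entrySet()) {
            String key = e.getKey();
            if (!key.startsWith(KYLIN_PREFIX)) {
                continue;
            }
            String sparkKey = key.substring(KYLIN_PREFIX.length());
            if (sparkKey.startsWith(HADOOP_PREFIX)) {
                hadoopConf.put(sparkKey.substring(HADOOP_PREFIX.length()), e.getValue());
            }
        }
        return hadoopConf;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("kylin.engine.spark-conf.spark.hadoop.dfs.replication", "2");
        props.put("kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress", "true");

        Map<String, String> conf = toHadoopConf(props);
        System.out.println(conf.get("dfs.replication"));                            // prints 2
        System.out.println(conf.get("mapreduce.output.fileoutputformat.compress")); // prints true
    }
}
{code}

Properties without the `spark.hadoop.` prefix (e.g. `spark.executor.memory`) stay as plain Spark settings, which is why `dfs.replication` set only in the Hadoop site files is not enough here: it must be routed through `spark.hadoop.*` to reach the writer.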
> "dfs.replication=2" and compression not work in Spark cube engine
> -----------------------------------------------------------------
>
> Key: KYLIN-3462
> URL: https://issues.apache.org/jira/browse/KYLIN-3462
> Project: Kylin
> Issue Type: Bug
> Components: Spark Engine
> Affects Versions: v2.3.0, v2.3.1, v2.4.0
> Reporter: Shaofeng SHI
> Priority: Major
> Attachments: cuboid_generated_by_mr.png, cuboid_generated_by_spark.png
>
>
> In a comparison between Spark and MR cubing, I noticed the cuboid files that the Spark engine generated were 3x larger than MR's, and took 4x more disk space on HDFS than MR.
>
> The reason is that "dfs.replication=2" didn't take effect when Spark saved to HDFS, and by default Spark applies no compression.
>
> The converted HFiles are the same size, and the query results are the same, so this difference can easily be overlooked.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)