Posted to dev@carbondata.apache.org by Liang Chen <ch...@apache.org> on 2018/02/02 15:33:04 UTC

Re: MODERATE for dev@carbondata.apache.org

Hi

Please join the Carbon mailing list: send a mail to
dev-subscribe@carbondata.apache.org and follow the guide to join.
Please find my replies inline.

1. No multi-level partitions; we need three partition levels, such as
year, day, hour.

Reply: Do year, day, and hour belong to one column (field) or three columns?
Can you explain your exact scenario? We can help you design the partition +
sort columns to solve your specific query issues; a rough sketch is below.
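
To make that concrete, here is a minimal sketch of one possible partition +
sort-column design, assuming the year/day/hour values can be split into
separate columns. The table, column, and store-path names (sales, event_day,
event_hour, /user/hive/carbon.store) are made up for illustration, and the
syntax follows the CarbonData 1.3 / Spark 2.1 style:

  // Illustrative sketch: one day-level partition column plus SORT_COLUMNS on
  // the hour often replaces nested year/day/hour partitions for pruning.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  val spark = SparkSession
    .builder()
    .appName("carbon-partition-sketch")
    .getOrCreateCarbonSession("/user/hive/carbon.store")

  // Standard (Hive-style) partitioning was introduced in CarbonData 1.3.
  spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
      id BIGINT,
      event_hour INT,
      amount DOUBLE
    )
    PARTITIONED BY (event_day STRING)
    STORED BY 'carbondata'
    TBLPROPERTIES ('SORT_COLUMNS' = 'event_hour')
  """)

  // Typical query: partition pruning on event_day, sort-column filter on hour.
  spark.sql(
    "SELECT sum(amount) FROM sales " +
    "WHERE event_day = '2018-02-01' AND event_hour BETWEEN 9 AND 11").show()

The idea is that a single partition column plus sort columns usually covers
year/day/hour-style filters without needing three nested partition levels.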

2. Spark needs to import the CarbonData jar; we don't want to modify our
existing SQL logic.

Reply: There is no need to modify any SQL rules; any SQL supported by
SparkSQL can be used to query CarbonData as-is. A small sketch follows below.
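
As a hedged illustration (continuing the sketch above; the jar name is
illustrative, use the assembly jar matching your Spark/Hadoop build): once the
CarbonData jar is on Spark's classpath and a CarbonSession is created, plain
SparkSQL statements run unchanged against carbon tables, with no custom rules
or query rewrites:

  // Put the CarbonData assembly jar on the classpath, e.g. (name illustrative):
  //   spark-shell --jars apache-carbondata-1.3.0-bin-spark2.1.0-hadoop2.6.0.jar
  // Ordinary SparkSQL (joins, aggregations, window functions, ...) then works
  // as usual on the carbon table created in the previous sketch.
  val daily = spark.sql("""
    SELECT event_day, count(*) AS rows_per_day, avg(amount) AS avg_amount
    FROM sales
    GROUP BY event_day
    ORDER BY event_day
  """)
  daily.show()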

3. Low stability; inserts fail frequently.
Reply: What is the exact error?

Regards
Liang


2018-02-02 11:30 GMT+08:00 <
dev-reject-1517542240.15377.bipdjkoklbfkadkmgaap@carbondata.apache.org>:

>
> ---------- Forwarded message ----------
> From: ilegend <51...@qq.com>
> To: dev@carbondata.apache.org
> Cc:
> Bcc:
> Date: Fri, 2 Feb 2018 11:30:24 +0800
> Subject: Help, carbondata issues on spark
> Hi guys,
> We're testing CarbonData for our project. CarbonData performs better than
> Parquet for certain query patterns, but we have run into some problems.
> Do you have any solutions for our issues?
> Environment: HDFS 2.6, Spark 2.1, CarbonData 1.3
> 1. No multi-level partitions; we need three partition levels, such as
> year, day, hour.
> 2. Spark needs to import the CarbonData jar; we don't want to modify our
> existing SQL logic.
> 3. Low stability; inserts fail frequently.
>
> Looking forward to your reply.
>
> Sent from my iPhone