Posted to issues@carbondata.apache.org by "Venkata Ramana G (JIRA)" <ji...@apache.org> on 2017/09/04 14:41:00 UTC

[jira] [Commented] (CARBONDATA-1305) On creating the dictionary with a large dictionary csv, NegativeArraySizeException is thrown

    [ https://issues.apache.org/jira/browse/CARBONDATA-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152672#comment-16152672 ] 

Venkata Ramana G commented on CARBONDATA-1305:
----------------------------------------------

Analysis: During dictionary creation, the dictionary values are kept in a HashSet. When the HashSet grows beyond 500000000 entries, this exception is thrown.

Solution: Limit the number of dictionary values to 10000000.
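The failure mode described above is a known consequence of int arithmetic in Java collections: once a backing array's capacity is doubled past Integer.MAX_VALUE, the computed size wraps to a negative value, and allocating an array with that size throws NegativeArraySizeException. A minimal standalone sketch (this is illustrative, not CarbonData's actual dictionary code):

```java
// Demonstrates how doubling an int capacity past Integer.MAX_VALUE
// overflows to a negative value, so the subsequent array allocation
// throws NegativeArraySizeException -- the same class of failure a
// HashSet hits when it resizes beyond its maximum capacity.
public class NegativeArraySizeDemo {
    public static void main(String[] args) {
        int capacity = 1 << 30;      // 1073741824, largest power-of-two int
        int doubled = capacity << 1; // overflows to -2147483648
        System.out.println("doubled capacity = " + doubled);
        try {
            int[] backing = new int[doubled]; // negative size -> throws
        } catch (NegativeArraySizeException e) {
            System.out.println("caught NegativeArraySizeException");
        }
    }
}
```

This is why capping the number of dictionary values well below the collection's maximum capacity avoids the exception entirely, rather than catching it after the fact.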

> On creating the dictionary with a large dictionary csv, NegativeArraySizeException is thrown
> --------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1305
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1305
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Kunal Kapoor
>            Assignee: Kunal Kapoor
>          Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce: 
> 1. create table table1 (a string, b bigint) stored by 'carbondata';
> 2. LOAD DATA inpath 'hdfs://hacluster/user/xyz/datafile_0.csv' into table table1 options('DELIMITER'=',', 'QUOTECHAR'='"','COLUMNDICT'='a:hdfs://hacluster/user/xyz/dict.csv','FILEHEADER'='a,b','SINGLE_PASS'='TRUE');



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)