Posted to dev@carbondata.apache.org by Liang Big data <ch...@gmail.com> on 2016/07/01 05:43:29 UTC

Re: Re: Re: Questions about CarbonData

Hi

Your input path may have an issue; please try the statement below:
cc.sql("load data inpath './carbondata/sample.csv' into table table1")
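A quick way to confirm the path is a sketch like this (assuming spark-shell was launched from your Spark home directory; adjust the path to your setup):

```shell
# Check that the CSV exists where Spark will resolve the relative path,
# i.e. relative to the directory spark-shell was started from.
csv="./carbondata/sample.csv"
if [ -f "$csv" ]; then
  # Print the absolute path; using it in LOAD DATA INPATH avoids ambiguity.
  readlink -f "$csv"
else
  echo "not found: $csv (check the directory you launched spark-shell from)"
fi
```

If the file is missing, either copy sample.csv there or use its absolute path in the LOAD DATA statement.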

BTW: In the future, please send your questions to
dev@carbondata.incubator.apache.org; you will get stronger support from the
community :)

Regards
Liang


2016-07-01 7:53 GMT+05:30 籍九洲 <ji...@163.com>:

> I am now using Spark 1.6.1. The earlier steps went smoothly, but this step
> fails: cc.sql("load data inpath '$dataFilePath' into table table1")
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path
> does not exist: /home/jijz/spark-1.6.1-bin-hadoop2.6.1/carbondata/sample.csv
> Also, show create table runs fine, but cc.sql("select * from table1").show
> fails with: org.carbondata.core.util.CarbonUtilException: Either dictionary
> or its metadata does not exist for column identifier ::
> ca98640e-b0a1-49d9-9225-8b7c2eaf79a2
>
> At 2016-06-30 19:08:19, "Liang Big data" <ch...@gmail.com> wrote:
>
> Hi
>
> Currently, Apache CarbonData does not support Spark 1.6.2 yet; please raise
> an issue with the community to request Spark 1.6.2 support:
> https://issues.apache.org/jira/browse/CARBONDATA
>
> Regards
> Liang
>
> On 2016-06-30 at 15:02, 籍九洲 <ji...@163.com> wrote:
>
>> The code was downloaded from git; my Spark version is 1.6.2.
>> [jijz@vlnx032175 spark-1.6.2-bin-hadoop2.6]$ echo $mysql_jar
>> ./lib/mysql-connector-java-5.1.22-bin.jar
>> [jijz@vlnx032175 spark-1.6.2-bin-hadoop2.6]$ ./bin/spark-shell --master
>> local --jars ${carbondata_jar},${mysql_jar}
>> It was compiled with: [root@slave104 incubator-carbondata-master]# mvn -Pspark-1.6.2
>> clean install
>>
>> On 2016-06-30 17:27:16, "Liang Big data" <ch...@gmail.com> wrote:
>>
>> First of all, welcome to using and exploring CarbonData.
>>
>> 1. Did you download the CarbonData code from this GitHub repository?
>> https://github.com/apache/incubator-carbondata
>> 2. Which Spark version are you using?
>>
>> 3. According to this guide, MySQL needs to be configured. Have you configured it?
>> https://github.com/apache/incubator-carbondata/blob/master/docs/Quick-Start.md
>>
>> The store directory is created automatically, so that should not be the problem.
>>
>>
>> Regards
>> Liang
>>
>> 2016-06-30 14:42 GMT+05:30 籍九洲 <ji...@163.com>:
>>
>>> After starting spark-shell, running
>>> val cc = new CarbonContext(sc, "./carbondata/store")
>>> fails with the error below:
>>> scala> val cc = new CarbonContext(sc, "./carbondata/store")
>>> java.lang.VerifyError: Bad type on operand stack
>>> Exception Details:
>>>   Location:
>>>
>>> org/apache/spark/sql/CarbonContext.optimizer$lzycompute()Lorg/apache/spark/sql/catalyst/optimizer/Optimizer;
>>> @27: invokespecial
>>>   Reason:
>>>     Type 'org/apache/spark/sql/catalyst/optimizer/DefaultOptimizer$'
>>> (current frame, stack[3]) is not assignable to
>>> 'org/apache/spark/sql/catalyst/optimizer/Optimizer'
>>>   Current Frame:
>>>     bci: @27
>>>     flags: { }
>>>     locals: { 'org/apache/spark/sql/CarbonContext',
>>> 'org/apache/spark/sql/CarbonContext' }
>>>     stack: { 'org/apache/spark/sql/CarbonContext', uninitialized 16,
>>> uninitialized 16,
>>> 'org/apache/spark/sql/catalyst/optimizer/DefaultOptimizer$',
>>> 'org/apache/spark/sql/SQLConf' }
>>>   Bytecode:
>>>     0x0000000: 2a59 4cc2 2ab4 0061 077e 9103 a000 202a
>>>     0x0000010: bb00 9359 b200 982a b600 81b7 009b b500
>>>     0x0000020: 9d2a 2ab4 0061 0780 91b5 0061 b200 5957
>>>     0x0000030: 2bc3 2ab4 009d b02b c3bf
>>>   Exception Handler Table:
>>>     bci [4, 50] => handler: 55
>>>   Stackmap Table:
>>>     append_frame(@44,Object[#2])
>>>     same_locals_1_stack_item_frame(@55,Object[#93])
>>>
>>> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>>> I checked cp -r processing/carbonplugins ${SPARK_HOME}/carbondata and found
>>> that processing/carbonplugins contains only .kettle.
>>>
>>> There is no store directory. What is the cause, and how can I resolve it? Thanks.
>>> Looking forward to your reply.
>>>
>>
>>
>>
>> --
>>
>> Regards
>> Liang
>>
>
>
>
> --
>
> Regards
> Liang
>



-- 

Regards
Liang