Posted to dev@carbondata.apache.org by 金铸 <ji...@neusoft.com> on 2016/08/12 06:25:18 UTC

load data fail

hi :
/usr/hdp/2.4.0.0-169/spark/bin/spark-shell --master yarn-client --jars /opt/incubator-carbondata/assembly/target/scala-2.10/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar,/opt//mysql-connector-java-5.1.37.jar

scala>import org.apache.spark.sql.CarbonContext
scala>import java.io.File
scala>import org.apache.hadoop.hive.conf.HiveConf

scala>val cc = new CarbonContext(sc, "hdfs://hadoop01/data/carbondata01/store")

scala>cc.setConf("hive.metastore.warehouse.dir", "/apps/hive/warehouse")
scala>cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
scala>cc.setConf("carbon.kettle.home","/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")

scala> cc.setConf("carbon.kettle.home","/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")

scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into 
table t4 options('FILEHEADER'='id,name,city,age')")
INFO  12-08 14:21:24,461 - main Query [LOAD DATA LOCAL INPATH 
'HDFS://HADOOP01/SAMPLE.CSV' INTO TABLE T4 
OPTIONS('FILEHEADER'='ID,NAME,CITY,AGE')]
INFO  12-08 14:21:39,475 - Table MetaData Unlocked Successfully after 
data load
java.lang.RuntimeException: Table is locked for updation. Please try 
after some time
     at scala.sys.package$.error(package.scala:27)
     at 
org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1049)
     at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
     at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
     at 
org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
     at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
     at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
     at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)

thanks a lot



Re: load data fail

Posted by chenliang613 <ch...@gmail.com>.
Hi,

As we discussed, the error "Table is locked for updation. Please try after some time" has been solved by setting directory rights. Below is a new error; Ravindra, please check and provide help:

----------------------------
WARN  12-08 16:29:51,871 - Lost task 1.1 in stage 2.0 (TID 6, hadoop03): java.lang.RuntimeException: Dictionary file name is locked for updation. Please try after some time
    at scala.sys.package$.error(package.scala:27)
    at org.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:354)
    at org.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:295)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
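
For reference, the directory-rights fix for the first error was along these lines (a sketch only; whether the directory to open up is the HDFS store path from the first mail or the local system temp folder depends on the lock type in use):

    # give the user running spark-shell write access to the Carbon store
    hdfs dfs -chmod -R 775 /data/carbondata01/store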




Re: RE: load data fail

Posted by 金铸 <ji...@neusoft.com>.
I dropped the table and replaced $hdp_home/spark/conf/hive-site.xml with $hdp_home/hive/conf/hive-site.xml, which fixed it.

But I do not know the underlying reason.

If t4 already exists in Hive's default database, in other words the table t4 was created in Hive first, then creating the table in CarbonData does not report an exception.
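
Roughly, the swap was the following (a sketch; $hdp_home stands for the HDP install root, e.g. /usr/hdp/2.4.0.0-169, and paths may differ on your cluster):

    # back up Spark's hive-site.xml, then replace it with Hive's copy
    cp $hdp_home/spark/conf/hive-site.xml $hdp_home/spark/conf/hive-site.xml.bak
    cp $hdp_home/hive/conf/hive-site.xml $hdp_home/spark/conf/hive-site.xml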




On 2016/8/17 10:35, Chenliang (Liang, CarbonData) wrote:
> Can you share the case experience: how did you solve it.
>
> Regards
> Liang



RE: load data fail

Posted by "Chenliang (Liang, CarbonData)" <ch...@huawei.com>.
Can you share the case experience: how did you solve it.

Regards
Liang
-----Original Message-----
From: 金铸 [mailto:jin_zh@neusoft.com]
Sent: 17 August 2016 10:31
To: dev@carbondata.incubator.apache.org
Subject: Re: load data fail

Thanks a lot, I solved this.







Re: load data fail

Posted by 金铸 <ji...@neusoft.com>.
Thanks a lot, I solved this.



On 2016/8/17 0:53, Eason wrote:
> Hi Jinzhu,
>
> Does this happen when multiple instances load the same table?
>
> Currently, concurrent loads on the same table are not supported.
>
> For this exception:
>
> 1. Please check whether any lock files were created under the system temp
> folder at <databasename>/<tablename>/lockfile; if they exist, please delete them.
>
> 2. Try changing the lock type:
> carbon.lock.type = ZOOKEEPERLOCK
>
> Regards,
> Eason




Re: load data fail

Posted by Eason <mr...@aliyun.com>.
Hi Jinzhu,

Does this happen when multiple instances load the same table?

Currently, concurrent loads on the same table are not supported.

For this exception:

1. Please check whether any lock files were created under the system temp
folder at <databasename>/<tablename>/lockfile; if they exist, please delete them.

2. Try changing the lock type:
carbon.lock.type = ZOOKEEPERLOCK

Regards,
Eason
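
P.S. For point 2, the change is a one-line edit (a sketch; the carbon.properties location depends on your install, and ZooKeeper connection details must also be configured before this lock type will work):

    # carbon.properties: use ZooKeeper-based locks instead of local file locks
    carbon.lock.type=ZOOKEEPERLOCK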



Fwd: load data fail

Posted by Ravindra Pesala <ra...@gmail.com>.
---------- Forwarded message ----------
From: Ravindra Pesala <ra...@gmail.com>
Date: 12 August 2016 at 12:45
Subject: Re: load data fail
To: dev <de...@carbondata.incubator.apache.org>


Hi,

Are you getting this exception continuously, for every load? It usually
occurs when you try to load data concurrently into the same table, so
please make sure that no other instance of Carbon is running and that no
data load on the same table is already in progress.
Check whether any lock files were created under the system temp folder at
<databasename>/<tablename>/lockfile; if they exist, please delete them.
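
For example (a sketch; the system temp folder is typically java.io.tmpdir, often /tmp, and "default" and "t4" below stand for your database and table names):

    # look for a leftover lock file and remove it only when no load is running
    ls -l /tmp/default/t4/
    rm /tmp/default/t4/lockfile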

Thanks & Regards,
Ravi





-- 
Thanks & Regards,
Ravi