Posted to dev@carbondata.apache.org by "yangwei@sigmatrix.cn" <ya...@sigmatrix.cn> on 2016/06/30 07:37:02 UTC

Error when running "create table" in spark-shell following the Quick Start guide

Hi,
  I followed https://github.com/HuaweiBigData/carbondata/wiki/Quick-Start
  and ran the following:
 scala> import org.apache.spark.sql.CarbonContext
import org.apache.spark.sql.CarbonContext

scala> import java.io.File
import java.io.File

scala> import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.conf.HiveConf

scala> val metadata = new File("").getCanonicalPath + "/carbondata/metadata"
metadata: String = /mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/metadata

scala> val cc = new CarbonContext(sc, "/mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/store")
cc: org.apache.spark.sql.CarbonContext = org.apache.spark.sql.CarbonContext@2746b25b

scala> cc.setConf("carbon.kettle.home","/mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/carbonplugins")

scala> val metadata = new File("").getCanonicalPath + "/carbondata/metadata"
metadata: String = /mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/metadata

scala> cc.setConf("hive.metastore.warehouse.dir", metadata)

scala> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")

scala> cc.sql("create table if not exists table1 (id string, name string, city string, age Int) STORED BY 'org.apache.carbondata.format'")
AUDIT 30-06 07:27:48,141 - [BDGroup01]Creating timestamp file
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at org.carbondata.core.datastorage.store.impl.FileFactory.createNewFile(FileFactory.java:357)
at org.apache.spark.sql.hive.CarbonMetastoreCatalog.updateSchemasUpdatedTime(CarbonMetastoreCatalog.scala:584)
at org.apache.spark.sql.hive.CarbonMetastoreCatalog.loadMetadata(CarbonMetastoreCatalog.scala:225)
at org.apache.spark.sql.hive.CarbonMetastoreCatalog.<init>(CarbonMetastoreCatalog.scala:113)
at org.apache.spark.sql.CarbonContext$$anon$1.<init>(CarbonContext.scala:45)
at org.apache.spark.sql.CarbonContext.catalog$lzycompute(CarbonContext.scala:45)
at org.apache.spark.sql.CarbonContext.catalog(CarbonContext.scala:43)
at org.apache.spark.sql.CarbonContext.analyzer$lzycompute(CarbonContext.scala:49)
at org.apache.spark.sql.CarbonContext.analyzer(CarbonContext.scala:49)
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:914)
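The IOException originates in FileFactory.createNewFile, which tries to create a timestamp file under the store path. As a rough diagnostic, the store directory can be probed for writability before constructing the CarbonContext; a minimal shell sketch (the probe file name and helper are invented here for illustration):

```shell
# Probe whether a directory accepts file creation by the current user,
# mirroring what FileFactory.createNewFile attempts for its timestamp file.
check_store_writable() {
  local store="$1"
  if touch "$store/.write_probe" 2>/dev/null; then
    rm -f "$store/.write_probe"
    echo "writable"
  else
    echo "not writable"
  fi
}

# Example: probe a throwaway directory; substitute the real store path,
# e.g. .../lib/spark/carbondata/store from the session above.
check_store_writable "$(mktemp -d)"
```

If the probe prints "not writable", fix ownership or permissions (or choose a different store path) before retrying the create table statement.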




杨卫

15116966545



RE: Error when running "create table" in spark-shell following the Quick Start guide

Posted by "Chenliang (Liang, CarbonData)" <ch...@huawei.com>.
Hi

The issue has been solved with Eason's help.
@yangwei: Welcome to Apache CarbonData; we look forward to seeing your contributions in the community :)

Regards
Liang
From: yangwei@sigmatrix.cn [mailto:yangwei@sigmatrix.cn]
Sent: 30 June 2016 18:35
To: Linyixin (Eason)
Cc: Chenliang (Liang, CarbonData)
Subject: Re: RE: Error when running "create table" in spark-shell following the Quick Start guide

Hi Linyixin:
  Following your hint, the problem is solved, thanks. The final fix was to change the ownership of the Spark installation directory to the hdfs user, because I launch spark-shell as hdfs:
4 drwxr-xr-x. 10 hdfs hdfs  4096 Jun 28 17:53 spark
sudo -u hdfs  spark-shell --master local --jars ${carbondata_jar},${mysql_jar}
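The ownership change described above needs sudo and the actual install path, so this runnable stand-in demonstrates the same idea on a throwaway directory (`fix_perms` is a helper name invented here):

```shell
# In the real environment the fix was roughly:
#   sudo chown -R hdfs:hdfs /mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark
# i.e. make the user that launches spark-shell own the install tree.
# Runnable stand-in: grant the current user read/write/traverse on a tree.
fix_perms() {
  local dir="$1"
  chmod -R u+rwX "$dir"   # stand-in for: chown -R <user>:<user> "$dir"
}

demo=$(mktemp -d)
fix_perms "$demo"
touch "$demo/timestamp_file" && echo "create succeeded"
rm -rf "$demo"
```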

-----Original Message-----
From: Ravindra Pesala [mailto:ravi.pesala@gmail.com]
Sent: 30 June 2016 18:35
To: dev@carbondata.incubator.apache.org
Subject: Re: Error when running "create table" in spark-shell following the Quick Start guide

Hi Yangwei,

It seems the user does not have permission to create files inside the store path (/mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/store) you provided. Please make sure the user has read/write permissions on the store path.
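This check can be done from the shell before launching spark-shell; a minimal sketch, with `report_access` a helper name invented here (GNU coreutils `stat` assumed):

```shell
# Compare the owner of the store path with the user who will run
# spark-shell, and report whether that user can write to it.
report_access() {
  local dir="$1"
  echo "owner: $(stat -c '%U' "$dir"), current user: $(id -un)"
  if [ -w "$dir" ]; then
    echo "store path is writable by the current user"
  else
    echo "store path is NOT writable by the current user"
  fi
}

# Example against a throwaway directory; substitute the real store path
# passed to CarbonContext.
report_access "$(mktemp -d)"
```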

Regards,
Ravindra.

On 30 June 2016 at 13:07, yangwei@sigmatrix.cn <ya...@sigmatrix.cn> wrote:

> [quoted original message trimmed]


--
Thanks & Regards,
Ravi

Re: Error when running "create table" in spark-shell following the Quick Start guide

Posted by Ravindra Pesala <ra...@gmail.com>.
Hi Yangwei,

It seems the user does not have permission to create files inside the store path (/mnt/resource/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/spark/carbondata/store) you provided. Please make sure the user has read/write permissions on the store path.

Regards,
Ravindra.

On 30 June 2016 at 13:07, yangwei@sigmatrix.cn <ya...@sigmatrix.cn> wrote:

> [quoted original message trimmed]


-- 
Thanks & Regards,
Ravi