Posted to dev@carbondata.apache.org by Li Peng <pe...@outlook.com> on 2017/02/08 03:13:05 UTC

query exception: Path is not a file when carbon 1.0.0

Hi,
   When I run the query "select count(*) from store1.sale" with CarbonData
1.0.0, I get the correct query result, but the error log below also appears.
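
For reference, the query is run from spark-shell roughly as in this minimal
sketch (the store path passed to CarbonContext is an assumption inferred from
the HDFS path in the log, and the CarbonData 1.0.0 assembly jar is assumed to
be on the spark-shell classpath):

    // sc is the SparkContext provided by spark-shell.
    import org.apache.spark.sql.CarbonContext

    // Assumed store location, inferred from the path in the error log.
    val cc = new CarbonContext(sc, "hdfs://dpnode02:8020/carbondata/carbonstore")
    cc.sql("select count(*) from store1.sale").show()

The error log: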


Exception while invoking ClientNamenodeProtocolTranslatorPB.getBlockLocations over dpnode02/192.168.50.2:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is not a file: /carbondata/carbonstore/store1/sale/Metadata
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
	at org.apache.hadoop.ipc.Client.call(Client.java:1496)
	at org.apache.hadoop.ipc.Client.call(Client.java:1396)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy31.getBlockLocations(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:270)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
	at com.sun.proxy.$Proxy32.getBlockLocations(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1236)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1223)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1211)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:309)
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:274)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:266)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1536)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:330)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
	at org.apache.carbondata.core.datastore.impl.FileFactory.getDataInputStream(FileFactory.java:130)
	at org.apache.carbondata.core.datastore.impl.FileFactory.getDataInputStream(FileFactory.java:104)
	at org.apache.carbondata.core.fileoperations.AtomicFileOperationsImpl.openForRead(AtomicFileOperationsImpl.java:46)
	at org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager.readLoadMetadata(SegmentUpdateStatusManager.java:689)
	at org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager.<init>(SegmentUpdateStatusManager.java:86)
	at org.apache.carbondata.hadoop.CarbonInputFormat.getSplits(CarbonInputFormat.java:234)
	at org.apache.carbondata.spark.rdd.CarbonScanRDD.getPartitions(CarbonScanRDD.scala:80)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
	at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:91)
	at org.apache.spark.sql.execution.Exchange.prepareShuffleDependency(Exchange.scala:220)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:254)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:248)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
	at org.apache.spark.sql.execution.Exchange.doExecute(Exchange.scala:247)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.apply(TungstenAggregate.scala:86)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.apply(TungstenAggregate.scala:80)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregate.doExecute(TungstenAggregate.scala:80)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1.apply(CarbonDictionaryDecoder.scala:214)
	at org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1.apply(CarbonDictionaryDecoder.scala:153)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
	at org.apache.spark.sql.CarbonDictionaryDecoder.doExecute(CarbonDictionaryDecoder.scala:153)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.ConvertToSafe.doExecute(rowFormatConverters.scala:56)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:187)
	at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
	at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
	at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
	at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
	at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
	at $line30.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
	at $line30.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
	at $line30.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
	at $line30.$read$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
	at $line30.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
	at $line30.$read$$iwC$$iwC$$iwC.<init>(<console>:48)
	at $line30.$read$$iwC$$iwC.<init>(<console>:50)
	at $line30.$read$$iwC.<init>(<console>:52)
	at $line30.$read.<init>(<console>:54)
	at $line30.$read$.<init>(<console>:58)
	at $line30.$read$.<clinit>(<console>)
	at $line30.$eval$.<init>(<console>:7)
	at $line30.$eval$.<clinit>(<console>)
	at $line30.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
	at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
	at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
	at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
	at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
	at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
	at org.apache.spark.repl.Main$.main(Main.scala:31)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



--
View this message in context: http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/query-exception-Path-is-not-a-file-when-carbon-1-0-0-tp7433.html
Sent from the Apache CarbonData Mailing List archive at Nabble.com.

Re: query exception: Path is not a file when carbon 1.0.0

Posted by Ravindra Pesala <ra...@gmail.com>.
Hi,

This exception is actually caught and ignored in SegmentUpdateStatusManager,
around line 696, so it does not cause any problem. Normally this exception is
not printed in any server logs, since we ignore it; it may be spark-shell
that is printing it. We will look into it.
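
The behaviour is roughly the following (a minimal Scala sketch of what is
described above, not the actual Java source of SegmentUpdateStatusManager;
readUpdateStatusFile is an illustrative stand-in for the real read):

    import java.io.FileNotFoundException

    // Stand-in for reading the update-status file under the table's
    // Metadata path; here it always fails, simulating a table that has
    // never been updated and so has no such file.
    def readUpdateStatusFile(): Array[String] =
      throw new FileNotFoundException(
        "Path is not a file: /carbondata/carbonstore/store1/sale/Metadata")

    // The caller swallows the exception and treats the missing file as
    // "no update/delete deltas yet", so the query proceeds normally.
    def readLoadMetadata(): Array[String] =
      try readUpdateStatusFile()
      catch {
        case _: FileNotFoundException => Array.empty[String]  // ignored
      }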

Regards,
Ravindra.

On 8 February 2017 at 08:43, Li Peng <pe...@outlook.com> wrote:

> Hi,
>    When I run the query "select count(*) from store1.sale" with CarbonData
> 1.0.0, I get the correct query result, but the error log below also appears.
>
> [quoted error log and footer trimmed; see the original message above]



-- 
Thanks & Regards,
Ravi