Posted to user@ignite.apache.org by Antonio Si <an...@gmail.com> on 2017/06/09 20:57:41 UTC

using hdfs as secondary file system

Hi,

I am new to Ignite. I have followed the documentation to configure Ignite to use
HDFS as a secondary file system.
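For readers following along, the setup being described is roughly the one in Ignite's Hadoop Accelerator documentation. A minimal Spring XML sketch of that kind of node configuration is below; the IGFS name dfmlIgfs and port 10500 are taken from the failing path later in this thread, while the HDFS URI and the rest are assumptions, not the poster's actual config.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="fileSystemConfiguration">
    <list>
      <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
        <property name="name" value="dfmlIgfs"/>
        <!-- Endpoint that Hadoop clients (e.g. Spark) connect to. -->
        <property name="ipcEndpointConfiguration">
          <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
            <property name="type" value="TCP"/>
            <property name="port" value="10500"/>
          </bean>
        </property>
        <!-- Delegate cache misses and write-through to HDFS
             (the namenode URI here is an assumption). -->
        <property name="secondaryFileSystem">
          <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
            <constructor-arg value="hdfs://namenode:9000/"/>
          </bean>
        </property>
      </bean>
    </list>
  </property>
</bean>
```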

In spark-shell, when I try

df.write.save("test")

where df is a DataFrame, I get the following exception:

Driver stacktrace:

    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:148)
    ... 62 more
Caused by: java.io.IOException: Generic IGFS error occurred.
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsUtils.cast(HadoopIgfsUtils.java:139)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper.withReconnectHandling(HadoopIgfsWrapper.java:341)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper.create(HadoopIgfsWrapper.java:256)
    at org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem.create(IgniteHadoopFileSystem.java:455)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
    at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:176)
    at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:160)
    at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:289)
    at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetRelation.scala:94)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anon$3.newInstance(ParquetRelation.scala:286)
    at org.apache.spark.sql.execution.datasources.BaseWriterContainer.newOutputWriter(WriterContainer.scala:129)
    at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:255)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:148)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:148)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: Generic IGFS error occurred.
    at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:170)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsOutProc.create(HadoopIgfsOutProc.java:373)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper$15.apply(HadoopIgfsWrapper.java:259)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper$15.apply(HadoopIgfsWrapper.java:256)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper.withReconnectHandling(HadoopIgfsWrapper.java:324)
    ... 21 more
Caused by: class org.apache.ignite.igfs.IgfsException: Generic IGFS error occurred.
    at org.apache.ignite.internal.igfs.common.IgfsControlResponse.throwError(IgfsControlResponse.java:308)
    at org.apache.ignite.internal.igfs.common.IgfsControlResponse.throwError(IgfsControlResponse.java:317)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsOutProc$1.apply(HadoopIgfsOutProc.java:513)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsOutProc$1.apply(HadoopIgfsOutProc.java:507)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:382)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:352)
    at org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture.<init>(GridFutureAdapter.java:556)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.chain(GridFutureAdapter.java:357)
    ... 25 more


I tried stepping into the code to debug a little. It can create the folder
test, but it fails when it tries to create a file at this path:

igfs://dfmlIgfs@localhost:10500/user/antoniosi/test/_temporary/0/_temporary/attempt_201706091355_0008_m_000000_0/part-r-00000-bcd32521-bd71-4cc9-9046-f096a7e906d7.gz.parquet
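The igfs:// scheme in a path like that resolves because Hadoop's configuration maps the scheme to Ignite's file system class. A core-site.xml sketch of that mapping, assuming the ignite-hadoop jar is on Spark's classpath; the authority dfmlIgfs@localhost:10500 comes from the path in this thread, and the fs.defaultFS entry is optional (only needed if relative paths such as "test" should resolve against IGFS):

```xml
<configuration>
  <!-- Register Ignite's implementation for the igfs:// scheme. -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <!-- Optional: make relative output paths land on IGFS by default. -->
  <property>
    <name>fs.defaultFS</name>
    <value>igfs://dfmlIgfs@localhost:10500/</value>
  </property>
</configuration>
```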

Any suggestions would be helpful.

Thanks.

Antonio.

Re: using hdfs as secondary file system

Posted by Antonio Si <an...@gmail.com>.
Hi Vladimir,

I got this issue resolved. I believe Ignite uses the ASM 4 library
internally, and I need to use JDK 7 in order to work with ASM 4.
Switching to JDK 7 resolved the issue.

Thanks.

Antonio.

On Wed, Jun 14, 2017 at 4:15 AM, Vladimir Ozerov <vo...@gridgain.com>
wrote:

> Hi Antonio,
>
> Is it possible to attach logs from server nodes?
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/using-hdfs-as-secondary-file-system-tp13580p13696.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

Re: using hdfs as secondary file system

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Hi Antonio,

Is it possible to attach logs from server nodes?



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/using-hdfs-as-secondary-file-system-tp13580p13696.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.