Posted to dev@parquet.apache.org by "Steven She (JIRA)" <ji...@apache.org> on 2015/06/24 20:26:05 UTC
[jira] [Commented] (PARQUET-317) writeMetaDataFile crashes when a relative root Path is used
[ https://issues.apache.org/jira/browse/PARQUET-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599905#comment-14599905 ]
Steven She commented on PARQUET-317:
------------------------------------
https://github.com/apache/parquet-mr/pull/228
> writeMetaDataFile crashes when a relative root Path is used
> -----------------------------------------------------------
>
> Key: PARQUET-317
> URL: https://issues.apache.org/jira/browse/PARQUET-317
> Project: Parquet
> Issue Type: Bug
> Components: parquet-mr
> Affects Versions: 1.8.0
> Reporter: Steven She
> Priority: Minor
>
> In Spark, I can save an RDD to the local file system using a relative path, e.g.:
> {noformat}
> rdd.saveAsNewAPIHadoopFile(
>   "relativeRoot",
>   classOf[Void],
>   tag.runtimeClass.asInstanceOf[Class[T]],
>   classOf[ParquetOutputFormat[T]],
>   job.getConfiguration)
> {noformat}
> This leads to a crash in the ParquetFileWriter.mergeFooters(..) method: the footer paths are read back as fully qualified paths, but the root path is still the relative path that was passed in, so the containment check fails:
> {noformat}
> org.apache.parquet.io.ParquetEncodingException: /Users/stevenshe/schema/relativeRoot/part-r-00000.snappy.parquet invalid: all the files must be contained in the root relativeRoot
> {noformat}
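The mismatch can be sketched in plain Java. This is a hypothetical simplification, not the actual parquet-mr code: `containedIn` stands in for the prefix-style containment check that mergeFooters effectively performs, and the working directory is faked with a hard-coded path for illustration.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeRootSketch {
    // Hypothetical stand-in for the containment check: a footer path must
    // sit underneath the root path for the merge to proceed.
    static boolean containedIn(String root, String footerPath) {
        return footerPath.startsWith(root + "/");
    }

    public static void main(String[] args) {
        // Footer paths come back fully qualified from the file system.
        String footer =
            "/Users/stevenshe/schema/relativeRoot/part-r-00000.snappy.parquet";

        // A relative root never matches a fully qualified footer path,
        // which is the failure reported in the exception message.
        boolean relativeRootOk = containedIn("relativeRoot", footer);

        // Qualifying the root against the working directory first
        // (faked here as /Users/stevenshe/schema) makes the check pass.
        Path qualifiedRoot =
            Paths.get("/Users/stevenshe/schema").resolve("relativeRoot");
        boolean qualifiedRootOk = containedIn(qualifiedRoot.toString(), footer);

        System.out.println(relativeRootOk + " " + qualifiedRootOk);
    }
}
```

Qualifying the root path before the containment check, rather than comparing raw strings, is the kind of fix the linked pull request targets.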
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)