Posted to issues@spark.apache.org by "Sathish (JIRA)" <ji...@apache.org> on 2019/01/16 04:54:00 UTC

[jira] [Created] (SPARK-26631) Issue while reading Parquet data from Hadoop Archive files (.har)

Sathish created SPARK-26631:
-------------------------------

             Summary: Issue while reading Parquet data from Hadoop Archive files (.har)
                 Key: SPARK-26631
                 URL: https://issues.apache.org/jira/browse/SPARK-26631
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.2.0
         Environment: Here are the steps to reproduce the issue:

a) hadoop fs -mkdir /tmp/testparquet

b) Get sample parquet data and rename the file to userdata1.parquet

wget "https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet?raw=true"

c) hadoop fs -put userdata1.parquet /tmp/testparquet

d) hadoop archive -archiveName testarchive.har -p /tmp/testparquet /tmp

e) Verify that the file is visible inside the archive:

hadoop fs -ls har:///tmp/testarchive.har

f) Launch spark-shell (Spark 2.x)

g) Read the Parquet file through the har:// scheme
{code:java}
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet")
{code}
Is there anything I am missing here?
            Reporter: Sathish


While reading a Parquet file from a Hadoop Archive (.har) file, Spark fails with the exception below:

 
{code:java}
scala> val hardf = sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet")
org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:207)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:393)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
  ... 49 elided
{code}
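Since the error message says the schema "must be specified manually", a possible workaround is to pass an explicit schema instead of relying on inference. This is only a hedged sketch: the two fields below are taken from the successful hdfs:// read shown further down, and the remaining fields of userdata1.parquet would need to be filled in by hand.
{code:java}
import org.apache.spark.sql.types._

// Partial schema of userdata1.parquet; only the first two fields are known
// from the hdfs:// read output, the rest are elided here.
val schema = StructType(Seq(
  StructField("registration_dttm", TimestampType),
  StructField("id", IntegerType)
  // ... remaining 11 fields of userdata1.parquet
))

// With an explicit schema, Spark no longer needs to infer it from the har:// path.
val hardf = sqlContext.read.schema(schema)
  .parquet("har:///tmp/testarchive.har/userdata1.parquet")
{code}
Whether this avoids the failure depends on whether the underlying issue is limited to schema inference or affects file listing on the har:// filesystem as a whole.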
 

Whereas the same Parquet file can be read without any issues directly from HDFS:
{code:java}
scala> val df = sqlContext.read.parquet("hdfs:///tmp/testparquet/userdata1.parquet")

df: org.apache.spark.sql.DataFrame = [registration_dttm: timestamp, id: int ... 11 more fields]
{code}
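To narrow down whether the problem is in Spark's schema inference or in path resolution, the HAR contents can be listed through Hadoop's FileSystem API from the same spark-shell session. This is a diagnostic sketch assuming the session from the reproduction steps above:
{code:java}
import org.apache.hadoop.fs.{FileSystem, Path}

// Resolve the har:// path with the cluster's Hadoop configuration and list
// the archive's contents; the file should appear here if step e) succeeded.
val harPath = new Path("har:///tmp/testarchive.har")
val fs = harPath.getFileSystem(sc.hadoopConfiguration)
fs.listStatus(harPath).foreach(status => println(status.getPath))
{code}
If userdata1.parquet is listed here but the sqlContext.read.parquet call still fails, that points at the Parquet schema-inference path rather than at the HarFileSystem itself.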



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org