Posted to user@spark.apache.org by "ZHENG, Xu-dong" <do...@gmail.com> on 2014/12/09 08:27:40 UTC

Re: spark sql - save to Parquet file - Unsupported datatype TimestampType

I'm hitting the same issue. Any solution?

On Wed, Nov 12, 2014 at 2:54 PM, tridib <tr...@live.com> wrote:

> Hi Friends,
> I am trying to save a JSON file to Parquet and got the error
> "Unsupported datatype TimestampType".
> Doesn't Parquet support dates? Which Parquet version does Spark use? Is
> there any workaround?
>
>
> Here is the stack trace:
>
> java.lang.RuntimeException: Unsupported datatype TimestampType
>         at scala.sys.package$.error(package.scala:27)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:343)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:292)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetTypes.scala:291)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2$$anonfun$3.apply(ParquetTypes.scala:320)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2$$anonfun$3.apply(ParquetTypes.scala:320)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:73)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:319)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:292)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetTypes.scala:291)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$4.apply(ParquetTypes.scala:363)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$4.apply(ParquetTypes.scala:362)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:73)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetTypes.scala:361)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetTypes.scala:407)
>         at org.apache.spark.sql.parquet.ParquetRelation$.createEmpty(ParquetRelation.scala:151)
>         at org.apache.spark.sql.parquet.ParquetRelation$.create(ParquetRelation.scala:130)
>         at org.apache.spark.sql.execution.SparkStrategies$ParquetOperations$.apply(SparkStrategies.scala:204)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:425)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:425)
>         at org.apache.spark.sql.SchemaRDDLike$class.saveAsParquetFile(SchemaRDDLike.scala:76)
>         at org.apache.spark.sql.api.java.JavaSchemaRDD.saveAsParquetFile(JavaSchemaRDD.scala:42)
>
> Thanks & Regards
> Tridib
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/spark-sql-save-to-Parquet-file-Unsupported-datatype-TimestampType-tp18691.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>
>


-- 
郑旭东
ZHENG, Xu-dong

Re: spark sql - save to Parquet file - Unsupported datatype TimestampType

Posted by Michael Armbrust <mi...@databricks.com>.
Not yet, unfortunately. You could cast the timestamp to a long if you don't
need nanosecond precision.

Here is a related JIRA: https://issues.apache.org/jira/browse/SPARK-4768
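
For what it's worth, here is a rough sketch of that workaround against the
Spark 1.x SchemaRDD API, shown in Scala from the spark-shell (the stack
trace above goes through JavaSchemaRDD, but the same idea applies in Java).
The case classes, column names, and output path below are made up for
illustration:

    import java.sql.Timestamp

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.createSchemaRDD  // implicit RDD[Product] -> SchemaRDD

    // Hypothetical row types: one carries a Timestamp, the
    // Parquet-friendly one carries epoch milliseconds as a Long.
    case class Event(id: String, ts: Timestamp)
    case class EventForParquet(id: String, tsMillis: Long)

    val events = sc.parallelize(Seq(
      Event("a", new Timestamp(System.currentTimeMillis()))))

    // Convert before writing so the schema contains LongType rather than
    // TimestampType; millisecond precision is kept, nanoseconds are lost.
    val forParquet = events.map(e => EventForParquet(e.id, e.ts.getTime))
    forParquet.saveAsParquetFile("/tmp/events.parquet")

When reading back, new Timestamp(tsMillis) recovers the value to the
millisecond.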

On Mon, Dec 8, 2014 at 11:27 PM, ZHENG, Xu-dong <do...@gmail.com> wrote:

> I'm hitting the same issue. Any solution?