Posted to issues@spark.apache.org by "Steven Magana-Zook (JIRA)" <ji...@apache.org> on 2016/07/21 21:49:20 UTC
[jira] [Issue Comment Deleted] (SPARK-4003) Add {Big Decimal, Timestamp, Date} types to Java SqlContext
[ https://issues.apache.org/jira/browse/SPARK-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steven Magana-Zook updated SPARK-4003:
--------------------------------------
Comment: was deleted
(was: I am still seeing this issue in Spark 1.6.1 when I try using Timestamp fields. One thing I am doing differently from the mailing list [post|http://apache-spark-user-list.1001560.n3.nabble.com/scala-MatchError-class-java-sql-Timestamp-td16761.html] that spurred the creation of this bug is that I am manually defining my schema and applying it to an RDD of Rows, instead of using reflection on a Java bean.
Define schema:
{code:java}
fields.add(DataTypes.createStructField("myField", DataTypes.TimestampType, true));
{code}
POJO field:
{code:java}
import java.sql.Timestamp;

private Timestamp myField;

public Timestamp getMyField() {
    return myField;
}

public void setMyField(Timestamp myField) {
    this.myField = myField;
}
{code}
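For context, the setup described above can be sketched end to end as follows. This is an assumption-laden reconstruction, not code from the original report: the variable names (`sc`, `sqlContext`) and the sample value are made up, and it targets the Spark 1.6.x Java API.

```java
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Manually defined schema with a single nullable Timestamp column.
List<StructField> fields = new ArrayList<StructField>();
fields.add(DataTypes.createStructField("myField", DataTypes.TimestampType, true));
StructType schema = DataTypes.createStructType(fields);

// An RDD of Rows carrying java.sql.Timestamp values
// (sc and sqlContext are assumed to already exist).
JavaRDD<Row> rows = sc.parallelize(Arrays.asList(
    RowFactory.create(Timestamp.valueOf("2016-07-21 09:41:00.625"))));

// Applying the schema: in 1.6.x this is where CatalystTypeConverters
// picks a converter per *declared* column type, per the stack trace above.
DataFrame df = sqlContext.createDataFrame(rows, schema);
```

One thing worth noting from the stack trace: the MatchError is thrown inside {{StringConverter}}, which Catalyst selects for columns declared as StringType. If that reading is right, it may indicate the Timestamp value landed in a position whose declared type is StringType, so the ordering of values in each Row relative to the declared fields is worth double-checking.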
Results in:
{panel:title=Stacktrace}
scala.MatchError: 2016-07-21 09:41:00.625 (of class java.sql.Timestamp)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:263)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
{panel}
Is manually defining the schema, instead of using reflection on a bean, a different path to the same bug? Or am I just doing this wrong?)
> Add {Big Decimal, Timestamp, Date} types to Java SqlContext
> -----------------------------------------------------------
>
> Key: SPARK-4003
> URL: https://issues.apache.org/jira/browse/SPARK-4003
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: Adrian Wang
> Assignee: Adrian Wang
> Fix For: 1.2.0
>
>
> In JavaSqlContext, we need to let Java programs use BigDecimal, Timestamp, and Date types.
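For illustration, the three types named in the issue title map onto the Java-side type factory roughly as follows. This sketch uses the same {{DataTypes}} API as the comment above (1.3+), not the 1.2-era {{org.apache.spark.sql.api.java}} classes the issue originally targeted; the column names and the decimal precision/scale are arbitrary choices.

```java
import java.util.Arrays;

import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// BigDecimal, Timestamp, and Date columns declared from Java.
StructType schema = DataTypes.createStructType(Arrays.asList(
    DataTypes.createStructField("amount", DataTypes.createDecimalType(38, 18), true),
    DataTypes.createStructField("createdAt", DataTypes.TimestampType, true),
    DataTypes.createStructField("eventDate", DataTypes.DateType, true)));
```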
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org