Posted to issues@ignite.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/05/19 16:05:13 UTC
[jira] [Commented] (IGNITE-3175) BigDecimal fields are not supported if query is executed from IgniteRDD
[ https://issues.apache.org/jira/browse/IGNITE-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15291373#comment-15291373 ]
ASF GitHub Bot commented on IGNITE-3175:
----------------------------------------
GitHub user tledkov-gridgain opened a pull request:
https://github.com/apache/ignite/pull/736
IGNITE-3175 BigDecimal fields are not supported if query is executed from IgniteRDD
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gridgain/apache-ignite ignite-3175
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/ignite/pull/736.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #736
----
commit 4fa840399bd2580ff313f05ea8ea905677df1749
Author: tledkov-gridgain <tl...@gridgain.com>
Date: 2016-05-19T16:03:56Z
IGNITE-3175 BigDecimal fields are not supported if query is executed from IgniteRDD
----
> BigDecimal fields are not supported if query is executed from IgniteRDD
> -----------------------------------------------------------------------
>
> Key: IGNITE-3175
> URL: https://issues.apache.org/jira/browse/IGNITE-3175
> Project: Ignite
> Issue Type: Bug
> Components: Ignite RDD
> Affects Versions: 1.5.0.final
> Reporter: Valentin Kulichenko
> Assignee: Taras Ledkov
> Fix For: 1.7
>
>
> If one of the fields participating in the query is {{BigDecimal}}, the query will fail when executed from {{IgniteRDD}} with the following error:
> {noformat}
> scala.MatchError: 1124757 (of class java.math.BigDecimal)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
> at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
> at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
> at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:505)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:686)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
> at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Most likely this is caused by the {{IgniteRDD.dataType()}} method not handling {{BigDecimal}} and falling back to {{StructType}} by default. We should fix this and audit the other possible field types as well.
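
The failure mode described above can be sketched in a few lines of Scala: a pattern match over supported value types that has no case for {{java.math.BigDecimal}} throws {{scala.MatchError}} at runtime, which is exactly what the trace shows. The functions below are illustrative only, not Ignite's or Spark's actual code; the real patch would presumably map {{BigDecimal}} to Spark's {{DecimalType}} rather than falling through to {{StructType}}.

```scala
// Hypothetical sketch (not Ignite's actual code): a converter whose
// pattern match omits java.math.BigDecimal fails with scala.MatchError.
def toCatalyst(v: Any): Any = v match {
  case i: java.lang.Integer => i.intValue
  case s: String            => s
  // no case for java.math.BigDecimal -> scala.MatchError at runtime
}

// The shape of the fix: add an explicit case for the missing type
// (the real patch would map it to Spark's DecimalType).
def toCatalystFixed(v: Any): Any = v match {
  case i: java.lang.Integer    => i.intValue
  case s: String               => s
  case d: java.math.BigDecimal => d
}
```

The same audit applies to any other value type the converter may receive: every class that can appear in a query result needs an explicit case, since a non-exhaustive match on {{Any}} compiles but fails at runtime.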
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)