Posted to issues@spark.apache.org by "Niek Bartholomeus (JIRA)" <ji...@apache.org> on 2016/10/11 14:20:20 UTC
[jira] [Created] (SPARK-17872) aggregate function on dataset with tuples grouped by non-sequential fields
Niek Bartholomeus created SPARK-17872:
-----------------------------------------
Summary: aggregate function on dataset with tuples grouped by non-sequential fields
Key: SPARK-17872
URL: https://issues.apache.org/jira/browse/SPARK-17872
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 2.0.1
Reporter: Niek Bartholomeus
The following code fails when the tuple field index used in the aggregate function is lower than a field index used in the groupByKey clause:
{code}
val testDS = Seq((1, 1, 1, 1)).toDS
// group by fields 1 and 3, aggregate on the product of fields 2 and 4:
testDS
.groupByKey { case (level1, level1FigureA, level2, level2FigureB) => (level1, level2) }
.agg((sum($"_2" * $"_4")).as[Double])
.collect
{code}
Error message:
{code}
org.apache.spark.sql.AnalysisException: Reference '_2' is ambiguous, could be: _2#562, _2#569.;
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:264)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveChildren(LogicalPlan.scala:148)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5$$anonfun$31.apply(Analyzer.scala:604)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5$$anonfun$31.apply(Analyzer.scala:604)
at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:48)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:604)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:600)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
{code}
The following code, in which the aggregated field indices are all higher than the groupByKey field indices, works fine:
{code}
testDS
.map { case (level1, level1FigureA, level2, level2FigureB) => (level1, level2, level1FigureA, level2FigureB) }
.groupByKey { case (level1, level2, level1FigureA, level2FigureB) => (level1, level2) }
.agg((sum($"_3" * $"_4")).as[Double])
.collect
{code}
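For reference, the result the query appears to be after can be sketched with plain Scala collections (no Spark needed): group the rows by fields 1 and 3, then sum the product of fields 2 and 4 per key. The sample rows and names below are illustrative, not from the report:
{code}
// Illustrative rows: (level1, level1FigureA, level2, level2FigureB)
val data = Seq((1, 1, 1, 1), (1, 2, 1, 3))

// Group by fields 1 and 3, sum the product of fields 2 and 4 per key
val result: Map[(Int, Int), Int] = data
  .groupBy { case (level1, _, level2, _) => (level1, level2) }
  .map { case (key, rows) => key -> rows.map { case (_, a, _, b) => a * b }.sum }

println(result)  // Map((1,1) -> 7), i.e. 1*1 + 2*3
{code}
Both sample rows share the key (1, 1), so their products (1 and 6) are summed; a correct Spark plan for the first snippet would produce the same grouping regardless of the field order in the tuple.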
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)