Posted to issues@spark.apache.org by "sandeshyapuram (Jira)" <ji...@apache.org> on 2019/11/14 09:31:00 UTC

[jira] [Updated] (SPARK-29890) Unable to fill na with 0 with duplicate columns

     [ https://issues.apache.org/jira/browse/SPARK-29890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

sandeshyapuram updated SPARK-29890:
-----------------------------------
    Environment:     (was: Trying to fill out na values with 0.
{code:java}
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.3
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_222)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :paste
// Entering paste mode (ctrl-D to finish)

val parent = spark.sparkContext.parallelize(Seq((1,2),(3,4),(5,6))).toDF("nums", "abc")
val c1 = parent.filter(lit(true))
val c2 = parent.filter(lit(true))
c1.join(c2, Seq("nums"), "left")
.na.fill(0).show
// Exiting paste mode, now interpreting.

ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be used
19/11/14 04:24:24 ERROR org.apache.hadoop.security.JniBasedUnixGroupsMapping: error looking up the name of group 820818257: No such file or directory
org.apache.spark.sql.AnalysisException: Reference 'abc' is ambiguous, could be: abc, abc.;
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:213)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveQuoted(LogicalPlan.scala:117)
  at org.apache.spark.sql.Dataset.resolve(Dataset.scala:220)
  at org.apache.spark.sql.Dataset.col(Dataset.scala:1246)
  at org.apache.spark.sql.DataFrameNaFunctions.org$apache$spark$sql$DataFrameNaFunctions$$fillCol(DataFrameNaFunctions.scala:443)
  at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$7.apply(DataFrameNaFunctions.scala:500)
  at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$7.apply(DataFrameNaFunctions.scala:492)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
  at org.apache.spark.sql.DataFrameNaFunctions.fillValue(DataFrameNaFunctions.scala:492)
  at org.apache.spark.sql.DataFrameNaFunctions.fill(DataFrameNaFunctions.scala:171)
  at org.apache.spark.sql.DataFrameNaFunctions.fill(DataFrameNaFunctions.scala:155)
  at org.apache.spark.sql.DataFrameNaFunctions.fill(DataFrameNaFunctions.scala:134)
  ... 54 elided
{code})
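A common workaround for the failure shown above (a sketch only, not an official fix; it assumes a live `spark` session as in the repro) is to disambiguate the duplicated column before the join, so that `na.fill` can resolve every column name uniquely:

{code:java}
import org.apache.spark.sql.functions.lit

val parent = spark.sparkContext.parallelize(Seq((1, 2), (3, 4), (5, 6))).toDF("nums", "abc")
val c1 = parent.filter(lit(true))
// Rename "abc" in one branch; "abc_right" is an arbitrary name chosen for illustration.
val c2 = parent.filter(lit(true)).withColumnRenamed("abc", "abc_right")

// With unique column names, fill(0) no longer hits the ambiguous-reference error.
c1.join(c2, Seq("nums"), "left").na.fill(0).show()
{code}

Alternatively, selecting only the needed columns (with aliases) before calling `na.fill` avoids the duplicate-name resolution entirely.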

> Unable to fill na with 0 with duplicate columns
> -----------------------------------------------
>
>                 Key: SPARK-29890
>                 URL: https://issues.apache.org/jira/browse/SPARK-29890
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.3.3
>            Reporter: sandeshyapuram
>            Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org