Posted to issues@spark.apache.org by "Allison Wang (Jira)" <ji...@apache.org> on 2021/02/18 00:31:00 UTC

[jira] [Created] (SPARK-34459) map_from_arrays() throws UnsupportedOperationException with array of ColumnarMap

Allison Wang created SPARK-34459:
------------------------------------

             Summary: map_from_arrays() throws UnsupportedOperationException with array of ColumnarMap
                 Key: SPARK-34459
                 URL: https://issues.apache.org/jira/browse/SPARK-34459
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.2.0
            Reporter: Allison Wang


An example to reproduce this error: 
{code:scala}
sql("select map(1, 2) as m_a, map(3, 4) as m_b").write.saveAsTable("t")
sql("select map_from_arrays(array(1, 2), array(m_a, m_b)) from t")
{code}

Exception trace:
{code:java}
java.lang.UnsupportedOperationException
	at org.apache.spark.sql.vectorized.ColumnarMap.copy(ColumnarMap.java:51)
	at org.apache.spark.sql.vectorized.ColumnarMap.copy(ColumnarMap.java:25)
	at org.apache.spark.sql.catalyst.InternalRow$.copyValue(InternalRow.scala:121)
	at org.apache.spark.sql.catalyst.util.GenericArrayData.copy(GenericArrayData.scala:54)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:754)
	at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:80)
	at org.apache.spark.sql.execution.collect.Collector.$anonfun$processPartition$1(Collector.scala:179)
	at org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2541)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:150)
	at org.apache.spark.scheduler.Task.run(Task.scala:119)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:733)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1643)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:736)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
{code}
This happens because ColumnarMap's copy() method is not implemented: it currently throws UnsupportedOperationException, so any code path that needs to copy a map element backed by column vectors (here, GenericArrayData.copy on the array built by map_from_arrays) fails.
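
A minimal sketch of a possible fix, assuming ColumnarMap keeps its key and value columns as ColumnarArray fields named {{keys}} and {{values}} (field names are an assumption here, and this is an illustration rather than the exact upstream patch): deep-copy both arrays into an ArrayBasedMapData so the copied map no longer references the reusable column vectors.
{code:java}
// Sketch for org.apache.spark.sql.vectorized.ColumnarMap#copy (illustrative only).
// Assumes existing ColumnarArray fields `keys` and `values`; imports would go
// at the top of ColumnarMap.java:
//   import org.apache.spark.sql.catalyst.util.ArrayBasedMapData;
//   import org.apache.spark.sql.catalyst.util.MapData;

@Override
public MapData copy() {
  // Deep-copy the columnar key/value arrays into heap-backed ArrayData so the
  // copied map stays valid after the underlying column vectors are reused.
  return new ArrayBasedMapData(keys.copy(), values.copy());
}
{code}
With something along these lines in place, GenericArrayData.copy should be able to copy each ColumnarMap element and the reproducer above should return the array of maps instead of failing.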
