Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/09/22 00:29:37 UTC

[GitHub] [spark] HyukjinKwon opened a new pull request, #37959: [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at

HyukjinKwon opened a new pull request, #37959:
URL: https://github.com/apache/spark/pull/37959

   ### What changes were proposed in this pull request?
   
   This PR removes a non-ANSI-compliant example from the `element_at` documentation.
   
   ### Why are the changes needed?
   
   The ANSI build in CI fails when running the example:
   
   https://github.com/apache/spark/actions/runs/3094607589/jobs/5008176959
   
   ```
   File "/__w/spark/spark/python/pyspark/sql/functions.py", line 6599, in pyspark.sql.functions.element_at
       	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1189)
       	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2897)
       	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2836)
       	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2825)
       	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
       	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
       	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2237)
       	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2258)
       	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2277)
       	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2302)
       	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1020)
       	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
       	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
       	at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
       	at org.apache.spark.rdd.RDD.collect(RDD.scala:1019)
       	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:424)
       	at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3925)
       	at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4095)
       	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:512)
       	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4093)
       	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
       	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
       	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
       	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
       	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
       	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4093)
       	at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3922)
       	at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
       	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       	at java.lang.reflect.Method.invoke(Method.java:498)
       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
       	at py4j.Gateway.invoke(Gateway.java:282)
       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
       	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
       	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
       	at java.lang.Thread.run(Thread.java:750)
       Caused by: org.apache.spark.SparkArrayIndexOutOfBoundsException: [INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index -4 is out of bounds. The array has 3 elements. Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
       	at org.apache.spark.sql.errors.QueryExecutionErrors$.invalidElementAtIndexError(QueryExecutionErrors.scala:264)
       	at org.apache.spark.sql.errors.QueryExecutionErrors.invalidElementAtIndexError(QueryExecutionErrors.scala)
       	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:43)
       	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
       	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
       	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
       	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:889)
       	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:889)
       	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
       	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
       	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
       	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
       	at org.apache.spark.scheduler.Task.run(Task.scala:139)
       	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
       	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
       	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
       	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       	... 1 more
   
   /usr/local/pypy/pypy3.7/lib-python/3/runpy.py:125: RuntimeWarning: 'pyspark.sql.functions' found in sys.modules after import of package 'pyspark.sql', but prior to execution of 'pyspark.sql.functions'; this may result in unpredictable behaviour
     warn(RuntimeWarning(msg))
   /__w/spark/spark/python/pyspark/context.py:310: FutureWarning: Python 3.7 support is deprecated in Spark 3.4.
     warnings.warn("Python 3.7 support is deprecated in Spark 3.4.", FutureWarning)
   **********************************************************************
      1 of   6 in pyspark.sql.functions.element_at
   
   ```
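   
   For reference, below is a minimal sketch of the failing pattern, reconstructed from the error message above; the exact doctest that was removed may differ, and the data and names here are illustrative. Under ANSI mode, `element_at` raises for an out-of-bounds index, while the SQL function `try_element_at` (available since Spark 3.3, and suggested by the error message itself) returns NULL instead.
   
   ```
   # A minimal sketch reconstructed from the error above; the removed doctest
   # may differ. Data and names here are illustrative, not taken from the PR.
   from pyspark.sql import SparkSession
   from pyspark.sql.functions import element_at, expr
   
   spark = SparkSession.builder.getOrCreate()
   spark.conf.set("spark.sql.ansi.enabled", "true")
   
   df = spark.createDataFrame([(["a", "b", "c"],)], ["data"])
   
   # Under ANSI mode this raises INVALID_ARRAY_INDEX_IN_ELEMENT_AT:
   # the array has 3 elements, so index -4 is out of bounds.
   try:
       df.select(element_at(df.data, -4)).show()
   except Exception as e:
       print(type(e).__name__)
   
   # try_element_at tolerates the invalid index and returns NULL
   # instead of raising.
   df.select(expr("try_element_at(data, -4)")).show()
   ```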
   
   ### Does this PR introduce _any_ user-facing change?
   
   No. The example being removed was only recently added and has not been exposed to end users yet.
   
   ### How was this patch tested?
   Manually tested with the ANSI configuration (`spark.sql.ansi.enabled`) enabled.
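   
   A rough sketch of that kind of manual check follows; the PR does not spell out the exact commands, so the session setup and data below are assumptions:
   
   ```
   # A rough sketch of the manual check; exact steps are not stated in the PR.
   from pyspark.sql import SparkSession
   from pyspark.sql.functions import element_at
   
   spark = SparkSession.builder.getOrCreate()
   df = spark.createDataFrame([(["a", "b", "c"],)], ["data"])
   
   # With ANSI off (the default), an out-of-bounds index yields NULL,
   # which is why the example passed the default documentation build:
   spark.conf.set("spark.sql.ansi.enabled", "false")
   df.select(element_at(df.data, -4)).show()  # shows a NULL row
   
   # With ANSI on, the same query raises, reproducing the CI failure:
   spark.conf.set("spark.sql.ansi.enabled", "true")
   # df.select(element_at(df.data, -4)).show()  # would raise
   ```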




[GitHub] [spark] HyukjinKwon commented on pull request #37959: [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #37959:
URL: https://github.com/apache/spark/pull/37959#issuecomment-1254373373

   @gengliangwang mind taking a quick look please? 🙏 




[GitHub] [spark] HyukjinKwon closed pull request #37959: [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at

Posted by GitBox <gi...@apache.org>.
HyukjinKwon closed pull request #37959: [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at
URL: https://github.com/apache/spark/pull/37959




[GitHub] [spark] HyukjinKwon commented on pull request #37959: [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #37959:
URL: https://github.com/apache/spark/pull/37959#issuecomment-1254440082

   Merged to master.

