Posted to issues@spark.apache.org by "Javier Pérez (JIRA)" <ji...@apache.org> on 2016/03/11 11:30:39 UTC

[jira] [Created] (SPARK-13819) using a regexp_replace in a group by clause raises a NullPointerException

Javier Pérez created SPARK-13819:
------------------------------------

             Summary: using a regexp_replace in a group by clause raises a NullPointerException
                 Key: SPARK-13819
                 URL: https://issues.apache.org/jira/browse/SPARK-13819
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.6.0
            Reporter: Javier Pérez


1. Start the Thrift server with start-thriftserver.sh.
2. Connect with Beeline.
3. Run the following query against a table:
  SELECT t0.textsample
  FROM test t0
  ORDER BY regexp_replace(
             t0.code,
             concat('\\Q', 'a', '\\E'),
             regexp_replace(
               regexp_replace('zz', '\\\\', '\\\\\\\\'),
               '\\$',
               '\\\\\\$')) DESC;
Problem: instead of returning results, the query fails with a NullPointerException.
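For context on what the query is doing: Spark's regexp_replace delegates to java.util.regex, so the pattern built by concat('\\Q', 'a', '\\E') is the quoted-literal pattern \Qa\E, which matches the character 'a' verbatim, and the nested inner calls merely escape backslashes and dollar signs in the replacement string so java.util.regex does not interpret them as group references. A minimal sketch of that outer replacement in plain Scala, with no Spark involved (the literal 'a' and the input string here are illustrative, not from the reported table):

```scala
// Sketch: the quoted-literal pattern the query builds, evaluated with
// plain java.util.regex semantics (which Spark's regexp_replace uses).
object QuotedPatternSketch {
  def main(args: Array[String]): Unit = {
    // concat('\\Q', 'a', '\\E') in the SQL builds this pattern string:
    val pattern = "\\Q" + "a" + "\\E" // equivalent to Pattern.quote("a")
    // Replace every literal 'a' with the (already-escaped) replacement "zz":
    val out = "banana".replaceAll(pattern, "zz")
    println(out) // bzznzznzz
  }
}
```

The expression itself is well-formed, which suggests the NullPointerException below comes from RegExpReplace's evaluation state rather than from the regex logic.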

Stack trace:

 java.lang.NullPointerException
	at org.apache.spark.sql.catalyst.expressions.RegExpReplace.nullSafeEval(regexpExpressions.scala:224)
	at org.apache.spark.sql.catalyst.expressions.TernaryExpression.eval(Expression.scala:458)
	at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.compare(ordering.scala:36)
	at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.compare(ordering.scala:27)
	at scala.math.Ordering$class.gt(Ordering.scala:97)
	at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.gt(ordering.scala:27)
	at org.apache.spark.RangePartitioner.getPartition(Partitioner.scala:168)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1$$anonfun$4$$anonfun$apply$4.apply(Exchange.scala:180)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1$$anonfun$4$$anonfun$apply$4.apply(Exchange.scala:180)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.insertAll(BypassMergeSortShuffleWriter.java:119)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:88)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
