Posted to issues@spark.apache.org by "bing huang (JIRA)" <ji...@apache.org> on 2017/05/22 11:40:04 UTC

[jira] [Updated] (SPARK-20837) Spark SQL doesn't support escape of single/double quote as SQL standard.

     [ https://issues.apache.org/jira/browse/SPARK-20837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

bing huang updated SPARK-20837:
-------------------------------
    Description: 
Per the SQL standard, a single quote inside a single-quoted string literal is escaped by doubling it, so the literal 'New ''york'' city' should denote the string New 'york' city. Spark SQL does not parse this escape. The code snippet I used to demonstrate the issue:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.{DataTypes, StructField, StructType}

    val conf = new SparkConf().setAppName("bhuang").setMaster("local[3]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // create the test dataset: rows 1-5 carry the quoted value "New 'york' city",
    // rows 6-10 carry the unquoted "New york city"
    val data = (1 to 10).map {
      case t if t <= 5 => Row("New 'york' city", t.toString, "2015-01-01 13:59:59.123", 2147483647.0, Double.PositiveInfinity)
      case t => Row("New york city", t.toString, "2015-01-02 23:59:59.456", 1.0, Double.PositiveInfinity)
    }

    // create the schema of the test dataset
    val schema = StructType(Array(
      StructField("A1", DataTypes.StringType),
      StructField("A2", DataTypes.StringType),
      StructField("A3", DataTypes.StringType),
      StructField("A4", DataTypes.DoubleType),
      StructField("A5", DataTypes.DoubleType)
    ))
    val rdd = sc.parallelize(data)
    val df = sqlContext.createDataFrame(rdd, schema)
    df.registerTempTable("test")

    // '' inside the string literal is the SQL-standard escape for a single quote,
    // so this predicate should exclude the rows where A1 = "New 'york' city"
    val sqlString = "select A2 from test where A1 not in ('New ''york'' city')"

    sqlContext.sql(sqlString).show(false)
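
For comparison, Spark SQL does accept a backslash-escaped quote inside a single-quoted literal (Hive-style escaping). Below is a minimal sketch against the temp table registered above; the name `workaround` and the expected result noted in the comments are mine, inferred from the issue title rather than taken from the report:

    // In Scala source, "\\'" puts the two characters \' into the SQL text,
    // which Spark SQL's parser accepts as an escaped single quote.
    val workaround = "select A2 from test where A1 not in ('New \\'york\\' city')"

    // Expected: only the rows with A1 = "New york city" (A2 values 6..10),
    // i.e. the result the SQL-standard '' escape should also produce.
    sqlContext.sql(workaround).show(false)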

> Spark SQL doesn't support escape of single/double quote as SQL standard.
> ------------------------------------------------------------------------
>
>                 Key: SPARK-20837
>                 URL: https://issues.apache.org/jira/browse/SPARK-20837
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1, 1.6.2, 1.6.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1
>            Reporter: bing huang
>



