Posted to reviews@spark.apache.org by jayadevanmurali <gi...@git.apache.org> on 2016/01/29 13:08:53 UTC

[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

GitHub user jayadevanmurali opened a pull request:

    https://github.com/apache/spark/pull/10983

    [SPARK-12982][SQL] Add table name validation in temp table registration

    Add the table name validation at the temp table creation
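    
    For context, the gist of the change (as quoted in the review comments further down this thread) is to parse the supplied name instead of wrapping it directly in a TableIdentifier. A minimal sketch, reconstructed from the diff discussed below:
    
        private[sql] def registerDataFrameAsTable(df: DataFrame, tableName: String): Unit = {
          // before: catalog.registerTable(TableIdentifier(tableName), df.logicalPlan)
          // after:  parse the name first, so invalid identifiers are rejected at registration time
          catalog.registerTable(SqlParser.parseTableIdentifier(tableName), df.logicalPlan)
        }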

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jayadevanmurali/spark branch-0.1-SPARK-SPARK-12982

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/10983.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #10983
    
----
commit ef5d7c72d53d06b517f95201094a21fbb97d0006
Author: jayadevanmurali <ja...@tcs.com>
Date:   2016-01-29T11:55:43Z

    Update SQLContext.scala
    
    Add the table name validation at the temp table creation

----




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-179886072
  
    Created a new PR with the latest code for this issue.




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177434727
  
    @jayadevanmurali please don't close the PR; your fix is valid. We really shouldn't use table names like ```t~``` unless they are quoted using backticks. Please update your PR and add a test to ```SQLQuerySuite```.
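    
    To illustrate the quoting rule referred to above (a sketch only; whether the unquoted call fails at registration time depends on the validation this PR adds):
    
        df.registerTempTable("t~")        // unquoted: '~' is not a legal identifier character, should be rejected
        df.registerTempTable("`t~`")      // backtick-quoted: a valid quoted identifier, should be accepted
        df.sqlContext.dropTempTable("`t~`")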




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10983#discussion_r51342667
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -747,7 +747,7 @@ class SQLContext private[sql](
        * only during the lifetime of this instance of SQLContext.
        */
       private[sql] def registerDataFrameAsTable(df: DataFrame, tableName: String): Unit = {
    -    catalog.registerTable(TableIdentifier(tableName), df.logicalPlan)
    +    catalog.registerTable(SqlParser.parseTableIdentifier(tableName), df.logicalPlan)
    --- End diff --
    
    Ok, I can see the variable definition at line 211 of SQLContext.scala:
    
        @transient
        protected[sql] val sqlParser = new SparkSQLParser(getSQLDialect().parse(_))
    
    But this variable is not used anywhere. All methods use SqlParser.parseTableIdentifier() instead, for example:
    
        @Experimental
        def createExternalTable(
            tableName: String,
            source: String,
            options: Map[String, String]): DataFrame = {
          val tableIdent = SqlParser.parseTableIdentifier(tableName)  // <-- uses the SqlParser object directly
          val cmd =
            CreateTableUsing(
              tableIdent,
              userSpecifiedSchema = None,
              source,
              temporary = false,
              options,
              allowExisting = false,
              managedIfNoPath = false)
          executePlan(cmd).toRdd
          table(tableIdent)
        }
    
    Correct me if I am wrong.




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177943412
  
    Sure it is. Why not just merge the current master?




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-179867521
  
    @jayadevanmurali could you close this one?




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177942271
  
    @hvanhovell the current pull request is based on outdated code, so I would like to create a new pull request against the latest code. Hope that is fine.




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177392588
  
    Thanks @hvanhovell, got your point. I updated my code and repeated the steps, and I was able to replicate this. Please check the steps:
    
    jayadevan@Satellite-L640:~/spark$ ./bin/spark-shell
    NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel).
    16/01/31 09:27:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    16/01/31 09:27:57 WARN Utils: Your hostname, Satellite-L640 resolves to a loopback address: 127.0.1.1, but we couldn't find any external IP address!
    16/01/31 09:27:57 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    Spark context available as sc (master = local[*], app id = local-1454212680541).
    SQL context available as sqlContext.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
          /_/
             
    Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
    Type in expressions to have them evaluated.
    Type :help for more information.
    
    scala> import org.apache.spark.sql.types.{StringType, StructField, StructType}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    
    scala> import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    
    scala> import org.apache.spark.{SparkContext, SparkConf}
    import org.apache.spark.{SparkContext, SparkConf}
    
    scala> val rows = List(Row("foo"), Row("bar"));
    rows: List[org.apache.spark.sql.Row] = List([foo], [bar])
    
    scala> val schema = StructType(Seq(StructField("col", StringType)));
    schema: org.apache.spark.sql.types.StructType = StructType(StructField(col,StringType,true))
    
    scala> val rdd = sc.parallelize(rows);
    rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = ParallelCollectionRDD[0] at parallelize at <console>:29
    
    scala> val df = sqlContext.createDataFrame(rdd, schema)
    df: org.apache.spark.sql.DataFrame = [col: string]
    
    scala> df.registerTempTable("t~")
    
    scala> df.sqlContext.dropTempTable("t~")
    org.apache.spark.sql.AnalysisException: NoViableAltException(327@[209:20: ( DOT id2= identifier )?])
    ; line 1 pos 1
      at org.apache.spark.sql.catalyst.parser.ParseErrorReporter.throwError(ParseDriver.scala:158)
      at org.apache.spark.sql.catalyst.parser.ParseErrorReporter.throwError(ParseDriver.scala:147)
      at org.apache.spark.sql.catalyst.parser.ParseDriver$.parse(ParseDriver.scala:95)
      at org.apache.spark.sql.catalyst.parser.ParseDriver$.parseTableName(ParseDriver.scala:42)
      at org.apache.spark.sql.catalyst.CatalystQl.parseTableIdentifier(CatalystQl.scala:81)
      at org.apache.spark.sql.SQLContext.table(SQLContext.scala:811)
      at org.apache.spark.sql.SQLContext.dropTempTable(SQLContext.scala:738)
      ... 49 elided
    
    So I will close this pull request and raise a new one.
    





[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10983#discussion_r51254950
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -747,7 +747,7 @@ class SQLContext private[sql](
        * only during the lifetime of this instance of SQLContext.
        */
       private[sql] def registerDataFrameAsTable(df: DataFrame, tableName: String): Unit = {
    -    catalog.registerTable(TableIdentifier(tableName), df.logicalPlan)
    +    catalog.registerTable(SqlParser.parseTableIdentifier(tableName), df.logicalPlan)
    --- End diff --
    
    We have removed ```SqlParser``` recently. Could you merge/rebase to the most recent master, and use the ```sqlParser``` variable for this?
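    
    A sketch of what that suggestion could look like on a rebased master, assuming the sqlParser instance exposes a parseTableIdentifier method equivalent to the removed object's:
    
        private[sql] def registerDataFrameAsTable(df: DataFrame, tableName: String): Unit = {
          catalog.registerTable(sqlParser.parseTableIdentifier(tableName), df.logicalPlan)
        }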




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-176728277
  
    @jayadevanmurali I am not sure if the problem described in the JIRA is still an issue in the current master. Could you check this?




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177448445
  
    @jayadevanmurali could you add the test to ```DataFrameSuite``` instead of ```SQLQuerySuite```? Sorry about the confusion.
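    
    A minimal sketch of the kind of test being asked for here (the test name, the expected exception type, and the exact setup are assumptions, not taken from the final patch):
    
        test("SPARK-12982: invalid temp table names are rejected at registration") {
          val rows = sparkContext.parallelize(Seq(Row("foo"), Row("bar")))
          val schema = StructType(Seq(StructField("col", StringType)))
          val df = sqlContext.createDataFrame(rows, schema)
          intercept[AnalysisException] {
            df.registerTempTable("t~")       // unquoted '~' should fail fast
          }
          df.registerTempTable("`t~`")       // a backtick-quoted name should still register
          sqlContext.dropTempTable("`t~`")
        }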




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177540750
  
    @hvanhovell yes sure, no problem




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177109715
  
    @hvanhovell 
    I was able to replicate this in Spark 2.0.0.
    
    Steps:
    jayadevan@Satellite-L640:~/spark$ ./bin/spark-shell
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel).
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
          /_/
    
    Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
    Type in expressions to have them evaluated.
    Type :help for more information.
    16/01/30 14:19:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    16/01/30 14:19:22 WARN Utils: Your hostname, Satellite-L640 resolves to a loopback address: 127.0.1.1; using 100.86.225.72 instead (on interface ppp0)
    16/01/30 14:19:22 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    Spark context available as sc (master = local[*], app id = local-1454143767817).
    SQL context available as sqlContext.
    
    scala> import org.apache.spark.sql.types.{StringType, StructField, StructType}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    
    scala> import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    
    scala> import org.apache.spark.{SparkContext, SparkConf}
    import org.apache.spark.{SparkContext, SparkConf}
    
    scala>  val rows = List(Row("foo"), Row("bar"));
    rows: List[org.apache.spark.sql.Row] = List([foo], [bar])
    
    scala> val schema = StructType(Seq(StructField("col", StringType)));
    schema: org.apache.spark.sql.types.StructType = StructType(StructField(col,StringType,true))
    
    scala> val rdd = sc.parallelize(rows);
    rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = ParallelCollectionRDD[0] at parallelize at <console>:32
    
    scala> val df = sqlContext.createDataFrame(rdd, schema)
    df: org.apache.spark.sql.DataFrame = [col: string]
    
    scala> df.registerTempTable("t~")
    
    scala> df.sqlContext.dropTempTable("t~")
    java.lang.RuntimeException: [1.2] failure: ``.'' expected but `~' found
    
    t~
     ^
    	at scala.sys.package$.error(package.scala:27)
    	at org.apache.spark.sql.catalyst.SqlParser$.parseTableIdentifier(SqlParser.scala:58)
    	at org.apache.spark.sql.SQLContext.table(SQLContext.scala:836)
    	at org.apache.spark.sql.SQLContext.dropTempTable(SQLContext.scala:763)
    	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
    	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
    	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
    	at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
    	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:50)
    	at $iwC$$iwC$$iwC.<init>(<console>:52)
    	at $iwC$$iwC.<init>(<console>:54)
    	at $iwC.<init>(<console>:56)
    	at <init>(<console>:58)
    	at .<init>(<console>:62)
    	at .<clinit>(<console>)
    	at .<init>(<console>:7)
    	at .<clinit>(<console>)
    	at $print(<console>)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:606)
    	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
    	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
    	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
    	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
    	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
    	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    	at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    	at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    	at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    	at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    	at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1064)
    	at org.apache.spark.repl.Main$.main(Main.scala:31)
    	at org.apache.spark.repl.Main.main(Main.scala)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:606)
    	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    
    
    scala> 
    
    





[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177447251
  
    Thanks @hvanhovell, working on the test case.




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali closed the pull request at:

    https://github.com/apache/spark/pull/10983




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-176725239
  
    Can one of the admins verify this patch?




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177137733
  
    You are using an older version of the master branch (last commit 25 days ago). Your version still has the ```org.apache.spark.sql.catalyst.SqlParser``` class, which was removed in commit https://github.com/apache/spark/commit/7cd7f2202547224593517b392f56e49e4c94cabc.
    
    Please update your master and try again.




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-179270485
  
    @hvanhovell I have created a new PR for this issue with the latest code: https://github.com/apache/spark/pull/11051. Could you please review it?




[GitHub] spark pull request: [SPARK-12982][SQL] Add table name validation i...

Posted by jayadevanmurali <gi...@git.apache.org>.
Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177969053
  
    @hvanhovell, yes, I tried to merge my branch (https://github.com/jayadevanmurali/spark/tree/branch-0.1-SPARK-SPARK-12982) with the Apache Spark master, but I can't see any option to merge via "create pull request". That's why I suggested the new-PR option.

