Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/03/03 03:15:55 UTC

[GitHub] [spark] gengliangwang edited a comment on pull request #35690: [SPARK-38335][SQL] Implement parser support for DEFAULT column values

gengliangwang edited a comment on pull request #35690:
URL: https://github.com/apache/spark/pull/35690#issuecomment-1057620996


   Supporting default column values is very common among DBMSs. However, this will be a breaking change for Spark SQL.
   Currently, Spark SQL rejects an INSERT that supplies fewer values than the target table has columns:
   ```
   > create table t(i int, j int);
   > insert into t values(1);
   Error in query: `default`.`t` requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).
   ```
   
   After supporting default column values, a missing column is filled with NULL, or with its declared DEFAULT if one exists:
   ```
   > create table t(i int, j int);
   > insert into t values(1);
   > select * from t;
   1	NULL
   
   > create table t2(i int, j int default 0);
   > insert into t2 values(1);
   > select * from t2;
   1	0
   ```
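   To make the intended semantics concrete, here is a minimal sketch (hypothetical, not Spark's actual implementation) of how an INSERT row with fewer values than the target schema could be padded: each missing trailing column takes its declared DEFAULT, or NULL (`None`) when no default is declared. The `pad_with_defaults` name and the `(column_name, default)` schema representation are illustrative assumptions.

   ```python
   # Hypothetical sketch, not Spark's implementation: pad an INSERT row
   # with defaults when it supplies fewer values than the target schema.
   def pad_with_defaults(row, schema):
       """schema: list of (column_name, default) pairs; default is None
       when the column has no declared DEFAULT."""
       missing = schema[len(row):]
       return list(row) + [default for _name, default in missing]

   # t(i int, j int): j has no declared default, so it becomes NULL (None)
   print(pad_with_defaults([1], [("i", None), ("j", None)]))  # [1, None]

   # t2(i int, j int default 0): j takes its declared default 0
   print(pad_with_defaults([1], [("i", None), ("j", 0)]))     # [1, 0]
   ```

   This matches the two `select * from t` results shown above: `1 NULL` for the table without a default, and `1 0` for the table with `default 0`.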
   
   I am +1 on the change.
   Before merging this PR, I would like to collect the opinions of more committers. We can send SPIP for voting if necessary.
   cc @cloud-fan  @dongjoon-hyun @viirya @dbtsai @huaxingao @maropu @zsxwing @wangyum @yaooqinn WDYT? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
