Posted to issues@spark.apache.org by "LvDongrong (JIRA)" <ji...@apache.org> on 2018/01/25 03:35:00 UTC

[jira] [Commented] (SPARK-23210) Introduce the concept of default value to schema

    [ https://issues.apache.org/jira/browse/SPARK-23210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338675#comment-16338675 ] 

LvDongrong commented on SPARK-23210:
------------------------------------

 Can we set the default value to be NULL, like Hive? @maropu @gatorsmile Thank you!

> Introduce the concept of default value to schema
> ------------------------------------------------
>
>                 Key: SPARK-23210
>                 URL: https://issues.apache.org/jira/browse/SPARK-23210
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.1
>            Reporter: LvDongrong
>            Priority: Major
>
> There is no concept of a DEFAULT VALUE for a schema in Spark now.
> Our team wants to support inserting into a subset of a table's columns, like "insert into (a, c) values ('value1', 'value2')", for our use case, but the default value of a column is not defined. In Hive, the default value of a column is NULL if we don't specify one.
> So I think it may be necessary to introduce the concept of a default value to schema in Spark.
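
A minimal sketch of the behavior being requested. Since Spark itself does not support this (which is the point of the issue), the demo below uses Python's built-in sqlite3, where omitted columns fall back to their declared default, or NULL when none is declared; the table and column names are illustrative only:

```python
# Illustration only (not Spark): inserting into a subset of columns.
# Columns left out of the INSERT column list take their default value,
# which is NULL when no default is declared -- the Hive behavior the
# comment refers to.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT)")

# Column b is omitted from the column list.
conn.execute("INSERT INTO t (a, c) VALUES ('value1', 'value2')")

row = conn.execute("SELECT a, b, c FROM t").fetchone()
print(row)  # ('value1', None, 'value2') -- omitted column b is NULL
```
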



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org