Posted to issues@spark.apache.org by "Leanken.Lin (JIRA)" <ji...@apache.org> on 2019/06/14 06:52:00 UTC
[jira] [Updated] (SPARK-28050) DataFrameWriter support insertInto a specific table partition
[ https://issues.apache.org/jira/browse/SPARK-28050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Leanken.Lin updated SPARK-28050:
--------------------------------
Description:
```
val ptTableName = "mc_test_pt_table"
sql(s"CREATE TABLE ${ptTableName} (name STRING, num BIGINT) PARTITIONED BY (pt1 STRING, pt2 STRING)")

// Requires `import spark.implicits._` for toDF
val df = spark.sparkContext.parallelize(0 to 99, 2)
  .map(f => (s"name-$f", f))
  .toDF("name", "num")

// To insert df into a specific partition, say pt1='2018', pt2='0601',
// the current API offers no direct support; the only workaround is
// to register a temp view and fall back to SQL:
df.createOrReplaceTempView(s"${ptTableName}_tmp_view")
sql(s"INSERT INTO TABLE ${ptTableName} PARTITION (pt1='2018', pt2='0601') SELECT * FROM ${ptTableName}_tmp_view")
```
Propose adding another API to DataFrameWriter that can do something like:
```
df.write.insertInto(ptTableName, "pt1='2018',pt2='0601'")
```
We have many scenarios like this in our production environment; providing an API like this would make them much less painful.
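As a sketch of how such an overload might be layered on top of the existing workaround, the partition spec string could be parsed into key/value pairs and spliced into the same INSERT statement the temp-view workaround runs by hand. The names below (PartitionInsertSketch, parsePartitionSpec, insertSql) are hypothetical helpers for illustration, not part of any Spark API:

```scala
// Hypothetical sketch of the string handling a DataFrameWriter overload
// could do internally; not actual Spark code.
object PartitionInsertSketch {
  // "pt1='2018',pt2='0601'" -> Seq(("pt1", "2018"), ("pt2", "0601"))
  def parsePartitionSpec(spec: String): Seq[(String, String)] =
    spec.split(",").toSeq.map { kv =>
      val Array(k, v) = kv.split("=", 2)
      (k.trim, v.trim.stripPrefix("'").stripSuffix("'"))
    }

  // Build the SQL statement the temp-view workaround issues manually today.
  def insertSql(table: String, tempView: String, spec: String): String = {
    val partClause = parsePartitionSpec(spec)
      .map { case (k, v) => s"$k='$v'" }
      .mkString(", ")
    s"INSERT INTO TABLE $table PARTITION ($partClause) SELECT * FROM $tempView"
  }
}
```

With such an overload in place, the temp-view registration and hand-written SQL above would collapse into the single proposed call, `df.write.insertInto(ptTableName, "pt1='2018',pt2='0601'")`.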
> DataFrameWriter support insertInto a specific table partition
> -------------------------------------------------------------
>
> Key: SPARK-28050
> URL: https://issues.apache.org/jira/browse/SPARK-28050
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Affects Versions: 2.3.3, 2.4.3
> Reporter: Leanken.Lin
> Priority: Minor
> Fix For: 2.3.3, 2.4.3
>
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org