Posted to issues@spark.apache.org by "Navin Goel (JIRA)" <ji...@apache.org> on 2017/03/11 14:29:04 UTC
[jira] [Commented] (SPARK-19742) When using SparkSession to write a
dataset to Hive the schema is ignored
[ https://issues.apache.org/jira/browse/SPARK-19742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906213#comment-15906213 ]
Navin Goel commented on SPARK-19742:
------------------------------------
Workaround code in Java for anyone else who is interested.
{code}
import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public void saveWithSchema(Dataset<Row> dataset, String tableName) {
    // Reorder (and null-pad) the dataset's columns to match the target
    // table's column order before insertInto(), which resolves columns
    // by position rather than by name.
    dataset.selectExpr(getSchema(tableName, dataset)).write().insertInto(tableName);
}

private String[] getSchema(String table, Dataset<Row> ds) {
    SparkSession spark = new SparkSession.Builder()
            .master(settings().sparkMaster)
            .appName(settings().sparkAppName)
            .enableHiveSupport()
            .getOrCreate();
    // Read a single row just to discover the target table's column order.
    Dataset<Row> tmp = spark.sql("select * from " + table + " limit 1");
    String[] sourceColumns = ds.columns();
    String[] targetColumns = tmp.columns();
    String[] result = new String[targetColumns.length];
    for (int i = 0; i < targetColumns.length; i++) {
        String t = targetColumns[i];
        if (containsCaseInsensitive(t, Arrays.asList(sourceColumns))) {
            result[i] = t;
        } else {
            // Column exists in the target table but not in the dataset.
            result[i] = "null as " + t;
        }
    }
    return result;
}

private static boolean containsCaseInsensitive(String s, List<String> l) {
    for (String candidate : l) {
        if (candidate.equalsIgnoreCase(s)) {
            return true;
        }
    }
    return false;
}
{code}
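The column-alignment step in getSchema can be exercised without a Spark cluster. Below is a minimal, self-contained sketch of just that logic; the class name SchemaAlign and the buildSelect helper are illustrative names of mine, not part of Spark or of the workaround above.

```java
import java.util.Arrays;
import java.util.List;

public class SchemaAlign {
    // Build a selectExpr() argument list that reorders the source columns
    // to match the target table's column order, filling gaps with NULLs.
    static String[] buildSelect(String[] targetColumns, String[] sourceColumns) {
        List<String> source = Arrays.asList(sourceColumns);
        String[] result = new String[targetColumns.length];
        for (int i = 0; i < targetColumns.length; i++) {
            String t = targetColumns[i];
            result[i] = containsCaseInsensitive(t, source) ? t : "null as " + t;
        }
        return result;
    }

    static boolean containsCaseInsensitive(String s, List<String> l) {
        for (String candidate : l) {
            if (candidate.equalsIgnoreCase(s)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Target table order differs from the source dataset's order,
        // and "extra" has no counterpart in the source at all.
        String[] target = {"countrycode", "classcode", "extra"};
        String[] source = {"ClassCode", "countrycode"};
        System.out.println(Arrays.toString(buildSelect(target, source)));
        // prints [countrycode, classcode, null as extra]
    }
}
```

Note that the match is case-insensitive (ClassCode vs classcode), which mirrors Hive's case-insensitive column names.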
> When using SparkSession to write a dataset to Hive the schema is ignored
> ------------------------------------------------------------------------
>
> Key: SPARK-19742
> URL: https://issues.apache.org/jira/browse/SPARK-19742
> Project: Spark
> Issue Type: Bug
> Components: Java API
> Affects Versions: 2.0.1
> Environment: Running on Ubuntu with HDP 2.4.
> Reporter: Navin Goel
>
> I am saving a Dataset, created from reading a JSON file plus some selects and filters, into a Hive table. The dataset.write().insertInto function does not consult the schema when writing to the table; instead it writes the columns positionally into the Hive table.
> The schemas of the two tables are the same.
> Schema of the dataset being written, as printed by Spark:
> StructType(StructField(countrycode,StringType,true), StructField(systemflag,StringType,true), StructField(classcode,StringType,true), StructField(classname,StringType,true), StructField(rangestart,StringType,true), StructField(rangeend,StringType,true), StructField(tablename,StringType,true), StructField(last_updated_date,TimestampType,true))
> Schema of the dataset after loading the same table back from Hive:
> StructType(StructField(systemflag,StringType,true), StructField(RangeEnd,StringType,true), StructField(classcode,StringType,true), StructField(classname,StringType,true), StructField(last_updated_date,TimestampType,true), StructField(countrycode,StringType,true), StructField(rangestart,StringType,true), StructField(tablename,StringType,true))
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)