Posted to reviews@spark.apache.org by robbyki <gi...@git.apache.org> on 2018/01/04 13:58:10 UTC

[GitHub] spark issue #5618: [SPARK-7039][SQL]JDBCRDD: Add support on type NVARCHAR

Github user robbyki commented on the issue:

    https://github.com/apache/spark/pull/5618
  
    How can I create a schema outside of Spark containing columns with VARCHAR and NVARCHAR, save a DataFrame with truncate = true, and avoid an invalid datatype error for TEXT in Netezza by registering a new dialect? My current dialect maps StringType to either VARCHAR or NVARCHAR, but I can't have both, and I don't understand how to customize and persist table schemas without my dialect overwriting everything. Using Spark 2.1.
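
    For context, a minimal sketch of the kind of custom dialect the question describes (the `NetezzaDialect` name and the `VARCHAR(255)` default are illustrative assumptions, not from the original thread). It also shows why "both" is not possible: `JdbcDialect.getJDBCType` is keyed on the Catalyst `DataType` alone, so a single dialect must pick one database type per Spark type. A common workaround is to create the table in the database yourself and write with `truncate = true`, so Spark reuses the existing column definitions instead of generating them from the dialect.

    ```scala
    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
    import org.apache.spark.sql.types._

    // Hypothetical Netezza dialect sketch: canHandle selects this dialect
    // for matching JDBC URLs, getJDBCType chooses the DDL type emitted
    // when Spark creates or recreates the table.
    object NetezzaDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean =
        url.startsWith("jdbc:netezza")

      // Keyed only on the Catalyst DataType, so every StringType column
      // gets the same database type; per-column VARCHAR vs NVARCHAR is
      // not expressible here.
      override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
        case StringType => Some(JdbcType("VARCHAR(255)", java.sql.Types.VARCHAR))
        case _          => None
      }
    }

    JdbcDialects.registerDialect(NetezzaDialect)
    ```

    With the table pre-created (mixing VARCHAR and NVARCHAR columns as needed), `df.write.mode("overwrite").option("truncate", "true").jdbc(url, table, props)` truncates rather than drops it, so the hand-written schema survives and the dialect's type mapping is never consulted.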


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org