Posted to reviews@spark.apache.org by robbyki <gi...@git.apache.org> on 2018/01/04 11:37:18 UTC

[GitHub] spark issue #8374: [SPARK-10101] [SQL] Add maxlength to JDBC field metadata ...

Github user robbyki commented on the issue:

    https://github.com/apache/spark/pull/8374
  
    Apologies if I'm misunderstanding this issue, but I've been going through several resources trying to work out how to keep a schema that was created outside of Spark: I want to truncate my tables from Spark and then write with SaveMode.Overwrite. My problem is exactly this issue. My database (Netezza) fails when Spark tries to save a TEXT data type, so I have to specify VARCHAR(n) in a custom JDBC dialect. That does work, but it replaces all of my VARCHAR columns (which have different lengths) with the single length I hard-coded in the dialect, which is not what I want. How can I have it save TEXT as VARCHAR without specifying one fixed length in the custom dialect?
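    For reference, here is a rough sketch of the kind of custom dialect I mean, so the limitation is concrete. This is not authoritative: the NetezzaDialect name, the jdbc:netezza URL prefix, and the 255 length are placeholders I picked for illustration. The dialect overrides getJDBCType, which only sees the Spark DataType, so every StringType column gets the same VARCHAR length:

        import java.sql.Types
        import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
        import org.apache.spark.sql.types.{DataType, StringType}

        // Sketch of a custom dialect: maps Spark StringType to VARCHAR with
        // one fixed, placeholder length -- exactly the "same length for every
        // column" problem described above, since getJDBCType has no access
        // to per-column length information.
        object NetezzaDialect extends JdbcDialect {
          override def canHandle(url: String): Boolean =
            url.startsWith("jdbc:netezza")

          override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
            case StringType => Some(JdbcType("VARCHAR(255)", Types.VARCHAR))
            case _          => None // fall back to Spark's default mappings
          }
        }

        JdbcDialects.registerDialect(NetezzaDialect)

    One workaround that may fit my case, depending on Spark version: avoid letting Spark recreate the table at all. Writing with .mode(SaveMode.Overwrite).option("truncate", "true") makes Spark issue a TRUNCATE on the existing table instead of dropping and recreating it, so the column definitions created outside of Spark (with their per-column lengths) survive the overwrite.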


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org