Posted to dev@spark.apache.org by Georg Heiler <ge...@gmail.com> on 2017/07/06 04:25:00 UTC

custom column types for JDBC datasource writer

Hi,
is it possible to make Spark use something larger than VARCHAR(255), e.g. CLOB,
for String columns?

If not, is it at least possible to catch the exception that is thrown? It
seems that Spark catches and logs it, so I can no longer intervene and
handle it myself:

https://stackoverflow.com/questions/44927764/spark-jdbc-oracle-long-string-fields

Regards,
Georg
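
One workaround in released Spark versions is to register a custom JdbcDialect that overrides the default type mapping. A sketch, assuming an Oracle target URL and Spark's org.apache.spark.sql.jdbc developer API (dialect and column names here are illustrative):

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types.{DataType, StringType}

// Map Spark's StringType to Oracle CLOB instead of the default VARCHAR(255).
object OracleClobDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:oracle")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("CLOB", Types.CLOB))
    case _          => None // fall back to the default mapping
  }
}

// Register the dialect before calling df.write.jdbc(...).
JdbcDialects.registerDialect(OracleClobDialect)
```

Registered dialects take precedence over the built-in ones for matching URLs, so this changes the DDL Spark generates when it creates the target table.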

Re: custom column types for JDBC datasource writer

Posted by Georg Heiler <ge...@gmail.com>.
Great, thanks!
But in the current release, is there any way to catch the exception and handle
it, i.e. not have Spark merely log it to the console?

Takeshi Yamamuro <li...@gmail.com> wrote on Thu, 6 July 2017 at 06:44:

> -dev +user
>
> You can do this in master; see
> https://github.com/apache/spark/commit/c7911807050227fcd13161ce090330d9d8daa533
> This option will be available in the next release.
>
> // maropu
> --
> ---
> Takeshi Yamamuro
>

Re: custom column types for JDBC datasource writer

Posted by Takeshi Yamamuro <li...@gmail.com>.
-dev +user

You can do this in master; see
https://github.com/apache/spark/commit/c7911807050227fcd13161ce090330d9d8daa533
This option will be available in the next release.

// maropu
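
For reference, the option added in that commit is createTableColumnTypes; a sketch of its use, with hypothetical connection URL, table name, and credentials (per the commit, the override strings must be valid Spark SQL data types, so a wider VARCHAR is used here rather than CLOB):

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jdbc-write-sketch").getOrCreate()
import spark.implicits._

val df = Seq((1L, "a string longer than 255 characters ...")).toDF("id", "comments")

val props = new Properties()
props.setProperty("user", "scott")     // hypothetical credentials
props.setProperty("password", "tiger")

df.write
  // Override the default VARCHAR(255) mapping for selected columns when
  // Spark creates the target table; unlisted columns keep the defaults.
  .option("createTableColumnTypes", "comments VARCHAR(4000)")
  .jdbc("jdbc:oracle:thin:@//host:1521/db", "test_table", props) // hypothetical target
```

This only affects the CREATE TABLE statement Spark issues; it does not change how existing tables are written to.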




-- 
---
Takeshi Yamamuro