Posted to user@spark.apache.org by Andrés Ivaldi <ia...@gmail.com> on 2016/05/26 22:02:08 UTC

Insert into JDBC

Hello,
 I've realized that when a DataFrame executes an insert, it inserts by
schema column order instead of by column name, i.e.

dataframe.write(SaveMode).jdbc(url, table, properties)

Watching the profiler, the executed statement is

insert into TableName values(a,b,c..)

What I need is
insert into TableNames (colA,colB,colC) values(a,b,c)
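The difference between the two statements can be sketched in plain Scala (the table and column names are the hypothetical ones from the example above; this only builds the SQL strings, it is not the Spark internals):

```scala
// Sketch: positional vs. column-named INSERT for the same three columns.
// "TableNames" and colA..colC are the example's hypothetical names.
val table = "TableNames"
val columns = Seq("colA", "colB", "colC")
val placeholders = columns.map(_ => "?").mkString(",")

// What the profiler shows: target columns implied by schema order.
val positional = s"INSERT INTO $table VALUES ($placeholders)"

// What is wanted: columns named explicitly, so the table's physical
// column order no longer matters.
val named = s"INSERT INTO $table (${columns.mkString(",")}) VALUES ($placeholders)"

println(positional) // INSERT INTO TableNames VALUES (?,?,?)
println(named)      // INSERT INTO TableNames (colA,colB,colC) VALUES (?,?,?)
```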

Is there some configuration for this?

regards.

-- 
Ing. Ivaldi Andres

Re: Insert into JDBC

Posted by Andrés Ivaldi <ia...@gmail.com>.
Done; version 1.6.1 has the fix. I updated and it works fine.

Thanks.



-- 
Ing. Ivaldi Andres

Re: Insert into JDBC

Posted by Anthony May <an...@gmail.com>.
It's on the 1.6 branch

Re: Insert into JDBC

Posted by Andrés Ivaldi <ia...@gmail.com>.
I see. I'm using Spark 1.6.0, and that change seems to be for 2.0, or maybe
it's in 1.6.1, looking at the history.
Thanks, I'll see if I can update Spark to 1.6.1.



-- 
Ing. Ivaldi Andres

Re: Insert into JDBC

Posted by Anthony May <an...@gmail.com>.
It doesn't appear to be configurable, but it is inserting by column name:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
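Roughly, the linked helper builds the column-named form from the DataFrame schema. A simplified sketch (the shape of it, not the exact Spark source):

```scala
// Simplified sketch of what the linked JdbcUtils code does: take the
// column names from the schema and name each one explicitly in the
// INSERT, with one "?" placeholder per column for a PreparedStatement.
def insertStatement(table: String, columns: Seq[String]): String = {
  val cols = columns.mkString(",")
  val placeholders = columns.map(_ => "?").mkString(",")
  s"INSERT INTO $table ($cols) VALUES ($placeholders)"
}

println(insertStatement("TableNames", Seq("colA", "colB", "colC")))
// INSERT INTO TableNames (colA,colB,colC) VALUES (?,?,?)
```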
