Posted to user@ignite.apache.org by Ganesh Sarde <ga...@gmail.com> on 2018/01/24 07:20:18 UTC

Ignite performing slow on cluster.

Hi,

We are using the Ignite JDBC thin driver to store 1 million records in a
table in the Ignite cache. Inserting 1 million records on a single node
takes about 60 seconds, whereas on a cluster of 2 nodes it takes 5 minutes,
and the time grows exponentially as the number of nodes increases.

I have attached the Ignite log file showing where the time was spent on the
cluster, as well as the configuration file for the cluster. The log and
configuration file are here:
<https://drive.google.com/drive/folders/1x0qdBMDyjYEaJI-UF17kif63uLPJKq94>

Is there any additional configuration required to bring down the time to
insert records over a cluster?

Thanks & Regards,

Ganesh Sarde

Re: Ignite performing slow on cluster.

Posted by vkulichenko <va...@gmail.com>.
Ganesh,

The thin driver uses one of the nodes as a gateway, so once you add a second
node, half of the updates have to make two network hops instead of one, and
some slowdown is expected. However, it should not get any worse as you add a
third, fourth, or more nodes.

The best option in this case is to use the client node driver [1] with the
'streaming' option set to true. I would recommend trying it out and checking
the results.

[1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
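
For reference, here is a rough sketch of what the connection setup could look
like. The Spring config path below is just a placeholder for your actual
client-side configuration, and depending on your setup you may also need to
pass the cache name as a URL parameter:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StreamingInsertSketch {
    public static void main(String[] args) throws Exception {
        // Register the client node driver; it needs ignite-core and
        // ignite-spring on the classpath, since the cfg:// URL points to a
        // Spring XML configuration file.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // 'streaming=true' buffers INSERTs and streams them to the cluster in
        // the background instead of applying each update synchronously.
        // The config file path below is a placeholder.
        String url = "jdbc:ignite:cfg://streaming=true@file:///path/to/ignite-jdbc.xml";

        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO Mon1_0 (T_ID,kpi4,kpi5,id,kpi2,kpi3,kpi1) "
                             + "VALUES(?,?,?,?,?,?,?)")) {
            // Reuse your existing setXxx()/addBatch()/executeBatch() loop here.
            // Streamed updates are flushed asynchronously and are guaranteed to
            // be written by the time the connection is closed.
        }
    }
}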

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Ignite performing slow on cluster.

Posted by Ganesh Sarde <ga...@gmail.com>.
Hi,
Following is the Java code, as requested.

sql = "INSERT INTO Mon1_0 (T_ID,kpi4,kpi5,id,kpi2,kpi3,kpi1) VALUES(?,?,?,?,?,?,?)"

preparedStatement = con.prepareStatement(sql.toString(),
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);

int rowNum = 1;
int tempCnt = 0;

for (RowDTO row : rows) {
    int col = 2;

    preparedStatement.setInt(1, rowNum++);

    for (String cols : fields) {
        MetaDataType type = metaMap.get(cols);

        if (type == null || row.getColumnValue(cols) == null) {
            preparedStatement.setNull(col++, Types.NULL);
            continue;
        }

        switch (type) {
        case LONG:
            preparedStatement.setLong(col++, (Long) row.getColumnValue(cols));
            break;
        case STRING:
            preparedStatement.setString(col++, row.getColumnValue(cols).toString());
            break;
        case DOUBLE:
            preparedStatement.setDouble(col++, (Double) row.getColumnValue(cols));
            break;
        case FLOAT:
            preparedStatement.setFloat(col++, (Float) row.getColumnValue(cols));
            break;
        case INTEGER:
            preparedStatement.setInt(col++, (Integer) row.getColumnValue(cols));
            break;
        default:
            break;
        }
    }

    preparedStatement.addBatch();
    tempCnt++;

    if (tempCnt % 100000 == 0) {
        preparedStatement.executeBatch();
        System.out.println("batch " + tempCnt / 10000);
    }
}

preparedStatement.executeBatch();

} catch (SQLException e) {
    throw new ConnectionException(e.getMessage());
}

rows = null;


Thanks & Regards,
Ganesh Sarde

On 24-Jan-2018 1:09 PM, "Jörn Franke" <jo...@gmail.com> wrote:

What is the Java source code? Most people have difficulty writing proper
Java JDBC code for bulk inserts (even for normal databases). It requires
some thought about threading, buffers, and, of course, choosing the right
insert methodology, etc.

On 24. Jan 2018, at 08:20, Ganesh Sarde <ga...@gmail.com> wrote:


Hi,

We are using the Ignite JDBC thin driver to store 1 million records in a
table in the Ignite cache. Inserting 1 million records on a single node
takes about 60 seconds, whereas on a cluster of 2 nodes it takes 5 minutes,
and the time grows exponentially as the number of nodes increases.

I have attached the Ignite log file showing where the time was spent on the
cluster, as well as the configuration file for the cluster. The log and
configuration file are here:
<https://drive.google.com/drive/folders/1x0qdBMDyjYEaJI-UF17kif63uLPJKq94>

Is there any additional configuration required to bring down the time to
insert records over a cluster?

Thanks & Regards,

Ganesh Sarde

Re: Ignite performing slow on cluster.

Posted by Jörn Franke <jo...@gmail.com>.
What is the Java source code? Most people have difficulty writing proper Java JDBC code for bulk inserts (even for normal databases). It requires some thought about threading, buffers, and, of course, choosing the right insert methodology, etc.
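
For context, a bare-bones batched insert (the usual starting point for this
kind of bulk load) might look like the sketch below; the table name, columns
and batch size are only placeholders, not taken from any concrete schema:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class BatchInsertSketch {
    // Generic batched-insert pattern: one reused PreparedStatement, addBatch()
    // per row, and a periodic executeBatch() so the buffered batch stays bounded.
    static void bulkInsert(Connection con, Iterable<long[]> rows) throws SQLException {
        final int batchSize = 10_000; // tune for the data volume and network
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO my_table (id, val1, val2) VALUES (?,?,?)")) {
            int count = 0;
            for (long[] r : rows) {
                ps.setLong(1, r[0]);
                ps.setLong(2, r[1]);
                ps.setLong(3, r[2]);
                ps.addBatch();
                if (++count % batchSize == 0) {
                    ps.executeBatch(); // flush a chunk; the batch is cleared afterwards
                }
            }
            ps.executeBatch(); // flush whatever is left
        }
    }
}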

> On 24. Jan 2018, at 08:20, Ganesh Sarde <ga...@gmail.com> wrote:
> 
> 
> Hi,
> 
> We are using the Ignite JDBC thin driver to store 1 million records in a table in the Ignite cache. Inserting 1 million records on a single node takes about 60 seconds, whereas on a cluster of 2 nodes it takes 5 minutes, and the time grows exponentially as the number of nodes increases.
> 
> I have attached the Ignite log file showing where the time was spent on the cluster, as well as the configuration file for the cluster. The log and configuration file are here.
> Is there any additional configuration required to bring down the time to insert records over a cluster?
> 
> Thanks & Regards,
> 
> Ganesh Sarde
> 
>