Posted to user@ignite.apache.org by ght230 <gh...@163.com> on 2016/10/31 09:02:17 UTC

When writethrough processing, Persistent storage failed

When I was putting data into a cache (configured with write-through), the
persistent storage (Oracle) failed.

I then found that Ignite kept trying to write data to Oracle until all the
threads in the public pool were occupied.

Even after Oracle was restored, Ignite could not automatically reconnect to it.

I do not know whether this is normal behavior, or whether there is anything
else I need to configure.



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: When writethrough processing, Persistent storage failed

Posted by ght230 <gh...@163.com>.
I used the one provided out of the box.



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8644.html

Re: When writethrough processing, Persistent storage failed

Posted by vkulichenko <va...@gmail.com>.
Hi,

If there is a high load, I think there is a big chance of losing something
in 10 seconds. However, you can try increasing the writeBehindFlushSize
property, which controls the maximum number of entries kept in the
write-behind queue. Note that if you can't tolerate any data loss, you
should use write-through instead of write-behind.

-Val
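A minimal sketch of the relevant properties, assuming the same
CacheConfiguration bean shown earlier in the thread; the value here is only
an example, not a recommendation:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Queue updates in memory and flush them to the store asynchronously. -->
    <property name="writeBehindEnabled" value="true" />
    <!-- Maximum number of entries held in the write-behind queue; once the
         queue overflows, the oldest entries are dropped to avoid OOME. -->
    <property name="writeBehindFlushSize" value="102400" />
</bean>
```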



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8796.html

Re: When writethrough processing, Persistent storage failed

Posted by ght230 <gh...@163.com>.
In my test, I only killed the MySQL DB for about 10 seconds, then restored it.

When I then checked the MySQL DB, I found some data was missing.

My test code is as follows:
        // Keys 1..keyCnt-1 are inserted sequentially, so any gap in the DB
        // maps directly to a range of lost cache puts.
        for (int i = 1; i < keyCnt; i++) {
            OrderGood2Key orderGood2Key = new OrderGood2Key(i);
            OrderGood2 orderGood2 = new OrderGood2(i, i + keyCnt, i + keyCnt);
            orderGoodCache.put(orderGood2Key, orderGood2);
        }

The cache configuration is as follows:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="orderGoodCache" />
    <property name="atomicityMode" value="ATOMIC" />
    <property name="cacheMode" value="PARTITIONED" />
    <property name="backups" value="0" />
    <property name="cacheStoreFactory" ref="orderPojoStoreFactory" />
    <property name="readThrough" value="false" />
    <property name="writeThrough" value="true" />
    <property name="writeBehindEnabled" value="true" />
    <property name="writeBehindFlushSize" value="10240" />
    <property name="writeBehindFlushFrequency" value="5000" />
    <property name="writeBehindFlushThreadCount" value="2" />
    <property name="indexedTypes">
        <array>
            <value>org.apache.ignite.OrderGood2Key</value>
            <value>org.apache.ignite.OrderGood2</value>
        </array>
    </property>
</bean>

 
According to the data read from the MySQL DB, we can see that the rows with
IDs from 4754 to 5101 are missing:

ID     order_id    good_id
...
4751	10004751	10004751
4752	10004752	10004752
4753	10004753	10004753
5102	10005102	10005102
5103	10005103	10005103
5104	10005104	10005104
...

Is there anything wrong with my configuration?



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8769.html

Re: When writethrough processing, Persistent storage failed

Posted by vkulichenko <va...@gmail.com>.
To be honest, I'm not sure I understand the problem. In the case of
write-behind, Ignite will try to save as many entries as possible if the
connection to the DB is lost. It will basically keep entries in the queue
until the queue grows too large, and then start evicting from it to avoid
an OOME. Once the connection is restored, it should start flushing the
entries still in the queue to the DB. Do you observe different behavior?

-Val
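The queue behavior described above can be sketched as a toy model in plain
Java. This is only an illustration of the semantics (accumulate while the DB
is down, evict the oldest entries past a cap, flush the rest on reconnect);
the class and method names are made up and this is NOT Ignite's actual
implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WriteBehindSketch {
    private final int cap;
    // Insertion-ordered, so the first key is always the oldest entry.
    private final LinkedHashMap<Integer, String> queue = new LinkedHashMap<>();
    private final Map<Integer, String> db = new LinkedHashMap<>();
    private boolean dbUp = true;

    public WriteBehindSketch(int cap) { this.cap = cap; }

    public void setDbUp(boolean up) {
        this.dbUp = up;
        if (up) flush(); // on reconnect, drain whatever is still queued
    }

    public void put(int key, String val) {
        queue.put(key, val);
        if (queue.size() > cap) {
            // Queue overflow: the oldest entry is evicted and lost for good.
            Integer oldest = queue.keySet().iterator().next();
            queue.remove(oldest);
        }
        if (dbUp) flush();
    }

    private void flush() {
        db.putAll(queue);
        queue.clear();
    }

    public boolean inDb(int key) { return db.containsKey(key); }
}
```

With a cap of 3 and five puts while the "DB" is down, the two oldest entries
are evicted and never reach the store, which matches the contiguous gap of
IDs seen in the earlier post.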



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8742.html

Re: When writethrough processing, Persistent storage failed

Posted by ght230 <gh...@163.com>.
Can anyone answer this question?



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8730.html

Re: When writethrough processing, Persistent storage failed

Posted by ght230 <gh...@163.com>.
I have another question about this.

Suppose the DB fails and is restored after a period of time. During the
period between the failure and the restore, the data written to the cache
is not synchronously written to the DB.

After the DB is restored, will Ignite automatically write the missed data
to Oracle?



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8656.html

Re: When writethrough processing, Persistent storage failed

Posted by vkulichenko <va...@gmail.com>.
Hi,

What cache store implementation are you using? The one provided out of the
box, or your own? Are there any exceptions during the database write?

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8632.html