Posted to user@ignite.apache.org by diopek <de...@gmail.com> on 2017/02/22 16:54:38 UTC

Missing records Ignite cache size grows

We are experiencing missing cached records once the number of records exceeds
a certain threshold.
Currently the actual number of records from the database is 522575 (key
entries). After loading into the Ignite cache, the cache size comes back as
522511 records, i.e. 64 entries are missing.
If we replace the Ignite cache with a regular Java LinkedHashMap, the error
does not happen.
The current cache structure and configuration is as follows:

CacheConfiguration<Long, ArrayList<MyPOJO>> cacheCfg = new CacheConfiguration<>(cacheName);
cacheCfg.setStartSize(startSize);
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setIndexedTypes(Long.class, ArrayList.class);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheLoadOnlyStore.class));

This issue does not happen during our unit tests, nor in runs with lower
numbers of records.
We currently use Ignite 1.8.0, but we first noticed this error while using
Ignite 1.5.0.
Any advice is really appreciated; otherwise we will have to drop Ignite.
Thanks,
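
A minimal sketch of the count cross-check being described (names are
illustrative; it assumes an Ignite node is already running and the cache has
been loaded by the store; MyPOJO is the value bean from the post above):

import java.util.ArrayList;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

// Illustrative count check: compare the known database key count with what
// the loaded cache reports.
public class CountCheck {
    public static void report(Ignite ignite, String cacheName, long dbKeyCount) {
        IgniteCache<Long, ArrayList<MyPOJO>> cache = ignite.cache(cacheName);

        // ALL peek mode counts every entry stored on this (LOCAL cache) node.
        long cached = cache.size(CachePeekMode.ALL);

        System.out.printf("db keys=%d, cached=%d, missing=%d%n",
                dbKeyCount, cached, dbKeyCount - cached);
    }
}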




Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Hi Andrey,
The problem is, we have a 24 GB RAM, 8-CPU Windows server where we couldn't
replicate the issue.
The issue happens when we move to a Linux box with 64 CPUs and 64 GB of RAM
(or more) to process much larger volumes of data. There, again, we observed
that the Java collection gets populated correctly, but after the data is
loaded into Ignite, the Ignite cache size is always tens of records less than
the actual Java collection size. We caught this bug after almost a year, when
the missing transactions happened to have quite large balances that were then
missing from the generated reports.
So if you are trying to replicate this issue on dev PCs close to our specs,
I'd say it's not possible; we couldn't replicate it in almost a week on a
local dev box (where we run Eclipse etc.).
In either case, local or integration test environment, we point to the same
data source to read the data, which also rules out any data error. And again,
just before loading, the Java collection's total record count is always
correct, whether the data volume is small or large.
Thanks,




Re: Missing records Ignite cache size grows

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi,

I can't reproduce the issue.
Would you please make a simple test that will reproduce it?
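
For reference, a standalone test along the requested lines might look like the
following sketch (not from the thread; the entry count and all names are
illustrative):

import java.util.Iterator;
import java.util.stream.LongStream;

import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.store.CacheLoadOnlyStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.internal.util.typedef.T2;
import org.apache.ignite.lang.IgniteBiTuple;

// Hypothetical minimal reproducer: push N synthetic entries through a
// CacheLoadOnlyStoreAdapter into a LOCAL cache and compare sizes.
public class MissingRecordsRepro {
    public static class NStore extends CacheLoadOnlyStoreAdapter<Long, String, Long> {
        static final long N = 522_575L;

        @Override protected Iterator<Long> inputIterator(Object... args) {
            return LongStream.rangeClosed(1, N).iterator();
        }

        @Override protected IgniteBiTuple<Long, String> parse(Long rec, Object... args) {
            return new T2<>(rec, "value-" + rec);
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("repro");
            cfg.setCacheMode(CacheMode.LOCAL);
            cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(NStore.class));

            IgniteCache<Long, String> cache = ignite.createCache(cfg);
            cache.loadCache(null);

            System.out.println("expected=" + NStore.N + ", actual=" + cache.size());
        }
    }
}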

On Fri, Feb 24, 2017 at 11:42 PM, diopek <de...@gmail.com> wrote:

> [snip: quoted cache configuration and loadCache call; the full message
> appears below in this thread]
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
CacheConfiguration<Long, ArrayList<RwaDTO>> cacheCfg = new CacheConfiguration<>(cacheName);
cacheCfg.setStartSize(startSize);
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setIndexedTypes(Long.class, ArrayList.class);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(ExistingOrReplenishCacheLoadOnlyStore3.class));

Ignite ignite = Ignition.ignite();
extOrRepCache = ignite.createCache(cacheCfg);

extOrRepCache.loadCache(null, asOf, datasetId, scenId, sql,
        Boolean.valueOf(replenishFlag), startSize);
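
Worth noting, as a reading of this snippet (the argument order matches the
store implementation posted later in this thread): the trailing arguments of
IgniteCache.loadCache are handed through to the store's
inputIterator(Object... args), so they line up as follows:

// Illustrative: how the loadCache arguments map to inputIterator's args.
extOrRepCache.loadCache(null,            // closure: null means no filtering
        asOf,                            // args[0]
        datasetId,                       // args[1]
        scenId,                          // args[2]
        sql,                             // args[3]
        Boolean.valueOf(replenishFlag),  // args[4]
        startSize);                      // args[5]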





Re: Missing records Ignite cache size grows

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi,

I've asked for the closure that is passed to the CacheStore.loadCache(closure, ...) method.
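
For context, the closure is the first parameter of IgniteCache.loadCache; the
loadCache call shown above passes null, which means every loaded entry is
accepted. A purely illustrative non-null closure, reusing the variable names
from that snippet, would look like:

// Illustrative filter: entries for which the predicate returns false are
// skipped during loading; passing null accepts everything.
IgniteBiPredicate<Long, ArrayList<MyDTO>> nonEmptyOnly =
        (key, list) -> list != null && !list.isEmpty();

extOrRepCache.loadCache(nonEmptyOnly, asOf, datasetId, scenId, sql,
        Boolean.valueOf(replenishFlag), startSize);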


On Fri, Feb 24, 2017 at 9:31 PM, diopek <de...@gmail.com> wrote:

> [snip: full ExistingOrReplenishCacheLoadOnlyStore3 source quoted here; the
> complete message appears below in this thread]
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
package myignite.loading.test.cache.store;

import static myignite.loading.test.common.CommonUtils.CONFIG_DIR;
import static myignite.loading.test.common.CommonUtils.DATA_SRC_PWD;
import static myignite.loading.test.common.CommonUtils.DATA_SRC_URL;
import static myignite.loading.test.common.CommonUtils.DATA_SRC_USR;
import static myignite.loading.test.common.CommonUtils.RWA_SQL_FETCH_SIZE;
import static myignite.loading.test.common.CommonUtils.SQL_FETCH_SIZE_DEFAULT;
import static myignite.loading.test.common.CommonUtils.TREAS_LIQUIDITY_CLASS_UNDEFINED;
import static myignite.loading.test.common.CommonUtils.REPLENISH;
import static myignite.loading.test.common.CommonUtils.EXISTING;
import static myignite.loading.test.common.CommonUtils.stopWatchEnd;
import static myignite.loading.test.common.CommonUtils.stopWatchStart;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Date;
import java.util.Iterator;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;

import javax.cache.integration.CacheLoaderException;

import org.apache.ignite.cache.store.CacheLoadOnlyStoreAdapter;
import org.apache.ignite.internal.util.typedef.T2;
import org.apache.ignite.lang.IgniteBiTuple;
import org.jooq.lambda.tuple.Tuple2;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;
import org.springframework.util.Assert;
import org.springframework.util.StopWatch;

import myignite.loading.test.domain.MyDTO;

public class ExistingOrReplenishCacheLoadOnlyStore3
        extends CacheLoadOnlyStoreAdapter<Long, ArrayList<MyDTO>, Tuple2<Long, ArrayList<MyDTO>>> {

    private final static Logger logger =
            LoggerFactory.getLogger(ExistingOrReplenishCacheLoadOnlyStore3.class);

    private static int SQL_FETCH_SIZE = SQL_FETCH_SIZE_DEFAULT;
    private static String dataSourceUrl;
    private static String dbUser;
    private static String dbPwd;

    private SingleConnectionDataSource DATA_SRC;

    static {
        // Read the JDBC connection settings from rwa-batch.properties in config.dir.
        String configDir = System.getProperty(CONFIG_DIR);
        Assert.notNull(configDir, "config.dir should be passed as JVM arguments...");
        StringBuffer filePath = new StringBuffer(configDir);
        filePath.append(File.separatorChar).append("rwa-batch.properties");
        Properties props = new Properties();
        // InputStream inputStream =
        //     ExistingCacheStore.class.getClassLoader().getResourceAsStream(configDir);
        try {
            InputStream inputStream = new FileInputStream(filePath.toString());
            Assert.notNull(inputStream,
                    "FileNotFoundException - property file '" + filePath + "' not found in file system");
            props.load(inputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
        dataSourceUrl = props.getProperty(DATA_SRC_URL);
        System.out.println(">>>dataSourceUrl::" + dataSourceUrl);
        Assert.notNull(dataSourceUrl, "'rwa.jdbc.url' should be provided in rwa-batch.properties...");
        dbUser = props.getProperty(DATA_SRC_USR);
        Assert.notNull(dbUser, "'rwa.jdbc.usr' should be provided in rwa-batch.properties...");
        dbPwd = System.getProperty(DATA_SRC_PWD);
        Assert.notNull(dbPwd, "'rwa.jdbc.pwd' should be provided in rwa-batch.properties...");

        String fetchSize = props.getProperty(RWA_SQL_FETCH_SIZE);
        if (fetchSize != null) {
            SQL_FETCH_SIZE = Integer.valueOf(fetchSize);
        }
    }

    private JdbcTemplate jdbcTemplate;

    public ExistingOrReplenishCacheLoadOnlyStore3() {
        super();
        DATA_SRC = new SingleConnectionDataSource();
        DATA_SRC.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        DATA_SRC.setUrl(dataSourceUrl);
        DATA_SRC.setUsername(dbUser);
        DATA_SRC.setPassword(dbPwd);
        jdbcTemplate = new JdbcTemplate(DATA_SRC);
    }

    @Override
    protected Iterator<Tuple2<Long, ArrayList<MyDTO>>> inputIterator(Object... args)
            throws CacheLoaderException {
        if (args == null || args.length < 6)
            throw new CacheLoaderException(
                    "Expected asOf, scenId, HierarchyServiceProxy and replenish parameters are not fully provided...");
        try {
            final Date asOf = (Date) args[0];
            final String datasetId = (String) args[1];
            final Integer scenId = (Integer) args[2];
            final String sql = (String) args[3];
            final Boolean replenishFlag = (Boolean) args[4];
            Integer startSize = (Integer) args[5];
            logger.debug("AS_OF::{} DATASET_ID::{} SCEN_ID::{} REP_FLAG::{} START_SIZE::{}",
                    asOf, datasetId, scenId, replenishFlag, startSize);

            logger.debug("load{}Cache::SQL::{}", (replenishFlag ? "Replenish" : "Existing"), sql);
            ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList = null;
//          Iterator<Entry<Integer, ArrayList<MyDTO>>> iterator = null;
//          Iterator<ArrayList<MyDTO>> iterator = null;
            Iterator<Tuple2<Long, ArrayList<MyDTO>>> iterator = null;

//          ResultSetExtractor<LinkedHashMap<Integer, ArrayList<MyDTO>>> extOrRepMapResultSetExtractor =
//                  new ResultSetExtractor<LinkedHashMap<Integer, ArrayList<MyDTO>>>() {
            ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>> extOrRepMapResultSetExtractor =
                    new ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>>() {
                @Override
//              public LinkedHashMap<Integer, ArrayList<MyDTO>> extractData(ResultSet rs)
                public ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extractData(ResultSet rs)
                        throws SQLException, DataAccessException {

//                  LinkedHashMap<Integer, ArrayList<MyDTO>> extOrRepMap =
//                          new LinkedHashMap<Integer, ArrayList<MyDTO>>(gockeyCnt, 1.0f);
//                  ArrayList<ArrayList<MyDTO>> extOrRepList = new ArrayList<ArrayList<MyDTO>>(gockeyCnt);
                    ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList =
                            new ArrayList<Tuple2<Long, ArrayList<MyDTO>>>(startSize);
                    String prevGoc = null, prevAcct = null, prevSac = null, prevCcy = null;
                    Integer prevFpId = null;
                    ArrayList<MyDTO> currDTOList = null, prevDTOList = null;
                    MyDTO dto = null, prevDto = null;
                    // Key counter. The posted code had this declaration commented out
                    // (as "final AtomicInteger entryCnt"); restored here as the
                    // AtomicLong shown in the later snippet in this thread.
                    final AtomicLong entryCnt = new AtomicLong(0);
                    while (rs.next()) {
                        int i = 1;
                        dto = new MyDTO();
                        dto.setAsOf(asOf);
                        dto.setDatasetId(Integer.valueOf(datasetId));
                        dto.setScnId(scenId);

                        dto.setGoc(rs.getString(i++));
                        dto.setAcct(rs.getString(i++));
                        dto.setSumAffilCode(rs.getString(i++));
                        dto.setCcyCode(rs.getString(i++));
                        dto.setFrcstProdId(rs.getInt(i++));
                        dto.setMngSeg(rs.getString(i++));
                        dto.setMngGeo(rs.getString(i++));
                        dto.setFrsBu(rs.getString(i++));
                        if (replenishFlag) {
                            dto.setReplenishFlag(REPLENISH);
                        } else {
                            dto.setRwaExposureType(rs.getString(i++));
                            dto.setRiskAssetClass(rs.getString(i++));
                            dto.setRiskSubAssetClass(rs.getString(i++));
                            String treasLiqClass = rs.getString(i++);
                            dto.setTreasLiqClass((treasLiqClass == null
                                    ? TREAS_LIQUIDITY_CLASS_UNDEFINED : treasLiqClass));
                            dto.setCounterpartyRating(rs.getString(i++));
                            dto.setClearedStatus(rs.getString(i++));
                            dto.setMaturityBand(rs.getString(i++));
                            dto.setDerivativeType(rs.getString(i++));
                            dto.setReplenishFlag(EXISTING);
                        }
                        dto.setStartDate(rs.getDate(i++));
                        dto.setMaturityDate(rs.getDate(i++));
                        dto.setAmount(rs.getDouble(i++));
                        if (!replenishFlag) {
                            dto.setEtlsource(rs.getString(i++));
                        } else {
                            dto.setInvestmentId(rs.getString(i++));
                        }
                        // Group consecutive rows with the same (GOC, account, SAC,
                        // currency, forecast product) key into one ArrayList value.
                        if (dto.getGoc().equals(prevGoc) && dto.getAcct().equals(prevAcct)
                                && dto.getSumAffilCode().equals(prevSac)
                                && dto.getCcyCode().equals(prevCcy)
                                && dto.getFrcstProdId().equals(prevFpId)) {
                            prevDTOList.add(prevDto);
                        } else {
                            if (prevDto != null) {
                                prevDTOList.add(prevDto);
//                              extOrRepMap.put(entryCnt.incrementAndGet(), prevDTOList);
                                extOrRepList.add(new Tuple2<Long, ArrayList<MyDTO>>(
                                        entryCnt.incrementAndGet(), prevDTOList));
                            }
                            currDTOList = new ArrayList<MyDTO>();
                        }
                        prevDto = dto;
                        prevDTOList = currDTOList;
                        prevGoc = dto.getGoc();
                        prevAcct = dto.getAcct();
                        prevSac = dto.getSumAffilCode();
                        prevCcy = dto.getCcyCode();
                        prevFpId = dto.getFrcstProdId();
                    }
                    // Flush the last group.
                    if (prevDto != null) {
                        prevDTOList.add(prevDto);
//                      extOrRepMap.put(entryCnt.incrementAndGet(), prevDTOList);
                        extOrRepList.add(new Tuple2<Long, ArrayList<MyDTO>>(
                                entryCnt.incrementAndGet(), prevDTOList));
                    }
//                  return extOrRepMap;
                    return extOrRepList;
                }

            };

            jdbcTemplate.setFetchSize(SQL_FETCH_SIZE);
            StopWatch sw = new StopWatch();
            stopWatchStart(sw, "populatingDataMap");
            logger.debug("BEFORE populatingDataMap_STARTS!!!!");
//          extOrRepMap = jdbcTemplate.query(sql, extOrRepMapResultSetExtractor);
            extOrRepList = jdbcTemplate.query(sql, extOrRepMapResultSetExtractor);
            logger.debug("BEFORE populatingDataMap_ENDS!!!!");
            stopWatchEnd(sw);

            if (extOrRepList != null) {
//              iterator = extOrRepMap.entrySet().iterator();
                /*
                 * RECORDS COUNT PRINTED CORRECTLY HERE BEFORE PASSING TO IGNITE
                 */
                logger.debug("+++++++ GOC_KEY COUNT::{}", extOrRepList.size());
                iterator = extOrRepList.iterator();
            }
            return iterator;
        } finally {
            DATA_SRC.destroy();
        }
    }

    @Override
    protected IgniteBiTuple<Long, ArrayList<MyDTO>> parse(Tuple2<Long, ArrayList<MyDTO>> rec,
            Object... args) {
        return new T2<>(rec.v1(), rec.v2());
    }

}
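
A diagnostic sketch (not from the thread), in line with the debugging advice
elsewhere in this thread: drain the store's iterator directly, without Ignite,
and count what it yields. Since inputIterator is protected, this would have to
live in the same package or a subclass, and the argument values (asOf, sql,
etc.) are assumed to be available:

// Hypothetical diagnostic: count tuples straight off the iterator. If this
// matches the database count while IgniteCache.size() falls short, the loss
// happens after inputIterator() hands the data over to Ignite.
ExistingOrReplenishCacheLoadOnlyStore3 store = new ExistingOrReplenishCacheLoadOnlyStore3();
Iterator<Tuple2<Long, ArrayList<MyDTO>>> it =
        store.inputIterator(asOf, datasetId, scenId, sql, replenishFlag, startSize);

long produced = 0;
while (it != null && it.hasNext()) {
    it.next();
    produced++;
}
System.out.println("tuples produced by inputIterator: " + produced);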





Re: Missing records Ignite cache size grows

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi,

Would you please provide the closure code that is passed to the loadCache method?


On Fri, Feb 24, 2017 at 11:05 AM, diopek <de...@gmail.com> wrote:

> [snip: quoted diopek's message on the 522636 vs. 522580 count discrepancy;
> the full post appears below in this thread]
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Missing records Ignite cache size grows

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi diopek,

I can't see how the cache size could relate to the value classes or the
content of the value objects.

I reworked my test and still can't reproduce this issue, even with 2 million
entries in the cache, while you mention about half a million.

On Fri, Mar 3, 2017 at 6:25 PM, diopek <de...@gmail.com> wrote:

> [snip: quoted diopek's message comparing the repro's value shape with
> production; the full post appears below in this thread]
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Hi Andrey,
The difference in your repro: your cache entry is <Long, ArrayList<String>>,
and you always add a single entry to the value ArrayList. In our case it is
<Long, ArrayList<MyPOJO>>, where MyPOJO is a Java bean with ~30 attributes,
and the ArrayList can sometimes hold hundreds of objects.
During my local tests, where I am using ~20GB of RAM, I couldn't reproduce it
either. The issue occurs with a high number of records, with production data,
on servers with a larger amount of RAM.
I am wondering if you can try to replicate that scenario on your end.
Thanks
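
To mimic that shape in a repro, a generator along these lines could be used
(entirely illustrative; MyPOJO.random() stands for a hypothetical factory that
fills the ~30 fields, not an existing method):

import java.util.ArrayList;
import java.util.Random;

// Illustrative value generator: lists of up to a few hundred ~30-field beans,
// rather than single Strings, to match the production value shape.
public final class ValueGenerator {
    private static final Random RND = new Random(42);

    public static ArrayList<MyPOJO> randomList() {
        int n = 1 + RND.nextInt(300);
        ArrayList<MyPOJO> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++)
            list.add(MyPOJO.random()); // hypothetical factory, not from the thread
        return list;
    }
}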







Re: Missing records Ignite cache size grows

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi,

I've made a repro and it works fine for me.
Please check whether I missed something.

On Wed, Mar 1, 2017 at 5:15 PM, diopek <de...@gmail.com> wrote:

> [snip: quoted diopek's message with the inputIterator key-generation
> snippet; the full post appears below in this thread]
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Below is our implementation of the inputIterator method of our load-only
cache store.
As we have tested many times, there is no issue with the uniqueness of the
generated cache keys.
But still, after loading a high number of records into the IgniteCache, the
record counts don't match: a few tens of keys go missing out of a million
records. Also, if we use the Java collection populated just before this data
is handed to the IgniteCache, all the record counts match.

In short, after several days of try-outs and debugging, we narrowed the root
cause down to the CacheLoadOnlyStoreAdapter base class. So, at this point, we
need Ignite core team support to resolve this bug.
Thanks


Yes, we checked the uniqueness of the keys several times. Below is a
representative snippet of our code showing how we generate the cache key
inside the inputIterator method.

@Override
protected Iterator<Tuple2<Long, ArrayList<MyDTO>>> inputIterator(Object... args)
        throws CacheLoaderException {
    Iterator<Tuple2<Long, ArrayList<MyDTO>>> iterator = null;

    ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>> extOrRepMapResultSetExtractor =
            new ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>>() {
        @Override
        public ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extractData(ResultSet rs)
                throws SQLException, DataAccessException {
            ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList =
                    new ArrayList<Tuple2<Long, ArrayList<MyDTO>>>(startSize);
            // Keys come from a fresh counter, so they are unique by construction.
            final AtomicLong entryCnt = new AtomicLong(0);
            while (rs.next()) {
                extOrRepList.add(new Tuple2<Long, ArrayList<MyDTO>>(
                        entryCnt.incrementAndGet(), prevDTOList));
            }
            return extOrRepList;
        }
    };

    jdbcTemplate.setFetchSize(SQL_FETCH_SIZE);
    ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList = null;
    extOrRepList = jdbcTemplate.query(sql, extOrRepMapResultSetExtractor);

    if (extOrRepList != null) {
        iterator = extOrRepList.iterator();
    }
    return iterator;
}






Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
We appreciate any feedback on this issue. Thanks




Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Yes, we checked the uniqueness of the keys several times. Below is a
representative snippet of our code showing how we generate the cache key
inside the inputIterator method.

@Override
protected Iterator<Tuple2<Long, ArrayList<MyDTO>>> inputIterator(Object... args)
        throws CacheLoaderException {
    Iterator<Tuple2<Long, ArrayList<MyDTO>>> iterator = null;

    ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>> extOrRepMapResultSetExtractor =
            new ResultSetExtractor<ArrayList<Tuple2<Long, ArrayList<MyDTO>>>>() {
        @Override
        public ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extractData(ResultSet rs)
                throws SQLException, DataAccessException {
            ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList =
                    new ArrayList<Tuple2<Long, ArrayList<MyDTO>>>(startSize);
            // Keys come from a fresh counter, so they are unique by construction.
            final AtomicLong entryCnt = new AtomicLong(0);
            while (rs.next()) {
                extOrRepList.add(new Tuple2<Long, ArrayList<MyDTO>>(
                        entryCnt.incrementAndGet(), prevDTOList));
            }
            return extOrRepList;
        }
    };

    jdbcTemplate.setFetchSize(SQL_FETCH_SIZE);
    ArrayList<Tuple2<Long, ArrayList<MyDTO>>> extOrRepList = null;
    extOrRepList = jdbcTemplate.query(sql, extOrRepMapResultSetExtractor);

    if (extOrRepList != null) {
        iterator = extOrRepList.iterator();
    }
    return iterator;
}
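
A quick way to double-check that claim directly on the produced list (a
sketch; Tuple2.v1() as already used in the parse method in this thread, and
java.util.HashSet/Set assumed to be imported):

// Sketch: verify that no key repeats in the list handed to Ignite.
Set<Long> keys = new HashSet<>(extOrRepList.size());
for (Tuple2<Long, ArrayList<MyDTO>> t : extOrRepList) {
    if (!keys.add(t.v1()))
        System.out.println("duplicate key: " + t.v1());
}
System.out.println("list size=" + extOrRepList.size()
        + ", distinct keys=" + keys.size());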




Re: Missing records Ignite cache size grows

Posted by vkulichenko <va...@gmail.com>.
Are you sure keys are unique for all the parsed rows?

-Val




Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Val,
For debugging the missing records issue, we confirmed that inside the
CacheLoadOnlyStoreAdapter.inputIterator method, just before returning
collection.iterator() from this method, the record count for that collection
is correct (522636). And the parse method is also very straightforward, just
a single line of code.
Again, right after the Ignite cache gets populated, the IgniteCache.size()
method returns 522580, which is 56 entries less than the actual number of
records.
So I am not sure what other evidence I can provide to indicate that the bug
is not in our application code; the prime candidate is the
CacheLoadOnlyStoreAdapter base class.
Thanks,





Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
The problem is: I have a 32GB development box, and for test runs that fit
into that much memory I am not able to replicate the issue. It happens on a
Linux server where I have around 128GB of memory. Basically, it is not easy
to debug.





Re: Missing records Ignite cache size grows

Posted by vkulichenko <va...@gmail.com>.
I'm not aware of any bugs there. Please debug your code and check where the
data is lost. It would be impossible to get to the bottom of the issue
without isolating it.
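
One concrete way to do that isolation (a sketch, not from the thread): count
how many records Ignite actually pulls through parse() and compare it with
the collection size logged in inputIterator:

// Diagnostic sketch: parse() may run on several loader threads, hence the
// AtomicLong. If the parsed count equals the collection size but
// IgniteCache.size() is smaller, the loss is downstream of parse();
// if the parsed count is already short, it is upstream.
private final AtomicLong parsedCnt = new AtomicLong();

@Override
protected IgniteBiTuple<Long, ArrayList<MyDTO>> parse(Tuple2<Long, ArrayList<MyDTO>> rec,
        Object... args) {
    parsedCnt.incrementAndGet();
    return new T2<>(rec.v1(), rec.v2());
}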

-Val




Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Val, is it possible that there is a bug in CacheLoadOnlyStoreAdapter? Below
is my CacheStore class, which extends CacheLoadOnlyStoreAdapter. Again, if we
use a Java collection directly, the count always matches, which rules out the
possibility of a bug in our code that populates the Ignite cache. Please
advise. Thanks,

public class MyCacheLoadOnlyStore
        extends CacheLoadOnlyStoreAdapter<Long, ArrayList<MyDTO>, Tuple2<Long, ArrayList<MyDTO>>>




Re: Missing records Ignite cache size grows

Posted by diopek <de...@gmail.com>.
Within our load-only cache store's inputIterator method, we populate an
ArrayList<Tuple2<Long, ArrayList<MyPOJO>>> collection, and an iterator over
that collection is returned from that method. The parse method is also
straightforward:

@Override
protected IgniteBiTuple<Long, ArrayList<MyPOJO>>
parse(Tuple2<Long, ArrayList<MyPOJO>> rec, Object... args) {
    return new T2<>(rec.v1(), rec.v2());
}

If we use the Java collection directly (ArrayList.size()), the number of
records matches. But if we pass the iterator to Ignite, load the Ignite
cache, and then call IgniteCache.size() down the line, that's where we see
the count discrepancy. This never happens for smaller numbers of records,
where Ignite cache.size() returns the correct number; it only happens when we
process a very high number of records. Also, in any case, small or large, if
we use the Java collection directly instead of the Ignite cache, the record
count is accurate. Therefore we are kind of stuck. I really appreciate your
help here. Thank you
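
If the discrepancy only shows up at scale, one thing that may be worth ruling
out (an untested suggestion, not a confirmed fix): CacheLoadOnlyStoreAdapter
loads the input iterator in batches on several worker threads, and exposes
setters to tame that for debugging:

// Untested: single-threaded loading with an explicit batch size, to see
// whether the missing entries correlate with parallel batch loading. With
// FactoryBuilder.factoryOf(Class) the store is instantiated by Ignite, so
// these setters would need to be applied from a custom Factory instead.
MyCacheLoadOnlyStore store = new MyCacheLoadOnlyStore();
store.setThreadsCount(1);
store.setBatchSize(100);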




Re: Missing records Ignite cache size grows

Posted by vkulichenko <va...@gmail.com>.
It looks like you have your own implementation of CacheStore. I believe the
error could be there; try to debug it and see where the data is lost.

-Val


