Posted to user@ignite.apache.org by Kenan Dalley <ke...@gm.com> on 2017/08/30 20:47:51 UTC

Cassandra failing to ReadThrough using Cache.get(key) without preloading

According to everything that I've read, hooking up Cassandra with Ignite
should allow the Cache to be lazily loaded via Cache.get(key)
without calling "loadCache" beforehand.  However, using Ignite v2.1, I'm not
seeing that happen.  If I use "loadCache", then my "get" returns values
appropriately, but if I don't pre-load the Cache, I just get null values as
my result.  It's really difficult to see what's going on behind the
scenes, so I can't tell whether I've configured something incorrectly (I don't think so) or
why it wouldn't go to Cassandra directly on a Cache miss.  I'm
including my code & config below.
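For reference, here is a minimal sketch of what read-through is expected to do in this setup (this is not code from the thread; the class name ReadThroughCheck is hypothetical, and the cache name and key values come from the config and data posted further down):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

import examples.cassandra_persistence_store.model.TestResponse;
import examples.cassandra_persistence_store.model.TestResponseKey;

public class ReadThroughCheck {
    public static void main(String[] args) {
        // Start a node from the XML that declares the "TestResponse" cache with
        // readThrough=true and a CassandraCacheStoreFactory.
        try (Ignite ignite = Ignition.start("cassandra-ignite.xml")) {
            IgniteCache<TestResponseKey, TestResponse> cache = ignite.cache("TestResponse");

            // On a cache miss, read-through should delegate to the Cassandra store's
            // load(key) and return the row from Cassandra instead of null.
            TestResponse value = cache.get(new TestResponseKey("text1", "$03", "FLAG", 1491843013376L));
            System.out.println("Read-through result: " + value);
        }
    }
}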


OUTPUT

Output From Not Preloading the Cache:


Output From Preloading the Cache:



CODE

cassandra-ignite.xml


persistence-settings.xml


persistence-settings.full.xml (Tried in case I needed to fully define in the
xml)


connection-settings.xml


TestResponseKey


TestResponse


Spring Config


Application



DATA

Table Info


Inserts






Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Ok, I'm going to try this again...

OUTPUT

Output From Not Preloading the Cache:


Output From Preloading the Cache:


CODE

cassandra-ignite.xml


persistence-settings.xml


persistence-settings.full.xml (Tried in case I needed to fully define in the
xml)


connection-settings.xml


TestResponseKey


TestResponse


ApplicationConfig


Application


DATA

Table Info


Inserts







Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Ok, that worked for me, too.  Such a small change that I kept overlooking. 
Thanks a lot!




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Michael Cherkasov <mi...@gmail.com>.
Hi Kenan,

I set up exactly the same config as you have, and I wasn't able to read
data back from Cassandra either.

Then I found an error in the configuration: in persistence-settings.xml you have
'table="testresponse"' while you put the data into 'table="test_response"', so I
added the underscore and now it works fine.
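In case it helps others hitting the same thing, the fix amounts to one attribute in the persistence settings. A minimal sketch of the corrected settings, assuming KeyValuePersistenceSettings also accepts the settings XML as a plain String (the posted config builds it from a classpath Resource instead); the class name PersistenceSettingsCheck is hypothetical:

import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;

public class PersistenceSettingsCheck {
    public static void main(String[] args) {
        // Same content as persistence-settings.xml, but with table="test_response"
        // matching the actual Cassandra table name instead of "testresponse".
        String xml =
            "<persistence keyspace=\"dev_keyspace\" table=\"test_response\" ttl=\"2592000\">" +
            "    <keyPersistence class=\"examples.cassandra_persistence_store.model.TestResponseKey\" strategy=\"POJO\"/>" +
            "    <valuePersistence class=\"examples.cassandra_persistence_store.model.TestResponse\" strategy=\"POJO\"/>" +
            "</persistence>";

        // Parsing only validates the settings document itself; a mismatched table
        // name only shows up later, when the store actually talks to Cassandra.
        new KeyValuePersistenceSettings(xml);
        System.out.println("Persistence settings parsed.");
    }
}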

Thanks,
Mike.


2017-09-25 17:16 GMT+03:00 Kenan Dalley <ke...@gm.com>:

> Any other thoughts?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Mikhail <mi...@gmail.com>.
Hi Kenan, 

Could you please share a small reproducer with us? Ideally it would be a
pom-based project with minimal code, just enough to reproduce the problem, so I can
debug it locally.

Thanks,
Mike.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Any other thoughts?




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
I forgot to post the output:






Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Ok, I see that now.  It turns out that this *is* the correct data and SQL
INSERTs.  This is a test table copied from an existing working table, and
these are the correct values that are in the working table as well.

I changed the hashCode() & equals() methods to account for the potential
"null" value for the Long variables in both the TestResponse &
TestResponseKey.  The results were the exact same as before, which is what I
expected.  I expected this because, regardless of the nulls, the values can
be loaded in the cache and then subsequently found if I preload the cache. 
But if I don't preload the cache, the values will not be found on a
read-through.  Something else seems to be going on.  I also tried Dmitry's
"cache.getEntry" vs "cache.get" and still nothing is ever pulled back into
the cache via read-through.
TestResponse
TestResponseKey
Application.java
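For reference, a null-safe version of those two methods could look like the sketch below (not the thread's code; it assumes the value fields were switched to the boxed Long/String types, and lets java.util.Objects do the null handling):

import java.util.Objects;

// Inside TestResponse, with col5/col8/col10 as Long and col6/col7/col9 as String.
@Override
public boolean equals(Object o) {
    if (this == o)
        return true;
    if (!(o instanceof TestResponse))
        return false;
    TestResponse that = (TestResponse) o;
    return Objects.equals(this.col5, that.col5)
        && Objects.equals(this.col6, that.col6)
        && Objects.equals(this.col7, that.col7)
        && Objects.equals(this.col8, that.col8)
        && Objects.equals(this.col9, that.col9)
        && Objects.equals(this.col10, that.col10);
}

@Override
public int hashCode() {
    // Objects.hash() tolerates null elements, unlike Long.hashCode(long) with auto-unboxing.
    return Objects.hash(col5, col6, col7, col8, col9, col10);
}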





Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Igor Rudyak <ir...@gmail.com>.
That's exactly what I am talking about. You have col10 after col4; because of
this you are inserting nulls into col5.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Mikhail Cherkasov <mc...@gridgain.com>.
Hi Kenan,

After col4 you have col10 instead of col5; I think this is the problem.

Thanks,
Mikhail.

On Mon, Sep 11, 2017 at 7:52 PM, Kenan Dalley <ke...@gm.com> wrote:

> I provided the correct classes above and the CQL statements are 100%
> correct.
> Please see below to know how the data relates to each column.  It looks
> like
> you're skipping over 1 of the first 5 columns when reading through which is
> why you see a null starting at col5 rather than starting at col6.
>
> *INSERT INTO test_response
> (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
> ('text1','$03','FLAG',1491843013376,60,null,null,'C39A9EFB7E',1,'NA');*
>
> 'text1' = col1
> '$03' = col2
> 'FLAG' = col3
> 1491843013376 = col4
> 60 = col5
> null = col6
> null = col7
> ''C39A9EFB7E' = col8
> 1 = col9
> 'NA' = col10
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Thanks,
Mikhail.

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
I provided the correct classes above and the CQL statements are 100% correct.
Please see below for how the data relates to each column.  It looks like
you're skipping over one of the first 5 columns when reading through, which is
why you see a null starting at col5 rather than at col6.

*INSERT INTO test_response 
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES 
('text1','$03','FLAG',1491843013376,60,null,null,'C39A9EFB7E',1,'NA');* 

'text1' = col1
'$03' = col2
'FLAG' = col3
1491843013376 = col4
60 = col5
null = col6
null = col7
'C39A9EFB7E' = col8
1 = col9
'NA' = col10
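One way to settle which column the 60 actually ends up in is to read a row back directly with the DataStax driver (a minimal diagnostic sketch, not from this thread; the class name ColumnCheck is hypothetical, and the host, keyspace and credentials are the placeholders from connection-settings.xml):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ColumnCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
            .addContactPoint("hostname.com")
            .withCredentials("dev_keyspace", "dev_keyspace_password")
            .build();
        Session session = cluster.connect("dev_keyspace");

        // Read one of the inserted rows back and look at col5 vs col10 directly.
        Row row = session.execute(
            "SELECT col5, col10 FROM test_response " +
            "WHERE col1 = 'text1' AND col2 = '$03' AND col3 = 'FLAG' AND col4 = 1491843013376").one();

        System.out.println(row == null
            ? "row not found"
            : "col5 null? " + row.isNull("col5") + ", col10 = " + row.getLong("col10"));

        cluster.close();
    }
}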




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Igor Rudyak <ir...@gmail.com>.
Hm... Did you provide the right source code for the classes and CQL statements?

Here is a sample INSERT statement that you previously provided:

*INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1491843013376,60,null,null,'C39A9EFB7E',1,'NA');*

Here you are inserting *null* into *col5*, and all the other INSERT statements are
the same.

Also, according to the implementation logic of the classes provided, just
changing the "long" types to "Long" will cause a NullPointerException in the hashCode
method.
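Concretely, the failure mode being described looks like this (a minimal fragment for illustration only, keeping just col5):

// With the field changed from long to Long, col5 may now hold null.
private Long col5;

@Override
public int hashCode() {
    // Long.hashCode(long) takes a primitive, so this call auto-unboxes col5
    // and throws a NullPointerException whenever col5 is null.
    return Long.hashCode(this.col5);
}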

Thus the question is: are the provided classes and CQL statements 100%
accurate?




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
That would make sense if col5 were null, but it's only col6 & col7 that are
null, and they are both String columns, which allow nulls.  In addition, if
col5 were null, then the cache wouldn't load when doing a manual load, which
it does.  Also, just in case, I went ahead and changed the "long" types to
"Long" to see what would happen, and the results are exactly the same.  The
values are found in the cache when I do a manual pre-load, but nothing is
found and the cache isn't lazily loaded when I don't do a manual pre-load.
There's something else going on here.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Igor Rudyak <ir...@gmail.com>.
Hi Kenan,

Either your *examples.cassandra_persistence_store.model.TestResponse*
implementation is incorrect or you are just accidentally inserting incorrect
values into your Cassandra table.

Here are the details:

1) The *col5* field in the *TestResponse* class has the *long* type, which means that
it's not nullable.
2) At the same time, all the rows you are inserting into the Cassandra table have
a *null* value for *col5*. This automatically means that it's impossible to
create a *TestResponse* object instance from any of these rows, because *col5*
is not nullable.
3) Because of this, all attempts by *CassandraCacheStore* to deserialize
*TestResponse* objects from the Cassandra table will fail.






Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
I made that change and the result was the same, so it looks like it's not
creating a new cache with the getOrCreateCache() method.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Evgenii Zhuravlev <e....@gmail.com>.
I would recommend changing

ignite
    .getOrCreateCache(new CacheConfiguration<TestResponseKey, TestResponse>(
        Application.TEST_CACHE_NAME)))

to

ignite
    .cache(Application.TEST_CACHE_NAME)

just to check whether you are really accessing the same cache that was created
from your xml file and not creating a new one with an empty
CacheConfiguration
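To make that check mechanical, something like the sketch below can be dropped into the application (a minimal sketch, not from this thread; the class name CacheConfigDump is hypothetical, and it relies on the JCache-style getConfiguration accessor on IgniteCache):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheConfigDump {
    // Prints the effective configuration of whatever cache instance the code is holding.
    static void dump(IgniteCache<?, ?> cache) {
        CacheConfiguration<?, ?> ccfg = cache.getConfiguration(CacheConfiguration.class);
        System.out.println("name=" + ccfg.getName()
            + ", readThrough=" + ccfg.isReadThrough()
            + ", storeFactory=" + ccfg.getCacheStoreFactory());
    }
}

If this prints readThrough=false or a null store factory, the code is holding a freshly created cache rather than the one declared in cassandra-ignite.xml.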

Kind Regards,

Evgenii


2017-09-01 16:35 GMT+03:00 Kenan Dalley <ke...@gm.com>:

> FYI, my update wasn't that my problem was solved, but that the website got
> rid of the work that I did to post the code.  Please take a look at the
> code
> that I've posted yesterday because I'm still having the issue.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
FYI, my update wasn't that my problem was solved, but that the website got
rid of the work that I did to post the code.  Please take a look at the code
that I posted yesterday, because I'm still having the issue.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Evgenii Zhuravlev <e....@gmail.com>.
Great news! Thank you for the update.

Evgenii

2017-08-31 16:47 GMT+03:00 Kenan Dalley <ke...@gm.com>:

> Wow.  Ok.  After I spent about 30-45 minutes getting it put together and
> looking pretty, too.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Wow.  Ok.  After I spent about 30-45 minutes getting it put together and
looking pretty, too.




Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Evgenii Zhuravlev <e....@gmail.com>.
Hi,

Looks like the config files and code were not attached. Please add them to the
thread.

Evgenii

2017-08-30 23:47 GMT+03:00 Kenan Dalley <ke...@gm.com>:

> According to everything that I've read, hooking up Cassandra with Ignite
> should allow for doing a lazy load of the Cache using the Cache.get(key)
> without using "loadCache" beforehand.  However, using Ignite v2.1, I'm not
> seeing that occur.  If I use "loadCache", then my "get" returns values
> appropriately, but if I don't pre-load the Cache, I just get null values as
> my result.  It's really difficult to understand what's going on behind the
> scenes to see if I've configured something incorrectly (don't think so) or
> why it wouldn't go to Cassandra directly if there's a Cache-miss.  I'm
> including my code & config below.
>
>
> OUTPUT
>
> Output From Not Preloading the Cache:
>
>
> Output From Preloading the Cache:
>
>
>
> CODE
>
> cassandra-ignite.xml
>
>
> persistence-settings.xml
>
>
> persistence-settings.full.xml (Tried in case I needed to fully define in
> the
> xml)
>
>
> connection-settings.xml
>
>
> TestResponseKey
>
>
> TestResponse
>
>
> Spring Config
>
>
> Application
>
>
>
> DATA
>
> Table Info
>
>
> Inserts
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Igor Rudyak <ir...@gmail.com>.
Hi Evgenii,

Thanks for forwarding this to me. For some reason the previous e-mail was
redirected to my junk folder. I'll look at it this coming week.

Igor

On Fri, Sep 8, 2017 at 11:40 AM, Evgenii Zhuravlev <e.zhuravlev.wk@gmail.com
> wrote:

> Hi Igor,
>
> Could you check this message from user list? I can't find any reasons why
> readThrough doesn't work with Cassandra here
>
> Thanks,
> Evgenii
>
> ---------- Forwarded message ----------
> From: Kenan Dalley <ke...@gm.com>
> Date: 2017-08-31 17:14 GMT+03:00
> Subject: Re: Cassandra failing to ReadThrough using Cache.get(key) without
> preloading
> To: user@ignite.apache.org
>
>
> Trying this again...
>
> OUTPUT
>
> Output From Not Preloading the Cache:
>
> >>> Cassandra cache store example started.
>
> >>> Cache retrieve example started.
> >>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [{}]
> >>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [null]
>
> Cache size: 0
>
> Output From Preloading the Cache:
>
> >>> Cassandra cache store example started.
>
> Loading cache...
>
> Cache size: 16
>
> Entries...
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1496833338465'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492599741108'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 144}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1496746939945'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492081339596'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492173434330'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1496487738766'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492599740168'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492254138310'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1497016098855'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492340538017'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1496930018886'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1495969325403'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1492430355581'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1496590566077'}, Value: TestResponse: {col5: 0, col6: 'null', col7: '03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
> Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491999483231'}, Value: TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
>
> >>> Cache retrieve example started.
> >>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [{TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}=TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}}]
> >>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}]
>
> Cache size: 16
>
> CODE
>
> cassandra-ignite.xml
>
> <?xml version="1.0" encoding="UTF-8"?>
>
> <!--
>   Licensed to the Apache Software Foundation (ASF) under one or more
>   contributor license agreements.  See the NOTICE file distributed with
>   this work for additional information regarding copyright ownership.
>   The ASF licenses this file to You under the Apache License, Version 2.0
>   (the "License"); you may not use this file except in compliance with
>   the License.  You may obtain a copy of the License at
>
>        http://www.apache.org/licenses/LICENSE-2.0
>
>   Unless required by applicable law or agreed to in writing, software
>   distributed under the License is distributed on an "AS IS" BASIS,
>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>   See the License for the specific language governing permissions and
>   limitations under the License.
> -->
>
> <!--
>     Ignite configuration with all defaults and enabled p2p deployment and enabled events.
> -->
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="
>         http://www.springframework.org/schema/beans
>         http://www.springframework.org/schema/beans/spring-beans.xsd">
>
>     <!-- Cassandra connection settings -->
>     <import resource="classpath:connection-settings.xml" />
>
>     <bean id="testResponseCache_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
>         <!--<constructor-arg type="org.springframework.core.io.Resource" value="classpath:persistence-settings.xml" />-->
>         <constructor-arg type="org.springframework.core.io.Resource" value="classpath:persistence-settings.full.xml" />
>     </bean>
>
>     <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>         <!-- Set to true to enable distributed class loading for examples, default is false. -->
>         <property name="peerClassLoadingEnabled" value="true"/>
>
>         <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <!--
>                         Ignite provides several options for automatic discovery that can be used
>                         instead of static IP based discovery. For information on all options refer
>                         to our documentation: http://apacheignite.readme.io/docs/cluster-config
>                     -->
>                     <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
>                     <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <!-- In distributed environment, replace with actual host IP address. -->
>                                 <value>127.0.0.1:47500..47509</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>
>         <property name="cacheConfiguration">
>             <list>
>                 <bean class="org.apache.ignite.configuration.CacheConfiguration">
>                     <property name="name" value="TestResponse" />
>                     <property name="readThrough" value="true" />
>                     <property name="writeThrough" value="true" />
>                     <!-- <property name="writeBehindEnabled" value="true" /> -->
>                     <property name="cacheStoreFactory">
>                         <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
>                             <!-- Datasource configuration bean which is responsible for Cassandra connection details -->
>                             <property name="dataSourceBean" value="cassandraDataSource" />
>                             <!-- Persistent settings bean which is responsible for the details of how objects will be persisted to Cassandra -->
>                             <property name="persistenceSettingsBean" value="testResponseCache_persistence_settings" />
>                         </bean>
>                     </property>
>                 </bean>
>             </list>
>         </property>
>     </bean>
> </beans>
>
> persistence-settings.xml
>
> <persistence keyspace="dev_keyspace" table="testresponse" ttl="2592000">
>     <keyPersistence class="examples.cassandra_persistence_store.model.TestResponseKey" strategy="POJO"/>
>     <valuePersistence class="examples.cassandra_persistence_store.model.TestResponse" strategy="POJO"/>
> </persistence>
>
> persistence-settings.full.xml (Tried in case I needed to fully define in
> the xml)
>
> <persistence keyspace="dev_keyspace" table="testresponse" ttl="2592000">
>     <keyPersistence class="examples.cassandra_persistence_store.model.TestResponseKey" strategy="POJO">
>         <partitionKey>
>             <field name="col1"/>
>             <field name="col2"/>
>         </partitionKey>
>         <clusterKey>
>             <field name="col3"/>
>             <field name="col4"/>
>         </clusterKey>
>     </keyPersistence>
>     <valuePersistence class="examples.cassandra_persistence_store.model.TestResponse" strategy="POJO">
>             <field name="col5"/>
>             <field name="col6"/>
>             <field name="col7"/>
>             <field name="col8"/>
>             <field name="col9"/>
>             <field name="col10"/>
>     </valuePersistence>
> </persistence>
>
> connection-settings.xml
>
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:util="http://www.springframework.org/schema/util"
>        xsi:schemaLocation="
>         http://www.springframework.org/schema/beans
>         http://www.springframework.org/schema/beans/spring-beans.xsd
>         http://www.springframework.org/schema/util
>         http://www.springframework.org/schema/util/spring-util.xsd">
>
>     <bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
>         <constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
>             <bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
>         </constructor-arg>
>     </bean>
>
>     <util:list id="contactPoints" value-type="java.lang.String">
>         <value>hostname.com</value>
>     </util:list>
>
>     <bean id="cassandraDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
>         <property name="contactPoints" ref="contactPoints"/>
>         <property name="user" value="dev_keyspace"/>
>         <property name="password" value="dev_keyspace_password"/>
>         <property name="readConsistency" value="ONE"/>
>         <property name="writeConsistency" value="ONE"/>
>         <property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
>     </bean>
> </beans>
>
> TestResponseKey
>
> package examples.cassandra_persistence_store.model;
>
> import java.io.Serializable;
> import org.apache.ignite.cache.affinity.AffinityKeyMapped;
>
> public class TestResponseKey implements Serializable {
>     @AffinityKeyMapped
>     private String col1;
>
>     @AffinityKeyMapped
>     private String col2;
>
>     private String col3;
>     private long col4;
>
>     public TestResponseKey() { }
>
>     public TestResponseKey(final String col1, final String col2, final String col3, final long col4) {
>         this.col1 = col1;
>         this.col2 = col2;
>         this.col3 = col3;
>         this.col4 = col4;
>     }
>
>     public String getCol1() {
>         return col1;
>     }
>
>     public void setCol1(String col1) {
>         this.col1 = col1;
>     }
>
>     public String getCol2() {
>         return col2;
>     }
>
>     public void setCol2(String col2) {
>         this.col2 = col2;
>     }
>
>     public String getCol3() {
>         return col3;
>     }
>
>     public void setCol3(String col3) {
>         this.col3 = col3;
>     }
>
>     public long getCol4() {
>         return col4;
>     }
>
>     public void setCol4(long col4) {
>         this.col4 = col4;
>     }
>
>     @Override
>     public boolean equals(Object o) {
>         if (this == o) {
>             return true;
>         }
>         if (o == null || getClass() != o.getClass()) {
>             return false;
>         }
>
>         if (!(o instanceof TestResponseKey)) {
>             return false;
>         }
>
>         final TestResponseKey that = (TestResponseKey) o;
>
>         if (this.col1 != null ? !this.col1.equals(that.col1) : that.col1 != null) {
>             return false;
>         }
>
>         if (this.col2 != null ? !this.col2.equals(that.col2) : that.col2 != null) {
>             return false;
>         }
>
>         if (this.col3 != null ? !this.col3.equals(that.col3) : that.col3 != null) {
>             return false;
>         }
>         return (this.col4 == that.col4);
>     }
>
>     @Override
>     public int hashCode() {
>         int result = this.col1 != null ? this.col1.hashCode() : 0;
>         result = (31 * result) + (this.col2 != null ? this.col2.hashCode() : 0);
>         result = (31 * result) + (this.col3 != null ? this.col3.hashCode() : 0);
>         return (31 * result) + Long.hashCode(this.col4);
>     }
>
>     @Override
>     public String toString() {
>         return "TestResponseKey: {" +
>                "col1: '" + this.col1 + "'" +
>                ", col2: '" + this.col2 + "'" +
>                ", col3: '" + this.col3 + "'" +
>                ", col4: '" + this.col4 + "'" +
>                "}";
>     }
> }
>
> TestResponse
>
> package examples.cassandra_persistence_store.model;
>
> import java.io.Serializable;
>
> public class TestResponse implements Serializable {
>     private long col5;
>     private String col6;
>     private String col7;
>     private long col8;
>     private String col9;
>     private long col10;
>
>     public TestResponse() { }
>
>     public TestResponse(final Long col5, final String col6, final String col7, final Long col8,
>                         final String col9, final Long col10) {
>         this.col5 = col5;
>         this.col6 = col6;
>         this.col7 = col7;
>         this.col8 = col8;
>         this.col9 = col9;
>         this.col10 = col10;
>     }
>
>     public long getCol5() {
>         return col5;
>     }
>
>     public void setCol5(long col5) {
>         this.col5 = col5;
>     }
>
>     public String getCol6() {
>         return col6;
>     }
>
>     public void setCol6(String col6) {
>         this.col6 = col6;
>     }
>
>     public String getCol7() {
>         return col7;
>     }
>
>     public void setCol7(String col7) {
>         this.col7 = col7;
>     }
>
>     public long getCol8() {
>         return col8;
>     }
>
>     public void setCol8(long col8) {
>         this.col8 = col8;
>     }
>
>     public String getCol9() {
>         return col9;
>     }
>
>     public void setCol9(String col9) {
>         this.col9 = col9;
>     }
>
>     public long getCol10() {
>         return col10;
>     }
>
>     public void setCol10(long col10) {
>         this.col10 = col10;
>     }
>
>     @Override
>     public boolean equals(Object o) {
>         if (this == o) {
>             return true;
>         }
>         if (o == null || getClass() != o.getClass()) {
>             return false;
>         }
>
>         if (!(o instanceof TestResponse)) {
>             return false;
>         }
>
>         TestResponse that = (TestResponse) o;
>
>         if (this.col5 != that.col5) {
>             return false;
>         }
>         if (this.col6 != null ? !this.col6.equals(that.col6) : that.col6 != null) {
>             return false;
>         }
>         if (this.col7 != null ? !this.col7.equals(that.col7) : that.col7 != null) {
>             return false;
>         }
>         if (this.col8 != that.col8) {
>             return false;
>         }
>         if (this.col9 != null ? !this.col9.equals(that.col9) : that.col9 != null) {
>             return false;
>         }
>         return this.col10 == that.col10;
>     }
>
>     @Override
>     public int hashCode() {
>         int result = Long.hashCode(this.col5);
>         result = (31 * result) + (this.col6 != null ? this.col6.hashCode() : 0);
>         result = (31 * result) + (this.col7 != null ? this.col7.hashCode() : 0);
>         result = (31 * result) + Long.hashCode(this.col8);
>         result = (31 * result) + (this.col9 != null ? this.col9.hashCode() : 0);
>         return (31 * result) + Long.hashCode(this.col10);
>     }
>
>     @Override
>     public String toString() {
>         return "TestResponse: {" +
>                "col5: " + this.col5 +
>                ", col6: '" + this.col6 + "'" +
>                ", col7: '" + this.col7 + "'" +
>                ", col8: " + this.col8 +
>                ", col9: '" + this.col9 + "'" +
>                ", col10: " + this.col10 +
>                '}';
>     }
> }
>
> ApplicationConfig
>
> package examples.cassandra_persistence_store.config;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.springframework.context.annotation.Bean;
> import org.springframework.context.annotation.Configuration;
>
> @Configuration
> public class ApplicationConfig {
>
>     @Bean
>     public Ignite ignite() {
>         return Ignition.start("cassandra-ignite.xml");
>     }
> }
>
> Application
>
> package examples.cassandra_persistence_store;
>
> import java.util.HashSet;
> import java.util.Iterator;
> import java.util.Map;
> import java.util.Set;
> import javax.cache.Cache;
> import org.apache.commons.lang3.StringUtils;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.cache.CacheEntry;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.springframework.beans.factory.annotation.Autowired;
> import org.springframework.boot.Banner;
> import org.springframework.boot.CommandLineRunner;
> import org.springframework.boot.SpringApplication;
> import org.springframework.boot.autoconfigure.SpringBootApplication;
> import org.springframework.context.ApplicationContext;
>
> import examples.cassandra_persistence_store.model.TestResponse;
> import examples.cassandra_persistence_store.model.TestResponseKey;
>
> @SpringBootApplication
> public class Application implements CommandLineRunner {
>     /**
>      * Cache name.
>      */
>     private static final String TEST_CACHE_NAME = TestResponse.class.getSimpleName();
>
>     private static TestResponseKey testKey = new TestResponseKey("text1", "$03",
>                                                             "FLAG", 1491843013376L);
>
>     @Autowired
>     Ignite ignite;
>
>     public static void main(String... args) {
>         SpringApplication app = new SpringApplication(Application.class);
>         app.setBannerMode(Banner.Mode.OFF);
>         app.run(args);
>         System.exit(0);
>     }
>
>     @Override
>     public void run(String... strings) throws Exception {
>         testResponse();
>     }
>
>     private void testResponse() {
>         System.out.println(">>> Cassandra cache store example started.");
>         try (IgniteCache<TestResponseKey, TestResponse> cache = this.ignite
>                 .getOrCreateCache(new CacheConfiguration<TestResponseKey, TestResponse>(
>                         Application.TEST_CACHE_NAME))) {
> /* uncomment this to Preload Cache
>             System.out.println(StringUtils.EMPTY);
>             System.out.println("Loading cache...");
>
>             cache.loadCache(null,
>                             "select * from dev_keyspace.test_response " +
>                             " where col1 = 'text1' and col2 = '$03';");
>             System.out.println(StringUtils.EMPTY);
>             System.out.println("Cache size: " + cache.size());
>
>             Iterable<Cache.Entry<TestResponseKey, TestResponse>> entries = cache.localEntries();
>             Iterator<Cache.Entry<TestResponseKey, TestResponse>> it = entries.iterator();
>             System.out.println(StringUtils.EMPTY);
>             System.out.println("Entries...");
>             while (it.hasNext()) {
>                 Cache.Entry<TestResponseKey, TestResponse> entry = it.next();
>                 System.out.println("Key: " + entry.getKey()+ ", Value: " + entry.getValue());
>             }
> */
>             // Read from C*
>             System.out.println(StringUtils.EMPTY);
>             System.out.println(">>> Cache retrieve example started.");
>             Set<TestResponseKey> keys = new HashSet<>();
>             keys.add(Application.testKey);
>             final Map<TestResponseKey, TestResponse> value2 = cache.getAll(keys);
>             System.out.println(String.format(">>> Read from C*.  Key: [%s], Value: [%s]", Application.testKey, value2));
>             final TestResponse value3 = cache.get(Application.testKey);
>             System.out.println(String.format(">>> Read from C*.  Key: [%s], Value: [%s]", Application.testKey, value3));
>             System.out.println(StringUtils.EMPTY);
>             System.out.println("Cache size: " + cache.size());
>         }
>         System.out.println(StringUtils.EMPTY);
> 	}
> }
>
> DATA
>
> Table Info
>
> tableName: test_response
> columns: col1 (text), col2 (text), col3 (text), col4 (bigint), col5 (bigint), col6 (text), col7 (text), col8 (bigint), col9 (text), col10 (bigint)
> partition key: col1, col2
> cluster key: col3, col4
>
> Inserts
>
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1491843013376,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1491999483231,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492081339596,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492173434330,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492254138310,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492340538017,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492430355581,60,null,null,'C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492599740168,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1492599741108,144,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1495969325403,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1496487738766,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1496590566077,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1496746939945,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1496833338465,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1496930018886,60,null,null,'03C39A9EFB7E',1,'NA');
> INSERT INTO test_response (col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES ('text1','$03','FLAG',1497016098855,60,null,null,'03C39A9EFB7E',1,'NA');
>
>
> ------------------------------
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>
>

Fwd: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Evgenii Zhuravlev <e....@gmail.com>.
Hi Igor,

Could you check this message from user list? I can't find any reasons why
readThrough doesn't work with Cassandra here

Thanks,
Evgenii

---------- Forwarded message ----------
From: Kenan Dalley <ke...@gm.com>
Date: 2017-08-31 17:14 GMT+03:00
Subject: Re: Cassandra failing to ReadThrough using Cache.get(key) without
preloading
To: user@ignite.apache.org


Trying this again...

OUTPUT

Output From Not Preloading the Cache:

>>> Cassandra cache store example started.

>>> Cache retrieve example started.
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [{}]
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [null]

Cache size: 0

Output From Preloading the Cache:

>>> Cassandra cache store example started.

Loading cache...

Cache size: 16

Entries...
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496833338465'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492599741108'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 144}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1491843013376'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496746939945'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492081339596'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492173434330'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496487738766'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492599740168'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492254138310'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1497016098855'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492340538017'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496930018886'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1495969325403'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492430355581'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496590566077'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1491999483231'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}

>>> Cache retrieve example started.
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [{TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}=TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}}]
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4: '1491843013376'}], Value: [TestResponse: {col5: 0, col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}]

Cache size: 16

CODE

cassandra-ignite.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite configuration with all defaults and enabled p2p deployment
and enabled events.
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Cassandra connection settings -->
    <import resource="classpath:connection-settings.xml" />

    <bean id="testResponseCache_persistence_settings"
class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
        <!--<constructor-arg
type="org.springframework.core.io.Resource"
value="classpath:persistence-settings.xml" />-->
        <constructor-arg type="org.springframework.core.io.Resource"
value="classpath:persistence-settings.full.xml" />
    </bean>

    <bean id="ignite.cfg"
class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Set to true to enable distributed class loading for
examples, default is false. -->
        <property name="peerClassLoadingEnabled" value="true"/>

        <!-- Explicitly configure TCP discovery SPI to provide list of
initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!--
                        Ignite provides several options for automatic
discovery that can be used
                        instead of static IP based discovery. For
information on all options refer
                        to our documentation:
http://apacheignite.readme.io/docs/cluster-config
                    -->
                    <!-- Uncomment static IP finder to enable
static-based discovery of initial nodes. -->
                    <!--<bean
class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
                    <bean
class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment,
replace with actual host IP address. -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>

        <property name="cacheConfiguration">
            <list>
                <bean
class="org.apache.ignite.configuration.CacheConfiguration">
                    <property name="name" value="TestResponse" />
                    <property name="readThrough" value="true" />
                    <property name="writeThrough" value="true" />
                    <!-- <property name="writeBehindEnabled" value="true" /> -->
                    <property name="cacheStoreFactory">
                        <bean
class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
                            <!-- Datasource configuration bean which
is responsible for Cassandra connection details -->
                            <property name="dataSourceBean"
value="cassandraDataSource" />
                            <!-- Persistent settings bean which is
responsible for the details of how objects will be persisted to
Cassandra -->
                            <property name="persistenceSettingsBean"
value="testResponseCache_persistence_settings" />
                        </bean>
                    </property>
                </bean>
            </list>
        </property>
    </bean>
</beans>

persistence-settings.xml

<persistence keyspace="dev_keyspace" table="testresponse" ttl="2592000">
    <keyPersistence
class="examples.cassandra_persistence_store.model.TestResponseKey"
strategy="POJO"/>
    <valuePersistence
class="examples.cassandra_persistence_store.model.TestResponse"
strategy="POJO"/>
</persistence>

persistence-settings.full.xml (Tried in case I needed to fully define in
the xml)

<persistence keyspace="dev_keyspace" table="testresponse" ttl="2592000">
    <keyPersistence
class="examples.cassandra_persistence_store.model.TestResponseKey"
strategy="POJO">
        <partitionKey>
            <field name="col1"/>
            <field name="col2"/>
        </partitionKey>
        <clusterKey>
            <field name="col3"/>
            <field name="col4"/>
        </clusterKey>
    </keyPersistence>
    <valuePersistence
class="examples.cassandra_persistence_store.model.TestResponse"
strategy="POJO">
            <field name="col5"/>
            <field name="col6"/>
            <field name="col7"/>
            <field name="col8"/>
            <field name="col9"/>
            <field name="col10"/>
    </valuePersistence>
</persistence>

connection-settings.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/util
        http://www.springframework.org/schema/util/spring-util.xsd">

    <bean id="loadBalancingPolicy"
class="com.datastax.driver.core.policies.TokenAwarePolicy">
        <constructor-arg
type="com.datastax.driver.core.policies.LoadBalancingPolicy">
            <bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
        </constructor-arg>
    </bean>

    <util:list id="contactPoints" value-type="java.lang.String">
        <value>hostname.com</value>
    </util:list>

    <bean id="cassandraDataSource"
class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
        <property name="contactPoints" ref="contactPoints"/>
        <property name="user" value="dev_keyspace"/>
        <property name="password" value="dev_keyspace_password"/>
        <property name="readConsistency" value="ONE"/>
        <property name="writeConsistency" value="ONE"/>
        <property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
    </bean>
</beans>

TestResponseKey

package examples.cassandra_persistence_store.model;

import java.io.Serializable;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class TestResponseKey implements Serializable {
    @AffinityKeyMapped
    private String col1;

    @AffinityKeyMapped
    private String col2;

    private String col3;
    private long col4;

    public TestResponseKey() { }

    public TestResponseKey(final String col1, final String col2, final
String col3, final long col4) {
        this.col1 = col1;
        this.col2 = col2;
        this.col3 = col3;
        this.col4 = col4;
    }

    public String getCol1() {
        return col1;
    }

    public void setCol1(String col1) {
        this.col1 = col1;
    }

    public String getCol2() {
        return col2;
    }

    public void setCol2(String col2) {
        this.col2 = col2;
    }

    public String getCol3() {
        return col3;
    }

    public void setCol3(String col3) {
        this.col3 = col3;
    }

    public long getCol4() {
        return col4;
    }

    public void setCol4(long col4) {
        this.col4 = col4;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        if (!(o instanceof TestResponseKey)) {
            return false;
        }

        final TestResponseKey that = (TestResponseKey) o;

        if (this.col1 != null ? !this.col1.equals(that.col1) :
that.col1 != null) {
            return false;
        }

        if (this.col2 != null ? !this.col2.equals(that.col2) :
that.col2 != null) {
            return false;
        }

        if (this.col3 != null ? !this.col3.equals(that.col3) :
that.col3 != null) {
            return false;
        }
        return (this.col4 == that.col4);
    }

    @Override
    public int hashCode() {
        int result = this.col1 != null ? this.col1.hashCode() : 0;
        result = (31 * result) + (this.col2 != null ? this.col2.hashCode() : 0);
        result = (31 * result) + (this.col3 != null ? this.col3.hashCode() : 0);
        return (31 * result) + Long.hashCode(this.col4);
    }

    @Override
    public String toString() {
        return "TestResponseKey: {" +
               "col1: '" + this.col1 + "'" +
               ", col2: '" + this.col2 + "'" +
               ", col3: '" + this.col3 + "'" +
               ", col4: '" + this.col4 + "'" +
               "}";
    }
}

TestResponse

package examples.cassandra_persistence_store.model;

import java.io.Serializable;

public class TestResponse implements Serializable {
    private long col5;
    private String col6;
    private String col7;
    private long col8;
    private String col9;
    private long col10;

    public TestResponse() { }

    public TestResponse(final Long col5, final String col6, final
String col7, final Long col8,
                        final String col9, final Long col10) {
        this.col5 = col5;
        this.col6 = col6;
        this.col7 = col7;
        this.col8 = col8;
        this.col9 = col9;
        this.col10 = col10;
    }

    public long getCol5() {
        return col5;
    }

    public void setCol5(long col5) {
        this.col5 = col5;
    }

    public String getCol6() {
        return col6;
    }

    public void setCol6(String col6) {
        this.col6 = col6;
    }

    public String getCol7() {
        return col7;
    }

    public void setCol7(String col7) {
        this.col7 = col7;
    }

    public long getCol8() {
        return col8;
    }

    public void setCol8(long col8) {
        this.col8 = col8;
    }

    public String getCol9() {
        return col9;
    }

    public void setCol9(String col9) {
        this.col9 = col9;
    }

    public long getCol10() {
        return col10;
    }

    public void setCol10(long col10) {
        this.col10 = col10;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        if (!(o instanceof TestResponse)) {
            return false;
        }

        TestResponse that = (TestResponse) o;

        if (this.col5 != that.col5) {
            return false;
        }
        if (this.col6 != null ? !this.col6.equals(that.col6) :
that.col6 != null) {
            return false;
        }
        if (this.col7 != null ? !this.col7.equals(that.col7) :
that.col7 != null) {
            return false;
        }
        if (this.col8 != that.col8) {
            return false;
        }
        if (this.col9 != null ? !this.col9.equals(that.col9) :
that.col9 != null) {
            return false;
        }
        return this.col10 == that.col10;
    }

    @Override
    public int hashCode() {
        int result = Long.hashCode(this.col5);
        result = (31 * result) + (this.col6 != null ? this.col6.hashCode() : 0);
        result = (31 * result) + (this.col7 != null ? this.col7.hashCode() : 0);
        result = (31 * result) + Long.hashCode(this.col8);
        result = (31 * result) + (this.col9 != null ? this.col9.hashCode() : 0);
        return (31 * result) + Long.hashCode(this.col10);
    }

    @Override
    public String toString() {
        return "TestResponse: {" +
               "col5: " + this.col5 +
               ", col6: '" + this.col6 + "'" +
               ", col7: '" + this.col7 + "'" +
               ", col8: " + this.col8 +
               ", col9: '" + this.col9 + "'" +
               ", col10: " + this.col10 +
               '}';
    }
}

ApplicationConfig

package examples.cassandra_persistence_store.config;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ApplicationConfig {

    @Bean
    public Ignite ignite() {
        return Ignition.start("cassandra-ignite.xml");
    }
}

Application

package examples.cassandra_persistence_store;

import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import javax.cache.Cache;
import org.apache.commons.lang3.StringUtils;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntry;
import org.apache.ignite.configuration.CacheConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.Banner;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;

import examples.cassandra_persistence_store.model.TestResponse;
import examples.cassandra_persistence_store.model.TestResponseKey;

@SpringBootApplication
public class Application implements CommandLineRunner {
    /**
     * Cache name.
     */
    private static final String TEST_CACHE_NAME = TestResponse.class.getSimpleName();

    private static TestResponseKey testKey =
            new TestResponseKey("text1", "$03", "FLAG", 1491843013376L);

    @Autowired
    Ignite ignite;

    public static void main(String... args) {
        SpringApplication app = new SpringApplication(Application.class);
        app.setBannerMode(Banner.Mode.OFF);
        app.run(args);
        System.exit(0);
    }

    @Override
    public void run(String... strings) throws Exception {
        testResponse();
    }

    private void testResponse() {
        System.out.println(">>> Cassandra cache store example started.");
        try (IgniteCache<TestResponseKey, TestResponse> cache = this.ignite
                .getOrCreateCache(new CacheConfiguration<TestResponseKey, TestResponse>(
                        Application.TEST_CACHE_NAME))) {
/* uncomment this to Preload Cache
            System.out.println(StringUtils.EMPTY);
            System.out.println("Loading cache...");

            cache.loadCache(null,
                            "select * from dev_keyspace.test_response " +
                            " where col1 = 'text1' and col2 = '$03';");
            System.out.println(StringUtils.EMPTY);
            System.out.println("Cache size: " + cache.size());

            Iterable<Cache.Entry<TestResponseKey, TestResponse>> entries = cache.localEntries();
            Iterator<Cache.Entry<TestResponseKey, TestResponse>> it = entries.iterator();
            System.out.println(StringUtils.EMPTY);
            System.out.println("Entries...");
            while (it.hasNext()) {
                Cache.Entry<TestResponseKey, TestResponse> entry = it.next();
                System.out.println("Key: " + entry.getKey()+ ", Value:
" + entry.getValue());
            }
*/
            // Read from C*
            System.out.println(StringUtils.EMPTY);
            System.out.println(">>> Cache retrieve example started.");
            Set<TestResponseKey> keys = new HashSet<>();
            keys.add(Application.testKey);
            final Map<TestResponseKey, TestResponse> value2 = cache.getAll(keys);
            System.out.println(String.format(">>> Read from C*.  Key: [%s], Value: [%s]",
                    Application.testKey, value2));
            final TestResponse value3 = cache.get(Application.testKey);
            System.out.println(String.format(">>> Read from C*.  Key: [%s], Value: [%s]",
                    Application.testKey, value3));
            System.out.println(StringUtils.EMPTY);
            System.out.println("Cache size: " + cache.size());
        }
        System.out.println(StringUtils.EMPTY);
    }
}
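
For comparison, here is a minimal sketch of the same read-through wiring done programmatically rather than in cassandra-ignite.xml. It is not the configuration actually used above: the class names come from the ignite-cassandra-store module, but the contact point, resource names, and the idea of building the config in code are assumptions based on the files referenced in this thread.

import org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory;
import org.apache.ignite.cache.store.cassandra.datasource.DataSource;
import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;
import org.apache.ignite.configuration.CacheConfiguration;
import org.springframework.core.io.ClassPathResource;

import examples.cassandra_persistence_store.model.TestResponse;
import examples.cassandra_persistence_store.model.TestResponseKey;

public class CacheConfigSketch {

    // Builds a cache configuration equivalent in spirit to the XML-defined cache:
    // read-through enabled and backed by the Cassandra cache store.
    public static CacheConfiguration<TestResponseKey, TestResponse> testResponseCache() {
        // Connection details would normally come from connection-settings.xml;
        // the contact point below is a placeholder.
        DataSource dataSource = new DataSource();
        dataSource.setContactPoints("127.0.0.1");

        // The persistence settings must reference the real Cassandra table name (test_response).
        KeyValuePersistenceSettings persistenceSettings =
                new KeyValuePersistenceSettings(new ClassPathResource("persistence-settings.xml"));

        CassandraCacheStoreFactory<TestResponseKey, TestResponse> storeFactory =
                new CassandraCacheStoreFactory<>();
        storeFactory.setDataSource(dataSource);
        storeFactory.setPersistenceSettings(persistenceSettings);

        CacheConfiguration<TestResponseKey, TestResponse> cfg =
                new CacheConfiguration<>(TestResponse.class.getSimpleName());
        cfg.setCacheStoreFactory(storeFactory);
        cfg.setReadThrough(true);  // cache.get() only consults the store when this is enabled
        cfg.setWriteThrough(true);
        return cfg;
    }
}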

DATA

Table Info

tableName: test_response
columns: col1 (text), col2 (text), col3 (text), col4 (bigint), col5
(bigint), col6 (text), col7 (text), col8 (bigint), col9 (text), col10
(bigint)
partition key: col1, col2
clustering key: col3, col4
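
For reference, that schema corresponds to roughly the following CQL. The actual DDL is not shown in the thread; the keyspace name is taken from the loadCache query in Application, and table options are omitted.

CREATE TABLE dev_keyspace.test_response (
    col1  text,
    col2  text,
    col3  text,
    col4  bigint,
    col5  bigint,
    col6  text,
    col7  text,
    col8  bigint,
    col9  text,
    col10 bigint,
    PRIMARY KEY ((col1, col2), col3, col4)
);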

Inserts

INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1491843013376,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1491999483231,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492081339596,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492173434330,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492254138310,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492340538017,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492430355581,60,null,null,'C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492599740168,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1492599741108,144,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1495969325403,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1496487738766,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1496590566077,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1496746939945,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1496833338465,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1496930018886,60,null,null,'03C39A9EFB7E',1,'NA');
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
('text1','$03','FLAG',1497016098855,60,null,null,'03C39A9EFB7E',1,'NA');
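
The key built in Application (testKey) should correspond to exactly the first row above. A direct lookup of that row, again assuming the dev_keyspace keyspace from the loadCache query, would be:

SELECT col5, col6, col7, col8, col9, col10
FROM dev_keyspace.test_response
WHERE col1 = 'text1' AND col2 = '$03' AND col3 = 'FLAG' AND col4 = 1491843013376;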



Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Kenan Dalley <ke...@gm.com>.
Trying this again...
OUTPUT
Output From Not Preloading the Cache:
Output From Preloading the Cache:
CODE
cassandra-ignite.xml
persistence-settings.xml
persistence-settings.full.xml (Tried in case I needed to fully define in the xml)
connection-settings.xml
TestResponseKey
TestResponse
ApplicationConfig
Application
DATA
Table Info
Inserts




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

Posted by Dmitri Bronnikov <dm...@gmail.com>.
Last time I tried, which was early 2017, cache.getEntry() would pull the entry from
Cassandra (or whichever database was backing the cache) for me, while cache.get()
would not. I then found somewhere, in the docs or on this board, that this was to be
expected. Can someone confirm? I was mostly interested in SQL, which definitely won't
see entries that haven't been preloaded into the caches, so I don't clearly remember
the "get" vs "getEntry" differences.
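
A minimal way to compare the two calls side by side (a sketch only, reusing the cache
and key from earlier in this thread; it does not settle which of the two should hit the
store):

IgniteCache<TestResponseKey, TestResponse> cache =
        ignite.cache(TestResponse.class.getSimpleName());

TestResponseKey key = new TestResponseKey("text1", "$03", "FLAG", 1491843013376L);

// Plain JCache read.
TestResponse viaGet = cache.get(key);
// Ignite-specific read that returns a CacheEntry wrapper around the value.
CacheEntry<TestResponseKey, TestResponse> viaGetEntry = cache.getEntry(key);

System.out.println("get()      -> " + viaGet);
System.out.println("getEntry() -> " + (viaGetEntry == null ? null : viaGetEntry.getValue()));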



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/