Posted to dev@cayenne.apache.org by Andrus Adamchik <an...@objectstyle.org> on 2008/03/05 14:44:02 UTC
Improving memory use [Was: [jira] Created: (CAY-999) Scaling paginated list]
On Mar 5, 2008, at 2:43 PM, Andrus Adamchik (JIRA) wrote:
> a. DataRow - 120 bytes,
> b. HashMap - 104 bytes,
> c. Object[] - 32 bytes,
> d. java.lang.Integer - 16 bytes
This got me thinking about DataRow memory/creation efficiency
throughout the framework. We are wasting lots of space on repeating
information. Essentially a DataRow for each entity has a well defined
set of keys, so ideally we can normalize the storage of DataRows
internally, saving an Object[] of values with a reference to a shared
"decode map", one per entity. Such a shared map would have DbAttribute
names for the keys and array positions for the values. What we'll lose
is the ability to serialize DataRows (e.g. for remote notifications),
but maybe we can work around it somehow.
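The normalized storage described above could be sketched roughly like this (hypothetical class and method names, not existing Cayenne API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one shared "decode map" per entity maps
// DbAttribute names to positions in a compact value array.
class EntityDecodeMap {
    private final Map<String, Integer> index = new HashMap<>();

    EntityDecodeMap(String... attributeNames) {
        for (int i = 0; i < attributeNames.length; i++) {
            index.put(attributeNames[i], i);
        }
    }

    Integer positionOf(String attributeName) {
        return index.get(attributeName);
    }
}

// A compact row holds only an Object[] of values plus a reference
// to the shared decode map, instead of a full HashMap per row.
class CompactDataRow {
    private final EntityDecodeMap decodeMap; // shared, one per entity
    private final Object[] values;           // per-row payload

    CompactDataRow(EntityDecodeMap decodeMap, Object... values) {
        this.decodeMap = decodeMap;
        this.values = values;
    }

    Object get(String attributeName) {
        Integer pos = decodeMap.positionOf(attributeName);
        return pos == null ? null : values[pos];
    }
}
```

One possible workaround for the serialization concern would be to expand a compact row back into an ordinary Map before sending it over the wire.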
Just thinking out loud ...
Andrus
Re: Improving memory use [Was: [jira] Created: (CAY-999) Scaling paginated list]
Posted by Andrus Adamchik <an...@objectstyle.org>.
The decode map will not hold the DataRows, only the "legend" needed
to decode them. So it is a flyweight (as in the "flyweight pattern"). E.g.
Artist Decode Map:
"ARTIST_ID" -> 0
"ARTIST_NAME" -> 1
"DATE_OF_BIRTH" -> 2
Artist DataRow:
[1, 'Dali', '19...']
decodeMap // pointer to decodeMap
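In code, the sharing sketched above might look like this (a standalone illustration with made-up data, not Cayenne classes):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the flyweight: one shared legend
// (attribute name -> array position) decodes many compact rows.
public class FlyweightDemo {
    public static void main(String[] args) {
        // Shared "decode map" for the Artist entity, built once.
        Map<String, Integer> artistLegend = new HashMap<>();
        artistLegend.put("ARTIST_ID", 0);
        artistLegend.put("ARTIST_NAME", 1);
        artistLegend.put("DATE_OF_BIRTH", 2);

        // Each row is just an Object[]; no per-row HashMap overhead.
        Object[] row1 = {1, "Dali", "1904-05-11"};
        Object[] row2 = {2, "Monet", "1840-11-14"};

        // Decoding a value goes through the shared legend.
        System.out.println(row1[artistLegend.get("ARTIST_NAME")]); // Dali
        System.out.println(row2[artistLegend.get("ARTIST_ID")]);   // 2
    }
}
```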
Andrus
On Mar 7, 2008, at 12:43 PM, Aristedes Maniatis wrote:
>
> On 06/03/2008, at 12:44 AM, Andrus Adamchik wrote:
>
>> This got me thinking about DataRow memory/creation efficiency
>> throughout the framework. We are wasting lots of space on repeating
>> information. Essentially a DataRow for each entity has a well
>> defined set of keys, so ideally we can normalize the storage of
>> DataRows internally, saving an Object[] of values with a reference
>> to a shared "decode map", one per entity. Such a shared map would
>> have DbAttribute names for the keys and array positions for the
>> values. What we'll lose is the ability to serialize DataRows (e.g.
>> for remote notifications), but maybe we can work around it somehow.
>
> How does this interact with the DataDomain snapshot cache? You've
> explained that this cache is Map<ObjectId, DataRow> but it has an
> LRU expiry policy. What happens with a DataRow which is expired from
> the DataDomain but still exists in the 'decode map'? Is it possible
> to merge the two concepts (snapshot cache and decode map) as long as
> there was a more sophisticated expiry policy?
>
> The big benefit of reducing memory usage is that users will be able
> to create larger caches and improve performance.
>
>
> Ari
>
>
> -------------------------->
> ish
> http://www.ish.com.au
> Level 1, 30 Wilson Street Newtown 2042 Australia
> phone +61 2 9550 5001 fax +61 2 9550 4001
> GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
>
>
>
Re: Improving memory use [Was: [jira] Created: (CAY-999) Scaling paginated list]
Posted by Aristedes Maniatis <ar...@ish.com.au>.
On 06/03/2008, at 12:44 AM, Andrus Adamchik wrote:
> This got me thinking about DataRow memory/creation efficiency
> throughout the framework. We are wasting lots of space on repeating
> information. Essentially a DataRow for each entity has a well
> defined set of keys, so ideally we can normalize the storage of
> DataRows internally, saving an Object[] of values with a reference
> to a shared "decode map", one per entity. Such a shared map would
> have DbAttribute names for the keys and array positions for the
> values. What we'll lose is the ability to serialize DataRows (e.g.
> for remote notifications), but maybe we can work around it somehow.
How does this interact with the DataDomain snapshot cache? You've
explained that this cache is Map<ObjectId, DataRow> but it has an LRU
expiry policy. What happens with a DataRow which is expired from the
DataDomain but still exists in the 'decode map'? Is it possible to
merge the two concepts (snapshot cache and decode map) as long as
there was a more sophisticated expiry policy?
The big benefit of reducing memory usage is that users will be able to
create larger caches and improve performance.
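For context, an LRU expiry policy of the kind described can be sketched with a plain access-ordered LinkedHashMap (a generic illustration, not Cayenne's actual snapshot cache implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic LRU cache sketch: an access-ordered LinkedHashMap that
// evicts the least recently used entry once capacity is exceeded.
// Illustrates the expiry policy only, not Cayenne internals.
class LruSnapshotCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruSnapshotCache(int maxEntries) {
        super(16, 0.75f, true); // true = iterate in access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

Note that evicting a row from such a cache would not touch the per-entity decode map, which holds only the key legend, never the rows themselves.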
Ari