Posted to dev@pirk.apache.org by Tim Ellison <t....@gmail.com> on 2016/10/11 11:54:32 UTC

Re: Thoughts on exponent tables

On 29/09/16 11:29, Ellison Anne Williams wrote:
> In general, I am in favor of an abstract class.
> 
> However, note that in the distributed case, the 'table' is generated in a
> distributed fashion and then used that way as well (i.e. split and distributed).
> 
> FWIW - In preliminary testing, the lookup tables ended up not performing
> any better at scale than the local caching mechanism that is currently in
> place and used by default (in
> org.apache.pirk.responder.wideskies.common.ComputeEncryptedRow).


Thanks for the comments.  I've created a JIRA so I have somewhere to
hang the bits of code I'm experimenting with now.  It will be on a
"slow-burn" as and when I find a few mins.

Regards,
Tim

> On Wed, Sep 28, 2016 at 4:53 AM, Tim Ellison <t....@gmail.com> wrote:
> 
>> Presently, Pirk can create exponent tables for a query in two ways:
>> in memory, via Query#expTable (which ends up as a map of maps,
>> element -> <power, element^power mod N^2>), or via map-reduce on HDFS
>> through Query#expFileBasedLookup (which ends up as a map of
>> element hash -> filename containing <power, element^power mod N^2>
>> strings, read back into memory as a Guava cache).
>>
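[Editor's note: for readers unfamiliar with the in-memory shape described above, a minimal sketch follows. The class and method names here are invented for illustration and are not Pirk's actual Query code; only the map-of-maps shape, element -> (power -> element^power mod N^2), comes from the thread.]

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

// Sketch of the in-memory exponent table shape: for each element, precompute
// element^power mod N^2 for all powers up to maxPower, so the responder can
// replace repeated modPow calls with map lookups.
public class ExpTableSketch {
    public static Map<BigInteger, Map<Integer, BigInteger>> buildExpTable(
            Iterable<BigInteger> elements, int maxPower, BigInteger nSquared) {
        Map<BigInteger, Map<Integer, BigInteger>> expTable = new HashMap<>();
        for (BigInteger element : elements) {
            Map<Integer, BigInteger> powers = new HashMap<>();
            for (int power = 0; power <= maxPower; power++) {
                // element^power mod N^2
                powers.put(power, element.modPow(BigInteger.valueOf(power), nSquared));
            }
            expTable.put(element, powers);
        }
        return expTable;
    }
}
```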
>> I'm inclined to pull these table representations out to a core abstract
>> type that provides the exponent table calls, and create the concrete
>> implementations under there.  Then all the table building and lookup
>> would be in one place, and a Query would just have one expTable
>> reference to worry about.
>>
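[Editor's note: one possible shape for the abstraction proposed above is sketched below. All names (AbstractExpTable, InMemoryExpTable, lookup) are hypothetical, not Pirk's API; the sketch just shows how concrete backends (in-memory, HDFS-backed, Redis, ...) could sit behind one lookup call.]

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

// Hypothetical abstract exponent table: subclasses decide where the
// precomputed powers live, so Query would hold a single reference to this
// type instead of backend-specific booleans.
public abstract class AbstractExpTable {
    // Return element^power mod N^2, computing or fetching as the backend requires.
    public abstract BigInteger lookup(BigInteger element, int power);
}

// Simple in-memory backend that computes and caches powers lazily.
class InMemoryExpTable extends AbstractExpTable {
    private final BigInteger nSquared;
    private final Map<BigInteger, Map<Integer, BigInteger>> cache = new HashMap<>();

    InMemoryExpTable(BigInteger nSquared) {
        this.nSquared = nSquared;
    }

    @Override
    public BigInteger lookup(BigInteger element, int power) {
        return cache.computeIfAbsent(element, e -> new HashMap<>())
                    .computeIfAbsent(power,
                        p -> element.modPow(BigInteger.valueOf(p), nSquared));
    }
}
```

Under this shape, a Redis- or HDFS-backed table would simply be another subclass, which is the scaling argument made in the paragraph below.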
>> This would mean changing the QueryInfo constructors to take a concrete
>> expTable type rather than the booleans useExpLookupTableInput and
>> useHDFSExpLookupTableInput. That should scale better if we want to try a
>> useRedisExpLookupTable or whatever in future, and it reduces pirk-core's
>> direct references to HDFS.
>>
>> WDYT?
>>
>> Regards,
>> Tim
>>
>